Kopano Core High Availability
Currently I’m researching MS Exchange alternatives and found Kopano. But after reading some general concepts, I’m still missing some information about HA and/or load balancing. As far as I understand, Kopano just needs a database (MySQL) and some disk storage (for attachments) to work. We already have a MariaDB/Galera cluster and an HA storage system, so I think it shouldn’t be a problem to configure a Kopano Core server to use these resources. But is it possible to run several Kopano Core servers connecting to these shared resources (and maybe run a load balancer between the servers and the clients)?
Hi @Steampower,
you can find some more detailed information about HA for Kopano at https://documentation.kopano.io/kopanocore_administrator_manual/high_availability.html
> a MariaDB/Galera cluster
The last time I checked, Galera had the “drawback” of requiring a full copy of all databases on each node. Depending on the number of Kopano servers and Galera instances, this can add up to quite a lot of overhead. And depending on what else you are running on Galera, you may want to set up a dedicated SQL server for Kopano, since I/O is crucial for performance.
> But is it possible to run several Kopano Core servers connecting to these shared resources (and maybe run a load balancer between the servers and the clients)?
Yes. And since you can define the listening sockets and configuration locations per instance, you are even able to run multiple kopano-server processes on the same node (this is what we utilise in the Pacemaker scenario described in the manual).
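As a rough sketch of what running two instances on one node could look like (file paths, socket addresses, and option values here are assumptions for illustration; check your version’s `kopano-server.cfg` man page for the exact option names):

```shell
# Hedged sketch: two kopano-server instances on the same node, each with
# its own configuration file and its own listening socket.
#
# /etc/kopano/server-a.cfg might contain (illustrative values):
#   server_listen      = 127.0.0.1:11236
#   server_pid_file    = /var/run/kopano/server-a.pid
#   attachment_path    = /srv/kopano/attachments-a
#
# /etc/kopano/server-b.cfg would use different socket, PID file and paths.

# Start each instance by pointing it at its own configuration file:
kopano-server -c /etc/kopano/server-a.cfg
kopano-server -c /etc/kopano/server-b.cfg
```

In a Pacemaker setup the cluster manager, rather than you, would start and stop these processes as resources.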
The one thing to keep in mind, though, is that each user has an assigned “home server”, meaning you cannot balance users between backend servers. You can, however, use standard load-balancing mechanisms to distribute user access between your frontend (WebApp and Z-Push) nodes.
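For the frontend side, a plain HTTP load balancer is enough. A minimal sketch with HAProxy, assuming two hypothetical WebApp/Z-Push nodes at 10.0.0.11 and 10.0.0.12 (all hostnames, ports, and certificate paths are placeholders):

```
# Hedged sketch of an HAProxy frontend balancing two WebApp/Z-Push nodes.
frontend https_in
    bind *:443 ssl crt /etc/haproxy/example.pem
    default_backend webapp_nodes

backend webapp_nodes
    balance roundrobin
    # Cookie-based stickiness keeps a browser session on one node,
    # which avoids re-login issues with WebApp sessions.
    cookie SRV insert indirect nocache
    server web1 10.0.0.11:443 check ssl verify none cookie web1
    server web2 10.0.0.12:443 check ssl verify none cookie web2
```

The backend kopano-server nodes stay out of this balancer: requests for a mailbox always end up at that user’s home server.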
If you have any further questions, let me know and I can establish contact with our Professional Services department, which can also prepare a concept tailored to your environment.