Threshold limitation for z-push

Hi, what is the threshold limitation for Z-Push on a server with 16 GB RAM and an 8-core CPU? Is there a way I can calculate this? Please help me with this.

Hi @sachithra ,

are you still using the Zimbra backend? For the Kopano backend we have seen a similarly sized system work just fine (low load average) with about 3000 devices on Z-Push 2.2; with 2.3 this number should be a lot higher.

Hi @fbartels

Yes, thank you for your reply. We are still on the Zimbra backend and we are planning to update the Z-Push version to 2.3.8 soon. We are also planning to load balance the ActiveSync connections across two Z-Push servers. Currently there are about 200 devices on ActiveSync; in the future there will be more.

Is there a way I can do a calculation to see how much further I can scale?

Hi @sachithra ,

I think the rule of thumb was that a single user consumes about 8 MB of RAM on average with 2.2, and in 2.3 this was brought down to about 5 MB. But again, this was measured for the Kopano backend, so it may scale differently for Zimbra.
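As a sketch, that rule of thumb can be turned into a rough back-of-the-envelope estimate. The per-device figures (8 MB for 2.2, 5 MB for 2.3) come from the post above and apply to the Kopano backend only; the amount of RAM reserved for the OS, Apache and PHP is a made-up assumption you should tune for your own system.

```python
# Rough capacity estimate from the per-device memory rule of thumb
# quoted above (~8 MB/device on Z-Push 2.2, ~5 MB on 2.3).
# These figures were measured for the Kopano backend; Zimbra's
# DiffEngine may consume considerably more per device.

def max_devices(total_ram_gb, reserved_gb=4, mb_per_device=8):
    """Estimate how many ActiveSync devices fit in RAM.

    reserved_gb: RAM kept aside for the OS, Apache/PHP and caches
    (a hypothetical figure, adjust for your environment).
    """
    usable_mb = (total_ram_gb - reserved_gb) * 1024
    return int(usable_mb // mb_per_device)

print(max_devices(16, mb_per_device=8))  # Z-Push 2.2 -> 1536
print(max_devices(16, mb_per_device=5))  # Z-Push 2.3 -> 2457
```

With 16 GB this suggests low four digits of concurrent devices at best, so the 200 devices mentioned earlier leave plenty of headroom on memory; CPU load from the Zimbra DiffEngine is the more likely bottleneck.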

It definitely scales differently with the Zimbra backend, due to the DiffEngine used for synchronization.

Are you only connecting mobile phones, or also clients like Outlook? Outlook will definitely have a much higher footprint than Android or iOS devices.

Hi @Sebastian, we are connecting both mobile and Outlook clients. Recently we experienced a huge CPU load due to unsynchronized device states between the servers. Currently everything syncs via one server. We are planning to set up a MariaDB cluster as the Z-Push state machine.

@fbartels, thank you. I'll do a calculation based on this. The system is running on VMware ESXi. Are 8 cores enough in your view? I'm using Apache as the web server.

If you have a cluster you definitely need to share the states. I would recommend using the SQL state machine (it works much better than shared file states via NFS) together with memcached.
Have a look at https://wiki.z-hub.io/display/ZP/State+Machines
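For reference, switching to the SQL state machine is done in Z-Push's `config.php`. The fragment below is only illustrative: the option names follow the state-machine wiki page linked above, but verify them against your installed version, and the hostname, database name and credentials are placeholders for your MariaDB cluster.

```php
// Illustrative config.php fragment for the SQL state machine.
// MariaDB is accessed through the mysql driver; all values below
// are placeholders to be replaced with your own cluster details.
define('STATE_MACHINE', 'SQL');
define('STATE_SQL_ENGINE', 'mysql');
define('STATE_SQL_SERVER', 'db.example.com');
define('STATE_SQL_PORT', '3306');
define('STATE_SQL_DATABASE', 'zpush_states');
define('STATE_SQL_USER', 'zpush');
define('STATE_SQL_PASSWORD', 'secret');
```

Pointing both load-balanced Z-Push servers at the same database keeps device states consistent, which should avoid the resync storms described above.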
