Too many open files - attachment storage
matthi last edited by
Dear Kopano community
I’m running kopano-server 10.0.4. Every 3 months the Kopano server stops working properly: most users can’t log in to the WebApp or sync their data with ActiveSync.
Errors in the log:
Problem opening directory file "/var/lib/kopano/attachments": Too many open files - attachment storage atomicity not guaranteed
Authentication by plugin failed for user "xxx": Trying to authenticate failed: Failure connecting any of the LDAP servers (0x00000000); username = xxx
Failed to enable TLS on LDAP session: Can't contact LDAP server
The short-term solution was to restart the Kopano daemon, but a clean restart of the service was not possible:
systemctl restart kopano-server

Fatal error detected. Please report all following information.
kopano-server 10.0.4
OS: Debian GNU/Linux 10 (buster) (Linux 4.19.0-8-amd64 x86_64)
Thread name: kopano-server
Peak RSS: 647032
Pid 14361 caught SIGSEGV (11), traceback:
Backtrace:
f0. /lib/x86_64-linux-gnu/libkcutil.so.0(+0x4f3b0) [0x7f0862c983b0]
f1. /lib/x86_64-linux-gnu/libkcutil.so.0(+0x366c6) [0x7f0862c7f6c6]
f2. /lib/x86_64-linux-gnu/libkcutil.so.0(+0x387ad) [0x7f0862c817ad]
f3. /lib/x86_64-linux-gnu/libpthread.so.0(+0x12730) [0x7f085fa59730]
f4. /lib/x86_64-linux-gnu/libpthread.so.0(pthread_rwlock_rdlock+0) [0x7f085fa53a00]
f5. /lib/x86_64-linux-gnu/libkcserver.so.0(+0xd5d08) [0x7f0862b2bd08]
f6. /lib/x86_64-linux-gnu/libkcserver.so.0(+0xdba92) [0x7f0862b31a92]
f7. /lib/x86_64-linux-gnu/libkcserver.so.0(+0xb2061) [0x7f0862b08061]
f8. /lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3) [0x7f085fa4efa3]
f9. /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f) [0x7f085f65e4cf]
Signal errno: Success, signal code: 1
Sender pid: 160, sender uid: 0, si_status: 0
Signal value: 0, faulting address: 0xa0
When reporting this traceback, please include Linux distribution name (and version), system architecture and Kopano version.
kopano-server.service: Main process exited, code=killed, status=11/SEGV
kopano-server.service: Failed with result 'signal'.
Starting kopano-server version 10.0.4 (pid 12800 uid 0)
Starting kopano-server version 10.0.4 (pid 12800 uid 998)
To me it looks like it could be an issue with “Too many open files”:
cat /proc/$(pidof kopano-server)/limits
Max open files            8192                 8192                 files

lsof -p $(pidof kopano-server) | wc -l
8263

After the restart of the Kopano server:

lsof -p $(pidof kopano-server) | wc -l
131
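As a side note, `lsof -p` also lists memory-mapped libraries and other non-descriptor entries, so its count can overstate actual file-descriptor usage. A small sketch that counts only real descriptors via `/proc` and compares them to the soft limit (the helper names `fd_count` and `fd_limit` are hypothetical, not Kopano tools):

```shell
#!/bin/sh
# Count actual file descriptors of a process: each entry in
# /proc/<pid>/fd is one open descriptor, nothing else.
fd_count() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# Read the soft "Max open files" limit from /proc/<pid>/limits
# (4th field of the matching line).
fd_limit() {
    awk '/Max open files/ {print $4}' "/proc/$1/limits"
}

# Example: inspect the current shell instead of kopano-server
pid=$$
echo "pid $pid uses $(fd_count "$pid") of $(fd_limit "$pid") descriptors"
```

To check kopano-server itself, pass `$(pidof kopano-server)` instead of `$$`.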
- Is that a known issue?
- Do you have any advice on how to tune this?
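For anyone hitting the same 8192 limit: on a systemd-based install like this one, the per-service descriptor limit can be raised with a drop-in file. A sketch only, not a Kopano recommendation; the value 16384 is an arbitrary example:

```shell
# Create/edit a drop-in override for the unit (opens an editor):
sudo systemctl edit kopano-server
# In the editor, add:
#   [Service]
#   LimitNOFILE=16384

# Apply the change:
sudo systemctl daemon-reload
sudo systemctl restart kopano-server

# Verify the new limit took effect:
grep 'Max open files' /proc/$(pidof kopano-server)/limits
```

Note that raising the limit only buys time if descriptors are genuinely leaking; the count would eventually hit any ceiling.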
Thank you for your help.
Gerald last edited by Gerald
@matthi An interesting observation. Unfortunately I had just restarted my server due to an ESXi upgrade, so I have no data on how many files are in use after some uptime… Less than 1 day after starting it I am at about 225.
I have never run into this problem though, because the server gets restarted automatically whenever Ubuntu issues updates that need a reboot, which is unlikely to be more than a few weeks apart.
If you regularly have 3 months of uptime, my guess is that your main problem is not the file-descriptor leakage in Kopano but the fact that you are not installing OS security updates as they are released. By the way: Kopano 10.0.4 is from May 2020 as far as I can tell, so I guess the first thing Kopano support will tell you in a bug report is to upgrade. I’m currently running mostly core-10.0.6.502, which was the last version before they switched the version number to 11.x, and I can recommend it - no major issues so far.
Gerald last edited by Gerald
I have been logging the open files for two months now and they fluctuate between 150 right after kopano-server startup and a high of about 500. I’m currently at ~350 after an uptime of 17 days. I really don’t see any problem here, matthi. However, I’m certainly not willing to go for 3 months without kernel upgrades just to test this wild theory.
Maybe you are not rebooting due to kernel live patching, but I really have the feeling you are not installing security updates. So a lot more is rotten here than just an increasing number of open files.
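The kind of logging described above can be done with a small cron script. A sketch, assuming a Linux host with `/proc`; the helper name `log_fd_count` and the log path are arbitrary choices:

```shell
#!/bin/sh
# Append a timestamped open-descriptor count for a given pid to a log
# file; silently skips when the pid is empty or no longer exists.
log_fd_count() {
    pid=$1
    log=$2
    if [ -d "/proc/$pid/fd" ]; then
        echo "$(date -Is) $(ls "/proc/$pid/fd" | wc -l)" >> "$log"
    fi
}

# Example cron entry to sample kopano-server every 15 minutes:
#   */15 * * * * /usr/local/bin/kopano-fd-log.sh
log_fd_count "$(pidof kopano-server)" /var/log/kopano-fd-count.log
```

Plotting the resulting file over a few weeks should show quickly whether the count plateaus (as on my server) or climbs without bound.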