Request for feedback: easily run Kopano through Docker
Over the last few weeks I have worked together with @ZokRadonh on making his Docker images easier to use (along with adding LDAP and some demo users to the compose file).
It still has some open todos and probably some rough edges.
Feedback in the form of pull requests, issues and GitHub stars will be much appreciated.
You can find the project at:
Disclaimer: no, this is not an officially supported project. But if there is enough positive feedback, it might become one some day.
A small update on this: we already managed to find some outside contributors, who rewrote the setup script to make it more flexible for already existing LDAP trees and made it possible to select which WebApp plugins should be installed within the container. Currently we are discussing adding a container for user self-service (password reset).
Since my last posting, a first step towards continuous testing of the Docker images has been taken as well. For this we are using the Travis CI service, which is free for public projects on GitHub; see https://travis-ci.com/zokradonh/kopano-docker for more information.
A side benefit of testing and building with Travis is that we can also easily tag the built images there, which means users can stick to a specific version and upgrade when required. See https://hub.docker.com/r/zokradonh/kopano_core/tags for an example of the available tags for the core container.
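In practice, sticking to a tagged build rather than the moving latest tag looks like the following; the version tag used here is only illustrative, the actually available tags are listed on the Docker Hub page linked above:

```shell
# Pull the moving "latest" tag, which changes as new builds are pushed:
docker pull zokradonh/kopano_core:latest

# Or pin to a specific tagged build (tag name is an example only;
# see https://hub.docker.com/r/zokradonh/kopano_core/tags for real tags):
docker pull zokradonh/kopano_core:8.7
```

Pinning a tag means an upgrade only happens when you deliberately change the tag and pull again.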
peterm888 last edited by
The Kopano4s version by @TosoBoso is already a docker container, available in GitHub. This is intended for Synology devices but I suspect will run anywhere. It may be worth checking out.
Yes, I’ve seen the work he has been doing to cater to the needs of Synology home users.
There are a few key differences between his approach and the project above:
- he is just putting all related services into one single image/container
- imho this is not what containerisation/dockerisation is about. One should have separate containers running for the individual parts of a service. While you could definitely argue about whether this should also extend to parts like dagent, spooler and the gateway, it should at least be the case for separately updated components like z-push, WebApp, the user directory, the MTA and the MySQL database.
- the Synology version uses the DB plugin by default
- the above project instead defaults to LDAP, but still leaves the admin the possibility to override this through environment variables
- the above project is completely configured through environment variables. This gives the benefit of not needing to manually change configuration in a running container, and therefore circumvents issues during upgrades if the user has manually added an incompatible configuration.
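As a sketch, overriding the container configuration at start time could look like the following; the variable names are illustrative only, the names that the project actually honours are documented in its README and example compose files:

```shell
# Start the core container with configuration passed as environment
# variables instead of editing files inside the container.
# Variable names below are examples, not a definitive reference.
docker run -d \
  -e KCCONF_SERVER_USER_PLUGIN=ldap \
  -e KCCONF_SERVER_MYSQL_HOST=db \
  --name kopano_server \
  zokradonh/kopano_core
```

Because the configuration is reapplied from the environment on every start, recreating the container with new values is enough to change a setting.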
available in GitHub
This is not really true. I have repeatedly asked him to put the code that generates his app into a public Git repo, partly because I also want to help him improve his code, partly because I hope this would attract other contributors, but so far I have not had success with this. Therefore, a few days ago I collected a few old downloads of the app, extracted them and put them into a Git repo to see what has changed between releases. So the repo that you have found is probably just my investigation repository.
- he is just putting all related services into one single image/container
TosoBoso last edited by
As I have been quoted, I’d like to clarify a few topics on Kopano4S: source code availability, portability, contribution, architecture, etc.
Source Code Availability:
My reply to requests on the source code availability of K4S stands, and putting it on GitHub is work in progress:
#1 It is open source, as outlined in the license terms shown when installing (https://wiki.z-hub.io/display/K4S/Install-Update)
#2 The source code is self-contained in each SPK, which is a simple tar that can be opened e.g. with 7-Zip, as outlined in the K4S wiki
#3 I will put it on GitHub, but only after some refactoring of the script locations (K4S 0.99, WIP) to make it easier to read and to segregate the components (see below)
Portability and Differences:
Felix pointed out some differences between the K4S Docker container and a standard container, and I will outline more; they should be easy to accommodate.
- K4S runs all services monolithically in one container, utilizing Docker’s --init / tini for process control (https://github.com/krallin/tini); this could be decomposed (see the architecture section)
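For reference, Docker’s --init flag is what injects tini as PID 1; a minimal invocation looks like this (image name as used elsewhere in this post):

```shell
# --init makes Docker run tini as PID 1 inside the container, so that
# zombie processes get reaped and signals are forwarded to the services.
docker run -d --init --name kopano4s tosoboso/kopano4s
```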
- MariaDB / MySQL is accessed via socket instead of ports. All you need is to mount /run/mysqld and use mysqld10.sock (you can softlink it to mysqld.sock).
K4S uses Synology’s native MariaDB 10 for this, but equally another MariaDB / MySQL container can be used, with the Kopano server.cfg adjusted to use ports instead of sockets.
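A sketch of the socket wiring described above; the paths follow what is stated in this post, while the exact server.cfg key should be checked against the Kopano server documentation:

```shell
# Expose the host's MariaDB socket directory read-only to the container:
docker run -d -v /run/mysqld:/run/mysqld:ro --name kopano4s tosoboso/kopano4s

# Synology's MariaDB 10 creates mysqld10.sock; a softlink gives Kopano
# the socket name it expects (run inside the container or on the host):
ln -s /run/mysqld/mysqld10.sock /run/mysqld/mysqld.sock
```

With that in place, server.cfg can keep pointing at the local socket instead of a TCP port.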
- K4S uses bind mounts as opposed to volumes, and all data lives on the Docker host in standard file systems (on Synology, by default, as so-called Shares).
All you need to do is have the mounts available on the Docker host; then the following does the trick, as advised in the repository:
docker run -d -v /run/mysqld:/run/mysqld:ro -v $K_ETC:/etc/kopano -v $K_LOG:/var/log/kopano -v $K_ATC:/var/lib/kopano/attachments $PORTS_EXPOSED --name kopano4s --hostname kopano4s tosoboso/kopano4s:$VER_IN_KOP
- No parameters are passed; the K4S configuration is all kept and maintained on the Docker host’s file system via the install and administration GUI, which is the added value for end users.
All you need to do is maintain the configuration files with a Unix editor on your Docker host in /etc/kopano, respectively set the database credentials in server.cfg, rather than passing parameters.
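Concretely, maintaining the configuration from the host side could look like this; the server.cfg keys shown are the standard Kopano database settings, but verify them against your installed version:

```shell
# Edit the Kopano configuration directly on the Docker host; the
# container sees the change through the bind mount of /etc/kopano.
vi /etc/kopano/server.cfg
#   e.g. set the database credentials there:
#   mysql_user     = kopano
#   mysql_password = secret

# Restart the container so the services pick up the changed files:
docker restart kopano4s
```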
- Database used as user plugin as opposed to LDAP: this is just the default, and common in the Synology community, but LDAP is fully supported.
Synology has an LDAP service and some power users have it running. Documentation and a how-to are on the wiki, and GUI support for LDAP in K4S, to simplify it for end users, is planned.
Contribution & Complexity:
I’m happy for others to contribute, and I do so as well by providing questions and insight into the nut-cracker issues (e.g. https://forum.kopano.io/topic/2172/solving-real-ip-issue-running-kopano-web-behind-reverse-proxy-in-docker).
I’m also happy with Felix’s support, in particular for getting the initial Docker container running in 2016, and with his feedback on tini; plus I’m fully open to his imhos :-)
One issue needs more review by Kopano regarding its impact on Docker, imho: switching to systemd and dropping support for init scripts and config files in the install scripts.
As Docker does not go particularly well with systemd, using init scripts to control services in Docker is very popular; I’m using them, and now I have to maintain the init scripts myself (see https://forum.kopano.io/topic/1448/no-config-files-created-during-install/19).
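To illustrate why init scripts are popular inside containers: they allow controlling individual services without any systemd at all. The script names below are the usual kopano-* pattern, but are illustrative; check what your package version actually installs:

```shell
# Control individual Kopano services inside the running container via
# their classic init scripts (no systemd inside the container needed).
# Script names are examples of the usual kopano-* naming.
docker exec kopano4s /etc/init.d/kopano-server restart
docker exec kopano4s /etc/init.d/kopano-spooler status
```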
Back to source code item #3: I have time constraints and suffer from the “can’t sharpen my blade, have to cut wood” syndrome, as I wanted to put a working, refactored version on GitHub.
Please understand that K4S is more than just Docker, with its Synology integration and GUI components, and maintaining the Synology nitty-gritty to keep the platform alive there is extra time-consuming.
K4S has three main components: Synology-specific shell install scripts and JSON GUI configurations; a Perl/JS UI for configuration and commands; and the Docker build scripts, the scripts inside the container, plus wrapper scripts so that from Synology it behaves as if you were inside the container. In addition, both the Kopano community and supported editions are maintained, which brings constraints on auto-builds and requires care not to expose sensitive serial numbers. In short, K4S grew with a toolchain for the poor and is a bit monolithic, which needs refactoring (see the architecture section).
If I just dumped all of the current content onto GitHub without explaining all the parts and their purpose, I see limited gain. Before I put it on GitHub I will therefore split the Synology scripts from the Docker scripts (WIP).
Component Architecture vs. Microservices:
I appreciate Felix’s imho that “all in one container is not what dockerisation is about”, but there are legacy reasons for it (e.g. plugins) and I’m simply not so religious about it. Still, refactoring is ongoing.
Imho a component architecture is key, and this is maintained in K4S: all data (database + files) lives outside and computation is segregated; microservices then come as the next goal.
Arguably any Kopano Docker setup, incl. K4S, should be split into components: files, database, mail, core, web, Meet(ings).
- For files, Docker volumes (and even a volumes container) are one solution, while mounting Unix file systems is the other, and the latter is more straightforward to me.
Unix volumes can live on an SSD with ext4 and are more easily controlled and replicated via rsync, while for certain features Docker and its volumes live on btrfs somewhere in hidden state. At least this is the Synology world.
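Because the data lives in plain host directories rather than hidden volume storage, a standard rsync is enough to replicate or back it up; the paths below are only examples of typical Synology share locations:

```shell
# Replicate the bind-mounted Kopano data to a backup host with plain
# rsync; source and destination paths are illustrative examples.
rsync -a --delete /volume1/kopano/etc/         backup:/backup/kopano/etc/
rsync -a --delete /volume1/kopano/attachments/ backup:/backup/kopano/attachments/
```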
- For the database, one can use another Docker container or run it natively on the Docker host;
again, I prefer the latter, and the configuration is easy to change.
- Mail, incl. Amavis, should also be split out; the legacy spam plugin and the dagent dependency of fetchmail had been blocking this.
Postfix uses LMTP to talk to Kopano, so only fetchmail, which uses dagent, needs to be changed to deliver via SMTP. With kopano-spamd the old spam plugin, which used the command line, is no longer needed.
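The Postfix-to-Kopano side of this is a one-line transport setting; 2003 is the usual kopano-dagent LMTP port, but adjust host and port to your own setup:

```shell
# Point Postfix's virtual transport at kopano-dagent's LMTP listener.
# 127.0.0.1:2003 assumes dagent runs locally on its usual LMTP port;
# in a split-container setup this would be the dagent container's address.
postconf -e 'virtual_transport = lmtp:127.0.0.1:2003'
postfix reload
```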
Further splitting all the mail components (postfix, amavisd, spamassassin, etc.) into different containers is something I would not consider, as this is getting religious and is difficult for certain Postfix integrations.
- For core, all Kopano core services should live in one container; it does not make sense to split any further, as MAPI etc. would have to be replicated across all containers.
- The web components should also live in their own container; the main barrier was the legacy passwd plugin, which afaik has now changed to MAPI support as opposed to the command-line kopano-passwd.
- Web meetings, respectively the new kopano-meet packages, should live in their own container, the same as Felix’s 2016 Zarafa Docker pilot did.
However, I plan to add them to the monolithic single container first to get them running, and will decompose each component as described above over time via beta versions.
Hope that helps and was not TL;DR (too long; didn’t read), and I’m open for discussion - TosoBoso