Request for feedback: easily run Kopano through Docker
-
Hi @peterm888,
yes I’ve seen the work that he has been doing to cater to the needs of the Synology home users.
There are a few key differences between his approach and the project above:
- he is just putting all related services into one single image/container
- imho this is not what containerisation/dockerisation is about. One should have separate containers running for the individual parts of a service. While you could definitely argue whether this should also extend to parts like dagent, spooler and the gateway, it should at least be the case for separately updated components like z-push, webapp, the user directory, the mta and the mysql database.
- the synology uses the db plugin by default
- the above project instead defaults to ldap but still leaves the admin the possibility to override this through env variables
- the above project is completely configured through env variables. This gives the benefit of not needing to manually change configuration in a running container, and therefore avoids issues during upgrades if the user has manually added an incompatible configuration (see the rough sketch below).
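Just to illustrate the env-variable idea (the variable and image names below are only placeholders for this sketch, not necessarily the project's actual settings; the project's README documents the real ones):

# rough sketch only - variable and image names are placeholders
docker run -d --name kopano_server \
  -e USER_PLUGIN=ldap \
  -e LDAP_URI=ldap://ldap.example.org:389 \
  -e LDAP_SEARCH_BASE="dc=example,dc=org" \
  kopano_core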
@peterm888 said in Request for feedback: easily run Kopano through Docker:
available in GitHub
This is not really true. I have repeatedly asked him to put the code that generates his app into a public git repo, partly because I also want to help him improve his code and partly because I hope this would attract other contributors, but so far I have not had success with this. Therefore a few days ago I collected a few old downloads of the app, extracted them and put them into a git repo to see what has changed between releases. So the repo that you have found is probably just my investigation repository.
-
All,
as I have been quoted, I’d like to clarify a few topics on Kopano4S: source code availability, portability, contribution, architecture etc.
Source Code Availability:
My reply to requests on source code availability of K4S stands, and putting it on GitHub is work in progress:
#1 it is open source as outlined in the license terms when installing (https://wiki.z-hub.io/display/K4S/Install-Update)
#2 the source code is self-contained in each SPK, which is a simple tar that can be opened e.g. by 7Zip, as outlined in the K4S wiki
#3 I will put it on GitHub, but after some refactoring of the script locations (k4s 0.99 wip) to make it easier to read and to segregate components (see below).
In the meantime anyone can check for inspiration the core Docker components in the scripts directory of the spk: Dockerfile, init.sh, kopano-postfix.sh, kopano-fetchmail.sh.
Portability and Differences:
Felix pointed out some differences of the K4S Docker container compared to a standard container, and I will outline more; they should be easy to accommodate.
- K4S runs monolithically with all services in one container, utilizing Docker's --init (tini) for process control (https://github.com/krallin/tini); this could be decomposed (see architecture section)
- MariaDB / MySQL is accessed via socket instead of a port. All you need is to mount /run/mysqld and use mysqld10.sock (which can be softlinked to mysqld.sock).
K4S uses Synology's native MariaDB10 for this, but equally another MariaDB / MySQL container can be used with kopano server.cfg adjusted to use a port instead of the socket (see the sketch after this list).
- K4S uses bind mounts as opposed to volumes, and all data lives on the Docker host in standard file systems (on Synology by default as so-called Shares).
All you need to do is have the mounts available on the Docker host and then run it as advised in the repository:
docker run -d -v /run/mysqld:/run/mysqld:ro -v $K_ETC:/etc/kopano -v $K_LOG:/var/log/kopano -v $K_ATC:/var/lib/kopano/attachments $PORTS_EXPOSED --name kopano4s --hostname kopano4s tosoboso/kopano4s:$VER_IN_KOP
- No parameters are passed; the K4S configuration is all kept on the Docker host's file systems and maintained via the install and administration GUI, which is the added value for end users.
All you need to do is keep the configuration files in /etc/kopano on your Docker host maintained via a Unix editor, and set the database credentials in server.cfg, rather than passing parameters.
- Database used as user plugin as opposed to LDAP: this is just the default and common within the Synology community, but LDAP is fully supported.
Synology has an LDAP service and some power users have it running. Documentation, a how-to on the wiki and GUI support in K4S for LDAP to simplify it for end users are planned.
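To make the socket vs. port point concrete, a rough sketch of what this looks like from inside the container (paths and values are illustrative; kopano-server's mysql_socket / mysql_host / mysql_port options in server.cfg are the relevant knobs):

# Synology's MariaDB10 exposes mysqld10.sock, so a softlink may be needed
ln -s /run/mysqld/mysqld10.sock /run/mysqld/mysqld.sock
# then in /etc/kopano/server.cfg either use the bind-mounted socket:
#   mysql_socket = /run/mysqld/mysqld.sock
# or, when a separate MariaDB/MySQL container is used instead, a host and port:
#   mysql_host = mariadb
#   mysql_port = 3306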
Contribution & Complexity:
I’m happy for others to contribute and I do so myself by providing questions and insight into the nut-cracker issues (e.g. https://forum.kopano.io/topic/2172/solving-real-ip-issue-running-kopano-web-behind-reverse-proxy-in-docker).
I’m also happy with Felix's support, in particular for getting the initial Docker container running in 2016, and with his feedback on tini, plus I’m fully open to his imhos :-)
One issue imho needs more review by Kopano regarding its impact on Docker: switching to systemd and dropping the support for init scripts and cfg files in the install scripts.
As Docker does not go particularly well with systemd, using the init scripts to control services inside Docker is very popular; I'm doing exactly that and now have to maintain the init scripts myself (see https://forum.kopano.io/topic/1448/no-config-files-created-during-install/19). A rough sketch of that pattern follows below.
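A minimal sketch of such a container entrypoint, assuming the packages still ship sysvinit scripts (which is exactly what is at risk here); service names and log path are illustrative:

#!/bin/sh
# entrypoint sketch: run the container under tini (docker run --init) so that
# zombie processes get reaped, then start the services via their init scripts
/etc/init.d/kopano-server start
/etc/init.d/kopano-spooler start
/etc/init.d/kopano-dagent start
/etc/init.d/kopano-gateway start
# keep a foreground process so the container stays up
exec tail -F /var/log/kopano/server.log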
Back to source code item #3: I have time constraints and suffer from the syndrome "can't sharpen my blade, have to cut wood", as I wanted to put a working refactored version on GitHub.
Please understand K4S is more than just Docker with its Synology integration and GUI components, and maintaining the Synology nitty-gritty to keep the platform alive there is extra time consuming.
K4S has 3 main component groups: Synology-specific shell install scripts and JSON GUI configurations; a Perl/JS UI for configuration and commands; and the Docker build scripts, the scripts inside the container plus wrapper scripts so that, from Synology, it behaves as if you were inside the container. In addition both Kopano Community and Supported editions are maintained, which brings constraints on auto builds and requires care not to expose sensitive serial numbers. In short, K4S grew with a toolchain for the poor and a bit monolithic, which needs refactoring (see architecture section).
If I were to dump all of the current content onto GitHub without explaining all the parts and their purpose, I see limited gain. Before I put it on GitHub I will therefore split the Synology scripts from the Docker scripts (WIP).
Component Architecture vs. Microservices:
I appreciate Felix's imho that “all in one container is not what dockerisation is about”, but there are legacy reasons for it (e.g. plugins) and I'm simply not so religious about it. Still, refactoring is ongoing.
Imho a component architecture is key, and that is maintained in K4S as all data (database + files) lives outside the container and computation is segregated; microservices then come as the next goal.
Arguably any Kopano Docker setup, incl. K4S, should be split into components: Files, Database, Mail, Core, Web, Meet(ings); see the sketch after this list.
- For Files, Docker volumes (or even a volumes container) are one solution, while mounting Unix file systems is the other, and the latter is more straightforward to me.
Unix file systems can live on an SSD with ext4 and are more easily controlled and replicated via rsync, while Docker and its volumes live on btrfs somewhere in a hidden state for certain features. At least this is how it is in the Synology world.
- For Database one can use another Docker container or run it natively on the Docker host;
again I prefer the latter, and the configuration is easy to change.
- For Mail incl. Amavis this should also be done; the legacy plugin for spam and the dagent dependency for fetchmail had been blocking this.
Postfix uses LMTP to talk to Kopano, so fetchmail, which currently delivers via dagent, simply needs to be changed to deliver via SMTP. With kopano-spamd the old spam plugin, which used the command line, is no longer needed.
Further splitting all mail components (postfix, amavisd, spamassassin etc.) into different containers is something I would not consider, as this is getting religious and difficult for certain postfix integrations.
- For Core all Kopano core services should live in one container; it does not make sense to split any further, as mapi etc. would have to be replicated across all containers.
- For Web the services should also live in their own container; the main barrier was the legacy passwd plugin, which afaik has now changed to MAPI support as opposed to the command-line kopano-passwd.
See https://forum.kopano.io/topic/660/installing-password-plugin-resolved/15 and https://github.com/dducret/kopano-webapp-passwd/blob/master/builds
- For Webmeetings, respectively the new kopano-meet packages, they should live in their own container, just as Felix's 2016 Zarafa Docker pilot did.
However I plan on adding them to the monolithic single container first to get them running, and will decompose each component as described above over time via beta versions.
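To make the decomposition above concrete, a rough sketch using a user-defined Docker network so the components can reach each other by name (image names are placeholders, not published K4S images):

# sketch only - image and path names are placeholders
docker network create kopano-net
docker run -d --name kopano-db   --network kopano-net -v /volume1/kopano/db:/var/lib/mysql mariadb:10
docker run -d --name kopano-core --network kopano-net -v /volume1/kopano/etc:/etc/kopano kopano-core-image
docker run -d --name kopano-web  --network kopano-net -p 443:443 kopano-web-image
docker run -d --name kopano-mail --network kopano-net -p 25:25 kopano-mail-image
# postfix in kopano-mail would then deliver to the dagent in kopano-core via LMTP,
# e.g. in main.cf: virtual_transport = lmtp:kopano-core:2003 (dagent's default LMTP port)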
Hope that helps, was not TL;DL (too long, didn’t listen), and I’m open for discussion - TosoBoso
-
@TosoBoso: Would it be imaginable for you to just re-publish an updated version of Z-Push so those of us already using a fully working Mail/CalDAV/CardDAV server configuration on the Synology could add the push functionality for their mobile devices to it?
I tried to install and use Kopano4S and was quite disappointed by how basic its functionality was compared to Synology MailPlus. It absolutely seems to reinvent the wheel without necessity.
I also tried to install the above mentioned Docker image for Z-Push but failed again because of the lack of documentation adapted to the Synology Docker environment (and I do not at all expect the authors to buy such hardware just to be able to explain how to install and use it).
Last but not least I also tried to install Z-Push on a Raspberry Pi but failed once again, as there are no Z-Push packages available for it at all. All this is such a shame, as everything seems to be doable in one way or another but isn’t done, mainly for communication reasons. There are people here in the forum who have successfully installed Z-Push on a Synology but prefer not to talk about it. Then there are packages like K4S with an “overkill” approach, outdated packages from the Zarafa period, and official packages too far away from the Synology GUI approach.
Sigh …
-
@nexttoyou said in Request for feedback: easily run Kopano through Docker:
I also tried to install the above mentioned Docker image for Z-Push but failed again because of lack of documentation adapted to the Synology Docker environment
The above images are indeed only designed to work in conjunction with Kopano. If you want to add compatibility with other z-push backends I would be open to accepting a pull request.
@nexttoyou said in Request for feedback: easily run Kopano through Docker:
Last but not least I also tried to install Z-Push on a Raspberry Pi but failed once again as there are no Z-Push packages available for it at all
The packages from the z-push repository are built as “noarch”, which should make it possible to install them on e.g. a Debian-based Raspberry Pi.
-
@fbartels said in Request for feedback: easily run Kopano through Docker:
The above images are indeed only designed to work in conjunction with Kopano.
Ah, thanks, I wasn’t aware that those images were designed to be used primarily with Kopano, but it is of course an understandable approach.
If you want to add compatibility with other z-push backends I would be open to accept a pull request.
I would if I could, but Docker and I are not too familiar with each other.
The packages from the z-push repository are built with “noarch” which should make it possible to install them on e.g. a Debian based raspberry pi.
I thought so too and it was also my hope. But it seems to be more complicated than imagined:
sudo apt-get install z-push-config-apache z-push-backend-caldav z-push-backend-carddav z-push-backend-imap z-push-backend-combined z-push-ipc-sharedmemory
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package z-push-config-apache
E: Unable to locate package z-push-backend-caldav
E: Unable to locate package z-push-backend-carddav
E: Unable to locate package z-push-backend-imap
E: Unable to locate package z-push-backend-combined
E: Unable to locate package z-push-ipc-sharedmemory
I’ve not entirely given up on my Z-Push dream but as the famous Catweazle already said many times I currently have to conclude too that “Nothing works!”
-
@nexttoyou said in Request for feedback: easily run Kopano through Docker:
But it seems to be more complicated than imagined:
The easiest explanation for the output shown is that you simply did not add the z-push repository to your local system. But this strays further away from the original topic of this thread; I would recommend opening a dedicated thread for it.
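For completeness, adding the repository roughly looks like the following; the Debian 9 path below is only an example, so please double-check the exact repository path for your distribution and z-push version in the z-push documentation before copying:

# example for a Debian 9 based system - verify the exact path in the z-push docs
echo 'deb http://repo.z-hub.io/z-push:/final/Debian_9.0/ /' > /etc/apt/sources.list.d/z-push.list
wget -qO - http://repo.z-hub.io/z-push:/final/Debian_9.0/Release.key | apt-key add -
apt-get update
apt-get install z-push-config-apache z-push-backend-caldav z-push-backend-carddav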
-
A small update on this: we already managed to find some outside contributors who rewrote the setup script to make it more flexible for already existing LDAP trees and made it possible to pick which webapp plugins should be installed inside the container. Currently we are discussing adding a container for user self service (password reset).
-
@Edwardwsr can you share a link to your project? There is btw already an optional password reset web ui in the original project.
-
Hello,
Docker is not an option for us; we already support at least 4 virtualization platforms: XEN with and without DRBD, Hyper-V, VMware and KVM. It doesn’t make sense for us to support Docker as well, it would only mean additional effort.
Walter
-
@WalterHof said in Request for feedback: easily run Kopano through Docker:
we support […] virtualization[s]
I don’t think it has to be “virtualisation” OR “containerisation”; the two can work hand in hand. But I agree that getting into containerisation takes some effort and may not be worthwhile in all environments (especially when throwing Kubernetes into the mix).