As I have been quoted, I’d like to clarify a few topics on Kopano4S: source code availability, portability, contribution, architecture etc.
Source Code Availability:
My reply to requests for the source code of K4S stands, and putting it on GitHub is work in progress:
#1 it is open source, as outlined in the license terms shown when installing (https://wiki.z-hub.io/display/K4S/Install-Update)
#2 the source code is self-contained in each SPK, which is a simple tar that can be opened e.g. with 7-Zip, as outlined in the K4S Wiki
#3 I will put it on GitHub, but only after some refactoring of the script locations (k4s 0.99 WIP) to make it easier to read and to segregate components (see below)
In the meantime anyone can check the core Docker components in the scripts directory of the SPK for inspiration: Dockerfile, init.sh, kopano-postfix.sh, kopano-fetchmail.sh.
Portability and Differences:
Felix pointed out some differences between the K4S Docker container and a standard container; I will outline more below, though they should be easy to accommodate.
K4S runs monolithically with all services in one container, utilizing Docker’s --init / tini for process control (https://github.com/krallin/tini); this could be decomposed (see architecture section).
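As a sketch, the tini-based process control can be expressed in a Dockerfile like this (the version pin and the /init.sh entrypoint are illustrative assumptions, not the actual K4S build):

```
# Sketch: tini as PID 1 reaps zombies and forwards signals when several
# services run in one container (version and entrypoint are illustrative).
ENV TINI_VERSION=v0.19.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--", "/init.sh"]
```

Alternatively, `docker run --init …` lets Docker inject its bundled tini without any change to the image.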
MariaDB / MySQL is accessed via socket instead of ports. All you need is to mount /run/mysqld and use mysqld10.sock (you can softlink it to mysqld.sock).
K4S uses Synology’s native MariaDB10 for this, but equally another MariaDB / MySQL container can be used, with kopano server.cfg adjusted to use ports instead of sockets.
K4S uses bind mounts as opposed to volumes, and all data lives on the Docker host in standard file systems (on Synology by default as so-called Shares).
All you need to do is have the mounts available on the Docker host, then do the trick as advised in the repository:
docker run -d -v /run/mysqld:/run/mysqld:ro -v $K_ETC:/etc/kopano -v $K_LOG:/var/log/kopano -v $K_ATC:/var/lib/kopano/attachments $PORTS_EXPOSED --name kopano4s --hostname kopano4s tosoboso/kopano4s:$VER_IN_KOP
No parameters are passed; the K4S configuration is all kept and maintained on the Docker host’s file systems via the install and administration GUI, which is the added value for end users.
All you need to do is keep the configuration files in /etc/kopano on your Docker host maintained via a Unix editor, and set the database credentials in server.cfg rather than passing parameters.
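For illustration, the socket-based database settings in server.cfg could look like this (the values are assumptions; use your own credentials):

```
# /etc/kopano/server.cfg excerpt (illustrative): connect to MariaDB
# via the mounted socket instead of a TCP port
mysql_socket   = /run/mysqld/mysqld10.sock
mysql_user     = kopano
mysql_password = secret
mysql_database = kopano
```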
Database used as user plugin as opposed to LDAP: this is just the default and common in the Synology community, but LDAP is fully supported.
Synology has an LDAP service and some power users have it running. Documentation and a how-to on the Wiki, plus GUI support for LDAP in K4S to simplify it for end users, are planned.
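Switching the user backend to LDAP is, as a sketch, mainly a matter of two server.cfg lines (the config path is the usual default; verify it against your install):

```
# server.cfg excerpt (illustrative): use the LDAP user plugin
# instead of the default db plugin
user_plugin        = ldap
user_plugin_config = /etc/kopano/ldap.cfg
```

The connection details (server URI, search bases, attribute mapping) then live in ldap.cfg.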
Contribution & Complexity:
I’m happy for others to contribute, and so do I by providing questions and insight into the nut-cracker issues (e.g. https://forum.kopano.io/topic/2172/solving-real-ip-issue-running-kopano-web-behind-reverse-proxy-in-docker).
I’m also happy with Felix’s support, in particular in getting the initial Docker container running in 2016, and with his feedback on tini; plus I’m fully open to his imhos :-)
One issue needs more review by Kopano regarding its impact on Docker imho: switching to systemd and dropping support for init scripts and cfg files in the install scripts.
As Docker does not go particularly well with systemd, using init scripts to control services in Docker is very popular; I’m using them, and now I have to maintain the init scripts myself (see https://forum.kopano.io/topic/1448/no-config-files-created-during-install/19).
Back to source code item #3: I have time constraints and suffer from the “can’t sharpen my blade, have to cut wood” syndrome, as I wanted to put a working, refactored version on GitHub.
Please understand that K4S is more than just Docker, with its Synology integration and GUI components, and maintaining the Synology nitty-gritty to keep the platform alive is extra time-consuming.
K4S has three main components: #1 Synology-specific shell install scripts and JSON GUI configurations, plus the Perl/JS UI for configuration and commands; #2 the Docker build scripts; #3 the scripts inside the container, plus wrapper scripts so that from Synology it behaves as if you were inside the container. In addition, both Kopano Community and Supported editions are maintained, which brings constraints on auto-builds and requires care not to expose sensitive serial numbers. In short, K4S grew with a poor man’s toolchain and a bit monolithic, which needs refactoring (see architecture section).
If I just dumped all of the current content onto GitHub without explaining all the parts and their purpose, I see limited gain. Before I put it on GitHub I will therefore split the Synology scripts from the Docker scripts (WIP).
Component Architecture vs. Microservices:
I appreciate Felix’s imho that “all in one container is not what dockerisation is about”, but there are legacy reasons for it (e.g. plugins) and I’m simply not so religious about it. Still, refactoring is ongoing.
Imho a component architecture is key, and that is maintained in K4S: all data (database + files) lives outside and computation is segregated; microservices come as the next goal.
Arguably any Kopano Docker incl. K4S should be split into components: Files, Database, Mail, Core, Web, Meet(ings).
For Files, Docker volumes or even a volumes container is one solution, while mounting Unix volumes is the other; the latter is more straightforward to me.
Unix volumes can live on SSD with ext4 and are easier to control and replicate via rsync, while for certain features Docker and its volumes live on btrfs somewhere in hidden state. At least this is the Synology world.
For Database, one can use another Docker container or run it natively on the Docker host; again I prefer the latter, and the configuration is easy to change.
For Mail incl. Amavis the split should be done, but the legacy SPAM plugin and fetchmail’s dependency on dagent had been blocking this.
Postfix uses LMTP to talk to Kopano, so only fetchmail using dagent needs to be changed to use SMTP. With kopano-spamd the old spam plugin, which used the command line, is no longer needed.
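The Postfix-to-Kopano hand-off mentioned above typically comes down to one Postfix setting (2003 is kopano-dagent’s default LMTP port; adjust host and port if your dagent runs elsewhere):

```
# main.cf excerpt (illustrative): deliver mail to kopano-dagent over LMTP
virtual_transport = lmtp:127.0.0.1:2003
```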
Further splitting all mail components (postfix, amavisd, spamassassin etc.) into different containers is something I would not consider, as this gets religious and makes certain postfix integrations difficult.
For Core, all Kopano core services should live in one container; it does not make sense to split further, as MAPI etc. would have to be replicated over all containers.
For Web, they should also live in their own container; the main barrier was the legacy passwd plugin, which afaik has now changed to MAPI support as opposed to the command-line kopano-passwd.
See https://forum.kopano.io/topic/660/installing-password-plugin-resolved/15 and https://github.com/dducret/kopano-webapp-passwd/blob/master/builds
For Web(meeting), respectively the new kopano-meet packages, they should live in their own container, same as Felix’s 2016 Zarafa Docker pilot did.
However, I plan to add them to the monolithic single container first to get it running, and will decompose each component as described above over time via beta versions.
Hope that helps, that it was not TL;DR (too long; didn’t read), and I’m open for discussion - TosoBoso
Just browsing through the Linode page, it is not 100% clear to me what the mentioned service includes. If you manage to convince the Linode team to set up and maintain your Kopano installation for you (which is imho the definition of a “managed service”), it should work.
For this we use Univention Corporate Server, one of the best LDAP/GUI implementations we know.
For Zimbra: I have also used this for some time… The problem was NO, and I mean really NO, support within the official forums…
Hi @genz ,
you have (presumably) currently https://download.kopano.io/supported/core:/final/RHEL_7/ configured as your repository url, the url for the pre-final packages would then be https://download.kopano.io/supported/core:/pre-final/RHEL_7/.
This is also explained in https://documentation.kopano.io/kopanocore_administrator_manual/installing.html#installing-kopano-core-through-the-kopano-package-repositories
That is not a realistic business scenario. Which company does not need mails older than 1 month?
Anyway, I got the Zarafa connector working, apparently Outlook had a problem that was fixed by repairing the Office installation. For the time being we will continue to work like that. I finished the Z-push installation, so we are ready to switch to ActiveSync if needed.
Instructions for the upgrade of the Kopano apps from UCS 4.2 to 4.3 can be found at https://wiki.z-hub.io/display/K4U/Updating+Kopano+packages+directly+from+the+Kopano+download+server#UpdatingKopanopackagesdirectlyfromtheKopanodownloadserver-WorkaroundupdatingfromUCS4.2to4.3withrepositories
This is a workaround and, as you experienced, it may not always work. Anyway, I would advise waiting with the upgrade until a newer 8.6 has been released. I have added this hint to the description.
I’d rather recommend getting in contact with Kopano support over this. From your description it sounds like the unique user id of that store changes when this “rewriting of config” happens.
The store creation error I would explain by the server recognising that it has previously created a store for the same user details (so rather a symptom than the cause).
@burgessja said in Possible to create multiple Global Address Books, or push contact lists to certain users?:
The only thing missing
It’s there once you enable it in the config.php of WebApp:
// Set true to hide public contact folders in address-book folder list,
// false will show public contact folders in address-book folder list.
Ok, for the certificates you have multiple options here; it depends a bit on how you want to use it.
You did not mention your OS, so I’ll show the Debian steps.
If you don’t have official certificates, I suggest you use Let’s Encrypt certificates.
If you have other certificates, just look at what I do here ;-) and repeat it with your certificates.
I’ll show the Debian steps for Let’s Encrypt:
apt-get install ca-certificates letsencrypt
letsencrypt certonly --standalone -d mail.example.com
You can also add other domain names, for example; again, adjust to your needs:
letsencrypt certonly --standalone -d example.com -d www.example.com -d mail.example.com
The command starts an interactive configuration script which will ask a couple of questions to setup the certificate correctly.
Select Yes to use the default vhost file and specify the settings manually.
Enter the email server’s domain name like mail.example.com.
On the first installation on any specific host, you’ll need to enter a contact email. ( email@example.com )
Next, read the Let’s Encrypt Terms of Service and select Agree to continue.
Then select whether you wish to use both HTTP and HTTPS, or to require all traffic to use encryption, by highlighting either the Easy or the Secure option and selecting OK.
If it’s correct, you now have a webserver with https (mail.example.com).
Tip: look at /etc/letsencrypt/options-ssl-apache.conf
You can automatically include these in your Apache SSL vhost ( if they’re not already in there ) with:
IncludeOptional or Include /etc/letsencrypt/options-ssl-apache.conf
IncludeOptional does not make Apache complain if the file is missing; Apache then starts, but without these settings, so use with care.
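As a sketch, a minimal SSL vhost including the Let’s Encrypt options file might look like this (paths follow the letsencrypt layout above; adjust to your own setup):

```
<VirtualHost *:443>
    ServerName mail.example.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/mail.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mail.example.com/privkey.pem
    # IncludeOptional tolerates a missing file; plain Include would abort startup
    IncludeOptional /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
```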
Configure your postfix to use these certs.
sudo postconf -e 'smtpd_tls_cert_file = /etc/letsencrypt/live/mail.example.com/fullchain.pem'
sudo postconf -e 'smtpd_tls_key_file = /etc/letsencrypt/live/mail.example.com/privkey.pem'
Configure Postfix to use TLS encryption:
sudo postconf -e 'smtp_tls_security_level = may'
sudo postconf -e 'smtpd_tls_security_level = may'
sudo postconf -e 'smtp_tls_note_starttls_offer = yes'
sudo postconf -e 'smtpd_tls_loglevel = 1'
sudo postconf -e 'smtpd_tls_received_header = yes'
And now we can restart these services:
systemctl restart postfix apache2
Check your logs if it’s all correct.
Now for your Kopano Outlook client. Do note, this might be a bit different from the official doc, but it works great.
ln -s /etc/letsencrypt/live/mail.example.com/privkey.pem /etc/kopano/ssl/privkey.pem
ln -s /etc/letsencrypt/live/mail.example.com/cert.pem /etc/kopano/ssl/server.pem
I use symlinks here so you can use the default settings from server.cfg.
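Since these links have to survive certificate renewals, the linking can be wrapped in a small script and used e.g. as a Let’s Encrypt deploy hook. This is a sketch under assumptions: the demo defaults below keep it runnable anywhere, while in production you would set LIVE to /etc/letsencrypt/live/mail.example.com and KOPANO_SSL to /etc/kopano/ssl.

```shell
#!/bin/sh
# Sketch of a cert deploy hook for Kopano (paths are assumptions).
# Demo defaults use temp dirs so the script can be tried without root.
LIVE="${LIVE:-$(mktemp -d)}"
KOPANO_SSL="${KOPANO_SSL:-$(mktemp -d)/kopano-ssl}"

# Demo only: ensure source files exist (letsencrypt would have created these)
touch "$LIVE/privkey.pem" "$LIVE/cert.pem"

mkdir -p "$KOPANO_SSL"
# -sf replaces any stale link so renewed certs are picked up
ln -sf "$LIVE/privkey.pem" "$KOPANO_SSL/privkey.pem"
ln -sf "$LIVE/cert.pem"    "$KOPANO_SSL/server.pem"

ls -l "$KOPANO_SSL"
# In production, reload the consumers afterwards, e.g.:
# systemctl reload apache2 postfix && systemctl restart kopano-server
```

Hooked into renewal (e.g. certbot’s --deploy-hook), this keeps the Kopano links valid without manual steps.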
Now for the setting server_ssl_ca_ ( file or path )
For _file, the default can be satisfied by running:
ln -s /etc/ssl/certs/ca-certificates.crt /etc/kopano/ssl/cacert.pem
Or use for _path
server_ssl_ca_path = /etc/ssl/certs
Both should work fine.
Tip: if you have your own CA root, have a look at this as well; the same can be done for the kopano-gateway (IMAP/POP3):
ln -s /etc/letsencrypt/live/mail.example.com/privkey.pem /etc/kopano/gateway/privkey.pem
ln -s /etc/letsencrypt/live/mail.example.com/cert.pem /etc/kopano/gateway/cert.pem
Since this is a mail setup and you want to protect your mail, I’ve changed the Kopano server.cfg and gateway.cfg defaults to:
server_ssl_protocols = !SSLv3 !TLSv1 TLSv1.1
server_ssl_ciphers = EECDH+ECDSA+AESGCM:EECDH+aRSA+AESGCM:EECDH+ECDSA+SHA384:EECDH+ECDSA+SHA256:EECDH+aRSA+SHA384:EECDH+aRSA+SHA256:EECDH+aRSA+RC4:EECDH:EDH+aRSA:!RC4:!aNULL:!eNULL:!LOW:!3DES:!MD5:!EXP:!PSK:!SRP:!DSS
server_ssl_prefer_server_ciphers = yes
Now, I suggest you start with these requirements for the above setup:
Postfix: set up an A, PTR and MX record in DNS for mail.example.com.
Apache: configure a vhost with the ServerName mail.example.com ( use this one for your WebApp and Z-Push as well ).