kopano-migration-imap : Unable to get reminder property: not found
-
Hello,
we are currently trying to migrate a Zarafa 7.2.6 installation to Kopano 8.7.9 using kopano-migration-imap. The problem is that the gateway.log on the Zarafa server contains a lot of errors, and we are not sure whether any mails are being copied at all:
Tue May 5 12:44:18 2020: [debug ] [22145] > * 939 FETCH (RFC822.SIZE 13240 UID 2756600)
Tue May 5 12:44:18 2020: [error ] [22145] Unable to get reminder property: not found (0x8004010f)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 940 FETCH (RFC822.SIZE 19992 UID 2756601)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 941 FETCH (RFC822.SIZE 1928 UID 2756602)
Tue May 5 12:44:18 2020: [error ] [22145] Unable to get reminder property: not found (0x8004010f)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 942 FETCH (RFC822.SIZE 33018 UID 2756603)
Tue May 5 12:44:18 2020: [error ] [22145] Unable to get reminder property: not found (0x8004010f)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 943 FETCH (RFC822.SIZE 6621 UID 2756604)
Tue May 5 12:44:18 2020: [error ] [22145] Unable to get reminder property: not found (0x8004010f)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 944 FETCH (RFC822.SIZE 7040 UID 2756605)
Tue May 5 12:44:18 2020: [error ] [22145] Unable to get reminder property: not found (0x8004010f)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 945 FETCH (RFC822.SIZE 4714 UID 2756606)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 946 FETCH (RFC822.SIZE 4109 UID 2756627)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 947 FETCH (RFC822.SIZE 15425 UID 2756628)
Tue May 5 12:44:18 2020: [error ] [22145] Unable to get reminder property: not found (0x8004010f)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 948 FETCH (RFC822.SIZE 4391 UID 2756629)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 949 FETCH (RFC822.SIZE 11873 UID 2756630)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 950 FETCH (RFC822.SIZE 2490276 UID 2756631)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 951 FETCH (RFC822.SIZE 82791 UID 2756632)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 952 FETCH (RFC822.SIZE 7657 UID 2756633)
Tue May 5 12:44:18 2020: [debug ] [22145] > * 953 FETCH (RFC822.SIZE 9008 UID 2756634)
Tue May 5 12:44:23 2020: [debug ] [22145] > * 954 FETCH (RFC822.SIZE 18862083 UID 2756635)
Tue May 5 12:44:35 2020: [debug ] [22145] > * 955 FETCH (RFC822.SIZE 24544608 UID 2756636)
By the way, would it be possible to dump the Zarafa database, copy it to the Kopano server and import it, followed by syncing the attachments? I’m a little bit concerned that the gap between the two versions is too wide.
Thanks for any feedback and help,
Stefan
-
@smguenther said in kopano-migration-imap : Unable to get reminder property: not found:
would it be possible to dump the Zarafa database, copy it to the Kopano server and import it, followed by syncing the attachments?
Yes, that should be no problem. The official statement on this can be found at https://kb.kopano.io/display/WIKI/Migrating+from+ZCP+7.1+or+earlier+to+Kopano+Core+8.5+or+later
And instead of trying to sync mailboxes via IMAP, the fallback I would use is kopano-backup. For this you can simply point kopano-backup at the http socket of the old server.
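A minimal sketch of what that could look like, assuming kopano-backup reads its server socket from a config file; the path, host name and port below are assumptions, so check the kopano-backup man page for your version:

```
# assumed config fragment, e.g. /etc/kopano/backup.cfg
# point the tool at the old Zarafa server's http socket instead of the local one
server_socket = http://old-zarafa-host:236/
```

With that in place, backing up a mailbox would be something along the lines of `kopano-backup -u <username>`, and restoring it on the new server with `kopano-backup --restore` after switching the socket back to the local Kopano instance.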
-
Hi Felix,
thank you for the fast answer.
Our major problem is that we have nearly 300 GB of attachments, a 40 GB database, 40 active users currently working on the old server, and an old piece of hardware with only 32 GB RAM and a Xeon E3.
This hardware was good enough approx. eight years ago, but now it’s the bottleneck for this migration.
Do you agree that splitting the migration into database and attachments would be the least troublesome way?
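For planning the downtime window, a quick back-of-envelope calculation helps. The throughput figure below is an assumption (roughly what a single gigabit link or an aging disk might sustain), not a measurement:

```shell
#!/bin/sh
# Rough transfer-time estimate for the migration payload described above.
DB_GB=40          # database size
ATT_GB=300        # attachment store size
RATE_MBS=100      # assumed effective throughput in MB/s (gigabit LAN, roughly)

TOTAL_MB=$(( (DB_GB + ATT_GB) * 1024 ))
MINUTES=$(( TOTAL_MB / RATE_MBS / 60 ))
echo "~${MINUTES} minutes to move $(( DB_GB + ATT_GB )) GB at ${RATE_MBS} MB/s"
```

The attachments can be pre-synced while the old server is still live, so only the final delta plus the database dump and import need to fit into the actual offline window.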
Regards,
Stefan
-
@smguenther said in kopano-migration-imap : Unable to get reminder property: not found:
Do you agree that splitting the migration into database and attachments would be the least troublesome way?
Yes, probably. You only have to account for the time the database export and import takes. If you only have a small window in which your server can be offline, you could speed things up with MySQL master/slave replication.
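Roughly, the split approach could look like the sketch below, which only prints the commands one would run; the host names, database name and attachment path are assumptions for illustration (check attachment_path in your server.cfg for the real location):

```shell
#!/bin/sh
# Prints (does not execute) a hedged sketch of the dump-and-copy migration steps.
OLD=old-zarafa-host            # assumption
NEW=new-kopano-host            # assumption
DB=zarafa                      # assumption: check your server.cfg
ATT=/var/lib/zarafa            # assumption: check attachment_path in server.cfg

cat <<EOF
# 1. Pre-sync attachments while the old server is still live:
rsync -a $OLD:$ATT/ $NEW:/var/lib/kopano/attachments/
# 2. In the downtime window, take a consistent dump of the old database:
mysqldump --single-transaction $DB > zarafa-dump.sql
# 3. Import on the new server, then run a final attachment delta sync:
mysql kopano < zarafa-dump.sql
rsync -a $OLD:$ATT/ $NEW:/var/lib/kopano/attachments/
EOF
```

With master/slave replication, step 2 effectively disappears: a slave on the new host stays caught up while the old server runs, and the downtime shrinks to stopping the old server and promoting the slave.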