Compare commits


527 Commits

Author SHA1 Message Date
FreddleSpl0it
8d211ea767 Set HOST env vars 2025-06-02 15:54:16 +02:00
FreddleSpl0it
4cc463a728 Set HOST env vars 2025-06-02 15:53:39 +02:00
FreddleSpl0it
909eb8a63a [Postfix] Add extra.cf via jinja2 include 2025-06-02 15:52:59 +02:00
FreddleSpl0it
4c4a440cdf Use consistent naming for host-related env vars 2025-05-31 22:31:08 +02:00
FreddleSpl0it
743bfcec60 Check if mysql has been initialized before trying to upgrade 2025-05-31 21:50:49 +02:00
FreddleSpl0it
744aa5d137 Add jinja2 filters urlencode and escape_quotes 2025-05-31 21:48:36 +02:00
FreddleSpl0it
e8d155d7e0 Set SOGo related hosts from env var 2025-05-30 20:46:49 +02:00
FreddleSpl0it
6382e3128d Remove backslash escape from jinja2 templates 2025-05-30 20:01:03 +02:00
FreddleSpl0it
bc11aed753 Change config overwrites.json path 2025-05-30 19:59:54 +02:00
FreddleSpl0it
a6e13daa8c [MySQL] Remove container startup delay 2025-05-23 09:50:28 +02:00
FreddleSpl0it
eb7d2628ac Optimize python bootstrapper 2025-05-23 09:49:08 +02:00
FreddleSpl0it
5f93ff04a9 Add psutil module for bootstrapping 2025-05-23 09:47:25 +02:00
FreddleSpl0it
f35def48cb restructure configuration directories 2025-05-22 15:37:15 +02:00
FreddleSpl0it
dba9675a9b Use config.json to render multiple Jinja2 templates 2025-05-22 14:07:39 +02:00
FreddleSpl0it
13b4f86d29 Add support for custom_templates folder to override Jinja2 templates 2025-05-22 13:42:17 +02:00
FreddleSpl0it
3eb17a5f78 [Dovecot] Suppress doveconf -P output on startup 2025-05-22 13:35:18 +02:00
FreddleSpl0it
c38a4c203e Set Jinja2 template folder to absolute path 2025-05-22 13:24:35 +02:00
FreddleSpl0it
f329549c2e [PHP-FPM] use python bootstrapper to start PHP-FPM container 2025-05-22 13:06:35 +02:00
FreddleSpl0it
767d746419 [MySQL] Check if MySQL supports timezone conversion on startup 2025-05-22 12:59:37 +02:00
FreddleSpl0it
9174a05af3 [MySQL] Optimize mysql upgrade logic 2025-05-22 08:28:03 +02:00
FreddleSpl0it
faf8fa8c2c [Mysql] use python bootstrapper to start MYSQL container 2025-05-22 07:54:29 +02:00
FreddleSpl0it
55d90afee4 [Clamd] add hooks volume 2025-05-21 14:04:04 +02:00
FreddleSpl0it
669f75182d [Clamd] use python bootstrapper to start CLAMD container 2025-05-21 14:02:49 +02:00
FreddleSpl0it
5a39ae45cb [Postfix] gitignore main.cf 2025-05-21 10:38:14 +02:00
FreddleSpl0it
8baa3c9fb5 [Dovecot] gitignore dovecot.conf 2025-05-21 10:32:04 +02:00
FreddleSpl0it
b8888521f1 [Rspamd] use python bootstrapper to start RSPAMD container 2025-05-21 09:40:38 +02:00
FreddleSpl0it
2efea9c832 [Dovecot] use python bootstrapper to start DOVECOT container 2025-05-21 07:56:28 +02:00
FreddleSpl0it
5a097ed5f7 [Postfix] use python bootstrapper to start POSTFIX container 2025-05-19 13:13:40 +02:00
FreddleSpl0it
cde2ba4851 [Nginx] use python bootstrapper to start NGINX container 2025-05-19 09:45:00 +02:00
FreddleSpl0it
1d482ed425 [SOGo] use python bootstrapper to start SOGo container 2025-05-16 13:40:26 +02:00
FreddleSpl0it
d3185c3c68 Add python bootstrapper for containers 2025-05-16 13:37:49 +02:00
FreddleSpl0it
03d979c089 [Web] Fix get custom_login 2025-05-13 10:14:58 +02:00
FreddleSpl0it
ffa2933873 increase Olefy, Rspamd and Watchdog docker images 2025-05-13 09:49:53 +02:00
FreddleSpl0it
7f47a3f00e Merge pull request #6530 from mailcow/feat/auto-create-user-option
[Web] Add identity_provider option to disable auto-creation of users …
2025-05-12 13:24:34 +02:00
FreddleSpl0it
1bcab9a9a5 Merge pull request #6518 from seclution/patch-2
fix: typo in default_template
2025-05-12 13:08:07 +02:00
FreddleSpl0it
1b2f424edc [Web] Add identity_provider option to disable auto-creation of users on login 2025-05-12 12:20:23 +02:00
krzsztf1
486b297409 Fix typo in rspamd rule in composites.conf (#6515)
Co-authored-by: Krzysztof Nowak <k.nowak@intalio.pl>
2025-05-09 18:33:19 +02:00
FreddleSpl0it
75d7f06b25 Merge pull request #6521 from mailcow/feat/login-quicklinks
[Web] Add quick links to other login pages and mailcow login toggle
2025-05-09 15:24:36 +02:00
FreddleSpl0it
ea0944d743 [Web] Add quick links to other login pages and option to disable mailcow login form 2025-05-09 15:13:44 +02:00
Kai Biebel
cb6ffe65c8 fix: typo in default_template 2025-05-09 11:24:49 +02:00
FreddleSpl0it
580dabd276 Merge pull request #6509 from mailcow/update/postscreen_access.cidr
[Postfix] update postscreen_access.cidr
2025-05-09 10:02:50 +02:00
FreddleSpl0it
846862aa80 Merge pull request #6506 from mrclschstr/staging
[Fix] Moving mails by functions.quarantine.inc.php to inbox failed
2025-05-09 10:00:56 +02:00
FreddleSpl0it
e7a1f24c78 Merge pull request #6483 from PseudoResonance/oauth2-redirect-extra-domain
Allow additional domains in OAuth2 redirect URLs
2025-05-09 09:48:08 +02:00
FreddleSpl0it
8ff0e029f0 Merge pull request #6398 from marvinruder/feat/skip-olefy
feat(olefy): Allow disabling Olefy
2025-05-08 15:34:51 +02:00
FreddleSpl0it
0680b21938 Merge pull request #6342 from mailcow/fix/6339
rspamd: remove .info from fishy tlds (default)
2025-05-08 14:36:39 +02:00
FreddleSpl0it
0c8e7bfeca Merge pull request #6376 from NickBouwhuis/staging
feat/replace bgp.he.net with bgp.tools
2025-05-08 14:31:57 +02:00
FreddleSpl0it
badcd27b93 Merge pull request #6485 from Habetdin/fix/update-legacy-links
Update legacy admin dashboard links
2025-05-08 14:16:44 +02:00
FreddleSpl0it
7d3ef3d67f Merge pull request #6487 from mailcow/fix/6469
[Web] Fix force password update at next login
2025-05-08 14:04:56 +02:00
FreddleSpl0it
5b89e253a6 Merge remote-tracking branch 'origin/staging' into fix/6469 2025-05-08 13:50:50 +02:00
FreddleSpl0it
a90f4c2a2e Merge pull request #6495 from Kuehn-Andreas/fix/redirection-after-login-with-skip-SOGo
Check if skip_sogo is not set before redirecting to SOGo
2025-05-08 11:53:47 +02:00
FreddleSpl0it
db7b917944 Merge pull request #6488 from mailcow/fix/6470
[Dovecot] Fix EAS login issue with app passwords and improve auth cache handling in Dovecot
2025-05-08 11:49:55 +02:00
FreddleSpl0it
401b744808 [Dovecot] return PASSDB_RESULT_PASSWORD_MISMATCH instead of PASSDB_RESULT_INTERNAL_FAILURE 2025-05-08 11:38:29 +02:00
milkmaker
0c83255573 update postscreen_access.cidr 2025-05-01 00:21:11 +00:00
Marcel Schuster
d55f0fc366 Update functions.quarantine.inc.php
Fix regex for quarantine release functions
2025-04-29 22:14:43 +02:00
milkmaker
06b3ba91a0 [Web] Updated lang.zh-cn.json (#6502)
Co-authored-by: Easton Man <me@eastonman.com>
2025-04-27 18:47:08 +02:00
DerLinkman
aa4125fe62 sogo: enabled SOGoEnableMailCleaning per default 2025-04-23 15:13:52 +02:00
Andreas Kühn
d8c6ed9191 Check if skip_sogo is not set before redirecting to SOGo 2025-04-22 14:23:33 +02:00
FreddleSpl0it
cb47fa406f [Web] Fix force password update at next login 2025-04-15 13:48:13 +02:00
FreddleSpl0it
c4d0f35008 [Dovecot] Fix EAS login and improve logging 2025-04-15 10:49:56 +02:00
Habetdin
0d3e8dd738 Update legacy admin dashboard links 2025-04-13 16:09:47 +03:00
PseudoResonance
692355a08a Allow additional domains in OAuth2 redirect URLs 2025-04-12 06:24:37 -07:00
renovate[bot]
a370499aaa Update dependency Imagick/imagick to v3.8.0 (#6480) 2025-04-11 08:30:54 +02:00
milkmaker
84f67d6608 Translations update from Weblate (#6478)
* [Web] Updated lang.de-de.json

Co-authored-by: Jonas Leiner <jonas.leiner.123@gmail.com>

* [Web] Updated lang.fr-fr.json

Co-authored-by: Neuronnexion <support@nnx.com>

---------

Co-authored-by: Jonas Leiner <jonas.leiner.123@gmail.com>
Co-authored-by: Neuronnexion <support@nnx.com>
2025-04-10 17:16:44 +02:00
milkmaker
4ac839cf49 Translations update from Weblate (#6463)
* [Web] Updated lang.hu-hu.json

Co-authored-by: Anonymous <noreply@weblate.org>
Co-authored-by: milkmaker <milkmaker@mailcow.de>

* [Web] Updated lang.lv-lv.json

[Web] Updated lang.lv-lv.json

Co-authored-by: Anonymous <noreply@weblate.org>
Co-authored-by: Edgars Andersons <Edgars+Mailcow+Weblate@gaitenis.id.lv>
Co-authored-by: milkmaker <milkmaker@mailcow.de>

* [Web] Added lang.bg-bg.json

Co-authored-by: Peter <magic@kthx.at>
Co-authored-by: milkmaker <milkmaker@mailcow.de>

* [Web] Updated lang.tr-tr.json

Co-authored-by: Anonymous <noreply@weblate.org>
Co-authored-by: milkmaker <milkmaker@mailcow.de>

* [Web] Updated lang.uk-ua.json

Co-authored-by: Anonymous <noreply@weblate.org>
Co-authored-by: milkmaker <milkmaker@mailcow.de>

* [Web] Updated lang.zh-cn.json

Co-authored-by: Easton Man <me@eastonman.com>
Co-authored-by: milkmaker <milkmaker@mailcow.de>

---------

Co-authored-by: Anonymous <noreply@weblate.org>
Co-authored-by: Edgars Andersons <Edgars+Mailcow+Weblate@gaitenis.id.lv>
Co-authored-by: Peter <magic@kthx.at>
Co-authored-by: Easton Man <me@eastonman.com>
2025-04-10 17:11:30 +02:00
FreddleSpl0it
b96a5b1efd [Web] Fix AJAX urls to absolute path 2025-04-09 08:07:46 +02:00
FreddleSpl0it
766c5e8580 [Dovecot] Ignore app passwords protocol access on SOGo request 2025-04-09 08:02:30 +02:00
FreddleSpl0it
3ddad9dee8 Merge pull request #6460 from mailcow/ui/improve-ldap-ssl-labels
[Web] Improve clarity of LDAP SSL/TLS settings
2025-04-07 08:58:19 +02:00
FreddleSpl0it
2c10c39bc4 [Web] Update 2FA Info tooltip 2025-04-07 08:06:43 +02:00
FreddleSpl0it
0eb8f38792 [Web] Update LDAP SSL/TLS tooltips 2025-04-07 07:59:43 +02:00
FreddleSpl0it
402bf53a5c [Web] Improve clarity of LDAP SSL/TLS settings 2025-04-04 13:18:42 +02:00
FreddleSpl0it
428a59dd3f Merge branch 'fix/dovecot-lua-timeout' into staging 2025-04-03 14:18:33 +02:00
FreddleSpl0it
153890b283 Merge pull request #6439 from mailcow/fix/6430
[SOGo] Use JS for mailcow logout
2025-04-03 12:57:24 +02:00
FreddleSpl0it
a741c2ba4a Merge pull request #6426 from sardaukar/fix/typo-on-backup-and-restore-script
Fix tiny typo
2025-04-03 12:41:46 +02:00
FreddleSpl0it
741e5c719f Merge pull request #6438 from mailcow/fix/6405
[Netfilter] Downgrade to 1.61
2025-04-03 12:39:48 +02:00
FreddleSpl0it
34e4f93db9 Merge pull request #6451 from mailcow/fix/6437
[Web] Fix transport routing test
2025-04-03 12:38:43 +02:00
FreddleSpl0it
3758135dc3 Merge pull request #6450 from mailcow/fix/sasl_logs
Fix sasl_logs
2025-04-03 12:38:13 +02:00
FreddleSpl0it
6794e6ff43 [Dovecot] Add service for authentication cache_key 2025-04-03 12:31:43 +02:00
FreddleSpl0it
62f816e64a [Web] Check app password before user password on web login 2025-04-03 12:19:04 +02:00
FreddleSpl0it
e65478076b [Web] Prevent user sync for mismatched authsource 2025-04-03 11:58:35 +02:00
FreddleSpl0it
ceeabded73 [Web] Fix transport routing test 2025-04-03 10:29:47 +02:00
FreddleSpl0it
805634f9a9 Fix sasl_logs 2025-04-03 10:19:30 +02:00
DerLinkman
a92832d115 update README.md to include first 50 and 100$ monthly sponsors 2025-04-02 14:39:24 +02:00
milkmaker
4c5f485587 update postscreen_access.cidr (#6443) 2025-04-01 22:00:11 +02:00
FreddleSpl0it
db3a577ae3 [Web] Fix password reset 2025-04-01 16:39:15 +02:00
FreddleSpl0it
e452917de9 [SOGo] Show mailcow Settings Button to SOGoSuperUsers 2025-03-31 12:14:43 +02:00
FreddleSpl0it
f37961b7d0 [SOGo] Use JS for mailcow logout 2025-03-31 11:32:01 +02:00
FreddleSpl0it
0157cbddaf [Netfilter] Downgrade to 1.61 2025-03-31 10:36:20 +02:00
Bruno Antunes
65d872cc14 Fix tiny typo 2025-03-27 20:21:25 +00:00
FreddleSpl0it
4ad2422810 [Dovecot] Increase Timeout for HTTP Login Request 2025-03-27 16:52:15 +01:00
FreddleSpl0it
9b41b24522 Merge pull request #6402 from marvinruder/fix/long-dropdown-label
fix(ui): Swap translations for oversized dropdown
2025-03-27 08:07:51 +01:00
FreddleSpl0it
1c9d80f554 Merge pull request #6406 from mailcow/fix/6392
[Web] Fix SOGo access after Passwordless auth
2025-03-27 07:42:07 +01:00
FreddleSpl0it
7172cad257 Merge pull request #6407 from mailcow/fix/6396
[Web] Fix oauth2 redirect after user login
2025-03-27 07:41:08 +01:00
FreddleSpl0it
b550c6f88e Merge pull request #6408 from mailcow/fix/6373
[Swagger] Fix type property for /api/v1/add/bcc endpoint
2025-03-27 07:40:19 +01:00
FreddleSpl0it
5baf9eb375 Merge pull request #6409 from mailcow/fix/6372
[Web] Check if mailbox is active before renaming
2025-03-27 07:40:03 +01:00
FreddleSpl0it
4eb89f67ed Merge pull request #6410 from mailcow/fix/6395
[Web] Use absolute paths for flag SVGs
2025-03-27 07:39:34 +01:00
FreddleSpl0it
efdc798238 Merge pull request #6411 from mailcow/fix/6340
[Nginx] Move conf.d include before SNI vhosts
2025-03-27 07:39:06 +01:00
Marvin A. Ruder
8408b82e9c Add new option with description to existing configuration files during next update
* Remove Olefy settings file from rspamd configuration
* Have rspamd container generate Olefy settings file at startup if not disabled

Signed-off-by: Marvin A. Ruder <signed@mruder.dev>
2025-03-26 17:17:13 +01:00
FreddleSpl0it
65fb4c2aa8 [Nginx] Move conf.d include before SNI vhosts 2025-03-26 13:04:43 +01:00
FreddleSpl0it
a5ca3353da [Web] Use absolute paths for flag SVGs 2025-03-26 10:59:56 +01:00
FreddleSpl0it
95aa35e133 [Web] Check if mailbox is active before renaming 2025-03-26 10:10:22 +01:00
FreddleSpl0it
21b11ed999 [Swagger] Fix type property for /api/v1/add/bcc endpoint 2025-03-26 09:24:03 +01:00
FreddleSpl0it
348107dae8 [Web] Fix oauth2 redirect after user login 2025-03-26 09:13:05 +01:00
FreddleSpl0it
fcb1b29c89 [Web] Fix SOGo access after Passwordless auth 2025-03-26 08:32:34 +01:00
Marvin A. Ruder
05fc4f7aba fix(ui): Swap translations for oversized dropdown
* Fix other typos
* Fixes #6400

Signed-off-by: Marvin A. Ruder <signed@mruder.dev>
2025-03-25 21:24:22 +01:00
Marvin A. Ruder
cd3b1ab828 Allow disabling Olefy
* Fixes #6389

Signed-off-by: Marvin A. Ruder <signed@mruder.dev>
2025-03-25 20:24:33 +01:00
FreddleSpl0it
d584dd387e Merge pull request #6390 from mailcow/nightly
Nightly 2025-03 to staging
2025-03-25 07:36:20 +01:00
FreddleSpl0it
986b0afbfa ldap-sync: Fix template selection 2025-03-24 15:33:42 +01:00
FreddleSpl0it
59d139bc63 Merge branch 'nightly' into staging 2025-03-24 13:39:43 +01:00
FreddleSpl0it
ad5f07f077 update.sh: add 2025-03 as major version 2025-03-24 11:47:27 +01:00
FreddleSpl0it
cf2d3c1b4e Merge branch 'staging' into nightly 2025-03-24 11:38:59 +01:00
FreddleSpl0it
91c82e8a67 Merge pull request #6384 from mailcow/feat/update-components-alp-3.21
os: updated alpine containers to 3.21
2025-03-24 11:30:58 +01:00
FreddleSpl0it
ba7437a8f3 Merge pull request #6380 from mailcow/feat/legacy-switch
Add Legacy Updates
2025-03-20 14:25:13 +01:00
FreddleSpl0it
684256b66e update.sh: Fix legacy typo 2025-03-20 14:23:28 +01:00
FreddleSpl0it
70ba361583 update.sh: Fix text in legacy update prompt 2025-03-20 14:15:15 +01:00
FreddleSpl0it
94d4817ecb [Web] Add default_template parameter to edit/identity-provider documentation 2025-03-20 13:38:27 +01:00
FreddleSpl0it
72ced70e33 [Web] Fix mailbox authsource selection 2025-03-20 13:08:42 +01:00
FreddleSpl0it
887b7114a8 Add default template for IdP attribute mapping 2025-03-19 14:35:32 +01:00
Nick Bouwhuis
ceebc56e62 feat/replace bgp.he.net with bgp.tools 2025-03-18 10:08:08 +00:00
FreddleSpl0it
8910135f02 [Web] Add edit/identity-provider Api Documentation 2025-03-17 13:21:28 +01:00
DerLinkman
463e3ab78c rspamd: update rspamd to 3.11.1 (#6374) 2025-03-14 12:18:59 +01:00
FreddleSpl0it
2a15914324 Fix major update prompt 2025-03-14 11:22:57 +01:00
FreddleSpl0it
e21696ff27 Add error message when mailbox creation fails 2025-03-14 09:36:40 +01:00
FreddleSpl0it
5a7275843a Add error message when mailbox creation fails 2025-03-14 09:30:33 +01:00
FreddleSpl0it
c93106f9d6 [Web] Fix redirect after renaming mailbox 2025-03-14 09:29:02 +01:00
FreddleSpl0it
43c1597051 [Web] Check if authsource is configured before adding or updating a mailbox 2025-03-14 09:19:39 +01:00
FreddleSpl0it
c3aa4f7418 [Web] Add authsource property to mailbox API Documentation 2025-03-14 09:17:51 +01:00
FreddleSpl0it
cb08132a74 [Web] Fix authentication when mailbox or domain is deactivated 2025-03-13 14:39:03 +01:00
FreddleSpl0it
2596b9d386 [Web] Improve auth logging and language strings 2025-03-12 11:42:14 +01:00
Marvin A. Ruder
062539b7d7 dkim: Add support for 3072 and 4096 bit RSA keys (#6365)
* dkim: Add support for 3072 and 4096 bit RSA keys

Signed-off-by: Marvin A. Ruder <signed@mruder.dev>

* php: added missing ; in dkim function

* php: make 4096 DKIM default

* db: update schema to set dkim 4096 as default

* Revert "db: update schema to set dkim 4096 as default"

This reverts commit 790b40a695.

* Revert "php: make 4096 DKIM default"

This reverts commit 7e643376c7.

---------

Signed-off-by: Marvin A. Ruder <signed@mruder.dev>
Co-authored-by: DerLinkman <niklas.meyer@servercow.de>
2025-03-11 15:30:46 +01:00
DerLinkman
18acbc7a4c cold-standby: changed texts + removed --no-parallel for pull 2025-03-11 12:35:13 +01:00
DerLinkman
2f93f1d0c5 os: fixes for newer mariadb-client versions (especially on alpine 3.21) 2025-03-10 16:45:57 +01:00
DerLinkman
0860a7503e os: updated alpine containers to 3.21 2025-03-10 11:56:12 +01:00
renovate[bot]
86df78255d chore(deps): update dependency composer/composer to v2.8.6 (#5719)
Signed-off-by: milkmaker <milkmaker@mailcow.de>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-03-10 11:39:19 +01:00
FreddleSpl0it
aac0a900ce [Web] Fix JSON parsing issue for api requests 2025-03-10 10:49:27 +01:00
milkmaker
03565df48d [Web] Updated lang.ko-kr.json (#6356)
Co-authored-by: dongsu8142 <dongsu8142@naver.com>
2025-03-07 21:37:31 +01:00
FreddleSpl0it
5f15475b55 [Rspamd] Remove redis.conf from tracking 2025-03-07 15:18:42 +01:00
FreddleSpl0it
25d34b5acf [Web] Remove default ui help text 2025-03-07 14:52:08 +01:00
FreddleSpl0it
6b165887d8 Merge branch 'staging' into nightly 2025-03-07 13:21:57 +01:00
FreddleSpl0it
82eb3c64cd [Web] Use SQL password only when authsource is mailcow 2025-03-07 13:15:27 +01:00
FreddleSpl0it
bc21e7fe50 [Web] Separate FIDO2 logins 2025-03-07 13:12:48 +01:00
FreddleSpl0it
6f9c8deab7 [Web] Support old style app links 2025-03-07 09:56:20 +01:00
FreddleSpl0it
8761d8fc47 [Web] Fix app layout issue 2025-03-07 09:54:35 +01:00
milkmaker
0435766c17 [Web] Updated lang.ko-kr.json (#6353)
Co-authored-by: dongsu8142 <dongsu8142@naver.com>
2025-03-05 17:43:37 +01:00
renovate[bot]
79f4cf4021 chore(deps): update docker/build-push-action action to v6 (#6334) 2025-03-05 16:35:46 +01:00
milkmaker
81803836f0 [Web] Updated lang.ko-kr.json (#6350)
Co-authored-by: dongsu8142 <dongsu8142@naver.com>
2025-03-03 22:49:23 +01:00
milkmaker
4bd267515a update postscreen_access.cidr (#6345) 2025-03-01 13:32:21 +01:00
DerLinkman
70190e5230 rspamd: remove .info from fishy tlds (default) 2025-02-28 15:38:05 +01:00
DerLinkman
5296085189 update.sh: corrected typos inside update.sh 2025-02-27 11:47:08 +01:00
DerLinkman
a4c2cf4c67 scripts: adapted new docker image names to docker_garbage function + removed dup 2025-02-27 11:44:52 +01:00
Peter
3c9d0c9d57 use ghcr.io for backupimage (#6333)
* use ghcr.io for backup image

* backup script: use renamed script + improved build of image

---------

Co-authored-by: DerLinkman <niklas.meyer@servercow.de>
2025-02-27 10:58:23 +01:00
FreddleSpl0it
35a6f81d0d [Redis] use 7.4.2-alpine image 2025-02-27 09:28:52 +01:00
FreddleSpl0it
4b31c04e3e Merge pull request #6330 from mailcow/feat/major-update-prompt
Prompt user before applying major updates
2025-02-27 08:15:21 +01:00
FreddleSpl0it
3d9cc2f6dd add 2025-02 to major versions 2025-02-27 08:14:34 +01:00
DerLinkman
704dd50262 compose: use ghcr.io for new/current mailcow docker images instead of docker hub (#6332) 2025-02-26 15:20:57 +01:00
FreddleSpl0it
8d0c03b2fc small adjustment for legacy version 2025-02-26 10:39:41 +01:00
FreddleSpl0it
c4a0e370b7 Merge pull request #6155 from PseudoResonance/fix2752
Fix #2752 - Allow domain recipients for address rewrite
2025-02-26 10:01:03 +01:00
FreddleSpl0it
b77ff2f51c Add switch to legacy version 2025-02-26 09:47:59 +01:00
FreddleSpl0it
787fa49d0c prompt user before applying major updates 2025-02-25 12:08:21 +01:00
DerLinkman
a6c38590ca rspamd: upgraded rspamd to 3.11.0-2 (incl. NIXSPAM Removal) (#6328) 2025-02-25 09:23:10 +01:00
PseudoResonance
e52323bf1d Fix @ prefixing domain rewrite and update localization 2025-02-24 22:36:17 -08:00
PseudoResonance
f15ee39b63 Fix #2752: Domain recipient for address rewrite
(cherry picked from commit 40f6d691d8774d6f813153974f8fe462a8db9ab3)
2025-02-24 22:07:23 -08:00
FreddleSpl0it
fcebe98557 Merge branch 'staging' into nightly 2025-02-24 15:09:36 +01:00
FreddleSpl0it
6ec5e88793 Merge pull request #6309 from mailcow/fix/6308
[Dovecot][Netfilter] Fix dovecot failed login regex
2025-02-24 11:26:06 +01:00
FreddleSpl0it
7d35646342 [Netfilter] adjust dovecot failed login regex 2025-02-24 09:20:41 +01:00
FreddleSpl0it
321965adee [Netfilter] Fix dovecot password mismatch regex 2025-02-18 15:05:59 +01:00
Peter
7bce5d836b Move sed cmd to remove discontinued DNSBLs (#6315)
* Move sed cmd to remove discontinued DNSBLs

* compose: bump postfix version

---------

Co-authored-by: DerLinkman <niklas.meyer@servercow.de>
2025-02-18 11:20:03 +01:00
FreddleSpl0it
351f4ce787 [Redis] Add support for masterauth via env var 2025-02-18 11:16:06 +01:00
FreddleSpl0it
a567d5dc31 [Nginx] Add support for trusted proxies via env var 2025-02-18 11:03:34 +01:00
DerLinkman
4ac541f671 [Mariadb] Update to 10.11 (LTS) (#5152)
* [Mariadb] Update to 10.11 (LTS)

* mysql: set default collation to general_ci
2025-02-17 15:48:25 +01:00
Dmitriy Alekseev
f6dc0b463f Update Rspamd to 3.11.0 and enable SMTPUTF8 for outgoing mail (#6216)
* Update Rspamd to 3.11

* Enable SMTPUTF8 and hide it from SMTPD greeting

* Update options.inc

* compose: increased rspamd tag
2025-02-17 14:41:39 +01:00
DerLinkman
16e22e23dc sogo: switched apt source to sogo again (supports aarch64 now) 2025-02-17 14:31:50 +01:00
FreddleSpl0it
d8afa6f393 [Dovecot][Netfilter] Fix dovecot failed login regex 2025-02-14 13:12:12 +01:00
milkmaker
836e3f15b7 [Web] Updated lang.es-es.json (#6307)
Co-authored-by: Julie GINESTIERE <julien.ginestiere+git@gmail.com>
2025-02-13 19:32:39 +01:00
FreddleSpl0it
aaa7e4a184 [Web] Fix incorrect session lifetime in sogo-auth.php 2025-02-13 11:54:55 +01:00
FreddleSpl0it
3912341b32 [SOGo] rename custom logo 2025-02-12 11:31:14 +01:00
FreddleSpl0it
735d5f0e56 Merge pull request #6220 from Babybatrick/staging
Adding lines to docker-compose.yml to allow for simpler SOGo web client UI customisation
2025-02-12 10:54:16 +01:00
FreddleSpl0it
f375794fb7 Merge pull request #6223 from mailcow/ffdhe2048
Ffdhe2048
2025-02-12 10:48:22 +01:00
renovate[bot]
4ed3017a02 chore(deps): update devops-infra/action-pull-request action to v0.6.0 (#6302) 2025-02-12 06:56:10 +01:00
FreddleSpl0it
ef2f5f7be0 [Dovecot] Use Redis ACL user quota_notify with restricted access 2025-02-11 16:59:18 +01:00
FreddleSpl0it
54728bf780 [Dovecot] Fix create sogo-sso.conf 2025-02-11 14:40:38 +01:00
Henry Williams
743e88fd67 Update generate_config.sh version checking for wider compatibility (#6270)
* Update generate_config.sh version checking for wider compatibility 

fix: replace `grep -oP` with `grep -oE` for broader compatibility

The `-P` option (Perl-compatible regex) is not supported in all versions of `grep`, particularly the default BSD `grep` on macOS. This change replaces `-P` with `-E` (extended regex), which is more widely available and ensures compatibility across different environments.

Tested on macOS and Linux.

* Update generate_config.sh to remove use of platform dependent grep

Replaced the version checking that relied on free-form text: it now uses Docker's built-in templating instead of parsing free-form output. This gives cross-platform consistency without depending on a particular version of grep.
2025-02-11 13:55:03 +01:00
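A minimal sketch of the two approaches described above (variable names and exact invocations are illustrative assumptions, not the literal generate_config.sh code):

```bash
# Old approach: parse free-form command output with grep.
# BSD grep (macOS) lacks -P, so -E (extended regex) is the portable choice.
compose_version=$(docker compose version | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -n1)

# New approach: let Docker render the value itself via its built-in Go templating,
# removing the dependency on any particular grep implementation.
docker_server_version=$(docker version --format '{{.Server.Version}}')

echo "docker: ${docker_server_version:-unknown}, compose: ${compose_version:-unknown}"
```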
DerLinkman
ac2f0c7db1 Merge pull request #6286 from mailcow/fix-workflow-staging
Fix check_prs_if_on_staging workflow
2025-02-11 13:52:44 +01:00
FreddleSpl0it
f64c6aa1d4 Merge pull request #6269 from mailcow/staging
Automatic PR to nightly from 2025-01-27T10:00:26Z
2025-02-07 15:10:10 +01:00
FreddleSpl0it
e2cf22ff9e Merge pull request #6268 from mailcow/feat/nightly-separated-login
[Web] Separate Login pages
2025-02-07 15:09:39 +01:00
FreddleSpl0it
55dcae4a01 [Web] Fix Generic-OIDC connection test 2025-02-07 15:05:43 +01:00
FreddleSpl0it
f0016eeecd [Web] Add german translation for idp settings 2025-02-07 14:19:20 +01:00
FreddleSpl0it
3544a2246e [Nginx] fix ADDITIONAL_SERVER_NAMES array 2025-02-04 13:30:00 +01:00
FreddleSpl0it
97890b71f1 [Nginx] Invert SKIP container condition 2025-02-03 12:22:13 +01:00
FreddleSpl0it
e645f931dc [Nginx] Add env var for HTTP to HTTPS redirection 2025-02-03 12:05:08 +01:00
FreddleSpl0it
bbdec0960a Merge pull request #6290 from mailcow/fix/nginx-vhosts
[Nginx] Use vhosts for additional server names
2025-02-03 11:35:09 +01:00
milkmaker
41ba7d97fa update postscreen_access.cidr (#6287) 2025-02-01 17:06:07 +01:00
Peter
83fc2c6387 It's github-token now 2025-01-31 17:20:28 +01:00
DerLinkman
aac4c6b5f4 postfix: added master.pid removal and startsecs to supervisord (#6284) 2025-01-31 12:49:39 +01:00
FreddleSpl0it
3c0f775e2f Merge pull request #6281 from mailcow/fix/6275
[Nginx] Fix
2025-01-31 10:49:21 +01:00
FreddleSpl0it
3a81b84cf7 [Nginx] Fix #6275 2025-01-30 14:49:18 +01:00
FreddleSpl0it
a2e87e0880 [Web] Add validation for server_name against allow list 2025-01-30 11:47:55 +01:00
DerLinkman
2407aa7895 Merge branch 'feat/clamd-rebuild' into staging 2025-01-29 14:01:39 +01:00
FreddleSpl0it
0ad327bbe5 [Nginx] Use separate vhosts for additional server names 2025-01-29 09:51:45 +01:00
DerLinkman
1a087bb2c8 clamd: cleanup dockerfile 2025-01-28 14:49:11 +01:00
DerLinkman
65bc581fab clamd: remove exposed ports from buildfile 2025-01-28 14:36:43 +01:00
DerLinkman
60a2270d1e clamd: update to 1.4.2 + build from source instead using alpine packages 2025-01-28 14:25:56 +01:00
FreddleSpl0it
cb5cae3e44 Merge branch 'nightly' into feat/nightly-separated-login 2025-01-27 16:37:09 +01:00
FreddleSpl0it
8ed51e500f Merge pull request #6260 from mailcow/manitu
Remove discontinued Nixspam DNSBL
2025-01-27 16:21:29 +01:00
FreddleSpl0it
aca01c8aa2 [Web] Separate Login pages 2025-01-27 15:59:50 +01:00
FreddleSpl0it
45d14254f2 [Postfix] Remove discontinued Nixspam DNSBL from existing dns_blocklists.cf 2025-01-24 10:06:50 +01:00
FreddleSpl0it
de6bd222fc [Web] increase db_version 2025-01-24 09:25:19 +01:00
Michael Kuron
04116982a5 Remove discontinued Nixspam DNSBL 2025-01-23 22:16:54 +01:00
FreddleSpl0it
36d4fcbf39 Merge pull request #6255 from mailcow/staging
Automatic PR to nightly from 2025-01-23T11:01:42Z
2025-01-23 15:21:39 +01:00
FreddleSpl0it
04058ab06e [Nginx] move conf.d include to end of nginx.conf 2025-01-23 14:54:28 +01:00
FreddleSpl0it
9d791d0c4f Merge branch 'staging' into nightly 2025-01-23 12:06:47 +01:00
FreddleSpl0it
da02e26172 [Web] Delete old session_id after regenerate 2025-01-23 11:59:01 +01:00
DerLinkman
43f945fe01 dovecot: fix index timeout seconds 2025-01-23 11:51:41 +01:00
DerLinkman
e76c0ba9a6 Merge branch 'staging' 2025-01-23 11:31:01 +01:00
DerLinkman
d83111568e update.sh: remove accidentally added exit at end of solr volume removal 2025-01-23 11:30:05 +01:00
FreddleSpl0it
1b578caabb Merge pull request #6251 from mailcow/staging
2025-01
2025-01-23 11:16:38 +01:00
FreddleSpl0it
1e77f8d8a1 Merge pull request #6250 from mailcow/staging
Automatic PR to nightly from 2025-01-22T19:10:32Z
2025-01-23 11:01:55 +01:00
FreddleSpl0it
5f45f8ae34 [Web] Fix mailbox datatable search 2025-01-23 09:18:45 +01:00
DerLinkman
1dac8f1f66 scripts: changed SKIP_FTS text to warn on lower threaded systems 2025-01-23 08:42:22 +01:00
DerLinkman
5a04942d89 update.sh: changed SKIP_FTS default to y instead n for updates 2025-01-23 08:38:14 +01:00
DerLinkman
a30f6696a3 update.sh: fixed --force for solr-removal + code optimization 2025-01-23 08:30:48 +01:00
FreddleSpl0it
d430b595c1 Merge branch 'staging' into nightly 2025-01-23 08:11:45 +01:00
FreddleSpl0it
1fca328266 [Nginx] Disable IPv6 listener for Rspamd dynmaps when DISABLE_IPv6=y 2025-01-22 15:11:46 +01:00
FreddleSpl0it
7bcd61ecb5 [Nginx] Generate includes for custom configs 2025-01-22 14:30:47 +01:00
renovate[bot]
ee7a8624fc chore(deps): update actions/stale action to v9.1.0 (#6247)
Signed-off-by: milkmaker <milkmaker@mailcow.de>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-01-21 06:38:13 +01:00
DerLinkman
4708b1398b update.sh: fix mailcow fts update versioning 2025-01-20 15:41:48 +01:00
DerLinkman
746915cbdd fts: change autoindex to occur on mailboxes receiving 20 or more mails daily 2025-01-20 14:21:15 +01:00
Alyx
36db68677c Reduce sa rules download retry limit to 5 (#6225)
Reduces the retry limit for the sa rules download to a more reasonable 5 retries to prevent running into a timeout condition.
2025-01-20 14:10:29 +01:00
gwelch-contegix
08599c1960 Fix community support url (#6245) 2025-01-20 14:09:31 +01:00
DerLinkman
31e001ebee flatcurve: change default amount of processes to 1 2025-01-16 11:37:15 +01:00
FreddleSpl0it
1e70a20188 [SOGo] Add mailcow Buttons to SOGo navbar 2025-01-15 16:15:25 +01:00
FreddleSpl0it
8048e0a53c [Web] Fix permission exception in IdP actions 2025-01-15 12:48:10 +01:00
FreddleSpl0it
8fea9fc21f Merge pull request #6211 from jan-oratowski/patch-1
Fix missing property in Create Sync Job request
2025-01-14 12:18:29 +01:00
FreddleSpl0it
2f1884e94b Merge pull request #6205 from PhoenixPeca/master
Improve the existing validation flow for sieve filter
2025-01-14 12:08:56 +01:00
FreddleSpl0it
24b3d8f850 Merge pull request #6001 from marekfilip/feat/temp-email-aliases
add temporary email description
2025-01-14 11:52:44 +01:00
FreddleSpl0it
d280025b51 [Web] Regenerate session_id on successful login 2025-01-14 11:30:41 +01:00
FreddleSpl0it
abd789f629 [Web] Escape mailbox name before querying aliases 2025-01-14 11:18:20 +01:00
milkmaker
69f6a82905 [Web] Updated lang.fr-fr.json (#6238)
Co-authored-by: Neuronnexion <support@nnx.com>
2025-01-09 06:51:42 +01:00
milkmaker
10328981b6 Translations update from Weblate (#6235)
* [Web] Updated lang.fr-fr.json

Co-authored-by: Neuronnexion <support@nnx.com>

* [Web] Updated lang.zh-cn.json

Co-authored-by: Easton Man <me@eastonman.com>

---------

Co-authored-by: Neuronnexion <support@nnx.com>
Co-authored-by: Easton Man <me@eastonman.com>
2025-01-05 15:25:45 +01:00
Filip Marek
150b2bbd9d Merge branch 'mailcow:master' into feat/temp-email-aliases 2025-01-03 11:40:01 +01:00
milkmaker
40a8bc808a update postscreen_access.cidr (#6232) 2025-01-01 03:26:18 +01:00
Dmitriy Alekseev
d92aa4b15d Update dhparams.pem
Use https://ssl-config.mozilla.org/ffdhe2048.txt due to better security of the key
2024-12-20 15:39:41 +01:00
milkmaker
2d2dacb70e [Web] Updated lang.fr-fr.json (#6221)
[Web] Updated lang.fr-fr.json

Co-authored-by: Neuronnexion <support@nnx.com>
Co-authored-by: Peter <magic@kthx.at>
2024-12-19 17:10:43 +01:00
Amin
ade20d79d4 Uploading of the necessary files, after new volumes were added to docker-compose.yml (sogo-mailcow container)
After new volumes were added to docker-compose.yml for the sogo-mailcow container, the specified files must be present at the given paths so that Docker starts correctly after running the `docker compose up` command; otherwise an error will appear, as the necessary files would be missing.
The uploaded files are the original SOGo UI elements, obtained from the sogo-mailcow container. Whenever users need to change the UI elements, they only need to change these files, which simplifies the process.
2024-12-19 22:13:27 +08:00
Amin
65bc8f0972 Update docker-compose.yml (sogo-mailcow)
This commit includes the addition of 3 lines, in the volumes part of the sogo-mailcow container, to allow for better customisation of the user interface on the web client page.
2024-12-19 21:59:05 +08:00
Jan Oratowski
c6f6eda0bf Fix missing property in Create Sync Job request
In the example there was a property called "user1", but it was missing from the request definition.

This resulted in NSwag generating incorrect C# API code.
2024-12-14 15:27:37 +01:00
milkmaker
357a4d7fb3 [Web] Updated lang.fr-fr.json (#6209)
Co-authored-by: Neuronnexion <support@nnx.com>
2024-12-13 12:21:12 +01:00
DerLinkman
1c6684a539 compose: fix dovecot tagging 2024-12-12 17:02:21 +01:00
DerLinkman
de80c120c9 update.sh: added silent fix for removing old fts.conf in order to update properly 2024-12-12 16:57:32 +01:00
Niklas Meyer
3e8bb06a37 dovecot: replace solr fts with flatcurve (xapian) (#5680)
* fts-flatcurve: inital implementation

* fts: removed solr from compose.yml

* flatcurve: added heap and proc logic to dovecot

* added logic for update.sh & generate for Flatcurve

* delete old iteration of fts-flatcurve.conf

* updated default fts.conf

* updated .gitignore to exclude fts.conf for further git updates

* Remove autogeneration of fts.conf (disable override)

* cleanup all left solr stuff

* renamed SKIP_FLATCURVE to SKIP_FTS

* cleanup leftovers solr in lang files

* moved lazy_expunge plugin only to mail_plugins

* added fts timeout value

* compose: remove dev image of dovecot

* updated japanese translation
2024-12-12 16:44:42 +01:00
milkmaker
b087ac9e27 Translations update from Weblate (#6206)
* [Web] Updated lang.fr-fr.json

Co-authored-by: Neuronnexion <support@nnx.com>

* [Web] Updated lang.si-si.json

Co-authored-by: Matjaž Tekavec <matjaz@moj-svet.si>

---------

Co-authored-by: Neuronnexion <support@nnx.com>
Co-authored-by: Matjaž Tekavec <matjaz@moj-svet.si>
2024-12-11 18:10:51 +01:00
Phoenix Eve Aspacio
d09e4ff020 Convert AJAX to POST request
This AJAX request sends form data in the $_GET query string. This is problematic and unreliable when validating very long conditions, especially in environments that use a reverse proxy.

Been having this problem and this PR solves it. :)
2024-12-11 10:06:10 +08:00
Phoenix Eve Aspacio
f065842402 Updated to $_REQUEST.
tested from my end.
2024-12-11 10:03:47 +08:00
Niklas Meyer
3875e8377a sogo: added SOGoDisableOrganizerEventCheck value to sogo.conf (#6204) 2024-12-10 15:59:02 +01:00
Christian 🦄
7c8e5c10ca Add create command to prevent external: true warnings (#6203)
This is related to https://github.com/mailcow/mailcow-dockerized/issues/5970 and https://community.mailcow.email/d/2126-backup-restore/2

It adds `docker compose create` to the script which gets executed directly after the sync of the mailcow-dockerized directory. This way the Docker daemon on the remote side creates everything and we get rid of the warning "volume "XYZ" already exists but was not created by Docker Compose. Use `external: true` to use an existing volume"

This is helpful if you use the create-cold-standby.sh script to migrate your mailcow installation to another server and don't want to get those warnings after migration.

Co-authored-by: Niklas Meyer <niklas.meyer@servercow.de>
2024-12-10 09:25:29 +01:00
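A rough sketch of the cold-standby flow this change targets (the host name and paths are assumptions for illustration, not taken from create-cold-standby.sh):

```bash
# After syncing the mailcow-dockerized directory, let the remote Docker daemon
# create the containers and volumes itself, so later compose runs do not warn
# about volumes that were "not created by Docker Compose".
rsync -a /opt/mailcow-dockerized/ standby-host:/opt/mailcow-dockerized/
ssh standby-host 'cd /opt/mailcow-dockerized && docker compose create'
```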
Filip Marek
1a8e1a2677 add escape html for description 2024-12-09 23:07:43 +01:00
Filip Marek
0d635e2658 increase migrations version 2024-12-09 23:07:43 +01:00
Filip Marek
60ca25026d add temporary email description 2024-12-09 23:07:02 +01:00
FreddleSpl0it
69b03791a2 Add missing Redis authentication 2024-12-09 13:54:44 +01:00
Peter
ed2837edd8 Remove legacy Nextcloud settings (#6050) 2024-12-09 13:49:24 +01:00
FreddleSpl0it
fa3b789fbb [Web] fix issue #6185 2024-12-09 13:07:00 +01:00
FreddleSpl0it
49e05f5120 [Web] fix oauth2 redirect after login 2024-12-09 11:36:05 +01:00
FreddleSpl0it
24453993f3 Merge pull request #6186 from h3ssan/feat/search-mailbox-by-full-name
Implement search mailboxes by fullname
2024-12-09 10:21:39 +01:00
FreddleSpl0it
8853e2c44a [Nginx] Use SOGo IPv4 for upstream 2024-12-09 09:50:16 +01:00
FreddleSpl0it
c9dd102741 [Dovecot] use auth_cache 2024-12-06 12:55:44 +01:00
Tatsuya Yokota
d1af52b4e7 Add initial Japanese language files (#6198)
* Add initial Japanese language files

* Reordered language list: moved Japanese (日本語) below Italian (Italiano)

---------

Co-authored-by: Tatsuya Yokota <git@acoustype.com>
2024-12-06 09:44:16 +01:00
FreddleSpl0it
bbddfc3eab [Web] rearrange login buttons 2024-12-05 15:21:07 +01:00
FreddleSpl0it
a41bb55c83 Merge remote-tracking branch 'origin/staging' into nightly 2024-12-05 14:33:41 +01:00
FreddleSpl0it
b6174fae23 Merge pull request #6194 from mailcow/feat/nightly-enhancements
[Nightly] Enhancements
2024-12-05 13:18:39 +01:00
FreddleSpl0it
1d6513ffba [Web] fix idp login alerts and updates 2024-12-04 14:49:31 +01:00
i-curve
6e8e13cebc fix: check docker version fail in generate_config.sh #6187 (#6188)
close #6187

Signed-off-by: i-curve <i-curve@qq.com>
Co-authored-by: Niklas Meyer <niklas.meyer@servercow.de>
2024-12-04 12:28:14 +01:00
FreddleSpl0it
896a9638d6 Fix mailcowauth 2024-12-02 14:16:43 +01:00
FreddleSpl0it
83e53eb524 [Web] fix incomplete session on broken logins 2024-12-02 11:55:17 +01:00
FreddleSpl0it
f36184df64 [Web] update mailbox on idp login 2024-12-02 10:35:45 +01:00
FreddleSpl0it
6fa1c9f63d [Web] protect /get/identity-provider 2024-12-02 10:24:15 +01:00
milkmaker
f3060b37a6 update postscreen_access.cidr (#6189) 2024-12-01 17:49:28 +01:00
milkmaker
59c68f2603 Translations update from Weblate (#6190) 2024-12-01 17:49:10 +01:00
FreddleSpl0it
ccc8595665 [SOGo] redirect to /user if unauthenticated 2024-12-01 16:51:56 +01:00
FreddleSpl0it
45c13c687b [Web] update user based on template after login 2024-12-01 16:36:16 +01:00
FreddleSpl0it
d61a08c2a9 [Web] hide auth heading for external managed users 2024-11-30 14:39:05 +01:00
FreddleSpl0it
c8c4cfd939 [Web] add ignore ssl option for keycloak and generic-oidc provider 2024-11-30 14:37:07 +01:00
FreddleSpl0it
ec4b9b088c [Web] support multiple ldap hosts separated by comma 2024-11-29 18:59:07 +01:00
FreddleSpl0it
b2db8e6b31 [Dovecot] init identity provider before user login 2024-11-29 16:52:34 +01:00
FreddleSpl0it
05e4bd7602 [Web] use global vars for iam_provider and iam_settings 2024-11-29 15:50:35 +01:00
Hassan A Hashim
31185e3de1 Implement search mailboxes by fullname 2024-11-27 14:47:57 +03:00
Habetdin
4dbfd3abad Update lang.ru-ru.json (#6184) 2024-11-25 16:01:17 +01:00
FreddleSpl0it
b4e6002bcf Merge pull request #6076 from Habetdin/staging
Only show active protocols on "last login" in mailbox overview
2024-11-21 10:24:41 +01:00
FreddleSpl0it
6af907cff0 Merge pull request #6182 from mailcow/fix/4518
[Web] allow dots in dkim selectors
2024-11-20 13:11:34 +01:00
FreddleSpl0it
ba282233ea [Web] allow dots in dkim selectors 2024-11-20 13:05:02 +01:00
FreddleSpl0it
6f4c2b3361 Merge pull request #6181 from mailcow/fix/5703
[Web] Add additional columns to _sogo_static_view
2024-11-20 11:15:35 +01:00
FreddleSpl0it
d08b9aec32 [Web] Add additional columns to _sogo_static_view 2024-11-20 11:09:49 +01:00
FreddleSpl0it
bb310600b2 Merge pull request #6180 from mailcow/fix/6046
[Web] add missing translation for ratelimit in templates overview
2024-11-20 10:02:34 +01:00
FreddleSpl0it
fe7211f27f [Web] add missing translation for ratelimit in templates overview 2024-11-20 09:57:14 +01:00
FreddleSpl0it
8e9a9364a8 Merge pull request #6146 from mailcow/feat/redis-pw
Enable password protection for Redis
2024-11-19 15:32:36 +01:00
FreddleSpl0it
6831f94fdb [Redis] redis-cli suppress auth warning 2024-11-19 15:10:52 +01:00
FreddleSpl0it
b0de756a7c [Redis] Rename docker-entrypoint.sh to redis-conf.sh 2024-11-19 14:54:36 +01:00
FreddleSpl0it
922f8777b0 Merge pull request #6168 from mailcow/fix/f2b-banlist
[Web] remove f2b banlist from json_api.php
2024-11-19 14:32:31 +01:00
FreddleSpl0it
c1903f121d [Redis] set password via docker-entrypoint.sh 2024-11-19 14:25:31 +01:00
FreddleSpl0it
89fb1322c6 Enable password protection for Redis 2024-11-19 14:25:31 +01:00
FreddleSpl0it
852d944cfb [Web] remove f2b banlist from json_api.php 2024-11-19 14:13:37 +01:00
Niklas Meyer
bca4e1a03d update.sh: precaution ask for deletion of dns_blocklists.cf if old format (#6154) 2024-11-19 14:13:37 +01:00
FreddleSpl0it
326a446f8b Merge pull request #6177 from mailcow/feat/jinja2-nginx
[Nginx] Use jinja2 for templating nginx configuration
2024-11-19 14:08:37 +01:00
FreddleSpl0it
70ca5fde95 [Nginx] Use jinja2 for templating nginx configuration 2024-11-19 08:39:52 +01:00
DerLinkman
5ad4ab5b60 update.sh: fixed typos 2024-11-15 16:39:06 +01:00
Niklas Meyer
bd9f4ba0a5 Merge pull request #6173 from mailcow/staging
2024-11b
2024-11-15 16:21:17 +01:00
DerLinkman
d10d64dd92 mysql: increased thread_stack to 192k since 10.5.27 2024-11-15 16:18:22 +01:00
FreddleSpl0it
6d1f7482ed [Web] broadcast maildir move to dovecot containers on mailbox_rename 2024-11-15 16:18:21 +01:00
FreddleSpl0it
b9f52df3f1 [Web] update _sogo_static_view on password reset 2024-11-15 16:18:21 +01:00
FreddleSpl0it
dc379267a9 Merge remote-tracking branch 'origin/staging' into nightly 2024-11-12 16:07:55 +01:00
Niklas Meyer
4d688c5500 2024-11a (#6160)
* update.sh: precaution ask for deletion of dns_blocklists.cf if old format (#6154)

* [Web] Updated lang.zh-cn.json (#6151)

[Web] Updated lang.zh-cn.json

Co-authored-by: Easton Man <me@eastonman.com>

* compose: bump sogo version to include 5.11.2 (#6156)

* php: use correct php image + workaround of #6149 (#6159)

* compose: bump php-fpm container to correctly use patched c-ares

* [Web] check $containers_info contains required fields

---------

Co-authored-by: FreddleSpl0it <patschul@posteo.de>

---------

Co-authored-by: milkmaker <milkmaker@mailcow.de>
Co-authored-by: Easton Man <me@eastonman.com>
Co-authored-by: FreddleSpl0it <patschul@posteo.de>
2024-11-12 15:57:17 +01:00
Niklas Meyer
b90375b6e5 php: use correct php image + workaround of #6149 (#6159)
* compose: bump php-fpm container to correctly use patched c-ares

* [Web] check $containers_info contains required fields

---------

Co-authored-by: FreddleSpl0it <patschul@posteo.de>
2024-11-12 15:56:23 +01:00
FreddleSpl0it
9542698e95 Merge remote-tracking branch 'origin/staging' into nightly 2024-11-12 15:10:03 +01:00
Niklas Meyer
afe0ba74d2 compose: bump sogo version to include 5.11.2 (#6156) 2024-11-12 11:11:34 +01:00
milkmaker
dc5a28111d [Web] Updated lang.zh-cn.json (#6151)
[Web] Updated lang.zh-cn.json

Co-authored-by: Easton Man <me@eastonman.com>
2024-11-11 21:39:15 +01:00
Niklas Meyer
52f3f93aee update.sh: precaution ask for deletion of dns_blocklists.cf if old format (#6154) 2024-11-11 16:50:14 +01:00
Habetdin
6550f0a3e8 Only show active protocols on "last login" in mailbox overview 2024-11-11 12:44:05 +03:00
FreddleSpl0it
0a58aa293a Merge pull request #6141 from mailcow/staging
2024-11
2024-11-07 11:41:45 +01:00
milkmaker
be79f320d2 Translations update from Weblate (#6140)
* [Web] Updated lang.lv-lv.json

Co-authored-by: Edgars Andersons <Edgars+Mailcow+Weblate@gaitenis.id.lv>

* [Web] Updated lang.tr-tr.json

Co-authored-by: Furkan <furkan43500@gmail.com>

---------

Co-authored-by: Edgars Andersons <Edgars+Mailcow+Weblate@gaitenis.id.lv>
Co-authored-by: Furkan <furkan43500@gmail.com>
2024-11-06 19:08:53 +01:00
Niklas Meyer
6ec1e357c3 fix: broken sogo cron notifications (for appointments etc.) (#6128) 2024-11-05 16:21:14 +01:00
milkmaker
8b2f71f97e update postscreen_access.cidr (#6129) 2024-11-05 16:20:57 +01:00
renovate[bot]
93cf99cc9e chore(deps): update thollander/actions-comment-pull-request action to v3.0.1 (#6130)
Signed-off-by: milkmaker <milkmaker@mailcow.de>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-11-02 20:38:18 +01:00
FreddleSpl0it
d8c8e4ab1b [DockerApi] Fix IMAP ACL migration issue when renaming mailbox 2024-10-31 11:00:03 +01:00
FreddleSpl0it
2d76ffc88c Merge pull request #6045 from mailcow/feat/rename-mbox
[Web][DockerApi] Add Feature to Rename Email Addresses
2024-10-25 10:49:58 +02:00
FreddleSpl0it
672bb345fd Fix mailbox_rename de-de translation 2024-10-25 10:47:53 +02:00
milkmaker
5c88030b5a Translations update from Weblate (#6123)
* [Web] Updated lang.lv-lv.json

Co-authored-by: Edgars Andersons <Edgars+Mailcow+Weblate@gaitenis.id.lv>

* [Web] Updated lang.zh-tw.json

[Web] Updated lang.zh-tw.json

Co-authored-by: SamWang8891 <g348.8891@gmail.com>
Co-authored-by: milkmaker <milkmaker@mailcow.de>

---------

Co-authored-by: Edgars Andersons <Edgars+Mailcow+Weblate@gaitenis.id.lv>
Co-authored-by: SamWang8891 <g348.8891@gmail.com>
2024-10-22 21:52:42 +02:00
Niklas Meyer
b106945c73 Feat/rspamd 3.10.2 (#6122)
* rspamd: update to 3.10.2

* rspamd: fix broken archive_extension gz
2024-10-21 16:03:51 +02:00
milkmaker
502a7100ca [Web] Updated lang.zh-cn.json (#6120)
Co-authored-by: SamWang8891 <g348.8891@gmail.com>
2024-10-19 22:24:45 +02:00
Niklas Meyer
ee2791d93a rspamd: update to 3.10.1 (#6115)
* rspamd: upgrade to 3.10.1

* rspamd: adapt 30s task timeout per default now
2024-10-18 15:50:45 +02:00
SamWang8891
399630cf34 Update lang.zh-tw.json (#6114) 2024-10-17 14:50:05 +02:00
Patrik Kernstock
fce93609dd Update mime_types.conf configuration (#6013)
Over the last months and years, the default `mime_types.conf` of rspamd has changed, and it might also be useful to adjust the weight of certain file extensions.

This PR removes all file extensions from `mime_types.conf` which are already in rspamd's default configuration at [rspamd/src/plugins/lua/mime_types.lua](https://github.com/rspamd/rspamd/blob/master/src/plugins/lua/mime_types.lua). If a file extension is not present there or has a different score compared to rspamd's default, it stays in the list.

There are also a few major differences for certain file extensions, which might be useful to discuss and carefully adjust. For example, `.exe` files are rated very 'badly' due to a high chance of being malicious, as are other extensions like `bat`, `cmd`, etc.

Current suggestion:
```lua
# Extensions that are treated as 'bad'
# Number is score multiply factor
bad_extensions = {
  apk = 4,
  appx = 4,
  appxbundle = 4,
  bat = 8,
  cab = 20,
  cmd = 8,
  com = 20,
  diagcfg = 4,
  diagpack = 4,
  dmg = 8,
  ex = 20,
  ex_ = 20,
  exe = 20,
  img = 4,
  jar = 8,
  jnlp = 8,
  js = 8,
  jse = 8,
  lnk = 20,
  mjs = 8,
  msi = 4,
  msix = 4,
  msixbundle = 4,
  ps1 = 8,
  scr = 20,
  sct = 20,
  vb = 20,
  vbe = 20,
  vbs = 20,
  vhd = 4,
  py = 4,
  reg = 8,
  scf = 8,
  vhdx = 4,
};

# Extensions that are particularly penalized for archives
bad_archive_extensions = {
  pptx = 0.5,
  docx = 0.5,
  xlsx = 0.5,
  pdf = 1.0,
  jar = 12,
  jnlp = 12,
  bat = 12,
  cmd = 12,
};

# Used to detect another archive in archive
archive_extensions = {
  tar = 1,
  ['tar.gz'] = 1,
};
```

**As an important reminder**: For all remaining and additional file extensions and score weights, please check the default rspamd configuration above!
2024-10-17 09:11:55 +02:00
Niklas Meyer
38907b5032 dovecot: activate lazy_expunge plugin per default (unconfigured) (#6112) 2024-10-16 15:56:40 +02:00
Peter
5a0f20b9ea Update dependency twig/twig to v3.14.0 (#6071) 2024-10-16 15:29:16 +02:00
Niklas Meyer
8dcaffe925 php: upgrade to alpine 3.20 (base os) (#6106) 2024-10-16 10:35:54 +02:00
Niklas Meyer
c53bf85480 postfix: add X-Original-To header per default (#6110) 2024-10-16 10:35:39 +02:00
Niklas Meyer
982e823c71 sogo: upgrade to 5.11.1 (#6109) 2024-10-15 16:13:51 +02:00
renovate[bot]
382056ec18 chore(deps): update dependency krakjoe/apcu to v5.1.24 (#6087)
Signed-off-by: milkmaker <milkmaker@mailcow.de>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-10-15 11:24:26 +02:00
renovate[bot]
4c9690e87c chore(deps): update dependency php/pecl-mail-mailparse to v3.1.8 (#6096)
Signed-off-by: milkmaker <milkmaker@mailcow.de>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-10-15 11:09:23 +02:00
renovate[bot]
9a58e5e35a chore(deps): update dependency phpredis/phpredis to v6.1.0 (#6098)
Signed-off-by: milkmaker <milkmaker@mailcow.de>
2024-10-15 10:45:32 +02:00
renovate[bot]
932cf453de chore(deps): update dependency nextcloud/server to v28.0.11 (#6101)
Signed-off-by: milkmaker <milkmaker@mailcow.de>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-10-15 10:34:57 +02:00
milkmaker
1538fda71c update postscreen_access.cidr (#6093) 2024-10-15 10:34:39 +02:00
renovate[bot]
54a0d53deb chore(deps): update thollander/actions-comment-pull-request action to v3 (#6102)
Signed-off-by: milkmaker <milkmaker@mailcow.de>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-10-15 10:34:19 +02:00
Niklas Meyer
fda95301ba fix: added tls1.0/1.1 patch for openssl when using older tls versions in override (#6105) 2024-10-15 10:32:08 +02:00
FreddleSpl0it
f9304dcd9b [Web] check if $iam_provider is null on ldap_mbox_login 2024-10-09 12:34:39 +02:00
FreddleSpl0it
1528e8766a [DockerApi] correctly escape user input 2024-09-06 15:59:52 +02:00
FreddleSpl0it
0b9b8c9060 [Web] Ensure correct SOGo SSO password is used after Dovecot restart 2024-09-06 10:05:00 +02:00
Hassan A Hashim
220fdbb168 Add missing Russian translation (#6065) 2024-09-06 07:14:34 +02:00
milkmaker
fe3d08515e [Web] Language file updated by 'Cleanup translation files' addon (#6064) 2024-09-06 07:13:59 +02:00
airon-assustadus
22f7f61ac9 feat/brazilian-translations (#6048)
# What
- Adding some brazilian translations that were missing

Co-authored-by: Airon Teixeira <airon@ymail.com>
2024-09-05 15:09:49 +02:00
FreddleSpl0it
0d2046baeb Merge branch 'staging' into nightly 2024-09-05 14:53:37 +02:00
FreddleSpl0it
29d8cfe2ba [Web] Set min-width and text-align for last login badges 2024-09-05 14:02:04 +02:00
FreddleSpl0it
f2e35dff68 [Web] rename user in sender_acl table 2024-09-05 12:40:30 +02:00
FreddleSpl0it
b1368d29d1 Merge pull request #5724 from q16marvin/master
show last sso login in mailbox table
2024-09-05 12:02:16 +02:00
FreddleSpl0it
0d704a57f5 Merge pull request #6057 from mailcow/fix/sogo-auto-reply
[SOGo] Fix vacation auto reply date shifting
2024-09-05 11:19:40 +02:00
FreddleSpl0it
462137ede7 Merge pull request #6044 from mailcow/feat/redis-session-store
[PHP-FPM] Use redis as session store
2024-09-05 10:55:07 +02:00
Niklas Meyer
bb6f405841 compose: added clamd as depends_on to rspamd (#6062) 2024-09-04 14:42:30 +02:00
renovate[bot]
8b2d67169b chore(deps): update peter-evans/create-pull-request action to v7 (#6059)
Signed-off-by: milkmaker <milkmaker@mailcow.de>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-09-03 19:42:10 +02:00
Finn Hoffhenke
710cec996c feat: Added check for newer version tags on remote (#6054) 2024-09-02 15:40:29 +02:00
Niklas Meyer
0129f84a32 Merge pull request #6056 from mailcow/update/postscreen_access.cidr
[Postfix] update postscreen_access.cidr
2024-09-02 15:37:24 +02:00
FreddleSpl0it
ae3653a925 [SOGo] vacation auto reply date shifting #5394 2024-09-02 10:22:51 +02:00
FreddleSpl0it
82fcddb177 [Web] Fix catch block in LDAP connection test 2024-09-02 10:12:51 +02:00
FreddleSpl0it
320bd31d37 [Web] fix LDAP "ignore ssl errors" option 2024-09-02 10:02:10 +02:00
FreddleSpl0it
b307e0a0d5 [PHP-FPM] Add missing space in log message 2024-09-02 09:57:33 +02:00
milkmaker
af0c61b90a update postscreen_access.cidr 2024-09-01 00:19:09 +00:00
milkmaker
7203735532 [Web] Updated lang.it-it.json (#6053)
Co-authored-by: Stefano <stefano.vassena@gmail.com>
2024-08-29 20:27:23 +02:00
FreddleSpl0it
ef238e5332 [LDAP] skip sync user if username_field in LDAP is empty 2024-08-28 11:28:37 +02:00
FreddleSpl0it
4f9e37c0c3 [Web] rename user in bcc_maps, recipient_maps and imapsync table 2024-08-28 11:16:29 +02:00
FreddleSpl0it
d21c1bfa72 [Web] add error handling for get_acl call 2024-08-28 10:48:44 +02:00
FreddleSpl0it
822d9a7de6 [Web] rename goto in alias table 2024-08-27 10:07:07 +02:00
milkmaker
0066040bdc Translations update from Weblate (#6049)
* [Web] Updated lang.cs-cz.json

Co-authored-by: Kristian Feldsam <feldsam@gmail.com>

* [Web] Updated lang.fr-fr.json

Co-authored-by: Samuel F <20537389+samuelfranzini@users.noreply.github.com>

---------

Co-authored-by: Kristian Feldsam <feldsam@gmail.com>
Co-authored-by: Samuel F <20537389+samuelfranzini@users.noreply.github.com>
2024-08-24 14:09:28 +02:00
FreddleSpl0it
8e7b27aae4 [DockerApi] rework doveadm__get_acl function 2024-08-23 09:30:23 +02:00
FreddleSpl0it
c62b467ac4 [PHP-FPM] Use redis as session store 2024-08-22 11:16:01 +02:00
FreddleSpl0it
be5a181be5 [Web][DockerApi] migrate imap acl on mbox rename 2024-08-22 10:10:05 +02:00
FreddleSpl0it
dbf87e99fc [Web] Convert LDAP username_field and attribute_field to lowercase 2024-08-21 10:48:04 +02:00
FreddleSpl0it
10dfd0a443 [Web][DockerApi] Add the ability to rename the local part of a mailbox 2024-08-21 10:10:34 +02:00
milkmaker
cc5138da13 Translations update from Weblate (#6039)
* [Web] Updated lang.fr-fr.json

[Web] Updated lang.fr-fr.json

Co-authored-by: GeistFighter <lorentzjohan1@gmail.com>
Co-authored-by: Samuel F <20537389+samuelfranzini@users.noreply.github.com>

* [Web] Updated lang.fi-fi.json

Co-authored-by: Berttas <mika@tarh.fi>

* [Web] Updated lang.ru-ru.json

Co-authored-by: Habetdin <15926758+Habetdin@users.noreply.github.com>

* [Web] Updated lang.uk-ua.json

Co-authored-by: DRago_Angel <dragoangel@users.noreply.translate.mailcow.email>

* [Web] Updated lang.pt-br.json

Co-authored-by: xmacaba <lixo@macaba.com.br>

---------

Co-authored-by: GeistFighter <lorentzjohan1@gmail.com>
Co-authored-by: Samuel F <20537389+samuelfranzini@users.noreply.github.com>
Co-authored-by: Berttas <mika@tarh.fi>
Co-authored-by: Habetdin <15926758+Habetdin@users.noreply.github.com>
Co-authored-by: DRago_Angel <dragoangel@users.noreply.translate.mailcow.email>
Co-authored-by: xmacaba <lixo@macaba.com.br>
2024-08-20 21:34:04 +02:00
milkmaker
aeeac63e1f Fix: Escape a ' character in update.sh (#6034) (#6035)
Co-authored-by: Hassan A Hashim <h3ssan@protonmail.com>
2024-08-20 14:22:51 +02:00
Niklas Meyer
ffcd242048 Merge pull request #6027 from mailcow/staging
Automatic PR to nightly from 2024-08-19T12:28:50Z
2024-08-20 13:41:54 +02:00
DerLinkman
e21157c10d Merge branch 'staging' into nightly 2024-08-19 11:42:12 +02:00
FreddleSpl0it
fa3c453d6e Use DN instead of DistinguishedName for LDAP login 2024-08-15 12:49:57 +02:00
FreddleSpl0it
962ac39e4a Merge remote-tracking branch 'origin/staging' into nightly 2024-08-15 12:45:52 +02:00
Niklas Meyer
ebc8e6b838 Merge pull request #6008 from mailcow/staging
Automatic PR to nightly from 2024-08-15T07:42:17Z
2024-08-15 09:53:17 +02:00
DerLinkman
1fc964d72e compose: bump dovecot image to newest nightly (20240814) 2024-08-14 10:12:10 +02:00
DerLinkman
5571d80ae6 Merge branch 'staging' into nightly 2024-08-14 10:10:34 +02:00
DerLinkman
3396e1b427 Merge branch 'staging' into nightly 2024-08-13 16:03:30 +02:00
FreddleSpl0it
58a5a4578c [Web] use cn as fallback ldap login 2024-08-13 12:14:05 +02:00
FreddleSpl0it
519d95cb8b [Web] extend ldap auth logging 2024-08-13 09:30:54 +02:00
FreddleSpl0it
092d3cd80b [Web] extend ldap auth logging 2024-08-12 10:14:53 +02:00
FreddleSpl0it
c034f4bd27 [Web] fix wrong log type on PDOException 2024-08-12 10:10:33 +02:00
FreddleSpl0it
73d60eb085 Merge pull request #5997 from mailcow/fix/nightly-tfa
[Web] fix incorrect user role assignment after TFA verification
2024-08-08 17:11:57 +02:00
FreddleSpl0it
b39b7c24a5 [Web] fix incorrect user role assignment after TFA verification 2024-08-08 17:04:56 +02:00
DerLinkman
d0ecb72e08 Merge branch 'staging' into nightly 2024-08-08 08:44:12 +02:00
DerLinkman
772d5c51fd Merge branch 'staging' into nightly 2024-08-07 14:21:23 +02:00
FreddleSpl0it
9b86ff764e Merge pull request #5975 from mailcow/staging
Automatic PR to nightly from 2024-08-01T03:13:55Z
2024-08-01 11:07:55 +02:00
FreddleSpl0it
57bc03b878 Merge remote-tracking branch 'origin/staging' into nightly 2024-07-31 10:35:44 +02:00
FreddleSpl0it
f7ae2a6162 [Nightly][SOGo] Update 5.10.0 2024-06-06 12:12:30 +02:00
FreddleSpl0it
3080a70287 [Nightly][SOGo] Update to 5.10.0 2024-06-03 09:20:54 +02:00
Patrick Schult
caee770e36 Merge pull request #5850 from mailcow/fix/nightly-autodiscover
[Web] Fix autodiscover fails with external IdP
2024-04-19 20:38:56 +02:00
FreddleSpl0it
95d6eeb37a [Web] revert include prerequisites in autodiscover - include autoload 2024-04-19 20:32:44 +02:00
FreddleSpl0it
eadf70d809 [Web] include prerequisites in autodiscover 2024-04-18 14:23:35 +02:00
Patrick Schult
6e9c3e2687 Merge pull request #5821 from mailcow/fix/web-nightly
Fix/web nightly
2024-04-04 09:33:22 +02:00
FreddleSpl0it
cf2fda66e2 [Web] escape html of alert messages 2024-04-04 09:31:20 +02:00
FreddleSpl0it
cd24057f1a [Web] use SEC_FETCH_DEST header to block api requests 2024-04-04 09:31:03 +02:00
FreddleSpl0it
c68a436a22 [Web] fix invalid rspamd map check 2024-04-04 09:30:30 +02:00
FreddleSpl0it
0807c122f6 [Web] set default LDAP options on get 2024-03-08 15:11:49 +01:00
FreddleSpl0it
e0bda6ca6a [Web] prevent multiple dual-logins 2024-03-08 14:05:37 +01:00
FreddleSpl0it
2ba64e93f9 [Web] allow SSL / TLS connections for LDAP 2024-03-08 13:50:20 +01:00
FreddleSpl0it
e1c3ad9fe8 [Web] return idp instance after init 2024-03-08 13:15:35 +01:00
FreddleSpl0it
ffbf1758e0 [Web] fix identity_provider ArgumentCountError 2024-02-26 13:40:34 +01:00
Patrick Schult
a3af2d8392 Merge pull request #5764 from mailcow/fix/nightly-issues
Fix nightly issues with new ldap provider
2024-02-26 13:33:44 +01:00
FreddleSpl0it
39a4b115ed [SOGo] fix plist_ldap.sh example 2024-02-26 13:14:08 +01:00
FreddleSpl0it
881c2d6e02 [SOGo] remove custom logout from toolbar 2024-02-26 13:13:50 +01:00
FreddleSpl0it
d237157c0b init identity_provider only after all conditions are met 2024-02-26 13:12:44 +01:00
FreddleSpl0it
6928eb632e [Dovecot] move sogo sso to mailcowauth.php 2024-02-26 13:10:08 +01:00
FreddleSpl0it
010d898786 [Web] apply LDAP filter 2024-02-23 10:01:56 +01:00
FreddleSpl0it
766c270b1f [SOGo] fix custom html elements and wrong redirection 2024-02-23 09:12:17 +01:00
FreddleSpl0it
916d0fd46a [Web] catch all exceptions on ldap connect 2024-02-23 08:12:06 +01:00
Patrick Schult
9561526f33 Merge pull request #5754 from mailcow/feat/idp-ldap2
[Nightly] add LDAP as direct IdP
2024-02-20 15:33:58 +01:00
FreddleSpl0it
45811bc2dc [Web] fix mailbox datatable search 2024-02-20 15:00:06 +01:00
FreddleSpl0it
132e37bfec [SOGo] use bash script for ldap plist template 2024-02-20 12:42:37 +01:00
FreddleSpl0it
b3e26e14ef [Ofelia] add ldap sync cronjob 2024-02-20 12:01:08 +01:00
FreddleSpl0it
a3bb889def [Web] fix set_tfa for ldap users 2024-02-20 11:41:02 +01:00
FreddleSpl0it
3a1dcb3aaf [Web] fix set_tfa for ldap users 2024-02-20 11:34:01 +01:00
FreddleSpl0it
d22cafacc8 [Web] fix ldap filter if empty 2024-02-20 11:21:25 +01:00
FreddleSpl0it
78e7266368 [Web] add LDAP query filter 2024-02-20 10:46:23 +01:00
FreddleSpl0it
a06c78362a [Web] add ldap idp 2024-02-20 10:31:14 +01:00
FreddleSpl0it
d479d18507 [Web] update directorytree/ldaprecord 2024-02-20 10:30:11 +01:00
q16marvin
19deda31bc Update functions.mailbox.inc.php 2024-02-09 11:23:47 +01:00
q16marvin
4f47534824 Update mailbox.js 2024-02-09 11:23:09 +01:00
DerLinkman
40146839ef docker-compose: bumped versions 2024-02-08 12:42:36 +01:00
DerLinkman
448f85abe8 Remove unused files 2024-02-08 12:42:36 +01:00
FreddleSpl0it
9a4b79a629 [Web] fix idp mailbox login 2024-02-08 12:42:35 +01:00
DerLinkman
058b79ed5c dovecot: corrected dockerfile inside nightly 2024-02-08 12:42:35 +01:00
FreddleSpl0it
216398355b fix functions.inc.php, update sogo and dovecot nightly image 2024-02-08 12:42:35 +01:00
Geert Hauwaerts
1cda16523d Fixed SQL query for retrieving SSO users. 2024-02-08 12:42:34 +01:00
FreddleSpl0it
2f1e1438e9 [Web] add log messages to verify-sso function 2024-02-08 12:42:34 +01:00
FreddleSpl0it
9039ab4e12 [Web] add missing file to autodiscover.php 2024-02-08 12:42:34 +01:00
DerLinkman
db47696ba7 Updated Dovecot Image to use OpenSSL 3.0 fix 2024-02-08 12:42:33 +01:00
FreddleSpl0it
eb9e3b8391 [Web] add configurable client scopes for generic-oidc 2024-02-08 12:42:33 +01:00
FreddleSpl0it
ba32f1131e [Web] dont rtrim generic-oidc urls 2024-02-08 12:42:32 +01:00
DerLinkman
27ef04baa0 Update Dovecot to reuse lz4 compression 2024-02-08 12:42:32 +01:00
DerLinkman
b3a94e79e3 Use dedicated nightly images on nightly branch + updates of images itself 2024-02-08 12:42:32 +01:00
FreddleSpl0it
3a4c0c84a3 fix keycloak mailpassword flow 2024-02-08 12:42:31 +01:00
Mirko Ceroni
73a044ec14 Update sogo-auth.php 2024-02-08 12:42:31 +01:00
Mirko Ceroni
389eb99c10 Update sogo-auth.php
initialize db before validating credentials
2024-02-08 12:42:31 +01:00
FreddleSpl0it
597d98e1d7 Fixes #5408 2024-02-08 12:42:30 +01:00
FreddleSpl0it
981307a1c6 [Web] add missing break 2024-02-08 12:42:30 +01:00
FreddleSpl0it
2d51881ae3 [Web] fix user protocol badges 2024-02-08 12:42:30 +01:00
FreddleSpl0it
788f03e993 [Dovecot] remove passwd-verify.lua generation 2024-02-08 12:42:29 +01:00
DerLinkman
81024b8c12 Clamd using Alpine Packages instead self compile 2024-02-08 12:42:29 +01:00
DerLinkman
89c5064213 Rebased Dovecot on Alpine + fixed logging 2024-02-08 12:42:29 +01:00
DerLinkman
4b18a99e55 Small fixes for CLAMD Health Check 2024-02-08 12:42:28 +01:00
DerLinkman
92d2cca7c3 Added missing Labels to Dockerfiles 2024-02-08 12:42:28 +01:00
DerLinkman
466e36ecbb Optimized Build Process for Dovecot 2024-02-08 12:42:28 +01:00
DerLinkman
7ec7bd21cb Changed Dovecot Base to Bullseye again (Self compile) 2024-02-08 12:42:27 +01:00
DerLinkman
38db7226a8 Optimized CLAMAV Builds to match exact version instead of Repo 2024-02-08 12:42:27 +01:00
DerLinkman
60f9412bb8 Switched to Alpine Edge (for IMAPSYNC Deps) 2024-02-08 12:42:26 +01:00
DerLinkman
737c0502ac Rebased Dovecot on Alpine 3.17 instead Bullseye (ARM64 Support) 2024-02-08 12:42:26 +01:00
DerLinkman
6da41b1027 Removed Test self compiled SOGo Dockerfile 2024-02-08 12:42:26 +01:00
DerLinkman
2bd46ae0fd Changed Maintainer to tinc within Dockerfiles 2024-02-08 12:42:25 +01:00
DerLinkman
c15ab10b1b Updated Clamd Building to be x86 and ARM Compatible 2024-02-08 12:42:25 +01:00
DerLinkman
ddaeebc822 [Rspamd] Update to 3.6 (Ratelimit fix) 2024-02-08 12:42:25 +01:00
FreddleSpl0it
e64293c82f [Web] minor fixes 2024-02-08 12:42:24 +01:00
FreddleSpl0it
ccc17e4a20 [SOGo] deny direct login on external users 2024-02-08 12:42:24 +01:00
FreddleSpl0it
a53ef2ed7a [SOGo] remove sogo_view and triggers 2024-02-08 12:42:24 +01:00
FreddleSpl0it
185c36cdfe [Web] catch update_sogo exceptions 2024-02-08 12:42:23 +01:00
FreddleSpl0it
9beb47c067 [Web] fix malformed_username check 2024-02-08 12:42:23 +01:00
FreddleSpl0it
3d486678ae [Web] remove keycloak sync disabled warning 2024-02-08 12:42:23 +01:00
FreddleSpl0it
04e2494af8 deny changes on identity provider if it's in use 2024-02-08 12:42:22 +01:00
FreddleSpl0it
7b47159478 rework auth - move dovecot sasl log to php 2024-02-08 12:42:22 +01:00
FreddleSpl0it
17b6ac3313 [Web] allow mailbox authsource to be switchable 2024-02-08 12:42:22 +01:00
FreddleSpl0it
43600cd127 [Web] fix identity-provider settings layout 2024-02-08 12:42:21 +01:00
FreddleSpl0it
6d3a32c1d9 [Web] trim CRON_LOG 2024-02-08 12:42:21 +01:00
FreddleSpl0it
21fa3c8458 [Web] remove unnecessary if block 2024-02-08 12:42:21 +01:00
FreddleSpl0it
6df663825a [Web] add curl timeouts to oidc requests 2024-02-08 12:42:20 +01:00
FreddleSpl0it
8ce4600562 [Web] update lang files 2024-02-08 12:42:20 +01:00
FreddleSpl0it
3179c0e712 [Dovecot] mailcowauth minor fixes 2024-02-08 12:42:19 +01:00
FreddleSpl0it
37254738e2 [Web] improve identity-provider template 2024-02-08 12:42:19 +01:00
FreddleSpl0it
a4cce147aa [Web] improve attribute sync performance & make authsource editable 2024-02-08 12:42:19 +01:00
FreddleSpl0it
b176585a9c [Web] add crontasks logs 2024-02-08 12:42:18 +01:00
FreddleSpl0it
f8647bb15e [Web] add keycloak sync crontask 2024-02-08 12:42:18 +01:00
FreddleSpl0it
85368971fd [Web] handle fatal errors on getAccessToken 2024-02-08 12:42:18 +01:00
FreddleSpl0it
e4284b8e19 [Web] fix attribute mapping list 2024-02-08 12:42:17 +01:00
FreddleSpl0it
5545d8a56c [Web] hide auth settings for external users 2024-02-08 12:42:17 +01:00
FreddleSpl0it
4dc3222f03 [Web] fix bug on mailbox login 2024-02-08 12:42:17 +01:00
FreddleSpl0it
7cf6a9d808 [Web] update lang.en-gb.json 2024-02-08 12:42:16 +01:00
FreddleSpl0it
95a15d18a7 [Web] update guzzlehttp/psr7 2024-02-08 12:42:16 +01:00
FreddleSpl0it
cee771a3fb [Web] update stevenmaguire/oauth2-keycloak and firebase/php-jwt 2024-02-08 12:42:16 +01:00
FreddleSpl0it
a805d3b2e3 [Web] add league/oauth2-client 2024-02-08 12:42:15 +01:00
FreddleSpl0it
b251c58b23 update gitignore 2024-02-08 12:42:15 +01:00
FreddleSpl0it
cc7516685f [Web] functions.auth.inc.php corrections 2024-02-08 12:42:15 +01:00
FreddleSpl0it
ad19ff5429 [Web] remove ropc flow 2024-02-08 12:42:14 +01:00
FreddleSpl0it
e784c98a5a [Web] add "add mailbox_from_template" function 2024-02-08 12:42:14 +01:00
FreddleSpl0it
28679eb916 [Web] add generic-oidc provider 2024-02-08 12:42:13 +01:00
FreddleSpl0it
c8fec24da3 [Web] add "edit mailbox_from_template" function 2024-02-08 12:42:13 +01:00
FreddleSpl0it
0c1e2ed6f2 [Web] revert configurable authsource 2024-02-08 12:42:13 +01:00
FreddleSpl0it
90476ae057 [Web] rename var for tab-config-identity-provider.twig 2024-02-08 12:42:12 +01:00
FreddleSpl0it
3b6a1d50bd [Web] add generic-oidc provider 2024-02-08 12:42:12 +01:00
FreddleSpl0it
1ab1505c88 [Web] remove sso login alertbox 2024-02-08 12:42:12 +01:00
FreddleSpl0it
593e581cf3 [Web] move iam sso functions 2024-02-08 12:42:11 +01:00
FreddleSpl0it
e202d00beb [Dovecot] group auth files 2024-02-08 12:42:11 +01:00
FreddleSpl0it
dca5f1baab [Web] move /process/login to internal endpoint 2024-02-08 12:42:11 +01:00
FreddleSpl0it
f0689e08d9 [Web] iam - add switch for direct login flow 2024-02-08 12:42:10 +01:00
FreddleSpl0it
5bbb12b53e [Dovecot] fix wrong lua syntax 2024-02-08 12:42:10 +01:00
FreddleSpl0it
c6a56e0748 [Web] add IAM delete button & fix add mbox modal 2024-02-08 12:42:10 +01:00
FreddleSpl0it
3c62a7fd9f [Web] IAM - add delete option & fix test connection 2024-02-08 12:42:09 +01:00
FreddleSpl0it
61ab17d8a1 [Web] fix iam attribute mapping ui 2024-02-08 12:42:09 +01:00
FreddleSpl0it
d4ae616460 replace ropc flow with keycloak rest api flow 2024-02-08 12:42:09 +01:00
FreddleSpl0it
b7a18255fe [Web] rename role mapping to attribute mapping 2024-02-08 12:42:08 +01:00
FreddleSpl0it
1c73a16ca0 new dovecot lua auth - use https 2024-02-08 12:42:08 +01:00
FreddleSpl0it
1aeb36d40e [Web] create ratelimit acl on iam mbox creation 2 2024-02-08 12:42:07 +01:00
FreddleSpl0it
f251c9826e [Web] create ratelimit acl on iam mbox creation 2024-02-08 12:42:07 +01:00
FreddleSpl0it
204063819c [Web] fix broken sogo-sso 2024-02-08 12:42:07 +01:00
FreddleSpl0it
13f8882616 [Web] fix app_pass ignore_access 2024-02-08 12:42:06 +01:00
FreddleSpl0it
eba1d469c8 [Web] keycloak auth functions 2024-02-08 12:42:06 +01:00
FreddleSpl0it
6e9980bf0f [Web] add manage identity provider 2024-02-08 12:42:06 +01:00
FreddleSpl0it
67c9c5b8ed [Web] remove u2f lib from prerequisites 2024-02-08 12:42:05 +01:00
FreddleSpl0it
cd3660a96d [Web] add oauth2-keycloak lib 2024-02-08 12:42:05 +01:00
FreddleSpl0it
9d8c1a01ac [Web] remove u2f lib 2024-02-08 12:42:05 +01:00
FreddleSpl0it
0a77cad2dd [Web] limit identity_provider function better 2024-02-08 12:42:04 +01:00
FreddleSpl0it
f6869da3a0 [Web] manage keycloak identity provider 2024-02-08 12:42:04 +01:00
FreddleSpl0it
6adad79e5c [Web] organize auth functions+api auth w/ dovecot 2024-02-08 12:42:04 +01:00
FreddleSpl0it
50d4d59626 [Web] update de-de + en-gb lang 2024-02-08 12:42:03 +01:00
FreddleSpl0it
56a9f1a411 [Web] organize user landing 2024-02-08 12:42:03 +01:00
FreddleSpl0it
84ff6ff2c5 [Web] fix user login history 2024-02-08 12:42:03 +01:00
FreddleSpl0it
6e35574c72 [Web] add app hide option 2024-02-08 12:42:02 +01:00
FreddleSpl0it
415c1d0574 [Web] add separate link for logged in users 2024-02-08 12:42:02 +01:00
FreddleSpl0it
cfce7086a5 [Web] few style changes 2024-02-08 12:42:01 +01:00
FreddleSpl0it
c90d637a48 [Web] redirect to sogo after failed sogo-auth 2024-02-08 12:42:01 +01:00
1185 changed files with 65176 additions and 17609 deletions

View File

@@ -1,7 +1,7 @@
blank_issues_enabled: false
contact_links:
- name: ❓ Community-driven support (Free)
url: https://docs.mailcow.email/#get-support
url: https://docs.mailcow.email/#community-support-and-chat
about: Please use the community forum for questions or assistance
- name: 🔥 Premium Support (Paid)
url: https://www.servercow.de/mailcow?lang=en#support

View File

@@ -15,12 +15,6 @@
"data\/web\/inc\/lib\/vendor\/**"
],
"regexManagers": [
{
"fileMatch": ["^helper-scripts\/nextcloud.sh$"],
"matchStrings": [
"#\\srenovate:\\sdatasource=(?<datasource>.*?) depName=(?<depName>.*?)( versioning=(?<versioning>.*?))?( extractVersion=(?<extractVersion>.*?))?\\s.*?_VERSION=(?<currentValue>.*)"
]
},
{
"fileMatch": ["(^|/)Dockerfile[^/]*$"],
"matchStrings": [

View File

@@ -10,9 +10,9 @@ jobs:
if: github.event.pull_request.base.ref != 'staging' #check if the target branch is not staging
steps:
- name: Send message
uses: thollander/actions-comment-pull-request@v2.5.0
uses: thollander/actions-comment-pull-request@v3.0.1
with:
GITHUB_TOKEN: ${{ secrets.CHECKIFPRISSTAGING_ACTION_PAT }}
github-token: ${{ secrets.CHECKIFPRISSTAGING_ACTION_PAT }}
message: |
Thanks for contributing!

View File

@@ -14,7 +14,7 @@ jobs:
pull-requests: write
steps:
- name: Mark/Close Stale Issues and Pull Requests 🗑️
uses: actions/stale@v9.0.0
uses: actions/stale@v9.1.0
with:
repo-token: ${{ secrets.STALE_ACTION_PAT }}
days-before-stale: 60

View File

@@ -23,7 +23,6 @@ jobs:
- "postfix-mailcow"
- "rspamd-mailcow"
- "sogo-mailcow"
- "solr-mailcow"
- "unbound-mailcow"
- "watchdog-mailcow"
runs-on: ubuntu-latest

View File

@@ -12,7 +12,7 @@ jobs:
with:
fetch-depth: 0
- name: Run the Action
uses: devops-infra/action-pull-request@v0.5.5
uses: devops-infra/action-pull-request@v0.6.0
with:
github_token: ${{ secrets.PRTONIGHTLY_ACTION_PAT }}
title: Automatic PR to nightly from ${{ github.event.repository.updated_at}}

View File

@@ -9,6 +9,8 @@ on:
jobs:
docker_image_build:
runs-on: ubuntu-latest
permissions:
packages: write
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -19,11 +21,13 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
- name: Login to GHCR
if: github.event_name != 'pull_request'
uses: docker/login-action@v3
with:
username: ${{ secrets.BACKUPIMAGEBUILD_ACTION_DOCKERHUB_USERNAME }}
password: ${{ secrets.BACKUPIMAGEBUILD_ACTION_DOCKERHUB_TOKEN }}
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push
uses: docker/build-push-action@v6
@@ -32,4 +36,4 @@ jobs:
platforms: linux/amd64,linux/arm64
file: data/Dockerfiles/backup/Dockerfile
push: true
tags: mailcow/backup:latest
tags: ghcr.io/mailcow/backup:latest

View File

@@ -22,7 +22,7 @@ jobs:
bash helper-scripts/update_postscreen_whitelist.sh
- name: Create Pull Request
uses: peter-evans/create-pull-request@v6
uses: peter-evans/create-pull-request@v7
with:
token: ${{ secrets.mailcow_action_Update_postscreen_access_cidr_pat }}
commit-message: update postscreen_access.cidr

.gitignore (vendored), 48 changed lines
View File

@@ -1,6 +1,3 @@
!data/conf/nginx/dynmaps.conf
!data/conf/nginx/meta_exporter.conf
!data/conf/nginx/site.conf
!/**/.gitkeep
*.iml
.idea
@@ -8,47 +5,33 @@
data/assets/ssl-example/*
data/assets/ssl/*
data/conf/borgmatic/
data/conf/clamav/whitelist.ign2
data/conf/dovecot/acl_anyone
data/conf/dovecot/dovecot-master.passwd
data/conf/dovecot/dovecot-master.userdb
data/conf/clamav/rendered_configs
data/conf/dovecot/extra.conf
data/conf/dovecot/mail_replica.conf
data/conf/dovecot/global_sieve_*
data/conf/dovecot/last_login
data/conf/dovecot/lua
data/conf/dovecot/mail_plugins*
data/conf/dovecot/shared_namespace.conf
data/conf/dovecot/sni.conf
data/conf/dovecot/sogo-sso.conf
data/conf/dovecot/sogo_trusted_ip.conf
data/conf/dovecot/sql
data/conf/nextcloud-*.bak
data/conf/nginx/*.active
data/conf/nginx/*.bak
data/conf/nginx/*.conf
data/conf/nginx/*.custom
data/conf/dovecot/rendered_configs
data/conf/nginx/rendered_configs
data/conf/phpfpm/sogo-sso/sogo-sso.pass
data/conf/phpfpm/rendered_configs
data/conf/portainer/
data/conf/postfix/allow_mailcow_local.regexp
data/conf/postfix/custom_postscreen_whitelist.cidr
data/conf/postfix/custom_transport.pcre
data/conf/postfix/extra.cf
data/conf/postfix/sni.map
data/conf/postfix/sni.map.db
data/conf/postfix/sql
data/conf/postfix/dns_blocklists.cf
data/conf/postfix/dnsbl_reply.map
data/conf/postfix/rendered_configs
data/conf/rspamd/custom/*
data/conf/rspamd/local.d/*
data/conf/rspamd/override.d/*
data/conf/rspamd/rendered_configs
data/conf/sogo/custom-theme.js
data/conf/sogo/plist_ldap
data/conf/sogo/sieve.creds
data/conf/sogo/sogo-full.svg
data/conf/sogo/cron.creds
data/conf/sogo/custom-fulllogo.svg
data/conf/sogo/custom-shortlogo.svg
data/conf/sogo/custom-fulllogo.png
data/conf/sogo/rendered_configs
data/conf/mysql/rendered_configs
data/gitea/
data/gogs/
data/hooks/clamd/*
data/hooks/dovecot/*
data/hooks/mariadb/*
data/hooks/nginx/*
data/hooks/phpfpm/*
data/hooks/postfix/*
data/hooks/rspamd/*
@@ -69,3 +52,4 @@ rebuild-images.sh
refresh_images.sh
update_diffs/
create_cold_standby.sh
!data/conf/nginx/mailcow_auth.conf

View File

@@ -13,6 +13,22 @@ You can also [get a SAL](https://www.servercow.de/mailcow?lang=en#sal) which is
Or just spread the word: moo.
## Many thanks to our GitHub Sponsors ❤️
A big thank you to everyone supporting us on GitHub Sponsors—your contributions mean the world to us! Special thanks to the following amazing supporters:
### 100$/Month Sponsors
<a href="https://www.colba.net/" target=_blank><img
src="https://avatars.githubusercontent.com/u/204464723" height="58"
/></a>
<a href="https://www.maehdros.com/" target=_blank><img
src="https://avatars.githubusercontent.com/u/173894712" height="58"
/></a>
### 50$/Month Sponsors
<a href="https://github.com/vnukhr" target=_blank><img
src="https://avatars.githubusercontent.com/u/7805987?s=52&v=4" height="58"
/></a>
## Info, documentation and support
Please see [the official documentation](https://docs.mailcow.email/) for installation and support instructions. 🐄

View File

@@ -1,8 +1,7 @@
FROM alpine:3.20
FROM alpine:3.21
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
RUN apk upgrade --no-cache \
&& apk add --update --no-cache \
bash \
@@ -15,7 +14,7 @@ RUN apk upgrade --no-cache \
tini \
tzdata \
python3 \
acme-tiny --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community/
acme-tiny
COPY acme.sh /srv/acme.sh
COPY functions.sh /srv/functions.sh

View File

@@ -4,9 +4,9 @@ exec 5>&1
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
export REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
export REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT} -a ${REDISPASS} --no-auth-warning"
else
export REDIS_CMDLINE="redis-cli -h redis -p 6379"
export REDIS_CMDLINE="redis-cli -h redis -p 6379 -a ${REDISPASS} --no-auth-warning"
fi
until [[ $(${REDIS_CMDLINE} PING) == "PONG" ]]; do
@@ -138,7 +138,7 @@ log_f "Resolver OK"
log_f "Waiting for domain table..."
while [[ -z ${DOMAIN_TABLE} ]]; do
curl --silent http://nginx.${COMPOSE_PROJECT_NAME}_mailcow-network/ >/dev/null 2>&1
DOMAIN_TABLE=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SHOW TABLES LIKE 'domain'" -Bs)
DOMAIN_TABLE=$(mariadb --skip-ssl --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SHOW TABLES LIKE 'domain'" -Bs)
[[ -z ${DOMAIN_TABLE} ]] && sleep 10
done
log_f "OK" no_date
@@ -231,7 +231,7 @@ while true; do
#########################################
# IP and webroot challenge verification #
SQL_DOMAINS=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT domain FROM domain WHERE backupmx=0 and active=1" -Bs)
SQL_DOMAINS=$(mariadb --skip-ssl --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT domain FROM domain WHERE backupmx=0 and active=1" -Bs)
if [[ ! $? -eq 0 ]]; then
log_f "Failed to read SQL domains, retrying in 1 minute..."
sleep 1m

View File

@@ -124,7 +124,7 @@ case "$SUCCESS" in
;;
*) # non-zero is non-fun
log_f "Failed to obtain certificate ${CERT} for domains '${CERT_DOMAINS[*]}'"
redis-cli -h redis SET ACME_FAIL_TIME "$(date +%s)"
redis-cli -h redis -a ${REDISPASS} --no-auth-warning SET ACME_FAIL_TIME "$(date +%s)"
exit 100${SUCCESS}
;;
esac

View File

@@ -1,3 +1,3 @@
FROM debian:bookworm-slim
RUN apt update && apt install pigz
RUN apt update && apt install pigz -y --no-install-recommends

View File

@@ -0,0 +1,66 @@
import os
import sys
import signal
import ipaddress
def handle_sigterm(signum, frame):
print("Received SIGTERM, exiting gracefully...")
sys.exit(0)
def get_mysql_config(service_name):
db_config = {
"user": os.getenv("DBUSER") or os.getenv("MYSQL_USER"),
"password": os.getenv("DBPASS") or os.getenv("MYSQL_PASSWORD"),
"database": os.getenv("DBNAME") or os.getenv("MYSQL_DATABASE"),
"connection_timeout": 2,
"service_table": "service_settings",
"service_types": [service_name]
}
db_host = os.getenv("DB_HOST", "")
if db_host.startswith("/"):
db_config["host"] = "localhost"
db_config["unix_socket"] = db_host
else:
db_config["host"] = db_host
return db_config
def get_redis_config():
redis_config = {
"read_host": os.getenv("REDIS_HOST"),
"read_port": 6379,
"write_host": os.getenv("REDIS_SLAVEOF_IP") or os.getenv("REDIS_HOST"),
"write_port": int(os.getenv("REDIS_SLAVEOF_PORT") or 6379),
"password": os.getenv("REDISPASS"),
"db": 0
}
return redis_config
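# Illustrative only (hypothetical values): with DB_HOST=/var/run/mysqld/mysqld.sock,
# DBUSER=mailcow, DBNAME=mailcow and REDIS_HOST=redis-mailcow (no REDIS_SLAVEOF_* set),
# get_mysql_config("dovecot") yields host="localhost" plus unix_socket="/var/run/mysqld/mysqld.sock"
# and service_types=["dovecot"], while get_redis_config() yields identical read and write
# targets of redis-mailcow:6379.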
def main():
signal.signal(signal.SIGTERM, handle_sigterm)
container_name = os.getenv("CONTAINER_NAME")
service_name = container_name.replace("-mailcow", "").replace("-", "")
module_name = f"Bootstrap{service_name.capitalize()}"
try:
mod = __import__(f"modules.{module_name}", fromlist=[module_name])
Bootstrap = getattr(mod, module_name)
except (ImportError, AttributeError) as e:
print(f"Failed to load bootstrap module for {container_name}: {module_name}")
print(str(e))
sys.exit(1)
b = Bootstrap(
container=container_name,
service=service_name,
db_config=get_mysql_config(service_name),
redis_config=get_redis_config()
)
b.bootstrap()
if __name__ == "__main__":
main()
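The dynamic import above is driven purely by the CONTAINER_NAME naming convention. A minimal sketch of that mapping, using the container names that appear elsewhere in this changeset (illustrative only, not part of the actual bootstrapper):

def module_for(container_name: str) -> str:
    # strip the "-mailcow" suffix and any remaining dashes, then capitalize:
    # "php-fpm-mailcow" -> "phpfpm" -> "BootstrapPhpfpm"
    # "dovecot-mailcow" -> "dovecot" -> "BootstrapDovecot"
    service = container_name.replace("-mailcow", "").replace("-", "")
    return f"Bootstrap{service.capitalize()}"

assert module_for("php-fpm-mailcow") == "BootstrapPhpfpm"
assert module_for("clamd-mailcow") == "BootstrapClamd"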

View File

@@ -0,0 +1,827 @@
import os
import pwd
import grp
import shutil
import secrets
import string
import subprocess
import time
import socket
import re
import redis
import hashlib
import json
import psutil
import signal
from urllib.parse import quote
from pathlib import Path
import dns.resolver
import mysql.connector
class BootstrapBase:
def __init__(self, container, service, db_config, redis_config):
self.container = container
self.service = service
self.db_config = db_config
self.redis_config = redis_config
self.env = None
self.env_vars = None
self.mysql_conn = None
self.redis_connr = None
self.redis_connw = None
def render_config(self, config_dir):
"""
Renders multiple Jinja2 templates from a config.json file in a given directory.
Args:
config_dir (str or Path): Path to the directory containing config.json
Behavior:
- Renders each template defined in config.json
- Writes the result to the specified output path
- Also copies the rendered file to: <config_dir>/rendered_configs/<output filename>
"""
config_dir = Path(config_dir)
config_path = config_dir / "config.json"
if not config_path.exists():
print(f"config.json not found in: {config_dir}")
return
with config_path.open("r") as f:
entries = json.load(f)
for entry in entries:
template_name = entry["template"]
output_path = Path(entry["output"])
clean_blank_lines = entry.get("clean_blank_lines", False)
if_not_exists = entry.get("if_not_exists", False)
if if_not_exists and output_path.exists():
print(f"Skipping {output_path} (already exists)")
continue
output_path.parent.mkdir(parents=True, exist_ok=True)
try:
template = self.env.get_template(template_name)
except Exception as e:
print(f"Template not found: {template_name} ({e})")
continue
rendered = template.render(self.env_vars)
if clean_blank_lines:
rendered = "\n".join(line for line in rendered.splitlines() if line.strip())
rendered = rendered.replace('\r\n', '\n').replace('\r', '\n')
with output_path.open("w") as f:
f.write(rendered)
rendered_copy_path = config_dir / "rendered_configs" / output_path.name
rendered_copy_path.parent.mkdir(parents=True, exist_ok=True)
self.copy_file(output_path, rendered_copy_path)
print(f"Rendered {template_name} -> {output_path}")
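# A minimal, hypothetical config.json accepted by render_config(), using only the keys
# read above; template and output names are illustrative, not taken from a real service:
# [
#   {"template": "example.conf.j2", "output": "/etc/example/example.conf", "clean_blank_lines": true},
#   {"template": "example-sso.conf.j2", "output": "/etc/example/sso.conf", "if_not_exists": true}
# ]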
def prepare_template_vars(self, overwrite_path, extra_vars = None):
"""
Loads and merges environment variables for Jinja2 templates from multiple sources, and registers custom template filters.
This method combines variables from:
1. System environment variables
2. The MySQL `service_settings` table (filtered by service type if defined)
3. An optional `extra_vars` dictionary
4. A JSON overwrite file (if it exists at the given path)
Also registers custom Jinja2 filters.
Args:
overwrite_path (str or Path): Path to a JSON file containing key-value overrides.
extra_vars (dict, optional): Additional variables to merge into the environment.
Returns:
dict: A dictionary containing all resolved template variables.
Raises:
Prints errors if database fetch or JSON parsing fails, but does not raise exceptions.
"""
# 1. setup filters
self.env.filters['sha1'] = self.sha1_filter
self.env.filters['urlencode'] = self.urlencode_filter
self.env.filters['escape_quotes'] = self.escape_quotes_filter
# 2. Load env vars
env_vars = dict(os.environ)
# 3. Load from MySQL
try:
cursor = self.mysql_conn.cursor()
if self.db_config['service_types']:
placeholders = ','.join(['%s'] * len(self.db_config['service_types']))
sql = f"SELECT `key`, `value` FROM {self.db_config['service_table']} WHERE `type` IN ({placeholders})"
cursor.execute(sql, self.db_config['service_types'])
else:
cursor.execute(f"SELECT `key`, `value` FROM {self.db_config['service_table']}")
for key, value in cursor.fetchall():
env_vars[key] = value
cursor.close()
except Exception as e:
print(f"Failed to fetch DB service settings: {e}")
# 4. Load extra vars
if extra_vars:
env_vars.update(extra_vars)
# 5. Load overwrites
overwrite_path = Path(overwrite_path)
if overwrite_path.exists():
try:
with overwrite_path.open("r") as f:
overwrite_data = json.load(f)
env_vars.update(overwrite_data)
except Exception as e:
print(f"Failed to parse overwrites: {e}")
return env_vars
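# Merge precedence (hypothetical key FOO_LIMIT, for illustration): if the process environment
# sets FOO_LIMIT=9999, the service_settings table stores FOO_LIMIT=5000 and overwrites.json
# contains {"FOO_LIMIT": 100}, templates see FOO_LIMIT=100, because each later source
# (environment, then DB, then extra_vars, then overwrites) updates the same dict.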
def set_timezone(self):
"""
Sets the system timezone based on the TZ environment variable.
If the TZ variable is set, writes its value to /etc/timezone.
"""
timezone = os.getenv("TZ")
if timezone:
with open("/etc/timezone", "w") as f:
f.write(timezone + "\n")
def set_syslog_redis(self):
"""
Reconfigures syslog-ng to use a Redis slave configuration.
If the REDIS_SLAVEOF_IP environment variable is set, replaces the syslog-ng config
with the Redis slave-specific config.
"""
redis_slave_ip = os.getenv("REDIS_SLAVEOF_IP")
if redis_slave_ip:
shutil.copy("/etc/syslog-ng/syslog-ng-redis_slave.conf", "/etc/syslog-ng/syslog-ng.conf")
def rsync_file(self, src, dst, recursive=False, owner=None, mode=None):
"""
Copies files or directories using rsync, with optional ownership and permissions.
Args:
src (str or Path): Source file or directory.
dst (str or Path): Destination directory.
recursive (bool): If True, copies contents recursively.
owner (tuple): Tuple of (user, group) to set ownership.
mode (int): File mode (e.g., 0o644) to set permissions after sync.
"""
src_path = Path(src)
dst_path = Path(dst)
dst_path.mkdir(parents=True, exist_ok=True)
rsync_cmd = ["rsync", "-a"]
if recursive:
rsync_cmd.append(str(src_path) + "/")
else:
rsync_cmd.append(str(src_path))
rsync_cmd.append(str(dst_path))
try:
subprocess.run(rsync_cmd, check=True)
except Exception as e:
print(f"Rsync failed: {e}")
if owner:
self.set_owner(dst_path, *owner, recursive=True)
if mode:
self.set_permissions(dst_path, mode)
def set_permissions(self, path, mode):
"""
Sets file or directory permissions.
Args:
path (str or Path): Path to the file or directory.
mode (int): File mode to apply, e.g., 0o644.
Raises:
FileNotFoundError: If the path does not exist.
"""
file_path = Path(path)
if not file_path.exists():
raise FileNotFoundError(f"Cannot chmod: {file_path} does not exist")
os.chmod(file_path, mode)
def set_owner(self, path, user, group=None, recursive=False):
"""
Changes ownership of a file or directory.
Args:
path (str or Path): Path to the file or directory.
user (str or int): Username or UID for new owner.
group (str or int, optional): Group name or GID; defaults to user's group if not provided.
recursive (bool): If True and path is a directory, ownership is applied recursively.
Raises:
FileNotFoundError: If the path does not exist.
"""
# Resolve UID
uid = int(user) if str(user).isdigit() else pwd.getpwnam(user).pw_uid
# Resolve GID
if group is not None:
gid = int(group) if str(group).isdigit() else grp.getgrnam(group).gr_gid
else:
gid = uid if isinstance(user, int) or str(user).isdigit() else grp.getgrnam(user).gr_gid
p = Path(path)
if not p.exists():
raise FileNotFoundError(f"{path} does not exist")
if recursive and p.is_dir():
for sub_path in p.rglob("*"):
os.chown(sub_path, uid, gid)
os.chown(p, uid, gid)
def fix_permissions(self, path, user=None, group=None, mode=None, recursive=False):
"""
Sets owner and/or permissions on a file or directory.
Args:
path (str or Path): Target path.
user (str|int, optional): Username or UID.
group (str|int, optional): Group name or GID.
mode (int, optional): File mode (e.g. 0o644).
recursive (bool): Apply recursively if path is a directory.
"""
if user or group:
self.set_owner(path, user, group, recursive)
if mode:
self.set_permissions(path, mode)
def move_file(self, src, dst, overwrite=True):
"""
Moves a file from src to dst, optionally overwriting existing files.
Args:
src (str or Path): Source file path.
dst (str or Path): Destination path.
overwrite (bool): If False, raises error if dst exists.
Raises:
FileNotFoundError: If the source file does not exist.
FileExistsError: If the destination file exists and overwrite is False.
"""
src_path = Path(src)
dst_path = Path(dst)
if not src_path.exists():
raise FileNotFoundError(f"Source file does not exist: {src}")
dst_path.parent.mkdir(parents=True, exist_ok=True)
if dst_path.exists() and not overwrite:
raise FileExistsError(f"Destination already exists: {dst} (set overwrite=True to overwrite)")
shutil.move(str(src_path), str(dst_path))
def copy_file(self, src, dst, overwrite=True):
"""
Copies a file from src to dst using shutil.
Args:
src (str or Path): Source file path.
dst (str or Path): Destination file path.
overwrite (bool): Whether to overwrite the destination if it exists.
Raises:
FileNotFoundError: If the source file doesn't exist.
FileExistsError: If the destination exists and overwrite is False.
IOError: If the copy operation fails.
"""
src_path = Path(src)
dst_path = Path(dst)
if not src_path.is_file():
raise FileNotFoundError(f"Source file not found: {src_path}")
if dst_path.exists() and not overwrite:
raise FileExistsError(f"Destination exists: {dst_path}")
dst_path.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(src_path, dst_path)
def remove(self, path, recursive=False, wipe_contents=False, exclude=None):
"""
Removes a file or directory with optional exclusion logic.
Args:
path (str or Path): The file or directory path to remove.
recursive (bool): If True, directories will be removed recursively.
wipe_contents (bool): If True and path is a directory, only its contents are removed, not the dir itself.
exclude (list[str], optional): List of filenames to exclude from deletion.
Raises:
FileNotFoundError: If the path does not exist.
ValueError: If a directory is passed without recursive or wipe_contents.
"""
path = Path(path)
exclude = set(exclude or [])
if not path.exists():
raise FileNotFoundError(f"Cannot remove: {path} does not exist")
if wipe_contents and path.is_dir():
for child in path.iterdir():
if child.name in exclude:
continue
if child.is_dir():
shutil.rmtree(child)
else:
child.unlink()
elif path.is_file():
if path.name not in exclude:
path.unlink()
elif path.is_dir():
if recursive:
shutil.rmtree(path)
else:
raise ValueError(f"{path} is a directory. Use recursive=True or wipe_contents=True to remove it.")
def create_dir(self, path):
"""
Creates a directory if it does not exist.
If the directory is missing, it will be created along with any necessary parent directories.
Args:
path (str or Path): The directory path to create.
"""
dir_path = Path(path)
if not dir_path.exists():
print(f"Creating directory: {dir_path}")
dir_path.mkdir(parents=True, exist_ok=True)
def patch_exists(self, target_file, patch_file, reverse=False):
"""
Checks whether a patch can be applied (or reversed) to a target file.
Args:
target_file (str): File to test the patch against.
patch_file (str): Patch file to apply.
reverse (bool): If True, checks whether the patch can be reversed.
Returns:
bool: True if patch is applicable, False otherwise.
"""
cmd = ["patch", "-sfN", "--dry-run", target_file, "<", patch_file]
if reverse:
cmd.insert(1, "-R")
try:
result = subprocess.run(
" ".join(cmd),
shell=True,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL
)
return result.returncode == 0
except Exception as e:
print(f"Patch dry-run failed: {e}")
return False
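# Typical pairing with apply_patch() below (file names are illustrative):
#   if self.patch_exists("/etc/example/example.conf", "/patches/example.patch"):
#       self.apply_patch("/etc/example/example.conf", "/patches/example.patch")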
def apply_patch(self, target_file, patch_file, reverse=False):
"""
Applies a patch file to a target file.
Args:
target_file (str): File to be patched.
patch_file (str): Patch file containing the diff.
reverse (bool): If True, applies the patch in reverse (rollback).
Logs:
Success or failure of the patching operation.
"""
cmd = ["patch", target_file, "<", patch_file]
if reverse:
    # "-R" must come after the "patch" executable, not before it
    cmd.insert(1, "-R")
try:
subprocess.run(" ".join(cmd), shell=True, check=True)
print(f"Applied patch {'(reverse)' if reverse else ''} to {target_file}")
except subprocess.CalledProcessError as e:
print(f"Patch failed: {e}")
def isYes(self, value):
"""
Determines whether a given string represents a "yes"-like value.
Args:
value (str): Input string to evaluate.
Returns:
bool: True if value is "yes" or "y" (case-insensitive), otherwise False.
"""
return value.lower() in ["yes", "y"]
def is_port_open(self, host, port):
"""
Checks whether a TCP port is open on a given host.
Args:
host (str): The hostname or IP address to check.
port (int): The TCP port number to test.
Returns:
bool: True if the port is open and accepting connections, False otherwise.
"""
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
sock.settimeout(1)
result = sock.connect_ex((host, port))
return result == 0
def resolve_docker_dns_record(self, hostname, record_type="A"):
"""
Resolves DNS A or AAAA records for a given hostname.
Args:
hostname (str): The domain to query.
record_type (str): "A" for IPv4, "AAAA" for IPv6. Default is "A".
Returns:
list[str]: A list of resolved IP addresses.
Raises:
Exception: If resolution fails or no results are found.
"""
try:
resolver = dns.resolver.Resolver()
resolver.nameservers = ["127.0.0.11"]
answers = resolver.resolve(hostname, record_type)
return [answer.to_text() for answer in answers]
except Exception as e:
raise Exception(f"Failed to resolve {record_type} record for {hostname}: {e}")
def kill_proc(self, process_name):
"""
Sends SIGTERM to all running processes matching the given name.
Args:
process_name (str): Name of the process to terminate.
Returns:
int: Number of processes successfully signaled.
"""
killed = 0
for proc in psutil.process_iter(['name']):
try:
if proc.info['name'] == process_name:
proc.send_signal(signal.SIGTERM)
killed += 1
except (psutil.NoSuchProcess, psutil.AccessDenied):
continue
return killed
def connect_mysql(self, socket=None):
"""
Establishes a connection to the MySQL database using the provided configuration.
Continuously retries the connection until the database is reachable. Stores
the connection in `self.mysql_conn` once successful.
Logs:
Connection status and retry errors to stdout.
Args:
socket (str, optional): Custom UNIX socket path to override the default.
"""
print("Connecting to MySQL...")
config = {
"host": self.db_config['host'],
"user": self.db_config['user'],
"password": self.db_config['password'],
"database": self.db_config['database'],
'connection_timeout': self.db_config['connection_timeout']
}
if self.db_config.get('unix_socket'):
config["unix_socket"] = socket or self.db_config['unix_socket']
while True:
try:
self.mysql_conn = mysql.connector.connect(**config)
if self.mysql_conn.is_connected():
print("MySQL is up and ready!")
break
except mysql.connector.Error as e:
print(f"Waiting for MySQL... ({e})")
time.sleep(2)
def close_mysql(self):
"""
Closes the MySQL connection if it's currently open and connected.
Safe to call even if the connection has already been closed.
"""
if self.mysql_conn and self.mysql_conn.is_connected():
self.mysql_conn.close()
def connect_redis(self, max_retries=10, delay=2):
"""
Connects to both read and write Redis servers and stores the connections.
Read server: tries indefinitely until successful.
Write server: tries up to `max_retries` before giving up.
Sets:
self.redis_connr: Redis client for read
self.redis_connw: Redis client for write
"""
# A dedicated write (master) server exists only when REDIS_SLAVEOF_* points elsewhere
use_rw = self.redis_config['read_host'] != self.redis_config['write_host'] or self.redis_config['read_port'] != self.redis_config['write_port']
if use_rw:
print("Connecting to Redis read server...")
else:
print("Connecting to Redis server...")
while True:
try:
clientr = redis.Redis(
host=self.redis_config['read_host'],
port=self.redis_config['read_port'],
password=self.redis_config['password'],
db=self.redis_config['db'],
decode_responses=True
)
if clientr.ping():
self.redis_connr = clientr
print("Redis read server is up and ready!")
if use_rw:
break
else:
self.redis_connw = clientr
return
except redis.RedisError as e:
print(f"Waiting for Redis read... ({e})")
time.sleep(delay)
print("Connecting to Redis write server...")
for attempt in range(max_retries):
try:
clientw = redis.Redis(
host=self.redis_config['write_host'],
port=self.redis_config['write_port'],
password=self.redis_config['password'],
db=self.redis_config['db'],
decode_responses=True
)
if clientw.ping():
self.redis_connw = clientw
print("Redis write server is up and ready!")
return
except redis.RedisError as e:
print(f"Waiting for Redis write... (attempt {attempt + 1}/{max_retries}) ({e})")
time.sleep(delay)
print("Redis write server is unreachable.")
def close_redis(self):
"""
Closes the Redis read/write connections if open.
"""
if self.redis_connr:
try:
self.redis_connr.close()
except Exception as e:
print(f"Error while closing Redis read connection: {e}")
finally:
self.redis_connr = None
if self.redis_connw:
try:
self.redis_connw.close()
except Exception as e:
print(f"Error while closing Redis write connection: {e}")
finally:
self.redis_connw = None
def wait_for_schema_update(self, init_file_path="init_db.inc.php", check_interval=5):
"""
Waits until the current database schema version matches the expected version
defined in a PHP initialization file.
Compares the `version` value in the `versions` table for `application = 'db_schema'`
with the `$db_version` value extracted from the specified PHP file.
Args:
init_file_path (str): Path to the PHP file containing the expected version string.
check_interval (int): Time in seconds to wait between version checks.
Logs:
Current vs. expected schema versions until they match.
"""
print("Checking database schema version...")
while True:
current_version = self._get_current_db_version()
expected_version = self._get_expected_schema_version(init_file_path)
if current_version == expected_version:
print(f"DB schema is up to date: {current_version}")
break
print(f"Waiting for schema update... (DB: {current_version}, Expected: {expected_version})")
time.sleep(check_interval)
def wait_for_host(self, host, retry_interval=1.0, count=1):
"""
Waits for a host to respond to ICMP ping.
Args:
host (str): Hostname or IP to ping.
retry_interval (float): Seconds to wait between pings.
count (int): Number of ping packets to send per check (default 1).
"""
while True:
try:
result = subprocess.run(
["ping", "-c", str(count), host],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL
)
if result.returncode == 0:
print(f"{host} is reachable via ping.")
break
except Exception:
pass
print(f"Waiting for {host}...")
time.sleep(retry_interval)
def wait_for_dns(self, domain, retry_interval=1, timeout=30):
"""
Waits until the domain resolves via DNS using pure Python (socket).
Args:
domain (str): The domain to resolve.
retry_interval (int): Time (seconds) to wait between attempts.
timeout (int): Maximum total wait time (seconds).
Returns:
bool: True if resolved, False if timed out.
"""
start = time.time()
while True:
try:
socket.gethostbyname(domain)
print(f"{domain} is resolving via DNS.")
return True
except socket.gaierror:
pass
if time.time() - start > timeout:
print(f"DNS resolution for {domain} timed out.")
return False
print(f"Waiting for DNS for {domain}...")
time.sleep(retry_interval)
def _get_current_db_version(self):
"""
Fetches the current schema version from the database.
Executes a SELECT query on the `versions` table where `application = 'db_schema'`.
Returns:
str or None: The current schema version as a string, or None if not found or on error.
Logs:
Error message if the query fails.
"""
try:
cursor = self.mysql_conn.cursor()
cursor.execute("SELECT version FROM versions WHERE application = 'db_schema'")
result = cursor.fetchone()
cursor.close()
return result[0] if result else None
except Exception as e:
print(f"Error fetching current DB schema version: {e}")
return None
def _get_expected_schema_version(self, filepath):
"""
Extracts the expected database schema version from a PHP initialization file.
Looks for a line in the form of: `$db_version = "..."` and extracts the version string.
Args:
filepath (str): Path to the PHP file containing the `$db_version` definition.
Returns:
str or None: The extracted version string, or None if not found or on error.
Logs:
Error message if the file cannot be read or parsed.
"""
try:
with open(filepath, "r") as f:
content = f.read()
match = re.search(r'\$db_version\s*=\s*"([^"]+)"', content)
if match:
return match.group(1)
except Exception as e:
print(f"Error reading expected schema version from {filepath}: {e}")
return None
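# Example of the line this regex matches in init_db.inc.php (version string is illustrative):
#   $db_version = "01012025_0000";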
def rand_pass(self, length=22):
"""
Generates a secure random password using allowed characters.
Allowed characters include upper/lowercase letters, digits, underscores, and hyphens.
Args:
length (int): Length of the password to generate. Default is 22.
Returns:
str: A securely generated random password string.
"""
allowed_chars = string.ascii_letters + string.digits + "_-"
return ''.join(secrets.choice(allowed_chars) for _ in range(length))
def run_command(self, command, check=True, shell=False, input_stream=None, log_output=True):
"""
Executes a shell command and optionally logs output.
Args:
command (str or list): Command to run.
check (bool): Raise if non-zero exit.
shell (bool): Run in shell.
input_stream: stdin stream.
log_output (bool): If True, print output.
Returns:
subprocess.CompletedProcess
"""
try:
result = subprocess.run(
command,
shell=shell,
check=check,
stdin=input_stream,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True
)
if log_output:
if result.stdout:
print(result.stdout.strip())
if result.stderr:
print(result.stderr.strip())
return result
except subprocess.CalledProcessError as e:
print(f"Command failed with exit code {e.returncode}: {e.cmd}")
print(e.stderr.strip())
if check:
raise
return e
def sha1_filter(self, value):
return hashlib.sha1(value.encode()).hexdigest()
def urlencode_filter(self, value):
return quote(value, safe='')
def escape_quotes_filter(self, value):
return value.replace('"', r'\"')
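The sha1, urlencode and escape_quotes filters registered in prepare_template_vars() are what templates rendered by render_config() can rely on. A small self-contained sketch of their behaviour, using a throwaway Environment and made-up values rather than an actual mailcow template:

from jinja2 import Environment
import hashlib
from urllib.parse import quote

env = Environment()
env.filters['sha1'] = lambda v: hashlib.sha1(v.encode()).hexdigest()
env.filters['urlencode'] = lambda v: quote(v, safe='')
env.filters['escape_quotes'] = lambda v: v.replace('"', r'\"')

tpl = env.from_string('{{ DBPASS | urlencode }} | {{ BANNER | escape_quotes }} | {{ HOST | sha1 }}')
print(tpl.render(DBPASS='p@ss/word', BANNER='say "moo"', HOST='mail.example.org'))
# -> p%40ss%2Fword | say \"moo\" | <40-character sha1 hex digest>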

View File

@@ -0,0 +1,60 @@
from jinja2 import Environment, FileSystemLoader
from modules.BootstrapBase import BootstrapBase
from pathlib import Path
import os
import sys
import time
class BootstrapClamd(BootstrapBase):
def bootstrap(self):
# Skip Clamd if set
if self.isYes(os.getenv("SKIP_CLAMD", "")):
print("SKIP_CLAMD is set, skipping ClamAV startup...")
time.sleep(365 * 24 * 60 * 60)
sys.exit(1)
# Connect to MySQL
self.connect_mysql()
print("Cleaning up tmp files...")
tmp_files = Path("/var/lib/clamav").glob("clamav-*.tmp")
for tmp_file in tmp_files:
try:
self.remove(tmp_file)
print(f"Removed: {tmp_file}")
except Exception as e:
print(f"Failed to remove {tmp_file}: {e}")
self.create_dir("/run/clamav")
self.create_dir("/var/lib/clamav")
# Setup Jinja2 Environment and load vars
self.env = Environment(
loader=FileSystemLoader([
'/service_config/custom_templates',
'/service_config/config_templates'
]),
keep_trailing_newline=True,
lstrip_blocks=True,
trim_blocks=True
)
extra_vars = {
}
self.env_vars = self.prepare_template_vars('/service_config/overwrites.json', extra_vars)
print("Set Timezone")
self.set_timezone()
print("Render config")
self.render_config("/service_config")
# Fix permissions
self.set_owner("/var/lib/clamav", "clamav", "clamav", recursive=True)
self.set_owner("/run/clamav", "clamav", "clamav", recursive=True)
self.set_permissions("/var/lib/clamav", 0o755)
for item in Path("/var/lib/clamav").glob("*"):
self.set_permissions(item, 0o644)
self.set_permissions("/run/clamav", 0o750)
# Copying to /etc/clamav to expose file as-is to administrator
self.copy_file("/var/lib/clamav/whitelist.ign2", "/etc/clamav/whitelist.ign2")

View File

@@ -0,0 +1,289 @@
from jinja2 import Environment, FileSystemLoader
from modules.BootstrapBase import BootstrapBase
from pathlib import Path
import os
import pwd
import hashlib
class BootstrapDovecot(BootstrapBase):
def bootstrap(self):
# Connect to MySQL
self.connect_mysql()
self.wait_for_schema_update()
# Connect to Redis
self.connect_redis()
if self.redis_connw:
self.redis_connw.set("DOVECOT_REPL_HEALTH", 1)
# Wait for DNS
self.wait_for_dns("mailcow.email")
# Create missing directories
self.create_dir("/etc/dovecot/sql/")
self.create_dir("/etc/dovecot/auth/")
self.create_dir("/var/vmail/_garbage")
self.create_dir("/var/vmail/sieve")
self.create_dir("/etc/sogo")
self.create_dir("/var/volatile")
# Setup Jinja2 Environment and load vars
self.env = Environment(
loader=FileSystemLoader([
'/service_config/custom_templates',
'/service_config/config_templates'
]),
keep_trailing_newline=True,
lstrip_blocks=True,
trim_blocks=True
)
extra_vars = {
"VALID_CERT_DIRS": self.get_valid_cert_dirs(),
"RAND_USER": self.rand_pass(),
"RAND_PASS": self.rand_pass(),
"RAND_PASS2": self.rand_pass(),
"ENV_VARS": dict(os.environ)
}
self.env_vars = self.prepare_template_vars('/service_config/overwrites.json', extra_vars)
print("Set Timezone")
self.set_timezone()
print("Render config")
self.render_config("/service_config")
files = [
"/etc/dovecot/mail_plugins",
"/etc/dovecot/mail_plugins_imap",
"/etc/dovecot/mail_plugins_lmtp",
"/templates/quarantine.tpl"
]
for file in files:
self.set_permissions(file, 0o644)
try:
# Migrate old sieve_after file
self.move_file("/etc/dovecot/sieve_after", "/var/vmail/sieve/global_sieve_after.sieve")
except Exception as e:
pass
try:
# Cleanup random user maildirs
self.remove("/var/vmail/mailcow.local", wipe_contents=True)
except Exception as e:
pass
try:
# Cleanup PIDs
self.remove("/tmp/quarantine_notify.pid")
except Exception as e:
pass
try:
self.remove("/var/run/dovecot/master.pid")
except Exception as e:
pass
# Check permissions of vmail/index/garbage directories.
# Do not recurse on every start-up, it may take a very long time; a quick stat of the top-level owner decides whether a fix is needed.
files = [
"/var/vmail",
"/var/vmail/_garbage",
"/var/vmail_index"
]
for file in files:
path = Path(file)
try:
stat_info = path.stat()
current_user = pwd.getpwuid(stat_info.st_uid).pw_name
if current_user != "vmail":
print(f"Ownership of {path} is {current_user}, fixing to vmail:vmail...")
self.set_owner(path, user="vmail", group="vmail", recursive=True)
else:
print(f"Ownership of {path} is already correct (vmail)")
except Exception as e:
print(f"Error checking ownership of {path}: {e}")
# Compile sieve scripts
files = [
"/var/vmail/sieve/global_sieve_before.sieve",
"/var/vmail/sieve/global_sieve_after.sieve",
"/usr/lib/dovecot/sieve/report-spam.sieve",
"/usr/lib/dovecot/sieve/report-ham.sieve",
]
for file in files:
self.run_command(["sievec", file], check=False)
# Fix permissions
for path in Path("/etc/dovecot/sql").glob("*.conf"):
self.set_owner(path, "root", "root")
self.set_permissions(path, 0o640)
files = [
"/etc/dovecot/auth/passwd-verify.lua",
*Path("/etc/dovecot/sql").glob("dovecot-dict-sql-sieve*"),
*Path("/etc/dovecot/sql").glob("dovecot-dict-sql-quota*")
]
for file in files:
self.set_owner(file, "root", "dovecot")
self.set_permissions("/etc/dovecot/auth/passwd-verify.lua", 0o640)
for file in ["/var/vmail/sieve", "/var/volatile", "/var/vmail_index"]:
self.set_owner(file, "vmail", "vmail", recursive=True)
self.run_command(["adduser", "vmail", "tty"])
self.run_command(["chmod", "g+rw", "/dev/console"])
self.set_owner("/dev/console", "root", "tty")
files = [
"/usr/lib/dovecot/sieve/rspamd-pipe-ham",
"/usr/lib/dovecot/sieve/rspamd-pipe-spam",
"/usr/local/bin/imapsync_runner.pl",
"/usr/local/bin/imapsync",
"/usr/local/bin/trim_logs.sh",
"/usr/local/bin/sa-rules.sh",
"/usr/local/bin/clean_q_aged.sh",
"/usr/local/bin/maildir_gc.sh",
"/usr/local/sbin/stop-supervisor.sh",
"/usr/local/bin/quota_notify.py",
"/usr/local/bin/repl_health.sh",
"/usr/local/bin/optimize-fts.sh"
]
for file in files:
self.set_permissions(file, 0o755)
# Collect SA rules once now
self.run_command(["/usr/local/bin/sa-rules.sh"], check=False)
self.generate_mail_crypt_keys()
self.cleanup_imapsync_jobs()
self.generate_guid_version()
def get_valid_cert_dirs(self):
"""
Returns a mapping of domains to their certificate directory path.
Example:
{
"example.com": "/etc/ssl/mail/example.com/",
"www.example.com": "/etc/ssl/mail/example.com/"
}
"""
sni_map = {}
base_path = Path("/etc/ssl/mail")
if not base_path.exists():
return sni_map
for cert_dir in base_path.iterdir():
if not cert_dir.is_dir():
continue
domains_file = cert_dir / "domains"
cert_file = cert_dir / "cert.pem"
key_file = cert_dir / "key.pem"
if not (domains_file.exists() and cert_file.exists() and key_file.exists()):
continue
with open(domains_file, "r") as f:
domains = [line.strip() for line in f if line.strip()]
for domain in domains:
sni_map[domain] = str(cert_dir)
return sni_map
def generate_mail_crypt_keys(self):
"""
Ensures mail_crypt EC keypair exists. Generates if missing. Adjusts permissions.
"""
key_dir = Path("/mail_crypt")
priv_key = key_dir / "ecprivkey.pem"
pub_key = key_dir / "ecpubkey.pem"
# Generate keys if they don't exist or are empty
if not priv_key.exists() or priv_key.stat().st_size == 0 or \
not pub_key.exists() or pub_key.stat().st_size == 0:
self.run_command(
"openssl ecparam -name prime256v1 -genkey | openssl pkey -out /mail_crypt/ecprivkey.pem",
shell=True
)
self.run_command(
"openssl pkey -in /mail_crypt/ecprivkey.pem -pubout -out /mail_crypt/ecpubkey.pem",
shell=True
)
# Set ownership to UID 401 (dovecot)
self.set_owner(priv_key, user='401')
self.set_owner(pub_key, user='401')
def cleanup_imapsync_jobs(self):
"""
Cleans up stale imapsync locks and resets running status in the database.
Deletes the imapsync_busy.lock file if present and sets `is_running` to 0
in the `imapsync` table, if it exists.
Logs:
Any issues with file operations or SQL execution.
"""
lock_file = Path("/tmp/imapsync_busy.lock")
if lock_file.exists():
try:
lock_file.unlink()
except Exception as e:
print(f"Failed to remove lock file: {e}")
try:
cursor = self.mysql_conn.cursor()
cursor.execute("SHOW TABLES LIKE 'imapsync'")
result = cursor.fetchone()
if result:
cursor.execute("UPDATE imapsync SET is_running='0'")
self.mysql_conn.commit()
cursor.close()
except Exception as e:
print(f"Error updating imapsync table: {e}")
def generate_guid_version(self):
"""
Waits for the `versions` table to be created, then generates a GUID
based on the mail hostname and Dovecot's public key and inserts it
into the `versions` table.
If the key or hash is missing or malformed, marks it as INVALID.
"""
try:
result = self.run_command(["doveconf", "-P"], check=True, log_output=False)
pubkey_path = None
for line in result.stdout.splitlines():
if "mail_crypt_global_public_key" in line:
parts = line.split('<')
if len(parts) > 1:
pubkey_path = parts[1].strip()
break
if pubkey_path and Path(pubkey_path).exists():
with open(pubkey_path, "rb") as key_file:
pubkey_data = key_file.read()
hostname = self.env_vars.get("MAILCOW_HOSTNAME", "mailcow.local").encode("utf-8")
concat = hostname + pubkey_data
guid = hashlib.sha256(concat).hexdigest()
if len(guid) == 64:
version_value = guid
else:
version_value = "INVALID"
cursor = self.mysql_conn.cursor()
cursor.execute(
"REPLACE INTO versions (application, version) VALUES (%s, %s)",
("GUID", version_value)
)
self.mysql_conn.commit()
cursor.close()
else:
print("Public key not found or unreadable. GUID not generated.")
except Exception as e:
print(f"Failed to generate or store GUID: {e}")

View File

@@ -0,0 +1,163 @@
from jinja2 import Environment, FileSystemLoader
from modules.BootstrapBase import BootstrapBase
import os
import time
import subprocess
class BootstrapMysql(BootstrapBase):
def bootstrap(self):
dbuser = "root"
dbpass = os.getenv("MYSQL_ROOT_PASSWORD", "")
socket = "/tmp/mysql-temp.sock"
# Check if mysql has been initialized
if os.path.exists("/var/lib/mysql/mysql/db.frm"):
print("Starting temporary mysqld for upgrade...")
self.start_temporary(socket)
self.connect_mysql(socket)
print("Running mysql_upgrade...")
self.upgrade_mysql(dbuser, dbpass, socket)
print("Checking timezone support with CONVERT_TZ...")
self.check_and_import_timezone_support(dbuser, dbpass, socket)
print("Shutting down temporary mysqld...")
self.close_mysql()
self.stop_temporary(dbuser, dbpass, socket)
# Setup Jinja2 Environment and load vars
self.env = Environment(
loader=FileSystemLoader([
'/service_config/custom_templates',
'/service_config/config_templates'
]),
keep_trailing_newline=True,
lstrip_blocks=True,
trim_blocks=True
)
extra_vars = {
}
self.env_vars = self.prepare_template_vars('/service_config/overwrites.json', extra_vars)
print("Set Timezone")
self.set_timezone()
print("Render config")
self.render_config("/service_config")
def start_temporary(self, socket):
"""
Starts a temporary mysqld process in the background using the given UNIX socket.
The server is started with networking disabled (--skip-networking).
Args:
socket (str): Path to the UNIX socket file for MySQL to listen on.
Returns:
subprocess.Popen: The running mysqld process object.
"""
return subprocess.Popen([
"mysqld",
"--user=mysql",
"--skip-networking",
f"--socket={socket}"
])
def stop_temporary(self, dbuser, dbpass, socket):
"""
Shuts down the temporary mysqld instance gracefully.
Uses mariadb-admin to issue a shutdown command to the running server.
Args:
dbuser (str): The MySQL username with shutdown privileges (typically 'root').
dbpass (str): The password for the MySQL user.
socket (str): Path to the UNIX socket the server is listening on.
"""
self.run_command([
"mariadb-admin",
"shutdown",
f"--socket={socket}",
"-u", dbuser,
f"-p{dbpass}"
])
def upgrade_mysql(self, dbuser, dbpass, socket, max_retries=5, wait_interval=3):
"""
Executes mysql_upgrade to check and fix any schema or table incompatibilities.
Retries the upgrade command if it fails, up to a maximum number of attempts.
Args:
dbuser (str): MySQL username with privilege to perform the upgrade.
dbpass (str): Password for the MySQL user.
socket (str): Path to the MySQL UNIX socket for local communication.
max_retries (int): Maximum number of attempts before giving up. Default is 5.
wait_interval (int): Number of seconds to wait between retries. Default is 3.
Returns:
bool: True if upgrade succeeded, False if all attempts failed.
"""
retries = 0
while retries < max_retries:
result = self.run_command([
"mysql_upgrade",
"-u", dbuser,
f"-p{dbpass}",
f"--socket={socket}"
], check=False)
if result.returncode == 0:
    print("mysql_upgrade completed successfully.")
    return True
else:
print(f"mysql_upgrade failed (try {retries+1}/{max_retries})")
retries += 1
time.sleep(wait_interval)
else:
print("mysql_upgrade failed after all retries.")
return False
def check_and_import_timezone_support(self, dbuser, dbpass, socket):
"""
Checks if MySQL supports timezone conversion (CONVERT_TZ).
If not, it imports timezone info using mysql_tzinfo_to_sql piped into mariadb.
"""
try:
cursor = self.mysql_conn.cursor()
cursor.execute("SELECT CONVERT_TZ('2019-11-02 23:33:00','Europe/Berlin','UTC')")
result = cursor.fetchone()
cursor.close()
if not result or result[0] is None:
print("Timezone conversion failed or returned NULL. Importing timezone info...")
# Use mysql_tzinfo_to_sql piped into mariadb
tz_dump = subprocess.Popen(
["mysql_tzinfo_to_sql", "/usr/share/zoneinfo"],
stdout=subprocess.PIPE
)
self.run_command([
"mariadb",
"--socket", socket,
"-u", dbuser,
f"-p{dbpass}",
"mysql"
], input_stream=tz_dump.stdout)
tz_dump.stdout.close()
tz_dump.wait()
print("Timezone info successfully imported.")
else:
print(f"Timezone support is working. Sample result: {result[0]}")
except Exception as e:
print(f"Failed to verify or import timezone info: {e}")

View File

@@ -0,0 +1,65 @@
from jinja2 import Environment, FileSystemLoader
from modules.BootstrapBase import BootstrapBase
import os
class BootstrapNginx(BootstrapBase):
def bootstrap(self):
# Connect to MySQL
self.connect_mysql()
# wait for Hosts
php_service = os.getenv("PHPFPM_HOST") or "php-fpm-mailcow"
rspamd_service = os.getenv("RSPAMD_HOST") or "rspamd-mailcow"
sogo_service = os.getenv("SOGO_HOST")
self.wait_for_host(php_service)
if not self.isYes(os.getenv("SKIP_RSPAMD", False)):
self.wait_for_host(rspamd_service)
if not self.isYes(os.getenv("SKIP_SOGO", False)):
self.wait_for_host(sogo_service)
# Setup Jinja2 Environment and load vars
self.env = Environment(
loader=FileSystemLoader([
'/service_config/custom_templates',
'/service_config/config_templates'
]),
keep_trailing_newline=True,
lstrip_blocks=True,
trim_blocks=True
)
extra_vars = {
"VALID_CERT_DIRS": self.get_valid_cert_dirs(),
'TRUSTED_PROXIES': [item.strip() for item in os.getenv("TRUSTED_PROXIES", "").split(",") if item.strip()],
'ADDITIONAL_SERVER_NAMES': [item.strip() for item in os.getenv("ADDITIONAL_SERVER_NAMES", "").split(",") if item.strip()],
}
self.env_vars = self.prepare_template_vars('/service_config/overwrites.json', extra_vars)
print("Set Timezone")
self.set_timezone()
print("Render config")
self.render_config("/service_config")
def get_valid_cert_dirs(self):
ssl_dir = '/etc/ssl/mail/'
valid_cert_dirs = []
for d in os.listdir(ssl_dir):
full_path = os.path.join(ssl_dir, d)
if not os.path.isdir(full_path):
continue
cert_path = os.path.join(full_path, 'cert.pem')
key_path = os.path.join(full_path, 'key.pem')
domains_path = os.path.join(full_path, 'domains')
if os.path.isfile(cert_path) and os.path.isfile(key_path) and os.path.isfile(domains_path):
with open(domains_path, 'r') as file:
domains = file.read().strip()
domains_list = domains.split()
if domains_list and os.getenv("MAILCOW_HOSTNAME", "") not in domains_list:
valid_cert_dirs.append({
'cert_path': full_path + '/',
'domains': domains
})
return valid_cert_dirs
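# Minimal sketch of the override behaviour relied on above, with placeholder
# template name and variables: FileSystemLoader searches its paths in order,
# so a file dropped into custom_templates shadows the stock template of the
# same name in config_templates.
from jinja2 import Environment, FileSystemLoader

def render_with_overrides(name, **context):
    env = Environment(
        loader=FileSystemLoader([
            '/service_config/custom_templates',   # searched first, overrides win
            '/service_config/config_templates'    # stock templates
        ]),
        keep_trailing_newline=True,
        lstrip_blocks=True,
        trim_blocks=True
    )
    return env.get_template(name).render(**context)

# e.g. render_with_overrides('nginx.conf.j2', TRUSTED_PROXIES=['10.0.0.0/8'])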

View File

@@ -0,0 +1,202 @@
from jinja2 import Environment, FileSystemLoader
from modules.BootstrapBase import BootstrapBase
import os
import ipaddress
class BootstrapPhpfpm(BootstrapBase):
def bootstrap(self):
self.connect_mysql()
self.connect_redis()
# Setup Jinja2 Environment and load vars
self.env = Environment(
loader=FileSystemLoader([
'/service_config/custom_templates',
'/service_config/config_templates'
]),
keep_trailing_newline=True,
lstrip_blocks=True,
trim_blocks=True
)
extra_vars = {
}
self.env_vars = self.prepare_template_vars('/service_config/overwrites.json', extra_vars)
print("Set Timezone")
self.set_timezone()
# Prepare Redis and MySQL Database
# TODO: move to dockerapi
if self.isYes(os.getenv("MASTER", "")):
print("We are master, preparing...")
self.prepare_redis()
self.setup_apikeys(
os.getenv("API_ALLOW_FROM", "").strip(),
os.getenv("API_KEY", "").strip(),
os.getenv("API_KEY_READ_ONLY", "").strip()
)
self.setup_mysql_events()
print("Render config")
self.render_config("/service_config")
self.copy_file("/usr/local/etc/php/conf.d/opcache-recommended.ini", "/php-conf/opcache-recommended.ini")
self.copy_file("/usr/local/etc/php-fpm.d/z-pools.conf", "/php-conf/pools.conf")
self.copy_file("/usr/local/etc/php/conf.d/zzz-other.ini", "/php-conf/other.ini")
self.copy_file("/usr/local/etc/php/conf.d/upload.ini", "/php-conf/upload.ini")
self.copy_file("/usr/local/etc/php/conf.d/session_store.ini", "/php-conf/session_store.ini")
self.set_owner("/global_sieve", 82, 82, recursive=True)
self.set_owner("/web/templates/cache", 82, 82, recursive=True)
self.remove("/web/templates/cache", wipe_contents=True, exclude=[".gitkeep"])
print("Running DB init...")
self.run_command(["php", "-c", "/usr/local/etc/php", "-f", "/web/inc/init_db.inc.php"], check=False)
def prepare_redis(self):
print("Setting default Redis keys if missing...")
# Q_RELEASE_FORMAT
if self.redis_connw and self.redis_connr.get("Q_RELEASE_FORMAT") is None:
self.redis_connw.set("Q_RELEASE_FORMAT", "raw")
# Q_MAX_AGE
if self.redis_connw and self.redis_connr.get("Q_MAX_AGE") is None:
self.redis_connw.set("Q_MAX_AGE", 365)
# PASSWD_POLICY hash defaults
if self.redis_connw and self.redis_connr.hget("PASSWD_POLICY", "length") is None:
self.redis_connw.hset("PASSWD_POLICY", mapping={
"length": 6,
"chars": 0,
"special_chars": 0,
"lowerupper": 0,
"numbers": 0
})
# DOMAIN_MAP
print("Rebuilding DOMAIN_MAP from MySQL...")
if self.redis_connw:
self.redis_connw.delete("DOMAIN_MAP")
domains = set()
try:
cursor = self.mysql_conn.cursor()
cursor.execute("SELECT domain FROM domain")
domains.update(row[0] for row in cursor.fetchall())
cursor.execute("SELECT alias_domain FROM alias_domain")
domains.update(row[0] for row in cursor.fetchall())
cursor.close()
if domains:
for domain in domains:
if self.redis_connw:
self.redis_connw.hset("DOMAIN_MAP", domain, 1)
print(f"{len(domains)} domains added to DOMAIN_MAP.")
else:
print("No domains found to insert into DOMAIN_MAP.")
except Exception as e:
print(f"Failed to rebuild DOMAIN_MAP: {e}")
def setup_apikeys(self, api_allow_from, api_key_rw, api_key_ro):
if not api_allow_from or api_allow_from == "invalid":
return
print("Validating API_ALLOW_FROM IPs...")
ip_list = [ip.strip() for ip in api_allow_from.split(",")]
validated_ips = []
for ip in ip_list:
try:
ipaddress.ip_network(ip, strict=False)
validated_ips.append(ip)
except ValueError:
continue
if not validated_ips:
print("No valid IPs found in API_ALLOW_FROM")
return
allow_from_str = ",".join(validated_ips)
cursor = self.mysql_conn.cursor()
try:
if api_key_rw and api_key_rw != "invalid":
print("Setting RW API key...")
cursor.execute("DELETE FROM api WHERE access = 'rw'")
cursor.execute(
"INSERT INTO api (api_key, active, allow_from, access) VALUES (%s, %s, %s, %s)",
(api_key_rw, 1, allow_from_str, "rw")
)
if api_key_ro and api_key_ro != "invalid":
print("Setting RO API key...")
cursor.execute("DELETE FROM api WHERE access = 'ro'")
cursor.execute(
"INSERT INTO api (api_key, active, allow_from, access) VALUES (%s, %s, %s, %s)",
(api_key_ro, 1, allow_from_str, "ro")
)
self.mysql_conn.commit()
print("API key(s) set successfully.")
except Exception as e:
print(f"Failed to configure API keys: {e}")
self.mysql_conn.rollback()
finally:
cursor.close()
def setup_mysql_events(self):
print("Creating scheduled MySQL EVENTS...")
queries = [
"DROP EVENT IF EXISTS clean_spamalias;",
"""
CREATE EVENT clean_spamalias
ON SCHEDULE EVERY 1 DAY
DO
DELETE FROM spamalias WHERE validity < UNIX_TIMESTAMP();
""",
"DROP EVENT IF EXISTS clean_oauth2;",
"""
CREATE EVENT clean_oauth2
ON SCHEDULE EVERY 1 DAY
DO
BEGIN
DELETE FROM oauth_refresh_tokens WHERE expires < NOW();
DELETE FROM oauth_access_tokens WHERE expires < NOW();
DELETE FROM oauth_authorization_codes WHERE expires < NOW();
END;
""",
"DROP EVENT IF EXISTS clean_sasl_log;",
"""
CREATE EVENT clean_sasl_log
ON SCHEDULE EVERY 1 DAY
DO
BEGIN
DELETE sasl_log.* FROM sasl_log
LEFT JOIN (
SELECT username, service, MAX(datetime) AS lastdate
FROM sasl_log
GROUP BY username, service
) AS last
ON sasl_log.username = last.username AND sasl_log.service = last.service
WHERE datetime < DATE_SUB(NOW(), INTERVAL 31 DAY)
AND datetime < lastdate;
DELETE FROM sasl_log
WHERE username NOT IN (SELECT username FROM mailbox)
AND datetime < DATE_SUB(NOW(), INTERVAL 31 DAY);
END;
"""
]
try:
cursor = self.mysql_conn.cursor()
for query in queries:
cursor.execute(query)
self.mysql_conn.commit()
cursor.close()
print("MySQL EVENTS created successfully.")
except Exception as e:
print(f"Failed to create MySQL EVENTS: {e}")
self.mysql_conn.rollback()
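# Sketch of an optional follow-up check, not present in the code above:
# scheduled events such as clean_spamalias only fire while the server's event
# scheduler is enabled, so it can be worth verifying after setup_mysql_events().
def event_scheduler_enabled(mysql_conn):
    cursor = mysql_conn.cursor()
    cursor.execute("SELECT @@GLOBAL.event_scheduler")
    state = cursor.fetchone()[0]
    cursor.close()
    return str(state).upper() in ("ON", "1")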

View File

@@ -0,0 +1,83 @@
from jinja2 import Environment, FileSystemLoader
from modules.BootstrapBase import BootstrapBase
from pathlib import Path
class BootstrapPostfix(BootstrapBase):
def bootstrap(self):
# Connect to MySQL
self.connect_mysql()
# Wait for DNS
self.wait_for_dns("mailcow.email")
self.create_dir("/opt/postfix/conf/sql/")
# Setup Jinja2 Environment and load vars
self.env = Environment(
loader=FileSystemLoader([
'/service_config/custom_templates',
'/service_config/config_templates'
]),
keep_trailing_newline=True,
lstrip_blocks=True,
trim_blocks=True
)
extra_vars = {
"VALID_CERT_DIRS": self.get_valid_cert_dirs()
}
self.env_vars = self.prepare_template_vars('/service_config/overwrites.json', extra_vars)
print("Set Timezone")
self.set_timezone()
print("Set Syslog redis")
self.set_syslog_redis()
print("Render config")
self.render_config("/service_config")
# Create aliases DB
self.run_command(["newaliases"])
# Create SNI Config
self.run_command(["postmap", "-F", "hash:/opt/postfix/conf/sni.map"])
# Fix Postfix permissions
self.set_owner("/opt/postfix/conf/sql", user="root", group="postfix", recursive=True)
self.set_owner("/opt/postfix/conf/custom_transport.pcre", user="root", group="postfix")
for cf_file in Path("/opt/postfix/conf/sql").glob("*.cf"):
self.set_permissions(cf_file, 0o640)
self.set_permissions("/opt/postfix/conf/custom_transport.pcre", 0o640)
self.set_owner("/var/spool/postfix/public", user="root", group="postdrop", recursive=True)
self.set_owner("/var/spool/postfix/maildrop", user="root", group="postdrop", recursive=True)
self.run_command(["postfix", "set-permissions"], check=False)
# Check for a leftover PID file from a crashed Postfix container before starting a new one
pid_file = Path("/var/spool/postfix/pid/master.pid")
if pid_file.exists():
print(f"Removing stale Postfix PID file: {pid_file}")
pid_file.unlink()
def get_valid_cert_dirs(self):
certs = {}
base_path = Path("/etc/ssl/mail")
if not base_path.exists():
return certs
for cert_dir in base_path.iterdir():
if not cert_dir.is_dir():
continue
domains_file = cert_dir / "domains"
cert_file = cert_dir / "cert.pem"
key_file = cert_dir / "key.pem"
if not (domains_file.exists() and cert_file.exists() and key_file.exists()):
continue
with open(domains_file, "r") as f:
domains = [line.strip() for line in f if line.strip()]
if domains:
certs[str(cert_dir)] = domains
return certs
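# Illustrative only: one way the {cert_dir: [domains]} mapping returned above
# could be flattened into tls_server_sni_maps entries for the "postmap -F"
# call in bootstrap(). The actual layout is decided by the Jinja2 template,
# which is not shown here; Postfix expects the key file first, then the
# certificate chain.
def sni_map_lines(certs):
    lines = []
    for cert_dir, domains in certs.items():
        for domain in domains:
            lines.append(f"{domain} {cert_dir}/key.pem {cert_dir}/cert.pem")
    return "\n".join(lines) + "\n"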

View File

@@ -0,0 +1,132 @@
from jinja2 import Environment, FileSystemLoader
from modules.BootstrapBase import BootstrapBase
from pathlib import Path
import time
import platform
class BootstrapRspamd(BootstrapBase):
def bootstrap(self):
# Connect to MySQL
self.connect_mysql()
# Connect to Redis
self.connect_redis()
# get dovecot ips
dovecot_v4 = []
dovecot_v6 = []
while not dovecot_v4 and not dovecot_v6:
try:
dovecot_v4 = self.resolve_docker_dns_record("dovecot-mailcow", "A")
dovecot_v6 = self.resolve_docker_dns_record("dovecot-mailcow", "AAAA")
except Exception as e:
print(e)
if not dovecot_v4 and not dovecot_v6:
print("Waiting for Dovecot IPs...")
time.sleep(3)
# get rspamd ips
rspamd_v4 = []
rspamd_v6 = []
while not rspamd_v4 and not rspamd_v6:
try:
rspamd_v4 = self.resolve_docker_dns_record("rspamd-mailcow", "A")
rspamd_v6 = self.resolve_docker_dns_record("rspamd-mailcow", "AAAA")
except Exception as e:
print(e)
if not rspamd_v4 and not rspamd_v6:
print("Waiting for Rspamd IPs...")
time.sleep(3)
# wait for Services
services = [
["php-fpm-mailcow", 9001],
["php-fpm-mailcow", 9002]
]
for service in services:
while not self.is_port_open(service[0], service[1]):
print(f"Waiting for {service[0]} on port {service[1]}...")
time.sleep(1)
print(f"Service {service[0]} on port {service[1]} is ready!")
for dir_path in ["/etc/rspamd/plugins.d", "/etc/rspamd/custom"]:
Path(dir_path).mkdir(parents=True, exist_ok=True)
for file_path in ["/etc/rspamd/rspamd.conf.local", "/etc/rspamd/rspamd.conf.override"]:
Path(file_path).touch(exist_ok=True)
self.set_permissions("/var/lib/rspamd", 0o755)
# Setup Jinja2 Environment and load vars
self.env = Environment(
loader=FileSystemLoader([
'/service_config/custom_templates',
'/service_config/config_templates'
]),
keep_trailing_newline=True,
lstrip_blocks=True,
trim_blocks=True
)
extra_vars = {
"DOVECOT_V4": dovecot_v4[0],
"DOVECOT_V6": dovecot_v6[0],
"RSPAMD_V4": rspamd_v4[0],
"RSPAMD_V6": rspamd_v6[0],
}
self.env_vars = self.prepare_template_vars('/service_config/overwrites.json', extra_vars)
print("Set Timezone")
self.set_timezone()
print("Render config")
self.render_config("/service_config")
# Fix missing default global maps, if any
# These exist in the mailcow UI and must not be removed
files = [
"/etc/rspamd/custom/global_mime_from_blacklist.map",
"/etc/rspamd/custom/global_rcpt_blacklist.map",
"/etc/rspamd/custom/global_smtp_from_blacklist.map",
"/etc/rspamd/custom/global_mime_from_whitelist.map",
"/etc/rspamd/custom/global_rcpt_whitelist.map",
"/etc/rspamd/custom/global_smtp_from_whitelist.map",
"/etc/rspamd/custom/bad_languages.map",
"/etc/rspamd/custom/sa-rules",
"/etc/rspamd/custom/dovecot_trusted.map",
"/etc/rspamd/custom/rspamd_trusted.map",
"/etc/rspamd/custom/mailcow_networks.map",
"/etc/rspamd/custom/ip_wl.map",
"/etc/rspamd/custom/fishy_tlds.map",
"/etc/rspamd/custom/bad_words.map",
"/etc/rspamd/custom/bad_asn.map",
"/etc/rspamd/custom/bad_words_de.map",
"/etc/rspamd/custom/bulk_header.map",
"/etc/rspamd/custom/bad_header.map"
]
for file in files:
path = Path(file)
path.parent.mkdir(parents=True, exist_ok=True)
path.touch(exist_ok=True)
# Fix permissions
paths_rspamd = [
"/var/lib/rspamd",
"/etc/rspamd/local.d",
"/etc/rspamd/override.d",
"/etc/rspamd/rspamd.conf.local",
"/etc/rspamd/rspamd.conf.override",
"/etc/rspamd/plugins.d"
]
for path in paths_rspamd:
self.set_owner(path, "_rspamd", "_rspamd", recursive=True)
self.set_owner("/etc/rspamd/custom", "_rspamd", "_rspamd")
self.set_permissions("/etc/rspamd/custom", 0o755)
custom_path = Path("/etc/rspamd/custom")
for child in custom_path.iterdir():
if child.is_file():
self.set_owner(child, 82, 82)
self.set_permissions(child, 0o644)
# Provide additional lua modules
arch = platform.machine()
self.run_command(["ln", "-s", f"/usr/lib/{arch}-linux-gnu/liblua5.1-cjson.so.0.0.0", "/usr/lib/rspamd/cjson.so"], check=False)

View File

@@ -0,0 +1,138 @@
from jinja2 import Environment, FileSystemLoader
from modules.BootstrapBase import BootstrapBase
from pathlib import Path
import os
import sys
import time
class BootstrapSogo(BootstrapBase):
def bootstrap(self):
# Skip SOGo if set
if self.isYes(os.getenv("SKIP_SOGO", "")):
print("SKIP_SOGO is set, skipping SOGo startup...")
time.sleep(365 * 24 * 60 * 60)
sys.exit(1)
# Connect to MySQL
self.connect_mysql()
# Wait until port is free
while self.is_port_open(os.getenv("SOGO_HOST"), 20000):
print("Port 20000 still in use — terminating sogod...")
self.kill_proc("sogod")
time.sleep(3)
# Wait for schema to update to expected version
self.wait_for_schema_update(init_file_path="init_db.inc.php")
# Setup Jinja2 Environment and load vars
self.env = Environment(
loader=FileSystemLoader([
'/service_config/custom_templates',
'/service_config/config_templates'
]),
keep_trailing_newline=True,
lstrip_blocks=True,
trim_blocks=True
)
extra_vars = {
"SQL_DOMAINS": self.get_domains(),
"IAM_SETTINGS": self.get_identity_provider_settings()
}
self.env_vars = self.prepare_template_vars('/service_config/overwrites.json', extra_vars)
print("Set Timezone")
self.set_timezone()
print("Set Syslog redis")
self.set_syslog_redis()
print("Render config")
self.render_config("/service_config")
print("Fix permissions")
self.set_owner("/var/lib/sogo", "sogo", "sogo", recursive=True)
self.set_permissions("/var/lib/sogo/GNUstep/Defaults/sogod.plist", 0o600)
# Rename custom logo
logo_src = Path("/etc/sogo/sogo-full.svg")
if logo_src.exists():
print("Set Logo")
self.move_file(logo_src, "/etc/sogo/custom-fulllogo.svg")
# Rsync web content
print("Syncing web content")
self.rsync_file("/usr/lib/GNUstep/SOGo/", "/sogo_web/", recursive=True)
# Chown backup path
self.set_owner("/sogo_backup", "sogo", "sogo", recursive=True)
def get_domains(self):
"""
Retrieves a list of domains and their GAL (Global Address List) status.
Executes a SQL query to select:
- `domain`
- a human-readable GAL status ("YES" or "NO")
- `ldap_gal` as a boolean (True/False)
Returns:
list[dict]: A list of dicts with keys: domain, gal_status, ldap_gal.
Example: [{"domain": "example.com", "gal_status": "YES", "ldap_gal": True}]
Logs:
Error messages if the query fails.
"""
query = """
SELECT domain,
CASE gal WHEN '1' THEN 'YES' ELSE 'NO' END AS gal_status,
ldap_gal = 1 AS ldap_gal
FROM domain;
"""
try:
cursor = self.mysql_conn.cursor()
cursor.execute(query)
result = cursor.fetchall()
cursor.close()
return [
{
"domain": row[0],
"gal_status": row[1],
"ldap_gal": bool(row[2])
}
for row in result
]
except Exception as e:
print(f"Error fetching domains: {e}")
return []
def get_identity_provider_settings(self):
"""
Retrieves all key-value identity provider settings.
Returns:
dict: Settings in the format { key: value }
Logs:
Error messages if the query fails.
"""
query = "SELECT `key`, `value` FROM identity_provider;"
try:
cursor = self.mysql_conn.cursor()
cursor.execute(query)
result = cursor.fetchall()
cursor.close()
iam_settings = {row[0]: row[1] for row in result}
if iam_settings.get('authsource') == "ldap":
protocol = "ldaps" if iam_settings.get("use_ssl") else "ldap"
starttls = "/????!StartTLS" if iam_settings.get("use_tls") else ""
iam_settings['ldap_url'] = f"{protocol}://{iam_settings['host']}:{iam_settings['port']}{starttls}"
return iam_settings
except Exception as e:
print(f"Error fetching identity provider settings: {e}")
return {}
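# Worked example of the URL assembly above (hostnames and ports are illustrative):
#   {'authsource': 'ldap', 'host': 'ldap.example.org', 'port': 636, 'use_ssl': 1}
#     -> ldap_url = "ldaps://ldap.example.org:636"
#   {'authsource': 'ldap', 'host': 'ldap.example.org', 'port': 389, 'use_tls': 1}
#     -> ldap_url = "ldap://ldap.example.org:389/????!StartTLS"  (SOGo StartTLS URL suffix)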

View File

@@ -1,25 +1,129 @@
FROM alpine:3.20
FROM alpine:3.21 AS builder
WORKDIR /src
ENV CLAMD_VERSION=1.4.2
RUN apk upgrade --no-cache \
&& apk add --update --no-cache \
g++ \
gcc \
gdb \
make \
cmake \
py3-pytest \
python3 \
valgrind \
bzip2-dev \
check-dev \
curl-dev \
json-c-dev \
libmilter-dev \
libxml2-dev \
linux-headers \
ncurses-dev \
openssl-dev \
pcre2-dev \
zlib-dev \
cargo \
rust
RUN wget -P /src https://www.clamav.net/downloads/production/clamav-${CLAMD_VERSION}.tar.gz \
&& tar xzfv /src/clamav-${CLAMD_VERSION}.tar.gz \
&& cd /src/clamav-${CLAMD_VERSION} \
&& cmake . \
-D CMAKE_BUILD_TYPE="Release" \
-D CMAKE_INSTALL_PREFIX="/usr" \
-D CMAKE_INSTALL_LIBDIR="/usr/lib" \
-D APP_CONFIG_DIRECTORY="/etc/clamav" \
-D DATABASE_DIRECTORY="/var/lib/clamav" \
-D ENABLE_CLAMONACC=OFF \
-D ENABLE_EXAMPLES=OFF \
-D ENABLE_MILTER=ON \
-D ENABLE_MAN_PAGES=OFF \
-D ENABLE_STATIC_LIB=OFF \
-D ENABLE_JSON_SHARED=ON \
&& cmake --build . \
&& make DESTDIR="/clamav" -j$(($(nproc) - 1)) install \
&& rm -r "/clamav/usr/lib/pkgconfig/" \
&& sed -e "s|^\(Example\)|\# \1|" \
-e "s|.*\(LocalSocket\) .*|\1 /tmp/clamd.sock|" \
-e "s|.*\(TCPSocket\) .*|\1 3310|" \
-e "s|.*\(TCPAddr\) .*|#\1 0.0.0.0|" \
-e "s|.*\(User\) .*|\1 clamav|" \
-e "s|^\#\(LogFile\) .*|\1 /var/log/clamav/clamd.log|" \
-e "s|^\#\(LogTime\).*|\1 yes|" \
"/clamav/etc/clamav/clamd.conf.sample" > "/clamav/etc/clamav/clamd.conf" \
&& sed -e "s|^\(Example\)|\# \1|" \
-e "s|.*\(DatabaseOwner\) .*|\1 clamav|" \
-e "s|^\#\(UpdateLogFile\) .*|\1 /var/log/clamav/freshclam.log|" \
-e "s|^\#\(NotifyClamd\).*|\1 /etc/clamav/clamd.conf|" \
-e "s|^\#\(ScriptedUpdates\).*|\1 yes|" \
"/clamav/etc/clamav/freshclam.conf.sample" > "/clamav/etc/clamav/freshclam.conf" \
&& sed -e "s|^\(Example\)|\# \1|" \
-e "s|.*\(MilterSocket\) .*|\1 inet:7357|" \
-e "s|.*\(User\) .*|\1 clamav|" \
-e "s|^\#\(LogFile\) .*|\1 /var/log/clamav/milter.log|" \
-e "s|^\#\(LogTime\).*|\1 yes|" \
-e "s|.*\(\ClamdSocket\) .*|\1 unix:/tmp/clamd.sock|" \
"/clamav/etc/clamav/clamav-milter.conf.sample" > "/clamav/etc/clamav/clamav-milter.conf" || exit 1
FROM alpine:3.21
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
RUN apk upgrade --no-cache \
&& apk add --update --no-cache \
rsync \
clamav \
bind-tools \
bash \
tini
tzdata \
rsync \
bind-tools \
bash \
tini \
json-c \
libbz2 \
libcurl \
libmilter \
libxml2 \
ncurses-libs \
pcre2 \
zlib \
libgcc \
py3-pip \
&& addgroup -S "clamav" && \
adduser -D -G "clamav" -h "/var/lib/clamav" -s "/bin/false" -S "clamav" && \
install -d -m 755 -g "clamav" -o "clamav" "/var/log/clamav" && \
chown -R clamav:clamav /var/lib/clamav
# init
COPY clamd.sh /clamd.sh
RUN chmod +x /sbin/tini
RUN apk add --no-cache --virtual .build-deps \
gcc \
musl-dev \
python3-dev \
linux-headers \
&& pip install --break-system-packages psutil \
&& apk del .build-deps
# healthcheck
COPY healthcheck.sh /healthcheck.sh
COPY clamdcheck.sh /usr/local/bin
RUN chmod +x /healthcheck.sh
RUN chmod +x /usr/local/bin/clamdcheck.sh
RUN pip install --break-system-packages \
mysql-connector-python \
jinja2 \
redis \
dnspython
COPY --from=builder "/clamav" "/"
COPY data/Dockerfiles/bootstrap /bootstrap
COPY data/Dockerfiles/clamd/docker-entrypoint.sh /docker-entrypoint.sh
COPY data/Dockerfiles/clamd/clamd.sh /clamd.sh
COPY data/Dockerfiles/clamd/healthcheck.sh /healthcheck.sh
COPY data/Dockerfiles/clamd/clamdcheck.sh /usr/local/bin
HEALTHCHECK --start-period=6m CMD "/healthcheck.sh"
ENTRYPOINT []
RUN chmod +x /docker-entrypoint.sh \
/clamd.sh \
/healthcheck.sh \
/usr/local/bin/clamdcheck.sh \
/sbin/tini
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/sbin/tini", "-g", "--", "/clamd.sh"]

View File

@@ -1,48 +1,5 @@
#!/bin/bash
if [[ "${SKIP_CLAMD}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "SKIP_CLAMD=y, skipping ClamAV..."
sleep 365d
exit 0
fi
# Cleaning up garbage
echo "Cleaning up tmp files..."
rm -rf /var/lib/clamav/clamav-*.tmp
# Prepare whitelist
mkdir -p /run/clamav /var/lib/clamav
if [[ -s /etc/clamav/whitelist.ign2 ]]; then
echo "Copying non-empty whitelist.ign2 to /var/lib/clamav/whitelist.ign2"
cp /etc/clamav/whitelist.ign2 /var/lib/clamav/whitelist.ign2
fi
if [[ ! -f /var/lib/clamav/whitelist.ign2 ]]; then
echo "Creating /var/lib/clamav/whitelist.ign2"
cat <<EOF > /var/lib/clamav/whitelist.ign2
# Please restart ClamAV after changing signatures
Example-Signature.Ignore-1
PUA.Win.Trojan.EmbeddedPDF-1
PUA.Pdf.Trojan.EmbeddedJavaScript-1
PUA.Pdf.Trojan.OpenActionObjectwithJavascript-1
EOF
fi
chown clamav:clamav -R /var/lib/clamav /run/clamav
chmod 755 /var/lib/clamav
chmod 644 -R /var/lib/clamav/*
chmod 750 /run/clamav
stat /var/lib/clamav/whitelist.ign2
dos2unix /var/lib/clamav/whitelist.ign2
sed -i '/^\s*$/d' /var/lib/clamav/whitelist.ign2
# Copying to /etc/clamav to expose file as-is to administrator
cp -p /var/lib/clamav/whitelist.ign2 /etc/clamav/whitelist.ign2
BACKGROUND_TASKS=()
echo "Running freshclam..."
@@ -91,6 +48,7 @@ done
) &
BACKGROUND_TASKS+=($!)
echo "$(clamd -V) is starting... please wait a moment."
nice -n10 clamd &
BACKGROUND_TASKS+=($!)

View File

@@ -11,4 +11,4 @@ if [ "${CLAMAV_NO_CLAMD:-}" != "false" ]; then
echo "Clamd is up"
fi
exit 0
exit 0

View File

@@ -0,0 +1,20 @@
#!/bin/bash
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
echo "Running hook ${file}"
"${file}"
fi
done
python3 -u /bootstrap/main.py
BOOTSTRAP_EXIT_CODE=$?
if [ $BOOTSTRAP_EXIT_CODE -ne 0 ]; then
echo "Bootstrap failed with exit code $BOOTSTRAP_EXIT_CODE. Not starting Clamd."
exit $BOOTSTRAP_EXIT_CODE
fi
echo "Bootstrap succeeded. Starting Clamd..."
exec "$@"

View File

@@ -1,4 +1,4 @@
FROM alpine:3.20
FROM alpine:3.21
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
@@ -19,9 +19,9 @@ RUN apk add --update --no-cache python3 \
docker
RUN mkdir /app/modules
COPY docker-entrypoint.sh /app/
COPY main.py /app/main.py
COPY modules/ /app/modules/
COPY data/Dockerfiles/dockerapi/docker-entrypoint.sh /app/
COPY data/Dockerfiles/dockerapi/main.py /app/main.py
COPY data/Dockerfiles/dockerapi/modules/ /app/modules/
ENTRYPOINT ["/bin/sh", "/app/docker-entrypoint.sh"]
CMD ["python", "main.py"]

View File

@@ -34,9 +34,9 @@ async def lifespan(app: FastAPI):
# Init redis client
if os.environ['REDIS_SLAVEOF_IP'] != "":
redis_client = redis = await aioredis.from_url(f"redis://{os.environ['REDIS_SLAVEOF_IP']}:{os.environ['REDIS_SLAVEOF_PORT']}/0")
redis_client = redis = await aioredis.from_url(f"redis://{os.environ['REDIS_SLAVEOF_IP']}:{os.environ['REDIS_SLAVEOF_PORT']}/0", password=os.environ['REDISPASS'])
else:
redis_client = redis = await aioredis.from_url("redis://redis-mailcow:6379/0")
redis_client = redis = await aioredis.from_url(f"redis://{os.environ['REDIS_HOST']}:6379/0", password=os.environ['REDISPASS'])
# Init docker clients
sync_docker_client = docker.DockerClient(base_url='unix://var/run/docker.sock', version='auto')
@@ -90,7 +90,7 @@ async def get_container(container_id : str):
if container._id == container_id:
container_info = await container.show()
return Response(content=json.dumps(container_info, indent=4), media_type="application/json")
res = {
"type": "danger",
"msg": "no container found"
@@ -130,7 +130,7 @@ async def get_containers():
async def post_containers(container_id : str, post_action : str, request: Request):
global dockerapi
try :
try:
request_json = await request.json()
except Exception as err:
request_json = {}
@@ -191,7 +191,7 @@ async def post_container_update_stats(container_id : str):
stats = json.loads(await dockerapi.redis_client.get(container_id + '_stats'))
return Response(content=json.dumps(stats, indent=4), media_type="application/json")
# PubSub Handler
async def handle_pubsub_messages(channel: aioredis.client.PubSub):
@@ -241,10 +241,10 @@ async def handle_pubsub_messages(channel: aioredis.client.PubSub):
else:
dockerapi.logger.error("api call: missing container_name, post_action or request")
else:
dockerapi.logger.error("Unknwon PubSub recieved - %s" % json.dumps(data_json))
dockerapi.logger.error("Unknown PubSub received - %s" % json.dumps(data_json))
else:
dockerapi.logger.error("Unknwon PubSub recieved - %s" % json.dumps(data_json))
dockerapi.logger.error("Unknown PubSub received - %s" % json.dumps(data_json))
await asyncio.sleep(0.0)
except asyncio.TimeoutError:
pass

View File

@@ -159,7 +159,7 @@ class DockerApi:
postqueue_r = container.exec_run(["/bin/bash", "-c", "/usr/sbin/postqueue " + i], user='postfix')
# todo: check each exit code
res = { 'type': 'success', 'msg': 'Scheduled immediate delivery'}
return Response(content=json.dumps(res, indent=4), media_type="application/json")
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: exec - cmd: mailq - task: list
def container_post__exec__mailq__list(self, request_json, **kwargs):
if 'container_id' in kwargs:
@@ -318,7 +318,7 @@ class DockerApi:
if 'username' in request_json and 'script_name' in request_json:
for container in self.sync_docker_client.containers.list(filters=filters):
cmd = ["/bin/bash", "-c", "/usr/bin/doveadm sieve get -u '" + request_json['username'].replace("'", "'\\''") + "' '" + request_json['script_name'].replace("'", "'\\''") + "'"]
cmd = ["/bin/bash", "-c", "/usr/bin/doveadm sieve get -u '" + request_json['username'].replace("'", "'\\''") + "' '" + request_json['script_name'].replace("'", "'\\''") + "'"]
sieve_return = container.exec_run(cmd)
return self.exec_run_handler('utf8_text_only', sieve_return)
# api call: container_post - post_action: exec - cmd: maildir - task: cleanup
@@ -342,6 +342,30 @@ class DockerApi:
cmd = ["/bin/bash", "-c", cmd_vmail]
maildir_cleanup = container.exec_run(cmd, user='vmail')
return self.exec_run_handler('generic', maildir_cleanup)
# api call: container_post - post_action: exec - cmd: maildir - task: move
def container_post__exec__maildir__move(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'old_maildir' in request_json and 'new_maildir' in request_json:
for container in self.sync_docker_client.containers.list(filters=filters):
vmail_name = request_json['old_maildir'].replace("'", "'\\''")
new_vmail_name = request_json['new_maildir'].replace("'", "'\\''")
cmd_vmail = f"if [[ -d '/var/vmail/{vmail_name}' ]]; then /bin/mv '/var/vmail/{vmail_name}' '/var/vmail/{new_vmail_name}'; fi"
index_name = request_json['old_maildir'].split("/")
new_index_name = request_json['new_maildir'].split("/")
if len(index_name) > 1 and len(new_index_name) > 1:
index_name = index_name[1].replace("'", "'\\''") + "@" + index_name[0].replace("'", "'\\''")
new_index_name = new_index_name[1].replace("'", "'\\''") + "@" + new_index_name[0].replace("'", "'\\''")
cmd_vmail_index = f"if [[ -d '/var/vmail_index/{index_name}' ]]; then /bin/mv '/var/vmail_index/{index_name}' '/var/vmail_index/{new_index_name}_index'; fi"
cmd = ["/bin/bash", "-c", cmd_vmail + " && " + cmd_vmail_index]
else:
cmd = ["/bin/bash", "-c", cmd_vmail]
maildir_move = container.exec_run(cmd, user='vmail')
return self.exec_run_handler('generic', maildir_move)
# api call: container_post - post_action: exec - cmd: rspamd - task: worker_password
def container_post__exec__rspamd__worker_password(self, request_json, **kwargs):
if 'container_id' in kwargs:
@@ -374,6 +398,121 @@ class DockerApi:
self.logger.error('failed changing Rspamd password')
res = { 'type': 'danger', 'msg': 'command did not complete' }
return Response(content=json.dumps(res, indent=4), media_type="application/json")
# api call: container_post - post_action: exec - cmd: sogo - task: rename
def container_post__exec__sogo__rename_user(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
if 'old_username' in request_json and 'new_username' in request_json:
for container in self.sync_docker_client.containers.list(filters=filters):
old_username = request_json['old_username'].replace("'", "'\\''")
new_username = request_json['new_username'].replace("'", "'\\''")
sogo_return = container.exec_run(["/bin/bash", "-c", f"sogo-tool rename-user '{old_username}' '{new_username}'"], user='sogo')
return self.exec_run_handler('generic', sogo_return)
# api call: container_post - post_action: exec - cmd: doveadm - task: get_acl
def container_post__exec__doveadm__get_acl(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
id = request_json['id'].replace("'", "'\\''")
shared_folders = container.exec_run(["/bin/bash", "-c", f"doveadm mailbox list -u '{id}'"])
shared_folders = shared_folders.output.decode('utf-8')
shared_folders = shared_folders.splitlines()
formatted_acls = []
mailbox_seen = []
for shared_folder in shared_folders:
if "Shared" not in shared_folder:
mailbox = shared_folder.replace("'", "'\\''")
if mailbox in mailbox_seen:
continue
acls = container.exec_run(["/bin/bash", "-c", f"doveadm acl get -u '{id}' '{mailbox}'"])
acls = acls.output.decode('utf-8').strip().splitlines()
if len(acls) >= 2:
for acl in acls[1:]:
user_id, rights = acl.split(maxsplit=1)
user_id = user_id.split('=')[1]
mailbox_seen.append(mailbox)
formatted_acls.append({ 'user': id, 'id': user_id, 'mailbox': mailbox, 'rights': rights.split() })
elif "Shared" in shared_folder and "/" in shared_folder:
shared_folder = shared_folder.split("/")
if len(shared_folder) < 3:
continue
user = shared_folder[1].replace("'", "'\\''")
mailbox = '/'.join(shared_folder[2:]).replace("'", "'\\''")
if mailbox in mailbox_seen:
continue
acls = container.exec_run(["/bin/bash", "-c", f"doveadm acl get -u '{user}' '{mailbox}'"])
acls = acls.output.decode('utf-8').strip().splitlines()
if len(acls) >= 2:
for acl in acls[1:]:
user_id, rights = acl.split(maxsplit=1)
user_id = user_id.split('=')[1].replace("'", "'\\''")
if user_id == id and mailbox not in mailbox_seen:
mailbox_seen.append(mailbox)
formatted_acls.append({ 'user': user, 'id': id, 'mailbox': mailbox, 'rights': rights.split() })
return Response(content=json.dumps(formatted_acls, indent=4), media_type="application/json")
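# Example of the JSON payload built above (illustrative values):
# [
#   {"user": "alice@example.org", "id": "bob@example.org",
#    "mailbox": "INBOX/Projects", "rights": ["lookup", "read", "write"]}
# ]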
# api call: container_post - post_action: exec - cmd: doveadm - task: delete_acl
def container_post__exec__doveadm__delete_acl(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
user = request_json['user'].replace("'", "'\\''")
mailbox = request_json['mailbox'].replace("'", "'\\''")
id = request_json['id'].replace("'", "'\\''")
if user and mailbox and id:
acl_delete_return = container.exec_run(["/bin/bash", "-c", f"doveadm acl delete -u '{user}' '{mailbox}' 'user={id}'"])
return self.exec_run_handler('generic', acl_delete_return)
# api call: container_post - post_action: exec - cmd: doveadm - task: set_acl
def container_post__exec__doveadm__set_acl(self, request_json, **kwargs):
if 'container_id' in kwargs:
filters = {"id": kwargs['container_id']}
elif 'container_name' in kwargs:
filters = {"name": kwargs['container_name']}
for container in self.sync_docker_client.containers.list(filters=filters):
user = request_json['user'].replace("'", "'\\''")
mailbox = request_json['mailbox'].replace("'", "'\\''")
id = request_json['id'].replace("'", "'\\''")
rights = ""
available_rights = [
"admin",
"create",
"delete",
"expunge",
"insert",
"lookup",
"post",
"read",
"write",
"write-deleted",
"write-seen"
]
for right in request_json['rights']:
right = right.replace("'", "'\\''").lower()
if right in available_rights:
rights += right + " "
if user and mailbox and id and rights:
acl_set_return = container.exec_run(["/bin/bash", "-c", f"doveadm acl set -u '{user}' '{mailbox}' 'user={id}' {rights}"])
return self.exec_run_handler('generic', acl_set_return)
# Collect host stats
async def get_host_stats(self, wait=5):
@@ -462,7 +601,7 @@ class DockerApi:
except:
pass
return ''.join(total_data)
try :
socket = container.exec_run([shell_cmd], stdin=True, socket=True, user=user).output._sock
if not cmd.endswith("\n"):

View File

@@ -1,4 +1,4 @@
FROM alpine:3.20
FROM alpine:3.21
LABEL maintainer="The Infrastructure Company GmbH <info@servercow.de>"
@@ -34,9 +34,13 @@ RUN addgroup -g 5000 vmail \
lua5.3-sql-mysql \
icu-data-full \
mariadb-connector-c \
lua-sec \
mariadb-dev \
glib-dev \
gcompat \
mariadb-client \
perl \
perl-dev \
perl-ntlm \
perl-cgi \
perl-crypt-openssl-rsa \
@@ -65,7 +69,7 @@ RUN addgroup -g 5000 vmail \
perl-par-packer \
perl-parse-recdescent \
perl-lockfile-simple \
libproc \
libproc2 \
perl-readonly \
perl-regexp-common \
perl-sys-meminfo \
@@ -83,11 +87,11 @@ RUN addgroup -g 5000 vmail \
perl-proc-processtable \
perl-app-cpanminus \
procps \
python3 \
py3-mysqlclient \
python3 py3-pip python3-dev \
py3-html2text \
py3-jinja2 \
py3-redis \
linux-headers \
musl-dev \
gcc \
redis \
syslog-ng \
syslog-ng-redis \
@@ -105,32 +109,42 @@ RUN addgroup -g 5000 vmail \
dovecot-submissiond \
dovecot-pigeonhole-plugin \
dovecot-pop3d \
dovecot-fts-solr \
dovecot-fts-flatcurve \
&& arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/) \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$arch" \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true
COPY trim_logs.sh /usr/local/bin/trim_logs.sh
COPY clean_q_aged.sh /usr/local/bin/clean_q_aged.sh
COPY syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
COPY syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng-redis_slave.conf
COPY imapsync /usr/local/bin/imapsync
COPY imapsync_runner.pl /usr/local/bin/imapsync_runner.pl
COPY report-spam.sieve /usr/lib/dovecot/sieve/report-spam.sieve
COPY report-ham.sieve /usr/lib/dovecot/sieve/report-ham.sieve
COPY rspamd-pipe-ham /usr/lib/dovecot/sieve/rspamd-pipe-ham
COPY rspamd-pipe-spam /usr/lib/dovecot/sieve/rspamd-pipe-spam
COPY sa-rules.sh /usr/local/bin/sa-rules.sh
COPY maildir_gc.sh /usr/local/bin/maildir_gc.sh
COPY docker-entrypoint.sh /
COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
COPY quarantine_notify.py /usr/local/bin/quarantine_notify.py
COPY quota_notify.py /usr/local/bin/quota_notify.py
COPY repl_health.sh /usr/local/bin/repl_health.sh
COPY optimize-fts.sh /usr/local/bin/optimize-fts.sh
RUN pip install --break-system-packages \
mysql-connector-python \
jinja2 \
redis \
dnspython \
psutil
COPY data/Dockerfiles/bootstrap /bootstrap
COPY data/Dockerfiles/dovecot/trim_logs.sh /usr/local/bin/trim_logs.sh
COPY data/Dockerfiles/dovecot/clean_q_aged.sh /usr/local/bin/clean_q_aged.sh
COPY data/Dockerfiles/dovecot/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
COPY data/Dockerfiles/dovecot/syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng-redis_slave.conf
COPY data/Dockerfiles/dovecot/imapsync /usr/local/bin/imapsync
COPY data/Dockerfiles/dovecot/imapsync_runner.pl /usr/local/bin/imapsync_runner.pl
COPY data/Dockerfiles/dovecot/report-spam.sieve /usr/lib/dovecot/sieve/report-spam.sieve
COPY data/Dockerfiles/dovecot/report-ham.sieve /usr/lib/dovecot/sieve/report-ham.sieve
COPY data/Dockerfiles/dovecot/rspamd-pipe-ham /usr/lib/dovecot/sieve/rspamd-pipe-ham
COPY data/Dockerfiles/dovecot/rspamd-pipe-spam /usr/lib/dovecot/sieve/rspamd-pipe-spam
COPY data/Dockerfiles/dovecot/sa-rules.sh /usr/local/bin/sa-rules.sh
COPY data/Dockerfiles/dovecot/docker-entrypoint.sh /
COPY data/Dockerfiles/dovecot/supervisord.conf /etc/supervisor/supervisord.conf
COPY data/Dockerfiles/dovecot/stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
COPY data/Dockerfiles/dovecot/quarantine_notify.py /usr/local/bin/quarantine_notify.py
COPY data/Dockerfiles/dovecot/quota_notify.py /usr/local/bin/quota_notify.py
COPY data/Dockerfiles/dovecot/repl_health.sh /usr/local/bin/repl_health.sh
COPY data/Dockerfiles/dovecot/optimize-fts.sh /usr/local/bin/optimize-fts.sh
RUN chmod +x /docker-entrypoint.sh \
/usr/local/sbin/stop-supervisor.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]

View File

@@ -2,7 +2,7 @@
source /source_env.sh
MAX_AGE=$(redis-cli --raw -h redis-mailcow GET Q_MAX_AGE)
MAX_AGE=$(redis-cli --raw -h redis-mailcow -a ${REDISPASS} --no-auth-warning GET Q_MAX_AGE)
if [[ -z ${MAX_AGE} ]]; then
echo "Max age for quarantine items not defined"
@@ -15,6 +15,6 @@ if ! [[ ${MAX_AGE} =~ ${NUM_REGEXP} ]] ; then
exit 1
fi
TO_DELETE=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT COUNT(id) FROM quarantine WHERE created < NOW() - INTERVAL ${MAX_AGE//[!0-9]/} DAY" -BN)
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "DELETE FROM quarantine WHERE created < NOW() - INTERVAL ${MAX_AGE//[!0-9]/} DAY"
TO_DELETE=$(mariadb --skip-ssl --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT COUNT(id) FROM quarantine WHERE created < NOW() - INTERVAL ${MAX_AGE//[!0-9]/} DAY" -BN)
mariadb --skip-ssl --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "DELETE FROM quarantine WHERE created < NOW() - INTERVAL ${MAX_AGE//[!0-9]/} DAY"
echo "Deleted ${TO_DELETE} items from quarantine table (max age is ${MAX_AGE//[!0-9]/} days)"

View File

@@ -1,478 +1,4 @@
#!/bin/bash
set -e
# Wait for MySQL to warm-up
while ! mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent; do
echo "Waiting for database to come up..."
sleep 2
done
until dig +short mailcow.email > /dev/null; do
echo "Waiting for DNS..."
sleep 1
done
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
else
REDIS_CMDLINE="redis-cli -h redis -p 6379"
fi
until [[ $(${REDIS_CMDLINE} PING) == "PONG" ]]; do
echo "Waiting for Redis..."
sleep 2
done
${REDIS_CMDLINE} SET DOVECOT_REPL_HEALTH 1 > /dev/null
# Create missing directories
[[ ! -d /etc/dovecot/sql/ ]] && mkdir -p /etc/dovecot/sql/
[[ ! -d /etc/dovecot/lua/ ]] && mkdir -p /etc/dovecot/lua/
[[ ! -d /etc/dovecot/conf.d/ ]] && mkdir -p /etc/dovecot/conf.d/
[[ ! -d /var/vmail/_garbage ]] && mkdir -p /var/vmail/_garbage
[[ ! -d /var/vmail/sieve ]] && mkdir -p /var/vmail/sieve
[[ ! -d /etc/sogo ]] && mkdir -p /etc/sogo
[[ ! -d /var/volatile ]] && mkdir -p /var/volatile
# Set Dovecot sql config parameters, escape " in db password
DBPASS=$(echo ${DBPASS} | sed 's/"/\\"/g')
# Create quota dict for Dovecot
if [[ "${MASTER}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
QUOTA_TABLE=quota2
else
QUOTA_TABLE=quota2replica
fi
cat <<EOF > /etc/dovecot/sql/dovecot-dict-sql-quota.conf
# Autogenerated by mailcow
connect = "host=/var/run/mysqld/mysqld.sock dbname=${DBNAME} user=${DBUSER} password=${DBPASS}"
map {
pattern = priv/quota/storage
table = ${QUOTA_TABLE}
username_field = username
value_field = bytes
}
map {
pattern = priv/quota/messages
table = ${QUOTA_TABLE}
username_field = username
value_field = messages
}
EOF
# Create dict used for sieve pre and postfilters
cat <<EOF > /etc/dovecot/sql/dovecot-dict-sql-sieve_before.conf
# Autogenerated by mailcow
connect = "host=/var/run/mysqld/mysqld.sock dbname=${DBNAME} user=${DBUSER} password=${DBPASS}"
map {
pattern = priv/sieve/name/\$script_name
table = sieve_before
username_field = username
value_field = id
fields {
script_name = \$script_name
}
}
map {
pattern = priv/sieve/data/\$id
table = sieve_before
username_field = username
value_field = script_data
fields {
id = \$id
}
}
EOF
cat <<EOF > /etc/dovecot/sql/dovecot-dict-sql-sieve_after.conf
# Autogenerated by mailcow
connect = "host=/var/run/mysqld/mysqld.sock dbname=${DBNAME} user=${DBUSER} password=${DBPASS}"
map {
pattern = priv/sieve/name/\$script_name
table = sieve_after
username_field = username
value_field = id
fields {
script_name = \$script_name
}
}
map {
pattern = priv/sieve/data/\$id
table = sieve_after
username_field = username
value_field = script_data
fields {
id = \$id
}
}
EOF
echo -n ${ACL_ANYONE} > /etc/dovecot/acl_anyone
if [[ "${FLATCURVE_EXPERIMENTAL}" =~ ^([yY][eE][sS]|[yY]) ]]; then
echo -e "\e[33mActivating Flatcurve as FTS Backend...\e[0m"
echo -e "\e[33mDepending on your previous setup a full reindex might be needed... \e[0m"
echo -e "\e[34mVisit https://docs.mailcow.email/manual-guides/Dovecot/u_e-dovecot-fts/#fts-related-dovecot-commands to learn how to reindex\e[0m"
echo -n 'quota acl zlib mail_crypt mail_crypt_acl mail_log notify fts fts_flatcurve listescape replication' > /etc/dovecot/mail_plugins
echo -n 'quota imap_quota imap_acl acl zlib imap_zlib imap_sieve mail_crypt mail_crypt_acl notify mail_log fts fts_flatcurve listescape replication' > /etc/dovecot/mail_plugins_imap
echo -n 'quota sieve acl zlib mail_crypt mail_crypt_acl fts fts_flatcurve notify listescape replication' > /etc/dovecot/mail_plugins_lmtp
elif [[ "${SKIP_SOLR}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo -n 'quota acl zlib mail_crypt mail_crypt_acl mail_log notify listescape replication' > /etc/dovecot/mail_plugins
echo -n 'quota imap_quota imap_acl acl zlib imap_zlib imap_sieve mail_crypt mail_crypt_acl notify listescape replication mail_log' > /etc/dovecot/mail_plugins_imap
echo -n 'quota sieve acl zlib mail_crypt mail_crypt_acl notify listescape replication' > /etc/dovecot/mail_plugins_lmtp
else
echo -n 'quota acl zlib mail_crypt mail_crypt_acl mail_log notify fts fts_solr listescape replication' > /etc/dovecot/mail_plugins
echo -n 'quota imap_quota imap_acl acl zlib imap_zlib imap_sieve mail_crypt mail_crypt_acl notify mail_log fts fts_solr listescape replication' > /etc/dovecot/mail_plugins_imap
echo -n 'quota sieve acl zlib mail_crypt mail_crypt_acl fts fts_solr notify listescape replication' > /etc/dovecot/mail_plugins_lmtp
fi
chmod 644 /etc/dovecot/mail_plugins /etc/dovecot/mail_plugins_imap /etc/dovecot/mail_plugins_lmtp /templates/quarantine.tpl
cat <<EOF > /etc/dovecot/sql/dovecot-dict-sql-userdb.conf
# Autogenerated by mailcow
driver = mysql
connect = "host=/var/run/mysqld/mysqld.sock dbname=${DBNAME} user=${DBUSER} password=${DBPASS}"
user_query = SELECT CONCAT(JSON_UNQUOTE(JSON_VALUE(attributes, '$.mailbox_format')), mailbox_path_prefix, '%d/%n/${MAILDIR_SUB}:VOLATILEDIR=/var/volatile/%u:INDEX=/var/vmail_index/%u') AS mail, '%s' AS protocol, 5000 AS uid, 5000 AS gid, concat('*:bytes=', quota) AS quota_rule FROM mailbox WHERE username = '%u' AND (active = '1' OR active = '2')
iterate_query = SELECT username FROM mailbox WHERE active = '1' OR active = '2';
EOF
cat <<EOF > /etc/dovecot/lua/passwd-verify.lua
function auth_password_verify(req, pass)
if req.domain == nil then
return dovecot.auth.PASSDB_RESULT_USER_UNKNOWN, "No such user"
end
if cur == nil then
script_init()
end
if req.user == nil then
req.user = ''
end
respbody = {}
-- check against mailbox passwds
local cur,errorString = con:execute(string.format([[SELECT password FROM mailbox
WHERE username = '%s'
AND active = '1'
AND domain IN (SELECT domain FROM domain WHERE domain='%s' AND active='1')
AND IFNULL(JSON_UNQUOTE(JSON_VALUE(mailbox.attributes, '$.force_pw_update')), 0) != '1'
AND IFNULL(JSON_UNQUOTE(JSON_VALUE(attributes, '$.%s_access')), 1) = '1']], con:escape(req.user), con:escape(req.domain), con:escape(req.service)))
local row = cur:fetch ({}, "a")
while row do
if req.password_verify(req, row.password, pass) == 1 then
con:execute(string.format([[REPLACE INTO sasl_log (service, app_password, username, real_rip)
VALUES ("%s", 0, "%s", "%s")]], con:escape(req.service), con:escape(req.user), con:escape(req.real_rip)))
cur:close()
con:close()
return dovecot.auth.PASSDB_RESULT_OK, ""
end
row = cur:fetch (row, "a")
end
-- check against app passwds for imap and smtp
-- app passwords are only available for imap, smtp, sieve and pop3 when using sasl
if req.service == "smtp" or req.service == "imap" or req.service == "sieve" or req.service == "pop3" then
local cur,errorString = con:execute(string.format([[SELECT app_passwd.id, %s_access AS has_prot_access, app_passwd.password FROM app_passwd
INNER JOIN mailbox ON mailbox.username = app_passwd.mailbox
WHERE mailbox = '%s'
AND app_passwd.active = '1'
AND mailbox.active = '1'
AND app_passwd.domain IN (SELECT domain FROM domain WHERE domain='%s' AND active='1')]], con:escape(req.service), con:escape(req.user), con:escape(req.domain)))
local row = cur:fetch ({}, "a")
while row do
if req.password_verify(req, row.password, pass) == 1 then
-- if password is valid and protocol access is 1 OR real_rip matches SOGo, proceed
if tostring(req.real_rip) == "__IPV4_SOGO__" then
cur:close()
con:close()
return dovecot.auth.PASSDB_RESULT_OK, ""
elseif row.has_prot_access == "1" then
con:execute(string.format([[REPLACE INTO sasl_log (service, app_password, username, real_rip)
VALUES ("%s", %d, "%s", "%s")]], con:escape(req.service), row.id, con:escape(req.user), con:escape(req.real_rip)))
cur:close()
con:close()
return dovecot.auth.PASSDB_RESULT_OK, ""
end
end
row = cur:fetch (row, "a")
end
end
cur:close()
con:close()
return dovecot.auth.PASSDB_RESULT_PASSWORD_MISMATCH, "Failed to authenticate"
-- PoC
-- local reqbody = string.format([[{
-- "success":0,
-- "service":"%s",
-- "app_password":false,
-- "username":"%s",
-- "real_rip":"%s"
-- }]], con:escape(req.service), con:escape(req.user), con:escape(req.real_rip))
-- http.request {
-- method = "POST",
-- url = "http://nginx:8081/sasl_log.php",
-- source = ltn12.source.string(reqbody),
-- headers = {
-- ["content-type"] = "application/json",
-- ["content-length"] = tostring(#reqbody)
-- },
-- sink = ltn12.sink.table(respbody)
-- }
end
function auth_passdb_lookup(req)
return dovecot.auth.PASSDB_RESULT_USER_UNKNOWN, ""
end
function script_init()
mysql = require "luasql.mysql"
http = require "socket.http"
http.TIMEOUT = 5
ltn12 = require "ltn12"
env = mysql.mysql()
con = env:connect("__DBNAME__","__DBUSER__","__DBPASS__","localhost")
return 0
end
function script_deinit()
con:close()
env:close()
end
EOF
# Temporarily set FTS depending on user choice inside mailcow.conf. Will be removed as soon as Solr is dropped
if [[ "${FLATCURVE_EXPERIMENTAL}" =~ ^([yY][eE][sS]|[yY])$ ]]; then
cat <<EOF > /etc/dovecot/conf.d/fts.conf
# Autogenerated by mailcow
plugin {
fts_autoindex = yes
fts_autoindex_exclude = \Junk
fts_autoindex_exclude2 = \Trash
fts = flatcurve
# Maximum term length can be set via the 'maxlen' argument (maxlen is
# specified in bytes, not number of UTF-8 characters)
fts_tokenizer_email_address = maxlen=100
fts_tokenizer_generic = algorithm=simple maxlen=30
# These are not flatcurve settings, but required for Dovecot FTS. See
# Dovecot FTS Configuration link above for further information.
fts_languages = en es de
fts_tokenizers = generic email-address
# OPTIONAL: Recommended default FTS core configuration
fts_filters = normalizer-icu snowball stopwords
fts_filters_en = lowercase snowball english-possessive stopwords
}
EOF
elif [[ ! "${SKIP_SOLR}" =~ ^([yY][eE][sS]|[yY])$ ]]; then
cat <<EOF > /etc/dovecot/conf.d/fts.conf
# Autogenerated by mailcow
plugin {
fts = solr
fts_autoindex = yes
fts_autoindex_exclude = \Junk
fts_autoindex_exclude2 = \Trash
fts_solr = url=http://solr:8983/solr/dovecot-fts/
fts_tokenizers = generic email-address
fts_tokenizer_generic = algorithm=simple
fts_filters = normalizer-icu snowball stopwords
fts_filters_en = lowercase snowball english-possessive stopwords
}
EOF
fi
# Replace patterns in app-passdb.lua
sed -i "s/__DBUSER__/${DBUSER}/g" /etc/dovecot/lua/passwd-verify.lua
sed -i "s/__DBPASS__/${DBPASS}/g" /etc/dovecot/lua/passwd-verify.lua
sed -i "s/__DBNAME__/${DBNAME}/g" /etc/dovecot/lua/passwd-verify.lua
sed -i "s/__IPV4_SOGO__/${IPV4_NETWORK}.248/g" /etc/dovecot/lua/passwd-verify.lua
# Migrate old sieve_after file
[[ -f /etc/dovecot/sieve_after ]] && mv /etc/dovecot/sieve_after /etc/dovecot/global_sieve_after
# Create global sieve scripts
cat /etc/dovecot/global_sieve_after > /var/vmail/sieve/global_sieve_after.sieve
cat /etc/dovecot/global_sieve_before > /var/vmail/sieve/global_sieve_before.sieve
# Check permissions of vmail/index/garbage directories.
# Do not do this every start-up, it may take a very long time. So we use a stat check here.
if [[ $(stat -c %U /var/vmail/) != "vmail" ]] ; then chown -R vmail:vmail /var/vmail ; fi
if [[ $(stat -c %U /var/vmail/_garbage) != "vmail" ]] ; then chown -R vmail:vmail /var/vmail/_garbage ; fi
if [[ $(stat -c %U /var/vmail_index) != "vmail" ]] ; then chown -R vmail:vmail /var/vmail_index ; fi
# Cleanup random user maildirs
rm -rf /var/vmail/mailcow.local/*
# Cleanup PIDs
[[ -f /tmp/quarantine_notify.pid ]] && rm /tmp/quarantine_notify.pid
# create sni configuration
echo "" > /etc/dovecot/sni.conf
for cert_dir in /etc/ssl/mail/*/ ; do
if [[ ! -f ${cert_dir}domains ]] || [[ ! -f ${cert_dir}cert.pem ]] || [[ ! -f ${cert_dir}key.pem ]]; then
continue
fi
domains=($(cat ${cert_dir}domains))
for domain in ${domains[@]}; do
echo 'local_name '${domain}' {' >> /etc/dovecot/sni.conf;
echo ' ssl_cert = <'${cert_dir}'cert.pem' >> /etc/dovecot/sni.conf;
echo ' ssl_key = <'${cert_dir}'key.pem' >> /etc/dovecot/sni.conf;
echo '}' >> /etc/dovecot/sni.conf;
done
done
# Create random master for SOGo sieve features
RAND_USER=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 16 | head -n 1)
RAND_PASS=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 24 | head -n 1)
if [[ ! -z ${DOVECOT_MASTER_USER} ]] && [[ ! -z ${DOVECOT_MASTER_PASS} ]]; then
RAND_USER=${DOVECOT_MASTER_USER}
RAND_PASS=${DOVECOT_MASTER_PASS}
fi
echo ${RAND_USER}@mailcow.local:{SHA1}$(echo -n ${RAND_PASS} | sha1sum | awk '{print $1}'):::::: > /etc/dovecot/dovecot-master.passwd
echo ${RAND_USER}@mailcow.local::5000:5000:::: > /etc/dovecot/dovecot-master.userdb
echo ${RAND_USER}@mailcow.local:${RAND_PASS} > /etc/sogo/sieve.creds
if [[ -z ${MAILDIR_SUB} ]]; then
MAILDIR_SUB_SHARED=
else
MAILDIR_SUB_SHARED=/${MAILDIR_SUB}
fi
cat <<EOF > /etc/dovecot/shared_namespace.conf
# Autogenerated by mailcow
namespace {
type = shared
separator = /
prefix = Shared/%%u/
location = maildir:%%h${MAILDIR_SUB_SHARED}:INDEX=~${MAILDIR_SUB_SHARED}/Shared/%%u
subscriptions = no
list = children
}
EOF
cat <<EOF > /etc/dovecot/sogo_trusted_ip.conf
# Autogenerated by mailcow
remote ${IPV4_NETWORK}.248 {
disable_plaintext_auth = no
}
EOF
# Create random master Password for SOGo SSO
RAND_PASS=$(cat /dev/urandom | tr -dc 'a-z0-9' | fold -w 32 | head -n 1)
echo -n ${RAND_PASS} > /etc/phpfpm/sogo-sso.pass
cat <<EOF > /etc/dovecot/sogo-sso.conf
# Autogenerated by mailcow
passdb {
driver = static
args = allow_real_nets=${IPV4_NETWORK}.248/32 password={plain}${RAND_PASS}
}
EOF
if [[ "${MASTER}" =~ ^([nN][oO]|[nN])+$ ]]; then
# Toggling MASTER will result in a rebuild of containers, so the quota script will be recreated
cat <<'EOF' > /usr/local/bin/quota_notify.py
#!/usr/bin/python3
import sys
sys.exit()
EOF
fi
# Set mail_replica for HA setups
if [[ -n ${MAILCOW_REPLICA_IP} && -n ${DOVEADM_REPLICA_PORT} ]]; then
cat <<EOF > /etc/dovecot/mail_replica.conf
# Autogenerated by mailcow
mail_replica = tcp:${MAILCOW_REPLICA_IP}:${DOVEADM_REPLICA_PORT}
EOF
fi
# 401 is user dovecot
if [[ ! -s /mail_crypt/ecprivkey.pem || ! -s /mail_crypt/ecpubkey.pem ]]; then
openssl ecparam -name prime256v1 -genkey | openssl pkey -out /mail_crypt/ecprivkey.pem
openssl pkey -in /mail_crypt/ecprivkey.pem -pubout -out /mail_crypt/ecpubkey.pem
chown 401 /mail_crypt/ecprivkey.pem /mail_crypt/ecpubkey.pem
else
chown 401 /mail_crypt/ecprivkey.pem /mail_crypt/ecpubkey.pem
fi
# Compile sieve scripts
sievec /var/vmail/sieve/global_sieve_before.sieve
sievec /var/vmail/sieve/global_sieve_after.sieve
sievec /usr/lib/dovecot/sieve/report-spam.sieve
sievec /usr/lib/dovecot/sieve/report-ham.sieve
# Fix permissions
chown root:root /etc/dovecot/sql/*.conf
chown root:dovecot /etc/dovecot/sql/dovecot-dict-sql-sieve* /etc/dovecot/sql/dovecot-dict-sql-quota* /etc/dovecot/lua/passwd-verify.lua
chmod 640 /etc/dovecot/sql/*.conf /etc/dovecot/lua/passwd-verify.lua
chown -R vmail:vmail /var/vmail/sieve
chown -R vmail:vmail /var/volatile
chown -R vmail:vmail /var/vmail_index
adduser vmail tty
chmod g+rw /dev/console
chown root:tty /dev/console
chmod +x /usr/lib/dovecot/sieve/rspamd-pipe-ham \
/usr/lib/dovecot/sieve/rspamd-pipe-spam \
/usr/local/bin/imapsync_runner.pl \
/usr/local/bin/imapsync \
/usr/local/bin/trim_logs.sh \
/usr/local/bin/sa-rules.sh \
/usr/local/bin/clean_q_aged.sh \
/usr/local/bin/maildir_gc.sh \
/usr/local/sbin/stop-supervisor.sh \
/usr/local/bin/quota_notify.py \
/usr/local/bin/repl_health.sh \
/usr/local/bin/optimize-fts.sh
# Prepare environment file for cronjobs
printenv | sed 's/^\(.*\)$/export \1/g' > /source_env.sh
# Clean old PID if any
[[ -f /var/run/dovecot/master.pid ]] && rm /var/run/dovecot/master.pid
# Clean stopped imapsync jobs
rm -f /tmp/imapsync_busy.lock
IMAPSYNC_TABLE=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SHOW TABLES LIKE 'imapsync'" -Bs)
[[ ! -z ${IMAPSYNC_TABLE} ]] && mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "UPDATE imapsync SET is_running='0'"
# Envsubst maildir_gc
echo "$(envsubst < /usr/local/bin/maildir_gc.sh)" > /usr/local/bin/maildir_gc.sh
# GUID generation
while [[ ${VERSIONS_OK} != 'OK' ]]; do
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = \"${DBNAME}\" AND TABLE_NAME = 'versions'") ]]; then
VERSIONS_OK=OK
else
echo "Waiting for versions table to be created..."
sleep 3
fi
done
PUBKEY_MCRYPT=$(doveconf -P 2> /dev/null | grep -i mail_crypt_global_public_key | cut -d '<' -f2)
if [ -f ${PUBKEY_MCRYPT} ]; then
GUID=$(cat <(echo ${MAILCOW_HOSTNAME}) /mail_crypt/ecpubkey.pem | sha256sum | cut -d ' ' -f1 | tr -cd "[a-fA-F0-9.:/] ")
if [ ${#GUID} -eq 64 ]; then
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
REPLACE INTO versions (application, version) VALUES ("GUID", "${GUID}");
EOF
else
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
REPLACE INTO versions (application, version) VALUES ("GUID", "INVALID");
EOF
fi
fi
# Collect SA rules once now
/usr/local/bin/sa-rules.sh
# Run hooks
for file in /hooks/*; do
@@ -482,12 +8,24 @@ for file in /hooks/*; do
fi
done
# For some strange, unknown and stupid reason, Dovecot may run into a race condition, when this file is not touched before it is read by dovecot/auth
# May be related to something inside Docker, I seriously don't know
touch /etc/dovecot/lua/passwd-verify.lua
python3 -u /bootstrap/main.py
BOOTSTRAP_EXIT_CODE=$?
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
cp /etc/syslog-ng/syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng.conf
# Fix OpenSSL 3.X TLS1.0, 1.1 support (https://community.mailcow.email/d/4062-hi-all/20)
if grep -qE 'ssl_min_protocol\s*=\s*(TLSv1|TLSv1\.1)\s*$' /etc/dovecot/dovecot.conf /etc/dovecot/extra.conf; then
sed -i '/\[openssl_init\]/a ssl_conf = ssl_configuration' /etc/ssl/openssl.cnf
echo "[ssl_configuration]" >> /etc/ssl/openssl.cnf
echo "system_default = tls_system_default" >> /etc/ssl/openssl.cnf
echo "[tls_system_default]" >> /etc/ssl/openssl.cnf
echo "MinProtocol = TLSv1" >> /etc/ssl/openssl.cnf
echo "CipherString = DEFAULT@SECLEVEL=0" >> /etc/ssl/openssl.cnf
fi
exec "$@"
if [ $BOOTSTRAP_EXIT_CODE -ne 0 ]; then
echo "Bootstrap failed with exit code $BOOTSTRAP_EXIT_CODE. Not starting Dovecot."
exit $BOOTSTRAP_EXIT_CODE
fi
echo "Bootstrap succeeded. Starting Dovecot..."
/usr/sbin/dovecot -F

View File

@@ -1,2 +0,0 @@
#!/bin/bash
[ -d /var/vmail/_garbage/ ] && /usr/bin/find /var/vmail/_garbage/ -mindepth 1 -maxdepth 1 -type d -cmin +${MAILDIR_GC_TIME} -exec rm -r {} \;

View File

@@ -1,7 +1,7 @@
#!/bin/bash
if [[ "${SKIP_SOLR}" =~ ^([yY][eE][sS]|[yY])+$ && ! "${FLATCURVE_EXPERIMENTAL}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
if [[ "${SKIP_FTS}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
exit 0
else
doveadm fts optimize -A
fi
fi

View File

@@ -31,7 +31,7 @@ try:
while True:
try:
r = redis.StrictRedis(host='redis', decode_responses=True, port=6379, db=0)
r = redis.StrictRedis(host='redis', decode_responses=True, port=6379, db=0, password=os.environ['REDISPASS'])
r.ping()
except Exception as ex:
print('%s - trying again...' % (ex))

View File

@@ -14,6 +14,11 @@ import sys
import html2text
from subprocess import Popen, PIPE, STDOUT
# Don't run if role is not master
if os.getenv("MASTER").lower() in ["n", "no"]:
sys.exit()
if len(sys.argv) > 2:
percent = int(sys.argv[1])
username = str(sys.argv[2])
@@ -23,7 +28,7 @@ else:
while True:
try:
r = redis.StrictRedis(host='redis', decode_responses=True, port=6379, db=0)
r = redis.StrictRedis(host='redis', decode_responses=True, port=6379, db=0, username='quota_notify', password='')
r.ping()
except Exception as ex:
print('%s - trying again...' % (ex))

View File

@@ -4,9 +4,9 @@ source /source_env.sh
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT} -a ${REDISPASS} --no-auth-warning"
else
REDIS_CMDLINE="redis-cli -h redis -p 6379"
REDIS_CMDLINE="redis-cli -h redis -p 6379 -a ${REDISPASS} --no-auth-warning"
fi
# Is replication active?

View File

@@ -11,7 +11,7 @@ else
fi
# Deploy
if curl --connect-timeout 15 --retry 10 --max-time 30 https://www.spamassassin.heinlein-support.de/$(dig txt 1.4.3.spamassassin.heinlein-support.de +short | tr -d '"' | tr -dc '0-9').tar.gz --output /tmp/sa-rules-heinlein.tar.gz; then
if curl --connect-timeout 15 --retry 5 --max-time 30 https://www.spamassassin.heinlein-support.de/$(dig txt 1.4.3.spamassassin.heinlein-support.de +short | tr -d '"' | tr -dc '0-9').tar.gz --output /tmp/sa-rules-heinlein.tar.gz; then
if gzip -t /tmp/sa-rules-heinlein.tar.gz; then
tar xfvz /tmp/sa-rules-heinlein.tar.gz -C /tmp/sa-rules-heinlein
cat /tmp/sa-rules-heinlein/*cf > /etc/rspamd/custom/sa-rules

View File

@@ -11,8 +11,8 @@ stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autostart=true
[program:dovecot]
command=/usr/sbin/dovecot -F
[program:bootstrap]
command=/docker-entrypoint.sh
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr

View File

@@ -20,6 +20,7 @@ destination d_redis_ui_log {
host("`REDIS_SLAVEOF_IP`")
persist-name("redis1")
port(`REDIS_SLAVEOF_PORT`)
auth("`REDISPASS`")
command("LPUSH" "DOVECOT_MAILLOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
@@ -28,6 +29,7 @@ destination d_redis_f2b_channel {
host("`REDIS_SLAVEOF_IP`")
persist-name("redis2")
port(`REDIS_SLAVEOF_PORT`)
auth("`REDISPASS`")
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};
@@ -36,8 +38,13 @@ filter f_replica {
not match("User has no mail_replica in userdb" value("MESSAGE"));
not match("Error: sync: Unknown user in remote" value("MESSAGE"));
};
filter f_dovecot_auth_try {
not match("- trying the next passdb" value("MESSAGE")) and
not match("- trying the next userdb" value("MESSAGE"));
};
log {
source(s_dgram);
filter(f_dovecot_auth_try);
filter(f_replica);
destination(d_stdout);
filter(f_mail);

View File

@@ -20,6 +20,7 @@ destination d_redis_ui_log {
host("redis-mailcow")
persist-name("redis1")
port(6379)
auth("`REDISPASS`")
command("LPUSH" "DOVECOT_MAILLOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
@@ -28,6 +29,7 @@ destination d_redis_f2b_channel {
host("redis-mailcow")
persist-name("redis2")
port(6379)
auth("`REDISPASS`")
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};
@@ -36,8 +38,13 @@ filter f_replica {
not match("User has no mail_replica in userdb" value("MESSAGE"));
not match("Error: sync: Unknown user in remote" value("MESSAGE"));
};
filter f_dovecot_auth_try {
not match("- trying the next passdb" value("MESSAGE")) and
not match("- trying the next userdb" value("MESSAGE"));
};
log {
source(s_dgram);
filter(f_dovecot_auth_try);
filter(f_replica);
destination(d_stdout);
filter(f_mail);

View File

@@ -10,9 +10,9 @@ catch_non_zero() {
source /source_env.sh
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT} -a ${REDISPASS} --no-auth-warning"
else
REDIS_CMDLINE="redis-cli -h redis -p 6379"
REDIS_CMDLINE="redis-cli -h redis -p 6379 -a ${REDISPASS} --no-auth-warning"
fi
catch_non_zero "${REDIS_CMDLINE} LTRIM ACME_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM POSTFIX_MAILLOG 0 ${LOG_LINES}"
@@ -23,3 +23,4 @@ catch_non_zero "${REDIS_CMDLINE} LTRIM AUTODISCOVER_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM API_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM RL_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM WATCHDOG_LOG 0 ${LOG_LINES}"
catch_non_zero "${REDIS_CMDLINE} LTRIM CRON_LOG 0 ${LOG_LINES}"

View File

@@ -0,0 +1,28 @@
FROM mariadb:10.11
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
RUN apt-get update && \
apt-get install -y --no-install-recommends \
python3 \
python3-pip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN pip install \
mysql-connector-python \
jinja2 \
redis \
dnspython \
psutil
COPY data/Dockerfiles/bootstrap /bootstrap
COPY data/Dockerfiles/mariadb/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["mysqld"]

View File

@@ -0,0 +1,20 @@
#!/bin/bash
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
echo "Running hook ${file}"
"${file}"
fi
done
python3 -u /bootstrap/main.py
BOOTSTRAP_EXIT_CODE=$?
if [ $BOOTSTRAP_EXIT_CODE -ne 0 ]; then
echo "Bootstrap failed with exit code $BOOTSTRAP_EXIT_CODE. Not starting MariaDB."
exit $BOOTSTRAP_EXIT_CODE
fi
echo "Bootstrap succeeded. Starting MariaDB..."
exec /usr/local/bin/docker-entrypoint.sh "$@"

View File

@@ -1,4 +1,4 @@
FROM alpine:3.20
FROM alpine:3.21
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"

View File

@@ -85,11 +85,10 @@ def refreshF2bregex():
f2bregex[3] = r'warning: .*\[([0-9a-f\.:]+)\]: SASL .+ authentication failed: (?!.*Connection lost to authentication server).+'
f2bregex[4] = r'warning: non-SMTP command from .*\[([0-9a-f\.:]+)]:.+'
f2bregex[5] = r'NOQUEUE: reject: RCPT from \[([0-9a-f\.:]+)].+Protocol error.+'
f2bregex[6] = r'-login: Disconnected.+ \(auth failed, .+\): user=.*, method=.+, rip=([0-9a-f\.:]+),'
f2bregex[7] = r'-login: Aborted login.+ \(auth failed .+\): user=.+, rip=([0-9a-f\.:]+), lip.+'
f2bregex[8] = r'-login: Aborted login.+ \(tried to use disallowed .+\): user=.+, rip=([0-9a-f\.:]+), lip.+'
f2bregex[9] = r'SOGo.+ Login from \'([0-9a-f\.:]+)\' for user .+ might not have worked'
f2bregex[10] = r'([0-9a-f\.:]+) \"GET \/SOGo\/.* HTTP.+\" 403 .+'
f2bregex[6] = r'\w+\([^,]+,([0-9a-f\.:]+),<[^>]+>\): Password mismatch \(SHA1 of given password: [a-f0-9]+\)'
f2bregex[7] = r'\w+\([^,]+,([0-9a-f\.:]+),<[^>]+>\): unknown user \(SHA1 of given password: [a-f0-9]+\)'
f2bregex[8] = r'SOGo.+ Login from \'([0-9a-f\.:]+)\' for user .+ might not have worked'
f2bregex[9] = r'([0-9a-f\.:]+) \"GET \/SOGo\/.* HTTP.+\" 403 .+'
r.set('F2B_REGEX', json.dumps(f2bregex, ensure_ascii=False))
else:
try:
@@ -106,7 +105,7 @@ def get_ip(address):
ip = ip.ipv4_mapped
if ip.is_private or ip.is_loopback:
return False
return ip
def ban(address):
@@ -434,9 +433,9 @@ if __name__ == '__main__':
redis_slaveof_ip = os.getenv('REDIS_SLAVEOF_IP', '')
redis_slaveof_port = os.getenv('REDIS_SLAVEOF_PORT', '')
if "".__eq__(redis_slaveof_ip):
r = redis.StrictRedis(host=os.getenv('IPV4_NETWORK', '172.22.1') + '.249', decode_responses=True, port=6379, db=0)
r = redis.StrictRedis(host=os.getenv('IPV4_NETWORK', '172.22.1') + '.249', decode_responses=True, port=6379, db=0, password=os.environ['REDISPASS'])
else:
r = redis.StrictRedis(host=redis_slaveof_ip, decode_responses=True, port=redis_slaveof_port, db=0)
r = redis.StrictRedis(host=redis_slaveof_ip, decode_responses=True, port=redis_slaveof_port, db=0, password=os.environ['REDISPASS'])
r.ping()
pubsub = r.pubsub()
except Exception as ex:
@@ -452,7 +451,7 @@ if __name__ == '__main__':
# clear bans in redis
r.delete('F2B_ACTIVE_BANS')
r.delete('F2B_PERM_BANS')
refreshF2boptions()
watch_thread = Thread(target=watch)
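Note: the replacement regexes above key on Dovecot's auth log lines ("Password mismatch" / "unknown user") instead of the dropped "-login:" patterns. As a quick self-contained check of the first one, the snippet below extracts the client IP; the sample log line is invented for illustration and uses a documentation address.
import re

# Regex copied from the hunk above; the log line below is a made-up sample.
PATTERN = r'\w+\([^,]+,([0-9a-f\.:]+),<[^>]+>\): Password mismatch \(SHA1 of given password: [a-f0-9]+\)'
SAMPLE = "imap-login: Info: auth(user@example.org,203.0.113.7,<abc123>): Password mismatch (SHA1 of given password: deadbeef)"

m = re.search(PATTERN, SAMPLE)
print(m.group(1) if m else "no match")  # expected output: 203.0.113.7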

View File

@@ -0,0 +1,36 @@
FROM nginx:alpine
LABEL maintainer "The Infrastructure Company GmbH <info@servercow.de>"
ENV PIP_BREAK_SYSTEM_PACKAGES=1
RUN apk add --no-cache nginx \
python3 py3-pip \
supervisor
RUN apk add --no-cache --virtual .build-deps \
gcc \
musl-dev \
python3-dev \
linux-headers \
&& pip install --break-system-packages psutil \
&& apk del .build-deps
RUN pip install --break-system-packages \
mysql-connector-python \
jinja2 \
redis \
dnspython
RUN mkdir -p /etc/nginx/includes
COPY data/Dockerfiles/bootstrap /bootstrap
COPY data/Dockerfiles/nginx/docker-entrypoint.sh /
COPY data/Dockerfiles/nginx/supervisord.conf /etc/supervisor/supervisord.conf
COPY data/Dockerfiles/nginx/stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
RUN chmod +x /docker-entrypoint.sh
RUN chmod +x /usr/local/sbin/stop-supervisor.sh
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]

View File

@@ -0,0 +1,20 @@
#!/bin/sh
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
echo "Running hook ${file}"
"${file}"
fi
done
python3 -u /bootstrap/main.py
BOOTSTRAP_EXIT_CODE=$?
if [ $BOOTSTRAP_EXIT_CODE -ne 0 ]; then
echo "Bootstrap failed with exit code $BOOTSTRAP_EXIT_CODE. Not starting Nginx."
exit $BOOTSTRAP_EXIT_CODE
fi
echo "Bootstrap succeeded. Starting Nginx..."
nginx -g "daemon off;"

View File

@@ -0,0 +1,8 @@
#!/bin/bash
printf "READY\n";
while read line; do
echo "Processing Event: $line" >&2;
kill -3 $(cat "/var/run/supervisord.pid")
done < /dev/stdin
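Note: stop-supervisor.sh is a minimal supervisord event listener: it prints READY, and whenever one of the subscribed process-state events arrives it sends signal 3 (SIGQUIT) to supervisord, taking the whole container down with the failed program. For reference, a more explicit Python sketch of the same idea follows; the READY/RESULT handshake is the generic supervisor listener protocol and is not code from this changeset.
# Sketch of a supervisor event listener equivalent to stop-supervisor.sh.
import os
import signal
import sys

def main():
    while True:
        sys.stdout.write("READY\n")            # tell supervisord we can accept an event
        sys.stdout.flush()
        header = sys.stdin.readline()          # e.g. "ver:3.0 ... eventname:PROCESS_STATE_EXITED len:84"
        fields = dict(item.split(":", 1) for item in header.split())
        sys.stdin.read(int(fields["len"]))     # consume (and ignore) the event payload
        sys.stderr.write("Processing Event: %s\n" % header.strip())
        with open("/var/run/supervisord.pid") as fh:           # pid file path taken from the shell version above
            os.kill(int(fh.read().strip()), signal.SIGQUIT)    # same as `kill -3` in the shell script
        sys.stdout.write("RESULT 2\nOK")       # acknowledge the event to supervisord
        sys.stdout.flush()

if __name__ == "__main__":
    main()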

View File

@@ -0,0 +1,27 @@
[supervisord]
nodaemon=true
user=root
[program:syslog-ng]
command=/usr/sbin/syslog-ng --foreground --no-caps
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autostart=true
priority=1
[program:bootstrap]
command=/docker-entrypoint.sh
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
priority=2
startretries=10
autorestart=true
stopwaitsecs=120
[eventlistener:processes]
command=/usr/local/sbin/stop-supervisor.sh
events=PROCESS_STATE_STOPPED, PROCESS_STATE_EXITED, PROCESS_STATE_FATAL

View File

@@ -1,4 +1,4 @@
FROM alpine:3.20
FROM alpine:3.21
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"

View File

@@ -32,6 +32,13 @@ import time
import magic
import re
skip_olefy = os.getenv('SKIP_OLEFY', '')
if skip_olefy.lower() in ['yes', 'y']:
print("SKIP_OLEFY=y, skipping Olefy...")
time.sleep(365 * 24 * 60 * 60)
sys.exit(0)
# merge variables from /etc/olefy.conf and the defaults
olefy_listen_addr_string = os.getenv('OLEFY_BINDADDRESS', '127.0.0.1,::1')
olefy_listen_port = int(os.getenv('OLEFY_BINDPORT', '10050'))
@@ -113,7 +120,7 @@ def oletools( stream, tmp_file_name, lid ):
out = bytes(out.decode('utf-8', 'ignore').replace(' ', ' ').replace('\t', '').replace('\n', '').replace('XLMMacroDeobfuscator: pywin32 is not installed (only is required if you want to use MS Excel)', ''), encoding="utf-8")
failed = False
if out.__len__() < 30:
logger.error('{} olevba returned <30 chars - rc: {!r}, response: {!r}, error: {!r}'.format(lid,cmd_tmp.returncode,
logger.error('{} olevba returned <30 chars - rc: {!r}, response: {!r}, error: {!r}'.format(lid,cmd_tmp.returncode,
out.decode('utf-8', 'ignore'), err.decode('utf-8', 'ignore')))
out = b'[ { "error": "Unhandled error - too short olevba response" } ]'
failed = True

View File

@@ -1,19 +1,19 @@
FROM php:8.2-fpm-alpine3.18
FROM php:8.2-fpm-alpine3.21
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
# renovate: datasource=github-tags depName=krakjoe/apcu versioning=semver-coerced extractVersion=^v(?<version>.*)$
ARG APCU_PECL_VERSION=5.1.23
ARG APCU_PECL_VERSION=5.1.24
# renovate: datasource=github-tags depName=Imagick/imagick versioning=semver-coerced extractVersion=(?<version>.*)$
ARG IMAGICK_PECL_VERSION=3.7.0
ARG IMAGICK_PECL_VERSION=3.8.0
# renovate: datasource=github-tags depName=php/pecl-mail-mailparse versioning=semver-coerced extractVersion=^v(?<version>.*)$
ARG MAILPARSE_PECL_VERSION=3.1.6
ARG MAILPARSE_PECL_VERSION=3.1.8
# renovate: datasource=github-tags depName=php-memcached-dev/php-memcached versioning=semver-coerced extractVersion=^v(?<version>.*)$
ARG MEMCACHED_PECL_VERSION=3.2.0
# renovate: datasource=github-tags depName=phpredis/phpredis versioning=semver-coerced extractVersion=(?<version>.*)$
ARG REDIS_PECL_VERSION=6.0.2
ARG REDIS_PECL_VERSION=6.1.0
# renovate: datasource=github-tags depName=composer/composer versioning=semver-coerced extractVersion=(?<version>.*)$
ARG COMPOSER_VERSION=2.6.6
ARG COMPOSER_VERSION=2.8.6
RUN apk add -U --no-cache autoconf \
aspell-dev \
@@ -63,6 +63,7 @@ RUN apk add -U --no-cache autoconf \
samba-client \
zlib-dev \
tzdata \
python3 py3-pip \
&& pecl install APCu-${APCU_PECL_VERSION} \
&& pecl install imagick-${IMAGICK_PECL_VERSION} \
&& pecl install mailparse-${MAILPARSE_PECL_VERSION} \
@@ -72,12 +73,12 @@ RUN apk add -U --no-cache autoconf \
&& pecl clear-cache \
&& docker-php-ext-configure intl \
&& docker-php-ext-configure exif \
&& docker-php-ext-configure gd --with-freetype=/usr/include/ \
&& docker-php-ext-configure gd --with-freetype=/usr/include/ \
--with-jpeg=/usr/include/ \
--with-webp \
--with-xpm \
--with-avif \
&& docker-php-ext-install -j 4 exif gd gettext intl ldap opcache pcntl pdo pdo_mysql pspell soap sockets sysvsem zip bcmath gmp \
&& docker-php-ext-install -j 4 exif gd gettext intl ldap opcache pcntl pdo pdo_mysql pspell soap sockets zip bcmath gmp \
&& docker-php-ext-configure imap --with-imap --with-imap-ssl \
&& docker-php-ext-install -j 4 imap \
&& curl --silent --show-error https://getcomposer.org/installer | php -- --version=${COMPOSER_VERSION} \
@@ -107,8 +108,26 @@ RUN apk add -U --no-cache autoconf \
pcre-dev \
zlib-dev
COPY ./docker-entrypoint.sh /
RUN apk add --no-cache --virtual .build-deps \
gcc \
musl-dev \
python3-dev \
linux-headers \
&& pip install --break-system-packages psutil \
&& apk del .build-deps
RUN pip install --break-system-packages \
mysql-connector-python \
jinja2 \
redis \
dnspython
COPY data/Dockerfiles/bootstrap /bootstrap
COPY data/Dockerfiles/phpfpm/docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["php-fpm"]

View File

@@ -1,210 +1,5 @@
#!/bin/bash
function array_by_comma { local IFS=","; echo "$*"; }
# Wait for containers
while ! mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent; do
echo "Waiting for SQL..."
sleep 2
done
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
else
REDIS_CMDLINE="redis-cli -h redis -p 6379"
fi
until [[ $(${REDIS_CMDLINE} PING) == "PONG" ]]; do
echo "Waiting for Redis..."
sleep 2
done
# Check mysql_upgrade (master and slave)
CONTAINER_ID=
until [[ ! -z "${CONTAINER_ID}" ]] && [[ "${CONTAINER_ID}" =~ ^[[:alnum:]]*$ ]]; do
CONTAINER_ID=$(curl --silent --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/json | jq -r ".[] | {name: .Config.Labels[\"com.docker.compose.service\"], project: .Config.Labels[\"com.docker.compose.project\"], id: .Id}" 2> /dev/null | jq -rc "select( .name | tostring | contains(\"mysql-mailcow\")) | select( .project | tostring | contains(\"${COMPOSE_PROJECT_NAME,,}\")) | .id" 2> /dev/null)
echo "Could not get mysql-mailcow container id... trying again"
sleep 2
done
echo "MySQL @ ${CONTAINER_ID}"
SQL_LOOP_C=0
SQL_CHANGED=0
until [[ ${SQL_UPGRADE_STATUS} == 'success' ]]; do
if [ ${SQL_LOOP_C} -gt 4 ]; then
echo "Tried to upgrade MySQL and failed, giving up after ${SQL_LOOP_C} retries and starting container (oops, not good)"
break
fi
SQL_FULL_UPGRADE_RETURN=$(curl --silent --insecure -XPOST https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${CONTAINER_ID}/exec -d '{"cmd":"system", "task":"mysql_upgrade"}' --silent -H 'Content-type: application/json')
SQL_UPGRADE_STATUS=$(echo ${SQL_FULL_UPGRADE_RETURN} | jq -r .type)
SQL_LOOP_C=$((SQL_LOOP_C+1))
echo "SQL upgrade iteration #${SQL_LOOP_C}"
if [[ ${SQL_UPGRADE_STATUS} == 'warning' ]]; then
SQL_CHANGED=1
echo "MySQL applied an upgrade, debug output:"
echo ${SQL_FULL_UPGRADE_RETURN}
sleep 3
while ! mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent; do
echo "Waiting for SQL to return, please wait"
sleep 2
done
continue
elif [[ ${SQL_UPGRADE_STATUS} == 'success' ]]; then
echo "MySQL is up-to-date - debug output:"
echo ${SQL_FULL_UPGRADE_RETURN}
else
echo "No valid reponse for mysql_upgrade was received, debug output:"
echo ${SQL_FULL_UPGRADE_RETURN}
fi
done
# doing post-installation stuff, if SQL was upgraded (master and slave)
if [ ${SQL_CHANGED} -eq 1 ]; then
POSTFIX=$(curl --silent --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/json | jq -r ".[] | {name: .Config.Labels[\"com.docker.compose.service\"], project: .Config.Labels[\"com.docker.compose.project\"], id: .Id}" 2> /dev/null | jq -rc "select( .name | tostring | contains(\"postfix-mailcow\")) | select( .project | tostring | contains(\"${COMPOSE_PROJECT_NAME,,}\")) | .id" 2> /dev/null)
if [[ -z "${POSTFIX}" ]] || ! [[ "${POSTFIX}" =~ ^[[:alnum:]]*$ ]]; then
echo "Could not determine Postfix container ID, skipping Postfix restart."
else
echo "Restarting Postfix"
curl -X POST --silent --insecure https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${POSTFIX}/restart | jq -r '.msg'
echo "Sleeping 5 seconds..."
sleep 5
fi
fi
# Check mysql tz import (master and slave)
TZ_CHECK=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT CONVERT_TZ('2019-11-02 23:33:00','Europe/Berlin','UTC') AS time;" -BN 2> /dev/null)
if [[ -z ${TZ_CHECK} ]] || [[ "${TZ_CHECK}" == "NULL" ]]; then
SQL_FULL_TZINFO_IMPORT_RETURN=$(curl --silent --insecure -XPOST https://dockerapi.${COMPOSE_PROJECT_NAME}_mailcow-network/containers/${CONTAINER_ID}/exec -d '{"cmd":"system", "task":"mysql_tzinfo_to_sql"}' --silent -H 'Content-type: application/json')
echo "MySQL mysql_tzinfo_to_sql - debug output:"
echo ${SQL_FULL_TZINFO_IMPORT_RETURN}
fi
if [[ "${MASTER}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "We are master, preparing..."
# Set a default release format
if [[ -z $(${REDIS_CMDLINE} --raw GET Q_RELEASE_FORMAT) ]]; then
${REDIS_CMDLINE} --raw SET Q_RELEASE_FORMAT raw
fi
# Set max age of q items - if unset
if [[ -z $(${REDIS_CMDLINE} --raw GET Q_MAX_AGE) ]]; then
${REDIS_CMDLINE} --raw SET Q_MAX_AGE 365
fi
# Set default password policy - if unset
if [[ -z $(${REDIS_CMDLINE} --raw HGET PASSWD_POLICY length) ]]; then
${REDIS_CMDLINE} --raw HSET PASSWD_POLICY length 6
${REDIS_CMDLINE} --raw HSET PASSWD_POLICY chars 0
${REDIS_CMDLINE} --raw HSET PASSWD_POLICY special_chars 0
${REDIS_CMDLINE} --raw HSET PASSWD_POLICY lowerupper 0
${REDIS_CMDLINE} --raw HSET PASSWD_POLICY numbers 0
fi
# Trigger db init
echo "Running DB init..."
php -c /usr/local/etc/php -f /web/inc/init_db.inc.php
# Recreating domain map
echo "Rebuilding domain map in Redis..."
declare -a DOMAIN_ARR
${REDIS_CMDLINE} DEL DOMAIN_MAP > /dev/null
while read line
do
DOMAIN_ARR+=("$line")
done < <(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT domain FROM domain" -Bs)
while read line
do
DOMAIN_ARR+=("$line")
done < <(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT alias_domain FROM alias_domain" -Bs)
if [[ ! -z ${DOMAIN_ARR} ]]; then
for domain in "${DOMAIN_ARR[@]}"; do
${REDIS_CMDLINE} HSET DOMAIN_MAP ${domain} 1 > /dev/null
done
fi
# Set API options if env vars are not empty
if [[ ${API_ALLOW_FROM} != "invalid" ]] && [[ ! -z ${API_ALLOW_FROM} ]]; then
IFS=',' read -r -a API_ALLOW_FROM_ARR <<< "${API_ALLOW_FROM}"
declare -a VALIDATED_API_ALLOW_FROM_ARR
REGEX_IP6='^([0-9a-fA-F]{0,4}:){1,7}[0-9a-fA-F]{0,4}(/([0-9]|[1-9][0-9]|1[0-1][0-9]|12[0-8]))?$'
REGEX_IP4='^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+(/([0-9]|[1-2][0-9]|3[0-2]))?$'
for IP in "${API_ALLOW_FROM_ARR[@]}"; do
if [[ ${IP} =~ ${REGEX_IP6} ]] || [[ ${IP} =~ ${REGEX_IP4} ]]; then
VALIDATED_API_ALLOW_FROM_ARR+=("${IP}")
fi
done
VALIDATED_IPS=$(array_by_comma ${VALIDATED_API_ALLOW_FROM_ARR[*]})
if [[ ! -z ${VALIDATED_IPS} ]]; then
if [[ ${API_KEY} != "invalid" ]] && [[ ! -z ${API_KEY} ]]; then
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
DELETE FROM api WHERE access = 'rw';
INSERT INTO api (api_key, active, allow_from, access) VALUES ("${API_KEY}", "1", "${VALIDATED_IPS}", "rw");
EOF
fi
if [[ ${API_KEY_READ_ONLY} != "invalid" ]] && [[ ! -z ${API_KEY_READ_ONLY} ]]; then
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
DELETE FROM api WHERE access = 'ro';
INSERT INTO api (api_key, active, allow_from, access) VALUES ("${API_KEY_READ_ONLY}", "1", "${VALIDATED_IPS}", "ro");
EOF
fi
fi
fi
# Create events (master only, STATUS for event on slave will be SLAVESIDE_DISABLED)
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
DROP EVENT IF EXISTS clean_spamalias;
DELIMITER //
CREATE EVENT clean_spamalias
ON SCHEDULE EVERY 1 DAY DO
BEGIN
DELETE FROM spamalias WHERE validity < UNIX_TIMESTAMP();
END;
//
DELIMITER ;
DROP EVENT IF EXISTS clean_oauth2;
DELIMITER //
CREATE EVENT clean_oauth2
ON SCHEDULE EVERY 1 DAY DO
BEGIN
DELETE FROM oauth_refresh_tokens WHERE expires < NOW();
DELETE FROM oauth_access_tokens WHERE expires < NOW();
DELETE FROM oauth_authorization_codes WHERE expires < NOW();
END;
//
DELIMITER ;
DROP EVENT IF EXISTS clean_sasl_log;
DELIMITER //
CREATE EVENT clean_sasl_log
ON SCHEDULE EVERY 1 DAY DO
BEGIN
DELETE sasl_log.* FROM sasl_log
LEFT JOIN (
SELECT username, service, MAX(datetime) AS lastdate
FROM sasl_log
GROUP BY username, service
) AS last ON sasl_log.username = last.username AND sasl_log.service = last.service
WHERE datetime < DATE_SUB(NOW(), INTERVAL 31 DAY) AND datetime < lastdate;
DELETE FROM sasl_log
WHERE username NOT IN (SELECT username FROM mailbox) AND
datetime < DATE_SUB(NOW(), INTERVAL 31 DAY);
END;
//
DELIMITER ;
EOF
fi
# Create dummy for custom overrides of mailcow style
[[ ! -f /web/css/build/0081-custom-mailcow.css ]] && echo '/* Autogenerated by mailcow */' > /web/css/build/0081-custom-mailcow.css
# Fix permissions for global filters
chown -R 82:82 /global_sieve/*
# Fix permissions on twig cache folder
chown -R 82:82 /web/templates/cache
# Clear cache
find /web/templates/cache/* -not -name '.gitkeep' -delete
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
@@ -213,4 +8,13 @@ for file in /hooks/*; do
fi
done
python3 -u /bootstrap/main.py
BOOTSTRAP_EXIT_CODE=$?
if [ $BOOTSTRAP_EXIT_CODE -ne 0 ]; then
echo "Bootstrap failed with exit code $BOOTSTRAP_EXIT_CODE. Not starting PHP-FPM."
exit $BOOTSTRAP_EXIT_CODE
fi
echo "Bootstrap succeeded. Starting PHP-FPM..."
exec "$@"

View File

@@ -34,23 +34,31 @@ RUN groupadd -g 102 postfix \
syslog-ng-core \
syslog-ng-mod-redis \
tzdata \
python3 python3-pip \
&& rm -rf /var/lib/apt/lists/* \
&& touch /etc/default/locale \
&& printf '#!/bin/bash\n/usr/sbin/postconf -c /opt/postfix/conf "$@"' > /usr/local/sbin/postconf \
&& chmod +x /usr/local/sbin/postconf
COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
COPY syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng-redis_slave.conf
COPY postfix.sh /opt/postfix.sh
COPY rspamd-pipe-ham /usr/local/bin/rspamd-pipe-ham
COPY rspamd-pipe-spam /usr/local/bin/rspamd-pipe-spam
COPY whitelist_forwardinghosts.sh /usr/local/bin/whitelist_forwardinghosts.sh
COPY stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN pip install --break-system-packages \
mysql-connector-python \
jinja2 \
redis \
dnspython \
psutil
RUN chmod +x /opt/postfix.sh \
/usr/local/bin/rspamd-pipe-ham \
COPY data/Dockerfiles/bootstrap /bootstrap
COPY data/Dockerfiles/postfix/supervisord.conf /etc/supervisor/supervisord.conf
COPY data/Dockerfiles/postfix/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
COPY data/Dockerfiles/postfix/syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng-redis_slave.conf
COPY data/Dockerfiles/postfix/rspamd-pipe-ham /usr/local/bin/rspamd-pipe-ham
COPY data/Dockerfiles/postfix/rspamd-pipe-spam /usr/local/bin/rspamd-pipe-spam
COPY data/Dockerfiles/postfix/whitelist_forwardinghosts.sh /usr/local/bin/whitelist_forwardinghosts.sh
COPY data/Dockerfiles/postfix/stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
COPY data/Dockerfiles/postfix/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /usr/local/bin/rspamd-pipe-ham \
/docker-entrypoint.sh \
/usr/local/bin/rspamd-pipe-spam \
/usr/local/bin/whitelist_forwardinghosts.sh \
/usr/local/sbin/stop-supervisor.sh
@@ -58,6 +66,5 @@ RUN rm -rf /tmp/* /var/tmp/*
EXPOSE 588
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]

View File

@@ -8,8 +8,33 @@ for file in /hooks/*; do
fi
done
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
cp /etc/syslog-ng/syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng.conf
python3 -u /bootstrap/main.py
BOOTSTRAP_EXIT_CODE=$?
if [ $BOOTSTRAP_EXIT_CODE -ne 0 ]; then
echo "Bootstrap failed with exit code $BOOTSTRAP_EXIT_CODE. Not starting Postfix."
exit $BOOTSTRAP_EXIT_CODE
fi
exec "$@"
# Fix OpenSSL 3.X TLS1.0, 1.1 support (https://community.mailcow.email/d/4062-hi-all/20)
if grep -qE '\!SSLv2|\!SSLv3|>=TLSv1(\.[0-1])?$' /opt/postfix/conf/main.cf /opt/postfix/conf/extra.cf; then
sed -i '/\[openssl_init\]/a ssl_conf = ssl_configuration' /etc/ssl/openssl.cnf
echo "[ssl_configuration]" >> /etc/ssl/openssl.cnf
echo "system_default = tls_system_default" >> /etc/ssl/openssl.cnf
echo "[tls_system_default]" >> /etc/ssl/openssl.cnf
echo "MinProtocol = TLSv1" >> /etc/ssl/openssl.cnf
echo "CipherString = DEFAULT@SECLEVEL=0" >> /etc/ssl/openssl.cnf
fi
# Start Postfix
postconf -c /opt/postfix/conf > /dev/null
if [[ $? != 0 ]]; then
echo "Postfix configuration error, refusing to start."
exit 1
else
echo "Bootstrap succeeded. Starting Postfix..."
postfix -c /opt/postfix/conf start
sleep 126144000
fi

View File

@@ -1,519 +0,0 @@
#!/bin/bash
trap "postfix stop" EXIT
[[ ! -d /opt/postfix/conf/sql/ ]] && mkdir -p /opt/postfix/conf/sql/
# Wait for MySQL to warm-up
while ! mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent; do
echo "Waiting for database to come up..."
sleep 2
done
until dig +short mailcow.email > /dev/null; do
echo "Waiting for DNS..."
sleep 1
done
cat <<EOF > /etc/aliases
# Autogenerated by mailcow
null: /dev/null
watchdog: /dev/null
ham: "|/usr/local/bin/rspamd-pipe-ham"
spam: "|/usr/local/bin/rspamd-pipe-spam"
EOF
newaliases;
# create sni configuration
if [[ "${SKIP_LETS_ENCRYPT}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo -n "" > /opt/postfix/conf/sni.map
else
echo -n "" > /opt/postfix/conf/sni.map;
for cert_dir in /etc/ssl/mail/*/ ; do
if [[ ! -f ${cert_dir}domains ]] || [[ ! -f ${cert_dir}cert.pem ]] || [[ ! -f ${cert_dir}key.pem ]]; then
continue;
fi
IFS=" " read -r -a domains <<< "$(cat "${cert_dir}domains")"
for domain in "${domains[@]}"; do
echo -n "${domain} ${cert_dir}key.pem ${cert_dir}cert.pem" >> /opt/postfix/conf/sni.map;
echo "" >> /opt/postfix/conf/sni.map;
done
done
fi
postmap -F hash:/opt/postfix/conf/sni.map;
cat <<EOF > /opt/postfix/conf/sql/mysql_relay_ne.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT IF(EXISTS(SELECT address, domain FROM alias
WHERE address = '%s'
AND domain IN (
SELECT domain FROM domain
WHERE backupmx = '1'
AND relay_all_recipients = '1'
AND relay_unknown_only = '1')
), 'lmtp:inet:dovecot:24', NULL) AS 'transport'
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_relay_recipient_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT DISTINCT
CASE WHEN '%d' IN (
SELECT domain FROM domain
WHERE relay_all_recipients=1
AND domain='%d'
AND backupmx=1
)
THEN '%s' ELSE (
SELECT goto FROM alias WHERE address='%s' AND active='1'
)
END AS result;
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_tls_policy_override_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT(policy, ' ', parameters) AS tls_policy FROM tls_policy_override WHERE active = '1' AND dest = '%s'
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_tls_enforce_in_policy.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT IF(EXISTS(
SELECT 'TLS_ACTIVE' FROM alias
LEFT OUTER JOIN mailbox ON mailbox.username = alias.goto
WHERE (address='%s'
OR address IN (
SELECT CONCAT('%u', '@', target_domain) FROM alias_domain
WHERE alias_domain='%d'
)
) AND JSON_UNQUOTE(JSON_VALUE(attributes, '$.tls_enforce_in')) = '1' AND mailbox.active = '1'
), 'reject_plaintext_session', NULL) AS 'tls_enforce_in';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_sender_dependent_default_transport_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT GROUP_CONCAT(transport SEPARATOR '') AS transport_maps
FROM (
SELECT IF(EXISTS(SELECT 'smtp_type' FROM alias
LEFT OUTER JOIN mailbox ON mailbox.username = alias.goto
WHERE (address = '%s'
OR address IN (
SELECT CONCAT('%u', '@', target_domain) FROM alias_domain
WHERE alias_domain = '%d'
)
)
AND JSON_UNQUOTE(JSON_VALUE(attributes, '$.tls_enforce_out')) = '1'
AND mailbox.active = '1'
), 'smtp_enforced_tls:', 'smtp:') AS 'transport'
UNION ALL
SELECT COALESCE(
(SELECT hostname FROM relayhosts
LEFT OUTER JOIN mailbox ON JSON_UNQUOTE(JSON_VALUE(mailbox.attributes, '$.relayhost')) = relayhosts.id
WHERE relayhosts.active = '1'
AND (
mailbox.username IN (SELECT alias.goto from alias
JOIN mailbox ON mailbox.username = alias.goto
WHERE alias.active = '1'
AND alias.address = '%s'
AND alias.address NOT LIKE '@%%'
)
)
),
(SELECT hostname FROM relayhosts
LEFT OUTER JOIN domain ON domain.relayhost = relayhosts.id
WHERE relayhosts.active = '1'
AND (domain.domain = '%d'
OR domain.domain IN (
SELECT target_domain FROM alias_domain
WHERE alias_domain = '%d'
)
)
)
)
) AS transport_view;
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_transport_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT('smtp_via_transport_maps:', nexthop) AS transport FROM transports
WHERE active = '1'
AND destination = '%s';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_resource_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT 'null@localhost' FROM mailbox
WHERE kind REGEXP 'location|thing|group' AND username = '%s';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_sasl_passwd_maps_sender_dependent.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT_WS(':', username, password) AS auth_data FROM relayhosts
WHERE id IN (
SELECT COALESCE(
(SELECT id FROM relayhosts
LEFT OUTER JOIN domain ON domain.relayhost = relayhosts.id
WHERE relayhosts.active = '1'
AND (domain.domain = '%d'
OR domain.domain IN (
SELECT target_domain FROM alias_domain
WHERE alias_domain = '%d'
)
)
),
(SELECT id FROM relayhosts
LEFT OUTER JOIN mailbox ON JSON_UNQUOTE(JSON_VALUE(mailbox.attributes, '$.relayhost')) = relayhosts.id
WHERE relayhosts.active = '1'
AND (
mailbox.username IN (
SELECT alias.goto from alias
JOIN mailbox ON mailbox.username = alias.goto
WHERE alias.active = '1'
AND alias.address = '%s'
AND alias.address NOT LIKE '@%%'
)
)
)
)
)
AND active = '1'
AND username != '';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_sasl_passwd_maps_transport_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT_WS(':', username, password) AS auth_data FROM transports
WHERE nexthop = '%s'
AND active = '1'
AND username != ''
LIMIT 1;
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_alias_domain_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT username FROM mailbox, alias_domain
WHERE alias_domain.alias_domain = '%d'
AND mailbox.username = CONCAT('%u', '@', alias_domain.target_domain)
AND (mailbox.active = '1' OR mailbox.active = '2')
AND alias_domain.active='1'
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_alias_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT goto FROM alias
WHERE address='%s'
AND (active='1' OR active='2');
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_recipient_bcc_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT bcc_dest FROM bcc_maps
WHERE local_dest='%s'
AND type='rcpt'
AND active='1';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_sender_bcc_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT bcc_dest FROM bcc_maps
WHERE local_dest='%s'
AND type='sender'
AND active='1';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_recipient_canonical_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT new_dest FROM recipient_maps
WHERE old_dest='%s'
AND active='1';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_domains_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT alias_domain from alias_domain WHERE alias_domain='%s' AND active='1'
UNION
SELECT domain FROM domain
WHERE domain='%s'
AND active = '1'
AND backupmx = '0'
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_mailbox_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT(JSON_UNQUOTE(JSON_VALUE(attributes, '$.mailbox_format')), mailbox_path_prefix, '%d/%u/') FROM mailbox WHERE username='%s' AND (active = '1' OR active = '2')
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_relay_domain_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT domain FROM domain WHERE domain='%s' AND backupmx = '1' AND active = '1'
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_sender_acl.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
# First select queries domain and alias_domain to determine if domains are active.
query = SELECT goto FROM alias
WHERE id IN (
SELECT COALESCE (
(
SELECT id FROM alias
WHERE address='%s'
AND (active='1' OR active='2')
), (
SELECT id FROM alias
WHERE address='@%d'
AND (active='1' OR active='2')
)
)
)
AND active='1'
AND (domain IN
(SELECT domain FROM domain
WHERE domain='%d'
AND active='1')
OR domain in (
SELECT alias_domain FROM alias_domain
WHERE alias_domain='%d'
AND active='1'
)
)
UNION
SELECT logged_in_as FROM sender_acl
WHERE send_as='@%d'
OR send_as='%s'
OR send_as='*'
OR send_as IN (
SELECT CONCAT('@',target_domain) FROM alias_domain
WHERE alias_domain = '%d')
OR send_as IN (
SELECT CONCAT('%u','@',target_domain) FROM alias_domain
WHERE alias_domain = '%d')
AND logged_in_as NOT IN (
SELECT goto FROM alias
WHERE address='%s')
UNION
SELECT username FROM mailbox, alias_domain
WHERE alias_domain.alias_domain = '%d'
AND mailbox.username = CONCAT('%u','@',alias_domain.target_domain)
AND (mailbox.active = '1' OR mailbox.active ='2')
AND alias_domain.active='1';
EOF
# MX based routing
cat <<EOF > /opt/postfix/conf/sql/mysql_mbr_access_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT CONCAT('FILTER smtp_via_transport_maps:', nexthop) as transport FROM transports
WHERE '%s' REGEXP destination
AND active='1'
AND is_mx_based='1';
EOF
cat <<EOF > /opt/postfix/conf/sql/mysql_virtual_spamalias_maps.cf
# Autogenerated by mailcow
user = ${DBUSER}
password = ${DBPASS}
hosts = unix:/var/run/mysqld/mysqld.sock
dbname = ${DBNAME}
query = SELECT goto FROM spamalias
WHERE address='%s'
AND validity >= UNIX_TIMESTAMP()
EOF
if [ ! -f /opt/postfix/conf/dns_blocklists.cf ]; then
cat <<EOF > /opt/postfix/conf/dns_blocklists.cf
# This file can be edited.
# Delete this file and restart postfix container to revert any changes.
postscreen_dnsbl_sites = wl.mailspike.net=127.0.0.[18;19;20]*-2
hostkarma.junkemailfilter.com=127.0.0.1*-2
list.dnswl.org=127.0.[0..255].0*-2
list.dnswl.org=127.0.[0..255].1*-4
list.dnswl.org=127.0.[0..255].2*-6
list.dnswl.org=127.0.[0..255].3*-8
ix.dnsbl.manitu.net*2
bl.spamcop.net*2
bl.suomispam.net*2
hostkarma.junkemailfilter.com=127.0.0.2*3
hostkarma.junkemailfilter.com=127.0.0.4*2
hostkarma.junkemailfilter.com=127.0.1.2*1
backscatter.spameatingmonkey.net*2
bl.ipv6.spameatingmonkey.net*2
bl.spameatingmonkey.net*2
b.barracudacentral.org=127.0.0.2*7
bl.mailspike.net=127.0.0.2*5
bl.mailspike.net=127.0.0.[10;11;12]*4
EOF
fi
DNSBL_CONFIG=$(grep -v '^#' /opt/postfix/conf/dns_blocklists.cf | grep '\S')
if [ ! -z "$DNSBL_CONFIG" ]; then
echo -e "\e[33mChecking if ASN for your IP is listed for Spamhaus Bad ASN List...\e[0m"
if [ -n "$SPAMHAUS_DQS_KEY" ]; then
echo -e "\e[32mDetected SPAMHAUS_DQS_KEY variable from mailcow.conf...\e[0m"
echo -e "\e[33mUsing DQS Blocklists from Spamhaus!\e[0m"
SPAMHAUS_DNSBL_CONFIG=$(cat <<EOF
${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net=127.0.0.[4..7]*6
${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net=127.0.0.[10;11]*8
${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net=127.0.0.3*4
${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net=127.0.0.2*3
postscreen_dnsbl_reply_map = texthash:/opt/postfix/conf/dnsbl_reply.map
EOF
cat <<EOF > /opt/postfix/conf/dnsbl_reply.map
# Autogenerated by mailcow, using Spamhaus DQS reply domains
${SPAMHAUS_DQS_KEY}.sbl.dq.spamhaus.net sbl.spamhaus.org
${SPAMHAUS_DQS_KEY}.xbl.dq.spamhaus.net xbl.spamhaus.org
${SPAMHAUS_DQS_KEY}.pbl.dq.spamhaus.net pbl.spamhaus.org
${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net zen.spamhaus.org
${SPAMHAUS_DQS_KEY}.dbl.dq.spamhaus.net dbl.spamhaus.org
${SPAMHAUS_DQS_KEY}.zrd.dq.spamhaus.net zrd.spamhaus.org
EOF
)
else
if [ -f "/opt/postfix/conf/dnsbl_reply.map" ]; then
rm /opt/postfix/conf/dnsbl_reply.map
fi
response=$(curl --connect-timeout 15 --max-time 30 -s -o /dev/null -w "%{http_code}" "https://asn-check.mailcow.email")
if [ "$response" -eq 503 ]; then
echo -e "\e[31mThe AS of your IP is listed as a banned AS from Spamhaus!\e[0m"
echo -e "\e[33mNo SPAMHAUS_DQS_KEY found... Skipping Spamhaus blocklists entirely!\e[0m"
SPAMHAUS_DNSBL_CONFIG=""
elif [ "$response" -eq 200 ]; then
echo -e "\e[32mThe AS of your IP is NOT listed as a banned AS from Spamhaus!\e[0m"
echo -e "\e[33mUsing the open Spamhaus blocklists.\e[0m"
SPAMHAUS_DNSBL_CONFIG=$(cat <<EOF
zen.spamhaus.org=127.0.0.[10;11]*8
zen.spamhaus.org=127.0.0.[4..7]*6
zen.spamhaus.org=127.0.0.3*4
zen.spamhaus.org=127.0.0.2*3
EOF
)
else
echo -e "\e[31mWe couldn't determine your AS... (maybe DNS/Network issue?) Response Code: $response\e[0m"
echo -e "\e[33mDeactivating Spamhaus DNS Blocklists to be on the safe site!\e[0m"
SPAMHAUS_DNSBL_CONFIG=""
fi
fi
fi
# Reset main.cf
sed -i '/Overrides/q' /opt/postfix/conf/main.cf
echo >> /opt/postfix/conf/main.cf
# Append postscreen dnsbl sites to main.cf
if [ ! -z "$DNSBL_CONFIG" ]; then
echo -e "${DNSBL_CONFIG}\n${SPAMHAUS_DNSBL_CONFIG}" >> /opt/postfix/conf/main.cf
fi
# Append user overrides
echo -e "\n# User Overrides" >> /opt/postfix/conf/main.cf
touch /opt/postfix/conf/extra.cf
sed -i '/\$myhostname/! { /myhostname/d }' /opt/postfix/conf/extra.cf
echo -e "myhostname = ${MAILCOW_HOSTNAME}\n$(cat /opt/postfix/conf/extra.cf)" > /opt/postfix/conf/extra.cf
cat /opt/postfix/conf/extra.cf >> /opt/postfix/conf/main.cf
if [ ! -f /opt/postfix/conf/custom_transport.pcre ]; then
echo "Creating dummy custom_transport.pcre"
touch /opt/postfix/conf/custom_transport.pcre
fi
if [[ ! -f /opt/postfix/conf/custom_postscreen_whitelist.cidr ]]; then
echo "Creating dummy custom_postscreen_whitelist.cidr"
cat <<EOF > /opt/postfix/conf/custom_postscreen_whitelist.cidr
# Autogenerated by mailcow
# Rules are evaluated in the order as specified.
# Blacklist 192.168.* except 192.168.0.1.
# 192.168.0.1 permit
# 192.168.0.0/16 reject
EOF
fi
# Fix Postfix permissions
chown -R root:postfix /opt/postfix/conf/sql/ /opt/postfix/conf/custom_transport.pcre
chmod 640 /opt/postfix/conf/sql/*.cf /opt/postfix/conf/custom_transport.pcre
chgrp -R postdrop /var/spool/postfix/public
chgrp -R postdrop /var/spool/postfix/maildrop
postfix set-permissions
# Check Postfix configuration
postconf -c /opt/postfix/conf > /dev/null
if [[ $? != 0 ]]; then
echo "Postfix configuration error, refusing to start."
exit 1
else
postfix -c /opt/postfix/conf start
sleep 126144000
fi

View File

@@ -11,13 +11,14 @@ stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autostart=true
[program:postfix]
command=/opt/postfix.sh
[program:bootstrap]
command=/docker-entrypoint.sh
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=true
startsecs=10
[eventlistener:processes]
command=/usr/local/sbin/stop-supervisor.sh

View File

@@ -20,6 +20,7 @@ destination d_redis_ui_log {
host("`REDIS_SLAVEOF_IP`")
persist-name("redis1")
port(`REDIS_SLAVEOF_PORT`)
auth("`REDISPASS`")
command("LPUSH" "POSTFIX_MAILLOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
@@ -28,6 +29,7 @@ destination d_redis_f2b_channel {
host("`REDIS_SLAVEOF_IP`")
persist-name("redis2")
port(`REDIS_SLAVEOF_PORT`)
auth("`REDISPASS`")
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};

View File

@@ -20,6 +20,7 @@ destination d_redis_ui_log {
host("redis-mailcow")
persist-name("redis1")
port(6379)
auth("`REDISPASS`")
command("LPUSH" "POSTFIX_MAILLOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
@@ -28,6 +29,7 @@ destination d_redis_f2b_channel {
host("redis-mailcow")
persist-name("redis2")
port(6379)
auth("`REDISPASS`")
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};

View File

@@ -1,12 +1,12 @@
FROM debian:bookworm-slim
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
LABEL maintainer="The Infrastructure Company GmbH <info@servercow.de>"
ARG DEBIAN_FRONTEND=noninteractive
ARG RSPAMD_VER=rspamd_3.9.1-1~82f43560f
ARG RSPAMD_VER=rspamd_3.11.1-1~ab0b44951
ARG CODENAME=bookworm
ENV LC_ALL=C
RUN apt-get update && apt-get install -y \
RUN apt-get update && apt-get install -y --no-install-recommends \
tzdata \
ca-certificates \
gnupg2 \
@@ -14,10 +14,11 @@ RUN apt-get update && apt-get install -y \
dnsutils \
netcat-traditional \
wget \
redis-tools \
procps \
redis-tools \
procps \
nano \
lua-cjson \
python3 python3-pip \
&& arch=$(arch | sed s/aarch64/arm64/ | sed s/x86_64/amd64/) \
&& wget -P /tmp https://rspamd.com/apt-stable/pool/main/r/rspamd/${RSPAMD_VER}~${CODENAME}_${arch}.deb\
&& apt install -y /tmp/${RSPAMD_VER}~${CODENAME}_${arch}.deb \
@@ -29,12 +30,20 @@ RUN apt-get update && apt-get install -y \
&& echo 'alias ll="ls -la --color"' >> ~/.bashrc \
&& sed -i 's/#analysis_keyword_table > 0/analysis_cat_table.macro_exist == "M"/g' /usr/share/rspamd/lualib/lua_scanners/oletools.lua
COPY settings.conf /etc/rspamd/settings.conf
COPY set_worker_password.sh /set_worker_password.sh
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN pip install --break-system-packages \
mysql-connector-python \
jinja2 \
redis \
dnspython \
psutil
COPY data/Dockerfiles/bootstrap /bootstrap
COPY data/Dockerfiles/rspamd/settings.conf /etc/rspamd/settings.conf
COPY data/Dockerfiles/rspamd/set_worker_password.sh /set_worker_password.sh
COPY data/Dockerfiles/rspamd/docker-entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
STOPSIGNAL SIGTERM
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/usr/bin/rspamd", "-f", "-u", "_rspamd", "-g", "_rspamd"]

View File

@@ -1,121 +1,5 @@
#!/bin/bash
until nc phpfpm 9001 -z; do
echo "Waiting for PHP on port 9001..."
sleep 3
done
until nc phpfpm 9002 -z; do
echo "Waiting for PHP on port 9002..."
sleep 3
done
mkdir -p /etc/rspamd/plugins.d \
/etc/rspamd/custom
touch /etc/rspamd/rspamd.conf.local \
/etc/rspamd/rspamd.conf.override
chmod 755 /var/lib/rspamd
[[ ! -f /etc/rspamd/override.d/worker-controller-password.inc ]] && echo '# Autogenerated by mailcow' > /etc/rspamd/override.d/worker-controller-password.inc
echo ${IPV4_NETWORK}.0/24 > /etc/rspamd/custom/mailcow_networks.map
echo ${IPV6_NETWORK} >> /etc/rspamd/custom/mailcow_networks.map
DOVECOT_V4=
DOVECOT_V6=
until [[ ! -z ${DOVECOT_V4} ]]; do
DOVECOT_V4=$(dig a dovecot +short)
DOVECOT_V6=$(dig aaaa dovecot +short)
[[ ! -z ${DOVECOT_V4} ]] && break;
echo "Waiting for Dovecot..."
sleep 3
done
echo ${DOVECOT_V4}/32 > /etc/rspamd/custom/dovecot_trusted.map
if [[ ! -z ${DOVECOT_V6} ]]; then
echo ${DOVECOT_V6}/128 >> /etc/rspamd/custom/dovecot_trusted.map
fi
RSPAMD_V4=
RSPAMD_V6=
until [[ ! -z ${RSPAMD_V4} ]]; do
RSPAMD_V4=$(dig a rspamd +short)
RSPAMD_V6=$(dig aaaa rspamd +short)
[[ ! -z ${RSPAMD_V4} ]] && break;
echo "Waiting for Rspamd..."
sleep 3
done
echo ${RSPAMD_V4}/32 > /etc/rspamd/custom/rspamd_trusted.map
if [[ ! -z ${RSPAMD_V6} ]]; then
echo ${RSPAMD_V6}/128 >> /etc/rspamd/custom/rspamd_trusted.map
fi
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
cat <<EOF > /etc/rspamd/local.d/redis.conf
read_servers = "redis:6379";
write_servers = "${REDIS_SLAVEOF_IP}:${REDIS_SLAVEOF_PORT}";
timeout = 10;
EOF
until [[ $(redis-cli -h redis-mailcow PING) == "PONG" ]]; do
echo "Waiting for Redis @redis-mailcow..."
sleep 2
done
until [[ $(redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT} PING) == "PONG" ]]; do
echo "Waiting for Redis @${REDIS_SLAVEOF_IP}..."
sleep 2
done
redis-cli -h redis-mailcow SLAVEOF ${REDIS_SLAVEOF_IP} ${REDIS_SLAVEOF_PORT}
else
cat <<EOF > /etc/rspamd/local.d/redis.conf
servers = "redis:6379";
timeout = 10;
EOF
until [[ $(redis-cli -h redis-mailcow PING) == "PONG" ]]; do
echo "Waiting for Redis slave..."
sleep 2
done
redis-cli -h redis-mailcow SLAVEOF NO ONE
fi
# Provide additional lua modules
ln -s /usr/lib/$(uname -m)-linux-gnu/liblua5.1-cjson.so.0.0.0 /usr/lib/rspamd/cjson.so
chown -R _rspamd:_rspamd /var/lib/rspamd \
/etc/rspamd/local.d \
/etc/rspamd/override.d \
/etc/rspamd/rspamd.conf.local \
/etc/rspamd/rspamd.conf.override \
/etc/rspamd/plugins.d
# Fix missing default global maps, if any
# These exist in the mailcow UI and should not be removed
touch /etc/rspamd/custom/global_mime_from_blacklist.map \
/etc/rspamd/custom/global_rcpt_blacklist.map \
/etc/rspamd/custom/global_smtp_from_blacklist.map \
/etc/rspamd/custom/global_mime_from_whitelist.map \
/etc/rspamd/custom/global_rcpt_whitelist.map \
/etc/rspamd/custom/global_smtp_from_whitelist.map \
/etc/rspamd/custom/bad_languages.map \
/etc/rspamd/custom/sa-rules \
/etc/rspamd/custom/dovecot_trusted.map \
/etc/rspamd/custom/rspamd_trusted.map \
/etc/rspamd/custom/mailcow_networks.map \
/etc/rspamd/custom/ip_wl.map \
/etc/rspamd/custom/fishy_tlds.map \
/etc/rspamd/custom/bad_words.map \
/etc/rspamd/custom/bad_asn.map \
/etc/rspamd/custom/bad_words_de.map \
/etc/rspamd/custom/bulk_header.map \
/etc/rspamd/custom/bad_header.map
# www-data (82) group needs to write to these files
chown _rspamd:_rspamd /etc/rspamd/custom/
chmod 0755 /etc/rspamd/custom/.
chown -R 82:82 /etc/rspamd/custom/*
chmod 644 -R /etc/rspamd/custom/*
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
@@ -124,190 +8,13 @@ for file in /hooks/*; do
fi
done
# If DQS KEY is set in mailcow.conf add Spamhaus DQS RBLs
if [[ ! -z ${SPAMHAUS_DQS_KEY} ]]; then
cat <<EOF > /etc/rspamd/custom/dqs-rbl.conf
# Autogenerated by mailcow. DO NOT TOUCH!
spamhaus {
rbl = "${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net";
from = false;
}
spamhaus_from {
from = true;
received = false;
rbl = "${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net";
returncodes {
SPAMHAUS_ZEN = [ "127.0.0.2", "127.0.0.3", "127.0.0.4", "127.0.0.5", "127.0.0.6", "127.0.0.7", "127.0.0.9", "127.0.0.10", "127.0.0.11" ];
}
}
spamhaus_authbl_received {
# Check if the sender client is listed in AuthBL (AuthBL is *not* part of ZEN)
rbl = "${SPAMHAUS_DQS_KEY}.authbl.dq.spamhaus.net";
from = false;
received = true;
ipv6 = true;
returncodes {
SH_AUTHBL_RECEIVED = "127.0.0.20"
}
}
spamhaus_dbl {
# Add checks on the HELO string
rbl = "${SPAMHAUS_DQS_KEY}.dbl.dq.spamhaus.net";
helo = true;
rdns = true;
dkim = true;
disable_monitoring = true;
returncodes {
RBL_DBL_SPAM = "127.0.1.2";
RBL_DBL_PHISH = "127.0.1.4";
RBL_DBL_MALWARE = "127.0.1.5";
RBL_DBL_BOTNET = "127.0.1.6";
RBL_DBL_ABUSED_SPAM = "127.0.1.102";
RBL_DBL_ABUSED_PHISH = "127.0.1.104";
RBL_DBL_ABUSED_MALWARE = "127.0.1.105";
RBL_DBL_ABUSED_BOTNET = "127.0.1.106";
RBL_DBL_DONT_QUERY_IPS = "127.0.1.255";
}
}
spamhaus_dbl_fullurls {
ignore_defaults = true;
no_ip = true;
rbl = "${SPAMHAUS_DQS_KEY}.dbl.dq.spamhaus.net";
selector = 'urls:get_host'
disable_monitoring = true;
returncodes {
DBLABUSED_SPAM_FULLURLS = "127.0.1.102";
DBLABUSED_PHISH_FULLURLS = "127.0.1.104";
DBLABUSED_MALWARE_FULLURLS = "127.0.1.105";
DBLABUSED_BOTNET_FULLURLS = "127.0.1.106";
}
}
spamhaus_zrd {
# Add checks on the HELO string also for DQS
rbl = "${SPAMHAUS_DQS_KEY}.zrd.dq.spamhaus.net";
helo = true;
rdns = true;
dkim = true;
disable_monitoring = true;
returncodes {
RBL_ZRD_VERY_FRESH_DOMAIN = ["127.0.2.2", "127.0.2.3", "127.0.2.4"];
RBL_ZRD_FRESH_DOMAIN = [
"127.0.2.5", "127.0.2.6", "127.0.2.7", "127.0.2.8", "127.0.2.9", "127.0.2.10", "127.0.2.11", "127.0.2.12", "127.0.2.13", "127.0.2.14", "127.0.2.15", "127.0.2.16", "127.0.2.17", "127.0.2.18", "127.0.2.19", "127.0.2.20", "127.0.2.21", "127.0.2.22", "127.0.2.23", "127.0.2.24"
];
RBL_ZRD_DONT_QUERY_IPS = "127.0.2.255";
}
}
"SPAMHAUS_ZEN_URIBL" {
enabled = true;
rbl = "${SPAMHAUS_DQS_KEY}.zen.dq.spamhaus.net";
resolve_ip = true;
checks = ['urls'];
replyto = true;
emails = true;
ipv4 = true;
ipv6 = true;
emails_domainonly = true;
returncodes {
URIBL_SBL = "127.0.0.2";
URIBL_SBL_CSS = "127.0.0.3";
URIBL_XBL = ["127.0.0.4", "127.0.0.5", "127.0.0.6", "127.0.0.7"];
URIBL_PBL = ["127.0.0.10", "127.0.0.11"];
URIBL_DROP = "127.0.0.9";
}
}
SH_EMAIL_DBL {
ignore_defaults = true;
replyto = true;
emails_domainonly = true;
disable_monitoring = true;
rbl = "${SPAMHAUS_DQS_KEY}.dbl.dq.spamhaus.net";
returncodes = {
SH_EMAIL_DBL = [
"127.0.1.2",
"127.0.1.4",
"127.0.1.5",
"127.0.1.6"
];
SH_EMAIL_DBL_ABUSED = [
"127.0.1.102",
"127.0.1.104",
"127.0.1.105",
"127.0.1.106"
];
SH_EMAIL_DBL_DONT_QUERY_IPS = [ "127.0.1.255" ];
}
}
SH_EMAIL_ZRD {
ignore_defaults = true;
replyto = true;
emails_domainonly = true;
disable_monitoring = true;
rbl = "${SPAMHAUS_DQS_KEY}.zrd.dq.spamhaus.net";
returncodes = {
SH_EMAIL_ZRD_VERY_FRESH_DOMAIN = ["127.0.2.2", "127.0.2.3", "127.0.2.4"];
SH_EMAIL_ZRD_FRESH_DOMAIN = [
"127.0.2.5", "127.0.2.6", "127.0.2.7", "127.0.2.8", "127.0.2.9", "127.0.2.10", "127.0.2.11", "127.0.2.12", "127.0.2.13", "127.0.2.14", "127.0.2.15", "127.0.2.16", "127.0.2.17", "127.0.2.18", "127.0.2.19", "127.0.2.20", "127.0.2.21", "127.0.2.22", "127.0.2.23", "127.0.2.24"
];
SH_EMAIL_ZRD_DONT_QUERY_IPS = [ "127.0.2.255" ];
}
}
"DBL" {
# override the defaults for DBL defined in modules.d/rbl.conf
rbl = "${SPAMHAUS_DQS_KEY}.dbl.dq.spamhaus.net";
disable_monitoring = true;
}
"ZRD" {
ignore_defaults = true;
rbl = "${SPAMHAUS_DQS_KEY}.zrd.dq.spamhaus.net";
no_ip = true;
dkim = true;
emails = true;
emails_domainonly = true;
urls = true;
returncodes = {
ZRD_VERY_FRESH_DOMAIN = ["127.0.2.2", "127.0.2.3", "127.0.2.4"];
ZRD_FRESH_DOMAIN = ["127.0.2.5", "127.0.2.6", "127.0.2.7", "127.0.2.8", "127.0.2.9", "127.0.2.10", "127.0.2.11", "127.0.2.12", "127.0.2.13", "127.0.2.14", "127.0.2.15", "127.0.2.16", "127.0.2.17", "127.0.2.18", "127.0.2.19", "127.0.2.20", "127.0.2.21", "127.0.2.22", "127.0.2.23", "127.0.2.24"];
}
}
spamhaus_sbl_url {
ignore_defaults = true
rbl = "${SPAMHAUS_DQS_KEY}.sbl.dq.spamhaus.net";
checks = ['urls'];
disable_monitoring = true;
returncodes {
SPAMHAUS_SBL_URL = "127.0.0.2";
}
}
python3 -u /bootstrap/main.py
BOOTSTRAP_EXIT_CODE=$?
SH_HBL_EMAIL {
ignore_defaults = true;
rbl = "_email.${SPAMHAUS_DQS_KEY}.hbl.dq.spamhaus.net";
emails_domainonly = false;
selector = "from('smtp').lower;from('mime').lower";
ignore_whitelist = true;
checks = ['emails', 'replyto'];
hash = "sha1";
returncodes = {
SH_HBL_EMAIL = [
"127.0.3.2"
];
}
}
spamhaus_dqs_hbl {
symbol = "HBL_FILE_UNKNOWN";
rbl = "_file.${SPAMHAUS_DQS_KEY}.hbl.dq.spamhaus.net.";
selector = "attachments('rbase32', 'sha256')";
ignore_whitelist = true;
ignore_defaults = true;
returncodes {
SH_HBL_FILE_MALICIOUS = "127.0.3.10";
SH_HBL_FILE_SUSPICIOUS = "127.0.3.15";
}
}
EOF
else
rm -rf /etc/rspamd/custom/dqs-rbl.conf
if [ $BOOTSTRAP_EXIT_CODE -ne 0 ]; then
echo "Bootstrap failed with exit code $BOOTSTRAP_EXIT_CODE. Not starting Rspamd."
exit $BOOTSTRAP_EXIT_CODE
fi
echo "Bootstrap succeeded. Starting Rspamd..."
exec "$@"

View File

@@ -4,7 +4,7 @@ LABEL maintainer="The Infrastructure Company GmbH <info@servercow.de>"
ARG DEBIAN_FRONTEND=noninteractive
ARG DEBIAN_VERSION=bookworm
ARG SOGO_DEBIAN_REPOSITORY=http://www.axis.cz/linux/debian
ARG SOGO_DEBIAN_REPOSITORY=https://packagingv2.sogo.nu/sogo-nightly-debian/
# renovate: datasource=github-releases depName=tianon/gosu versioning=semver-coerced extractVersion=^(?<version>.*)$
ARG GOSU_VERSION=1.17
ENV LC_ALL=C
@@ -27,32 +27,37 @@ RUN echo "Building from repository $SOGO_DEBIAN_REPOSITORY" \
psmisc \
wget \
patch \
python3 python3-pip \
&& dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true \
&& mkdir /usr/share/doc/sogo \
&& touch /usr/share/doc/sogo/empty.sh \
&& apt-key adv --keyserver keys.openpgp.org --recv-key 74FFC6D72B925A34B5D356BDF8A27B36A6E2EAE9 \
&& echo "deb [trusted=yes] ${SOGO_DEBIAN_REPOSITORY} ${DEBIAN_VERSION} sogo-v5" > /etc/apt/sources.list.d/sogo.list \
&& wget -O- https://keys.openpgp.org/vks/v1/by-fingerprint/74FFC6D72B925A34B5D356BDF8A27B36A6E2EAE9 | gpg --dearmor | apt-key add - \
&& echo "deb [trusted=yes] ${SOGO_DEBIAN_REPOSITORY} ${DEBIAN_VERSION} main" > /etc/apt/sources.list.d/sogo.list \
&& apt-get update && apt-get install -y --no-install-recommends \
sogo \
sogo-activesync \
&& apt-get autoclean \
&& rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/sogo.list \
&& rm -rf /var/lib/apt/lists/* \
&& touch /etc/default/locale
COPY ./bootstrap-sogo.sh /bootstrap-sogo.sh
COPY syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
COPY syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng-redis_slave.conf
COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY acl.diff /acl.diff
COPY stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
COPY docker-entrypoint.sh /
RUN pip install --break-system-packages \
mysql-connector-python \
jinja2 \
redis \
dnspython \
psutil
RUN chmod +x /bootstrap-sogo.sh \
/usr/local/sbin/stop-supervisor.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
COPY data/Dockerfiles/bootstrap /bootstrap
COPY data/Dockerfiles/sogo/syslog-ng.conf /etc/syslog-ng/syslog-ng.conf
COPY data/Dockerfiles/sogo/syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng-redis_slave.conf
COPY data/Dockerfiles/sogo/supervisord.conf /etc/supervisor/supervisord.conf
COPY data/Dockerfiles/sogo/stop-supervisor.sh /usr/local/sbin/stop-supervisor.sh
COPY data/Dockerfiles/sogo/docker-entrypoint.sh /
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
RUN chmod +x /usr/local/sbin/stop-supervisor.sh
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]

View File

@@ -1,11 +0,0 @@
--- /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox 2018-08-17 18:29:57.987504204 +0200
+++ /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox 2018-08-17 18:29:35.918291298 +0200
@@ -46,7 +46,7 @@
</md-item-template>
</md-autocomplete>
</div>
- <md-card ng-repeat="user in acl.users | orderBy:['userClass', 'cn']"
+ <md-card ng-repeat="user in acl.users | filter:{ userClass: 'normal' } | orderBy:['cn']"
class="sg-collapsed"
ng-class="{ 'sg-expanded': user.uid == acl.selectedUid }">
<a class="md-flex md-button" ng-click="acl.selectUser(user, $event)">

View File

@@ -1,253 +0,0 @@
#!/bin/bash
# Wait for MySQL to warm-up
while ! mariadb-admin status --ssl=false --socket=/var/run/mysqld/mysqld.sock -u${DBUSER} -p${DBPASS} --silent; do
echo "Waiting for database to come up..."
sleep 2
done
# Wait until port becomes free and send sig
until ! nc -z sogo-mailcow 20000;
do
killall -TERM sogod
sleep 3
done
# Wait for updated schema
DBV_NOW=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT version FROM versions WHERE application = 'db_schema';" -BN)
DBV_NEW=$(grep -oE '\$db_version = .*;' init_db.inc.php | sed 's/$db_version = //g;s/;//g' | cut -d \" -f2)
while [[ "${DBV_NOW}" != "${DBV_NEW}" ]]; do
echo "Waiting for schema update..."
DBV_NOW=$(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT version FROM versions WHERE application = 'db_schema';" -BN)
DBV_NEW=$(grep -oE '\$db_version = .*;' init_db.inc.php | sed 's/$db_version = //g;s/;//g' | cut -d \" -f2)
sleep 5
done
echo "DB schema is ${DBV_NOW}"
# Recreate view
if [[ "${MASTER}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "We are master, preparing sogo_view..."
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "DROP VIEW IF EXISTS sogo_view"
while [[ ${VIEW_OK} != 'OK' ]]; do
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
CREATE VIEW sogo_view (c_uid, domain, c_name, c_password, c_cn, mail, aliases, ad_aliases, ext_acl, kind, multiple_bookings) AS
SELECT
mailbox.username,
mailbox.domain,
mailbox.username,
IF(JSON_UNQUOTE(JSON_VALUE(attributes, '$.force_pw_update')) = '0', IF(JSON_UNQUOTE(JSON_VALUE(attributes, '$.sogo_access')) = 1, password, '{SSHA256}A123A123A321A321A321B321B321B123B123B321B432F123E321123123321321'), '{SSHA256}A123A123A321A321A321B321B321B123B123B321B432F123E321123123321321'),
mailbox.name,
mailbox.username,
IFNULL(GROUP_CONCAT(ga.aliases ORDER BY ga.aliases SEPARATOR ' '), ''),
IFNULL(gda.ad_alias, ''),
IFNULL(external_acl.send_as_acl, ''),
mailbox.kind,
mailbox.multiple_bookings
FROM
mailbox
LEFT OUTER JOIN
grouped_mail_aliases ga
ON ga.username REGEXP CONCAT('(^|,)', mailbox.username, '($|,)')
LEFT OUTER JOIN
grouped_domain_alias_address gda
ON gda.username = mailbox.username
LEFT OUTER JOIN
grouped_sender_acl_external external_acl
ON external_acl.username = mailbox.username
WHERE
mailbox.active = '1'
GROUP BY
mailbox.username;
EOF
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'sogo_view'") ]]; then
VIEW_OK=OK
else
echo "Will retry to setup SOGo view in 3s..."
sleep 3
fi
done
else
while [[ ${VIEW_OK} != 'OK' ]]; do
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'sogo_view'") ]]; then
VIEW_OK=OK
else
echo "Waiting for SOGo view to be created by master..."
sleep 3
fi
done
fi
# Wait for static view table if missing after update and update content
if [[ "${MASTER}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "We are master, preparing _sogo_static_view..."
while [[ ${STATIC_VIEW_OK} != 'OK' ]]; do
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '_sogo_static_view'") ]]; then
STATIC_VIEW_OK=OK
echo "Updating _sogo_static_view content..."
# If changed, also update init_db.inc.php
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "REPLACE INTO _sogo_static_view (c_uid, domain, c_name, c_password, c_cn, mail, aliases, ad_aliases, ext_acl, kind, multiple_bookings) SELECT c_uid, domain, c_name, c_password, c_cn, mail, aliases, ad_aliases, ext_acl, kind, multiple_bookings from sogo_view;"
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "DELETE FROM _sogo_static_view WHERE c_uid NOT IN (SELECT username FROM mailbox WHERE active = '1')"
else
echo "Waiting for database initialization..."
sleep 3
fi
done
else
while [[ ${STATIC_VIEW_OK} != 'OK' ]]; do
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = '_sogo_static_view'") ]]; then
STATIC_VIEW_OK=OK
else
echo "Waiting for database initialization by master..."
sleep 3
fi
done
fi
# Recreate password update trigger
if [[ "${MASTER}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "We are master, preparing update trigger..."
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "DROP TRIGGER IF EXISTS sogo_update_password"
while [[ ${TRIGGER_OK} != 'OK' ]]; do
mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} << EOF
DELIMITER -
CREATE TRIGGER sogo_update_password AFTER UPDATE ON _sogo_static_view
FOR EACH ROW
BEGIN
UPDATE mailbox SET password = NEW.c_password WHERE NEW.c_uid = username;
END;
-
DELIMITER ;
EOF
if [[ ! -z $(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -B -e "SELECT 'OK' FROM INFORMATION_SCHEMA.TRIGGERS WHERE TRIGGER_NAME = 'sogo_update_password'") ]]; then
TRIGGER_OK=OK
else
echo "Will retry to setup SOGo password update trigger in 3s"
sleep 3
fi
done
fi
# cat /dev/urandom seems to hang here occasionally and is not recommended anyway, better use openssl
RAND_PASS=$(openssl rand -base64 16 | tr -dc _A-Z-a-z-0-9)
# Generate plist header with timezone data
mkdir -p /var/lib/sogo/GNUstep/Defaults/
cat <<EOF > /var/lib/sogo/GNUstep/Defaults/sogod.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//GNUstep//DTD plist 0.9//EN" "http://www.gnustep.org/plist-0_9.xml">
<plist version="0.9">
<dict>
<key>OCSAclURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_acl</string>
<key>SOGoIMAPServer</key>
<string>imap://${IPV4_NETWORK}.250:143/?TLS=YES&amp;tlsVerifyMode=none</string>
<key>SOGoSieveServer</key>
<string>sieve://${IPV4_NETWORK}.250:4190/?TLS=YES&amp;tlsVerifyMode=none</string>
<key>SOGoSMTPServer</key>
<string>smtp://${IPV4_NETWORK}.253:588/?TLS=YES&amp;tlsVerifyMode=none</string>
<key>SOGoTrustProxyAuthentication</key>
<string>YES</string>
<key>SOGoEncryptionKey</key>
<string>${RAND_PASS}</string>
<key>OCSAdminURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_admin</string>
<key>OCSCacheFolderURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_cache_folder</string>
<key>OCSEMailAlarmsFolderURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_alarms_folder</string>
<key>OCSFolderInfoURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_folder_info</string>
<key>OCSSessionsFolderURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_sessions_folder</string>
<key>OCSStoreURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_store</string>
<key>SOGoProfileURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/sogo_user_profile</string>
<key>SOGoTimeZone</key>
<string>${TZ}</string>
<key>domains</key>
<dict>
EOF
# Generate multi-domain setup
while read -r line gal
do
echo " <key>${line}</key>
<dict>
<key>SOGoMailDomain</key>
<string>${line}</string>
<key>SOGoUserSources</key>
<array>
<dict>
<key>MailFieldNames</key>
<array>
<string>aliases</string>
<string>ad_aliases</string>
<string>ext_acl</string>
</array>
<key>KindFieldName</key>
<string>kind</string>
<key>DomainFieldName</key>
<string>domain</string>
<key>MultipleBookingsFieldName</key>
<string>multiple_bookings</string>
<key>listRequiresDot</key>
<string>NO</string>
<key>canAuthenticate</key>
<string>YES</string>
<key>displayName</key>
<string>GAL ${line}</string>
<key>id</key>
<string>${line}</string>
<key>isAddressBook</key>
<string>${gal}</string>
<key>type</key>
<string>sql</string>
<key>userPasswordAlgorithm</key>
<string>${MAILCOW_PASS_SCHEME}</string>
<key>prependPasswordScheme</key>
<string>YES</string>
<key>viewURL</key>
<string>mysql://${DBUSER}:${DBPASS}@%2Fvar%2Frun%2Fmysqld%2Fmysqld.sock/${DBNAME}/_sogo_static_view</string>
</dict>" >> /var/lib/sogo/GNUstep/Defaults/sogod.plist
# Generate alternative LDAP authentication dict, when SQL authentication fails
# This will nevertheless read attributes from LDAP
line=${line} envsubst < /etc/sogo/plist_ldap >> /var/lib/sogo/GNUstep/Defaults/sogod.plist
echo " </array>
</dict>" >> /var/lib/sogo/GNUstep/Defaults/sogod.plist
done < <(mysql --socket=/var/run/mysqld/mysqld.sock -u ${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT domain, CASE gal WHEN '1' THEN 'YES' ELSE 'NO' END AS gal FROM domain;" -B -N)
# Generate footer
echo ' </dict>
</dict>
</plist>' >> /var/lib/sogo/GNUstep/Defaults/sogod.plist
# Fix permissions
chown sogo:sogo -R /var/lib/sogo/
chmod 600 /var/lib/sogo/GNUstep/Defaults/sogod.plist
# Patch ACLs
#if [[ ${ACL_ANYONE} == 'allow' ]]; then
# #enable any or authenticated targets for ACL
# if patch -R -sfN --dry-run /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox < /acl.diff > /dev/null; then
# patch -R /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox < /acl.diff;
# fi
#else
# #disable any or authenticated targets for ACL
# if patch -sfN --dry-run /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox < /acl.diff > /dev/null; then
# patch /usr/lib/GNUstep/SOGo/Templates/UIxAclEditor.wox < /acl.diff;
# fi
#fi
# Copy logo, if any
[[ -f /etc/sogo/sogo-full.svg ]] && cp /etc/sogo/sogo-full.svg /usr/lib/GNUstep/SOGo/WebServerResources/img/sogo-full.svg
# Rsync web content
echo "Syncing web content with named volume"
rsync -a /usr/lib/GNUstep/SOGo/. /sogo_web/
# Chown backup path
chown -R sogo:sogo /sogo_backup
exec gosu sogo /usr/sbin/sogod

View File

@@ -1,15 +1,5 @@
#!/bin/bash
if [[ "${SKIP_SOGO}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "SKIP_SOGO=y, skipping SOGo..."
sleep 365d
exit 0
fi
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
cp /etc/syslog-ng/syslog-ng-redis_slave.conf /etc/syslog-ng/syslog-ng.conf
fi
# Run hooks
for file in /hooks/*; do
if [ -x "${file}" ]; then
@@ -18,4 +8,13 @@ for file in /hooks/*; do
fi
done
exec "$@"
python3 -u /bootstrap/main.py
BOOTSTRAP_EXIT_CODE=$?
if [ $BOOTSTRAP_EXIT_CODE -ne 0 ]; then
echo "Bootstrap failed with exit code $BOOTSTRAP_EXIT_CODE. Not starting SOGo."
exit $BOOTSTRAP_EXIT_CODE
fi
echo "Bootstrap succeeded. Starting SOGo..."
exec gosu sogo /usr/sbin/sogod

View File

@@ -11,8 +11,8 @@ stderr_logfile_maxbytes=0
autostart=true
priority=1
[program:bootstrap-sogo]
command=/bootstrap-sogo.sh
[program:bootstrap]
command=/docker-entrypoint.sh
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr

View File

@@ -22,6 +22,7 @@ destination d_redis_ui_log {
host("`REDIS_SLAVEOF_IP`")
persist-name("redis1")
port(`REDIS_SLAVEOF_PORT`)
auth("`REDISPASS`")
command("LPUSH" "SOGO_LOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
@@ -30,6 +31,7 @@ destination d_redis_f2b_channel {
host("`REDIS_SLAVEOF_IP`")
persist-name("redis2")
port(`REDIS_SLAVEOF_PORT`)
auth("`REDISPASS`")
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};

View File

@@ -22,6 +22,7 @@ destination d_redis_ui_log {
host("redis-mailcow")
persist-name("redis1")
port(6379)
auth("`REDISPASS`")
command("LPUSH" "SOGO_LOG" "$(format-json time=\"$S_UNIXTIME\" priority=\"$PRIORITY\" program=\"$PROGRAM\" message=\"$MESSAGE\")\n")
);
};
@@ -30,6 +31,7 @@ destination d_redis_f2b_channel {
host("redis-mailcow")
persist-name("redis2")
port(6379)
auth("`REDISPASS`")
command("PUBLISH" "F2B_CHANNEL" "$(sanitize $MESSAGE)")
);
};

View File

@@ -1,31 +0,0 @@
FROM solr:7.7-slim
USER root
# renovate: datasource=github-releases depName=tianon/gosu versioning=semver-coerced extractVersion=(?<version>.*)$
ARG GOSU_VERSION=1.17
COPY solr.sh /
COPY solr-config-7.7.0.xml /
COPY solr-schema-7.7.0.xml /
RUN dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" \
&& wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" \
&& chmod +x /usr/local/bin/gosu \
&& gosu nobody true \
&& apt-get update && apt-get install -y --no-install-recommends \
tzdata \
curl \
bash \
zip \
&& apt-get autoclean \
&& rm -rf /var/lib/apt/lists/* \
&& chmod +x /solr.sh \
&& sync \
&& bash /solr.sh --bootstrap
RUN zip -q -d /opt/solr/server/lib/ext/log4j-core-*.jar org/apache/logging/log4j/core/lookup/JndiLookup.class
RUN apt remove zip -y
CMD ["/solr.sh"]

View File

@@ -1,289 +0,0 @@
<?xml version="1.0" encoding="UTF-8" ?>
<!-- This is the default config with stuff non-essential to Dovecot removed. -->
<config>
<!-- Controls what version of Lucene various components of Solr
adhere to. Generally, you want to use the latest version to
get all bug fixes and improvements. It is highly recommended
that you fully re-index after changing this setting as it can
affect both how text is indexed and queried.
-->
<luceneMatchVersion>7.7.0</luceneMatchVersion>
<!-- A 'dir' option by itself adds any files found in the directory
to the classpath, this is useful for including all jars in a
directory.
When a 'regex' is specified in addition to a 'dir', only the
files in that directory which completely match the regex
(anchored on both ends) will be included.
If a 'dir' option (with or without a regex) is used and nothing
is found that matches, a warning will be logged.
The examples below can be used to load some solr-contribs along
with their external dependencies.
-->
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />
<lib dir="${solr.install.dir:../../../..}/contrib/clustering/lib/" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-clustering-\d.*\.jar" />
<lib dir="${solr.install.dir:../../../..}/contrib/langid/lib/" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-langid-\d.*\.jar" />
<lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar" />
<!-- Data Directory
Used to specify an alternate directory to hold all index data
other than the default ./data under the Solr home. If
replication is in use, this should match the replication
configuration.
-->
<dataDir>${solr.data.dir:}</dataDir>
<!-- The default high-performance update handler -->
<updateHandler class="solr.DirectUpdateHandler2">
<!-- Enables a transaction log, used for real-time get, durability, and
solr cloud replica recovery. The log can grow as big as
uncommitted changes to the index, so use of a hard autoCommit
is recommended (see below).
"dir" - the target directory for transaction logs, defaults to the
solr data directory.
"numVersionBuckets" - sets the number of buckets used to keep
track of max version values when checking for re-ordered
updates; increase this value to reduce the cost of
synchronizing access to version buckets during high-volume
indexing, this requires 8 bytes (long) * numVersionBuckets
of heap space per Solr core.
-->
<updateLog>
<str name="dir">${solr.ulog.dir:}</str>
<int name="numVersionBuckets">${solr.ulog.numVersionBuckets:65536}</int>
</updateLog>
<!-- AutoCommit
Perform a hard commit automatically under certain conditions.
Instead of enabling autoCommit, consider using "commitWithin"
when adding documents.
http://wiki.apache.org/solr/UpdateXmlMessages
maxDocs - Maximum number of documents to add since the last
commit before automatically triggering a new commit.
maxTime - Maximum amount of time in ms that is allowed to pass
since a document was added before automatically
triggering a new commit.
openSearcher - if false, the commit causes recent index changes
to be flushed to stable storage, but does not cause a new
searcher to be opened to make those changes visible.
If the updateLog is enabled, then it's highly recommended to
have some sort of hard autoCommit to limit the log size.
-->
<autoCommit>
<maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
<openSearcher>false</openSearcher>
</autoCommit>
<!-- softAutoCommit is like autoCommit except it causes a
'soft' commit which only ensures that changes are visible
but does not ensure that data is synced to disk. This is
faster and more near-realtime friendly than a hard commit.
-->
<autoSoftCommit>
<maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
<!-- Update Related Event Listeners
Various IndexWriter related events can trigger Listeners to
take actions.
postCommit - fired after every commit or optimize command
postOptimize - fired after every optimize command
-->
</updateHandler>
<!-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Query section - these settings control query time things like caches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -->
<query>
<!-- Solr Internal Query Caches
There are two implementations of cache available for Solr,
LRUCache, based on a synchronized LinkedHashMap, and
FastLRUCache, based on a ConcurrentHashMap.
FastLRUCache has faster gets and slower puts in single
threaded operation and thus is generally faster than LRUCache
when the hit ratio of the cache is high (> 75%), and may be
faster under other scenarios on multi-cpu systems.
-->
<!-- Filter Cache
Cache used by SolrIndexSearcher for filters (DocSets),
unordered sets of *all* documents that match a query. When a
new searcher is opened, its caches may be prepopulated or
"autowarmed" using data from caches in the old searcher.
autowarmCount is the number of items to prepopulate. For
LRUCache, the autowarmed items will be the most recently
accessed items.
Parameters:
class - the SolrCache implementation LRUCache or
(LRUCache or FastLRUCache)
size - the maximum number of entries in the cache
initialSize - the initial capacity (number of entries) of
the cache. (see java.util.HashMap)
autowarmCount - the number of entries to prepopulate from
an old cache.
maxRamMB - the maximum amount of RAM (in MB) that this cache is allowed
to occupy. Note that when this option is specified, the size
and initialSize parameters are ignored.
-->
<filterCache class="solr.FastLRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- Query Result Cache
Caches results of searches - ordered lists of document ids
(DocList) based on a query, a sort, and the range of documents requested.
Additional supported parameter by LRUCache:
maxRamMB - the maximum amount of RAM (in MB) that this cache is allowed
to occupy
-->
<queryResultCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- Document Cache
Caches Lucene Document objects (the stored fields for each
document). Since Lucene internal document ids are transient,
this cache will not be autowarmed.
-->
<documentCache class="solr.LRUCache"
size="512"
initialSize="512"
autowarmCount="0"/>
<!-- custom cache currently used by block join -->
<cache name="perSegFilter"
class="solr.search.LRUCache"
size="10"
initialSize="0"
autowarmCount="10"
regenerator="solr.NoOpRegenerator" />
<!-- Lazy Field Loading
If true, stored fields that are not requested will be loaded
lazily. This can result in a significant speed improvement
if the usual case is to not load all stored fields,
especially if the skipped fields are large compressed text
fields.
-->
<enableLazyFieldLoading>true</enableLazyFieldLoading>
<!-- Result Window Size
An optimization for use with the queryResultCache. When a search
is requested, a superset of the requested number of document ids
are collected. For example, if a search for a particular query
requests matching documents 10 through 19, and queryWindowSize is 50,
then documents 0 through 49 will be collected and cached. Any further
requests in that range can be satisfied via the cache.
-->
<queryResultWindowSize>20</queryResultWindowSize>
<!-- Maximum number of documents to cache for any entry in the
queryResultCache.
-->
<queryResultMaxDocsCached>200</queryResultMaxDocsCached>
<!-- Use Cold Searcher
If a search request comes in and there is no current
registered searcher, then immediately register the still
warming searcher and use it. If "false" then all requests
will block until the first searcher is done warming.
-->
<useColdSearcher>false</useColdSearcher>
</query>
<!-- Request Dispatcher
This section contains instructions for how the SolrDispatchFilter
should behave when processing requests for this SolrCore.
-->
<requestDispatcher>
<httpCaching never304="true" />
</requestDispatcher>
<!-- Request Handlers
http://wiki.apache.org/solr/SolrRequestHandler
Incoming queries will be dispatched to a specific handler by name
based on the path specified in the request.
If a Request Handler is declared with startup="lazy", then it will
not be initialized until the first request that uses it.
-->
<!-- SearchHandler
http://wiki.apache.org/solr/SearchHandler
For processing Search Queries, the primary Request Handler
provided with Solr is "SearchHandler". It delegates to a sequence
of SearchComponents (see below) and supports distributed
queries across multiple shards
-->
<requestHandler name="/select" class="solr.SearchHandler">
<!-- default values for query parameters can be specified, these
will be overridden by parameters in the request
-->
<lst name="defaults">
<str name="echoParams">explicit</str>
<int name="rows">10</int>
</lst>
</requestHandler>
<initParams path="/update/**,/select">
<lst name="defaults">
<str name="df">_text_</str>
</lst>
</initParams>
<!-- Response Writers
http://wiki.apache.org/solr/QueryResponseWriter
Request responses will be written using the writer specified by
the 'wt' request parameter matching the name of a registered
writer.
The "default" writer is the default and will be used if 'wt' is
not specified in the request.
-->
<queryResponseWriter name="xml"
default="true"
class="solr.XMLResponseWriter" />
</config>

View File

@@ -1,49 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<schema name="dovecot-fts" version="2.0">
<fieldType name="string" class="solr.StrField" omitNorms="true" sortMissingLast="true"/>
<fieldType name="long" class="solr.LongPointField" positionIncrementGap="0"/>
<fieldType name="boolean" class="solr.BoolField" sortMissingLast="true"/>
<fieldType name="text" class="solr.TextField" autoGeneratePhraseQueries="true" positionIncrementGap="100">
<analyzer type="index">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.EdgeNGramFilterFactory" minGramSize="3" maxGramSize="20"/>
<filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
<filter class="solr.WordDelimiterGraphFilterFactory" catenateNumbers="1" generateNumberParts="1" splitOnCaseChange="1" generateWordParts="1" splitOnNumerics="1" catenateAll="1" catenateWords="1"/>
<filter class="solr.FlattenGraphFilterFactory"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
<filter class="solr.PorterStemFilterFactory"/>
</analyzer>
<analyzer type="query">
<tokenizer class="solr.StandardTokenizerFactory"/>
<filter class="solr.SynonymGraphFilterFactory" expand="true" ignoreCase="true" synonyms="synonyms.txt"/>
<filter class="solr.FlattenGraphFilterFactory"/>
<filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
<filter class="solr.WordDelimiterGraphFilterFactory" catenateNumbers="1" generateNumberParts="1" splitOnCaseChange="1" generateWordParts="1" splitOnNumerics="1" catenateAll="1" catenateWords="1"/>
<filter class="solr.LowerCaseFilterFactory"/>
<filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
<filter class="solr.PorterStemFilterFactory"/>
</analyzer>
</fieldType>
<field name="id" type="string" indexed="true" required="true" stored="true"/>
<field name="uid" type="long" indexed="true" required="true" stored="true"/>
<field name="box" type="string" indexed="true" required="true" stored="true"/>
<field name="user" type="string" indexed="true" required="true" stored="true"/>
<field name="hdr" type="text" indexed="true" stored="false"/>
<field name="body" type="text" indexed="true" stored="false"/>
<field name="from" type="text" indexed="true" stored="false"/>
<field name="to" type="text" indexed="true" stored="false"/>
<field name="cc" type="text" indexed="true" stored="false"/>
<field name="bcc" type="text" indexed="true" stored="false"/>
<field name="subject" type="text" indexed="true" stored="false"/>
<!-- Used by Solr internally: -->
<field name="_version_" type="long" indexed="true" stored="true"/>
<uniqueKey>id</uniqueKey>
</schema>

View File

@@ -1,75 +0,0 @@
#!/bin/bash
if [[ "${FLATCURVE_EXPERIMENTAL}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "FLATCURVE_EXPERIMENTAL=y, skipping Solr but enabling Flatcurve as FTS for Dovecot!"
echo "Solr will be removed in the future!"
sleep 365d
exit 0
elif [[ "${SKIP_SOLR}" =~ ^([yY][eE][sS]|[yY])+$ ]]; then
echo "SKIP_SOLR=y, skipping Solr..."
echo "HINT: You could try the newer FTS Backend Flatcurve, which is currently in experimental state..."
echo "Simply set FLATCURVE_EXPERIMENTAL=y inside your mailcow.conf and restart the stack afterwards!"
echo "Solr will be removed in the future!"
sleep 365d
exit 0
fi
MEM_TOTAL=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
if [[ "${1}" != "--bootstrap" ]]; then
if [ ${MEM_TOTAL} -lt "2097152" ]; then
echo "System memory less than 2 GB, skipping Solr..."
sleep 365d
exit 0
fi
fi
set -e
# run the optional initdb
. /opt/docker-solr/scripts/run-initdb
# fixing volume permission
[[ -d /opt/solr/server/solr/dovecot-fts/data ]] && chown -R solr:solr /opt/solr/server/solr/dovecot-fts/data
if [[ "${1}" != "--bootstrap" ]]; then
sed -i '/SOLR_HEAP=/c\SOLR_HEAP="'${SOLR_HEAP:-1024}'m"' /opt/solr/bin/solr.in.sh
else
sed -i '/SOLR_HEAP=/c\SOLR_HEAP="256m"' /opt/solr/bin/solr.in.sh
fi
if [[ "${1}" == "--bootstrap" ]]; then
echo "Creating initial configuration"
echo "Modifying default config set"
cp /solr-config-7.7.0.xml /opt/solr/server/solr/configsets/_default/conf/solrconfig.xml
cp /solr-schema-7.7.0.xml /opt/solr/server/solr/configsets/_default/conf/schema.xml
rm /opt/solr/server/solr/configsets/_default/conf/managed-schema
echo "Starting local Solr instance to setup configuration"
gosu solr start-local-solr
echo "Creating core \"dovecot-fts\""
gosu solr /opt/solr/bin/solr create -c "dovecot-fts"
# See https://github.com/docker-solr/docker-solr/issues/27
echo "Checking core"
while ! wget -O - 'http://localhost:8983/solr/admin/cores?action=STATUS' | grep -q instanceDir; do
echo "Could not find any cores, waiting..."
sleep 3
done
echo "Created core \"dovecot-fts\""
echo "Stopping local Solr"
gosu solr stop-local-solr
exit 0
fi
echo "Starting up Solr..."
echo -e "\e[31mSolr is deprecated! You can try the new FTS System now by enabling FLATCURVE_EXPERIMENTAL=y inside mailcow.conf and restarting the stack\e[0m"
echo -e "\e[31mSolr will be removed completely soon!\e[0m"
sleep 15
exec gosu solr solr-foreground

View File

@@ -1,4 +1,4 @@
FROM alpine:3.20
FROM alpine:3.21
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"

View File

@@ -1,4 +1,4 @@
FROM alpine:3.20
FROM alpine:3.21
LABEL maintainer = "The Infrastructure Company GmbH <info@servercow.de>"
@@ -34,7 +34,7 @@ RUN apk add --update \
&& curl https://raw.githubusercontent.com/mludvig/smtp-cli/v3.10/smtp-cli -o /smtp-cli \
&& chmod +x smtp-cli
COPY watchdog.sh /watchdog.sh
COPY check_mysql_slavestatus.sh /usr/lib/nagios/plugins/check_mysql_slavestatus.sh
COPY data/Dockerfiles/watchdog/watchdog.sh /watchdog.sh
COPY data/Dockerfiles/watchdog/check_mysql_slavestatus.sh /usr/lib/nagios/plugins/check_mysql_slavestatus.sh
CMD ["/watchdog.sh"]

View File

@@ -49,7 +49,7 @@
# 2013101601 Optical clean up #
# 2013101602 Rewrite help output #
# 2013101700 Handle Slave IO in 'Connecting' state #
# 2013101701 Minor changes in output, handling UNKWNON situations now #
# 2013101701 Minor changes in output, handling UNKNOWN situations now #
# 2013101702 Exit CRITICAL when Slave IO in Connecting state #
# 2013123000 Slave_SQL_Running also matched Slave_SQL_Running_State #
# 2015011600 Added 'moving' check to catch possible connection issues #
@@ -131,10 +131,10 @@ elif [[ -n "${socket}" && (-z "${user}" || -z "${password}") ]]; then
fi
# Connect to the DB server and store output in vars
if [[ -n $socket ]]; then
ConnectionResult=$(mysql ${optfile} ${socket} ${user} -e "show slave ${connection} status\G" 2>&1)
if [[ -n $socket ]]; then
ConnectionResult=$(mariadb --skip-ssl ${optfile} ${socket} ${user} -e "show slave ${connection} status\G" 2>&1)
else
ConnectionResult=$(mysql ${optfile} ${host} ${port} ${user} -e "show slave ${connection} status\G" 2>&1)
ConnectionResult=$(mariadb --skip-ssl ${optfile} ${host} ${port} ${user} -e "show slave ${connection} status\G" 2>&1)
fi
if [ -z "`echo "${ConnectionResult}" |grep Slave_IO_State`" ]; then
@@ -178,33 +178,33 @@ if [ ${check} = ${ok} ] && [ ${checkio} = ${ok} ]; then
then echo "CRITICAL: Slave is ${delayinfo} seconds behind Master | delay=${delayinfo}s"; exit ${STATE_CRITICAL}
elif [[ ${delayinfo} -ge ${warn_delay} ]]
then echo "WARNING: Slave is ${delayinfo} seconds behind Master | delay=${delayinfo}s"; exit ${STATE_WARNING}
else
else
# Everything looks OK here but now let us check if the replication is moving
if [[ -n ${moving} ]] && [[ -n ${tmpfile} ]] && [[ $readpos -eq $execpos ]]
then
#echo "Debug: Read pos is $readpos - Exec pos is $execpos"
then
#echo "Debug: Read pos is $readpos - Exec pos is $execpos"
# Check if tmp file exists
curtime=`date +%s`
if [[ -w $tmpfile ]]
then
if [[ -w $tmpfile ]]
then
tmpfiletime=`date +%s -r $tmpfile`
if [[ `expr $curtime - $tmpfiletime` -gt ${moving} ]]
then
exectmp=`cat $tmpfile`
#echo "Debug: Exec pos in tmpfile is $exectmp"
if [[ $exectmp -eq $execpos ]]
then
then
# The values read from the tmp file and from the db are the same. Replication hasn't moved!
echo "WARNING: Slave replication has not moved in ${moving} seconds. Manual check required."; exit ${STATE_WARNING}
else
else
# Replication has moved since the tmp file was written. Delete tmp file and output OK.
rm $tmpfile
echo "OK: Slave SQL running: ${check} Slave IO running: ${checkio} / master: ${masterinfo} / slave is ${delayinfo} seconds behind master | delay=${delayinfo}s"; exit ${STATE_OK};
fi
else
else
echo "OK: Slave SQL running: ${check} Slave IO running: ${checkio} / master: ${masterinfo} / slave is ${delayinfo} seconds behind master | delay=${delayinfo}s"; exit ${STATE_OK};
fi
else
else
echo "$execpos" > $tmpfile
echo "OK: Slave SQL running: ${check} Slave IO running: ${checkio} / master: ${masterinfo} / slave is ${delayinfo} seconds behind master | delay=${delayinfo}s"; exit ${STATE_OK};
fi

View File

@@ -40,9 +40,9 @@ done
# Do not attempt to write to slave
if [[ ! -z ${REDIS_SLAVEOF_IP} ]]; then
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT}"
REDIS_CMDLINE="redis-cli -h ${REDIS_SLAVEOF_IP} -p ${REDIS_SLAVEOF_PORT} -a ${REDISPASS} --no-auth-warning"
else
REDIS_CMDLINE="redis-cli -h redis -p 6379"
REDIS_CMDLINE="redis-cli -h redis -p 6379 -a ${REDISPASS} --no-auth-warning"
fi
until [[ $(${REDIS_CMDLINE} PING) == "PONG" ]]; do
@@ -234,7 +234,7 @@ external_checks() {
diff_c=0
THRESHOLD=${EXTERNAL_CHECKS_THRESHOLD}
# Reduce error count by 2 after restarting an unhealthy container
GUID=$(mysql -u${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT version FROM versions WHERE application = 'GUID'" -BN)
GUID=$(mariadb --skip-ssl -u${DBUSER} -p${DBPASS} ${DBNAME} -e "SELECT version FROM versions WHERE application = 'GUID'" -BN)
trap "[ ${err_count} -gt 1 ] && err_count=$(( ${err_count} - 2 ))" USR1
while [ ${err_count} -lt ${THRESHOLD} ]; do
err_c_cur=${err_count}
@@ -330,7 +330,7 @@ redis_checks() {
touch /tmp/redis-mailcow; echo "$(tail -50 /tmp/redis-mailcow)" > /tmp/redis-mailcow
host_ip=$(get_container_ip redis-mailcow)
err_c_cur=${err_count}
/usr/lib/nagios/plugins/check_tcp -4 -H redis-mailcow -p 6379 -E -s "PING\n" -q "QUIT" -e "PONG" 2>> /tmp/redis-mailcow 1>&2; err_count=$(( ${err_count} + $? ))
/usr/lib/nagios/plugins/check_tcp -4 -H redis-mailcow -p 6379 -E -s "AUTH ${REDISPASS}\nPING\n" -q "QUIT" -e "PONG" 2>> /tmp/redis-mailcow 1>&2; err_count=$(( ${err_count} + $? ))
[ ${err_c_cur} -eq ${err_count} ] && [ ! $((${err_count} - 1)) -lt 0 ] && err_count=$((${err_count} - 1)) diff_c=1
[ ${err_c_cur} -ne ${err_count} ] && diff_c=$(( ${err_c_cur} - ${err_count} ))
progress "Redis" ${THRESHOLD} $(( ${THRESHOLD} - ${err_count} )) ${diff_c}
@@ -402,7 +402,7 @@ sogo_checks() {
trap "[ ${err_count} -gt 1 ] && err_count=$(( ${err_count} - 2 ))" USR1
while [ ${err_count} -lt ${THRESHOLD} ]; do
touch /tmp/sogo-mailcow; echo "$(tail -50 /tmp/sogo-mailcow)" > /tmp/sogo-mailcow
host_ip=$(get_container_ip sogo-mailcow)
host_ip=$SOGO_HOST
err_c_cur=${err_count}
/usr/lib/nagios/plugins/check_http -4 -H ${host_ip} -u /SOGo.index/ -p 20000 2>> /tmp/sogo-mailcow 1>&2; err_count=$(( ${err_count} + $? ))
[ ${err_c_cur} -eq ${err_count} ] && [ ! $((${err_count} - 1)) -lt 0 ] && err_count=$((${err_count} - 1)) diff_c=1
@@ -503,12 +503,12 @@ dovecot_repl_checks() {
err_count=0
diff_c=0
THRESHOLD=${DOVECOT_REPL_THRESHOLD}
D_REPL_STATUS=$(redis-cli -h redis -r GET DOVECOT_REPL_HEALTH)
D_REPL_STATUS=$(redis-cli -h redis -a ${REDISPASS} --no-auth-warning -r GET DOVECOT_REPL_HEALTH)
# Reduce error count by 2 after restarting an unhealthy container
trap "[ ${err_count} -gt 1 ] && err_count=$(( ${err_count} - 2 ))" USR1
while [ ${err_count} -lt ${THRESHOLD} ]; do
err_c_cur=${err_count}
D_REPL_STATUS=$(redis-cli --raw -h redis GET DOVECOT_REPL_HEALTH)
D_REPL_STATUS=$(redis-cli --raw -h redis -a ${REDISPASS} --no-auth-warning GET DOVECOT_REPL_HEALTH)
if [[ "${D_REPL_STATUS}" != "1" ]]; then
err_count=$(( ${err_count} + 1 ))
fi
@@ -578,19 +578,19 @@ ratelimit_checks() {
err_count=0
diff_c=0
THRESHOLD=${RATELIMIT_THRESHOLD}
RL_LOG_STATUS=$(redis-cli -h redis LRANGE RL_LOG 0 0 | jq .qid)
RL_LOG_STATUS=$(redis-cli -h redis -a ${REDISPASS} --no-auth-warning LRANGE RL_LOG 0 0 | jq .qid)
# Reduce error count by 2 after restarting an unhealthy container
trap "[ ${err_count} -gt 1 ] && err_count=$(( ${err_count} - 2 ))" USR1
while [ ${err_count} -lt ${THRESHOLD} ]; do
err_c_cur=${err_count}
RL_LOG_STATUS_PREV=${RL_LOG_STATUS}
RL_LOG_STATUS=$(redis-cli -h redis LRANGE RL_LOG 0 0 | jq .qid)
RL_LOG_STATUS=$(redis-cli -h redis -a ${REDISPASS} --no-auth-warning LRANGE RL_LOG 0 0 | jq .qid)
if [[ ${RL_LOG_STATUS_PREV} != ${RL_LOG_STATUS} ]]; then
err_count=$(( ${err_count} + 1 ))
echo 'Last 10 applied ratelimits (may overlap with previous reports).' > /tmp/ratelimit
echo 'Full ratelimit buckets can be emptied by deleting the ratelimit hash from within mailcow UI (see /debug -> Protocols -> Ratelimit):' >> /tmp/ratelimit
echo >> /tmp/ratelimit
redis-cli --raw -h redis LRANGE RL_LOG 0 10 | jq . >> /tmp/ratelimit
redis-cli --raw -h redis -a ${REDISPASS} --no-auth-warning LRANGE RL_LOG 0 10 | jq . >> /tmp/ratelimit
fi
[ ${err_c_cur} -eq ${err_count} ] && [ ! $((${err_count} - 1)) -lt 0 ] && err_count=$((${err_count} - 1)) diff_c=1
[ ${err_c_cur} -ne ${err_count} ] && diff_c=$(( ${err_c_cur} - ${err_count} ))
@@ -673,7 +673,7 @@ acme_checks() {
err_count=0
diff_c=0
THRESHOLD=${ACME_THRESHOLD}
ACME_LOG_STATUS=$(redis-cli -h redis GET ACME_FAIL_TIME)
ACME_LOG_STATUS=$(redis-cli -h redis -a ${REDISPASS} --no-auth-warning GET ACME_FAIL_TIME)
if [[ -z "${ACME_LOG_STATUS}" ]]; then
${REDIS_CMDLINE} SET ACME_FAIL_TIME 0
ACME_LOG_STATUS=0
@@ -685,7 +685,7 @@ acme_checks() {
ACME_LOG_STATUS_PREV=${ACME_LOG_STATUS}
ACME_LC=0
until [[ ! -z ${ACME_LOG_STATUS} ]] || [ ${ACME_LC} -ge 3 ]; do
ACME_LOG_STATUS=$(redis-cli -h redis GET ACME_FAIL_TIME 2> /dev/null)
ACME_LOG_STATUS=$(redis-cli -h redis -a ${REDISPASS} --no-auth-warning GET ACME_FAIL_TIME 2> /dev/null)
sleep 3
ACME_LC=$((ACME_LC+1))
done
@@ -994,6 +994,7 @@ PID=$!
echo "Spawned cert_checks with PID ${PID}"
BACKGROUND_TASKS+=(${PID})
if [[ "${SKIP_OLEFY}" =~ ^([nN][oO]|[nN])+$ ]]; then
(
while true; do
if ! olefy_checks; then
@@ -1005,6 +1006,7 @@ done
PID=$!
echo "Spawned olefy_checks with PID ${PID}"
BACKGROUND_TASKS+=(${PID})
fi
(
while true; do

View File

@@ -1,130 +0,0 @@
map $http_x_forwarded_proto $client_req_scheme_nc {
default $scheme;
https https;
}
server {
include /etc/nginx/conf.d/listen_ssl.active;
include /etc/nginx/conf.d/listen_plain.active;
include /etc/nginx/mime.types;
charset utf-8;
override_charset on;
ssl_certificate /etc/ssl/mail/cert.pem;
ssl_certificate_key /etc/ssl/mail/key.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;
ssl_ecdh_curve X25519:X448:secp384r1:secp256k1;
ssl_session_cache shared:SSL:50m;
ssl_session_timeout 1d;
ssl_session_tickets off;
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "noindex, nofollow" always;
add_header X-XSS-Protection "1; mode=block" always;
fastcgi_hide_header X-Powered-By;
server_name NC_SUBD;
root /web/nextcloud/;
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
location = /.well-known/carddav {
return 301 $client_req_scheme_nc://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $client_req_scheme_nc://$host/remote.php/dav;
}
location = /.well-known/webfinger {
return 301 $client_req_scheme_nc://$host/index.php/.well-known/webfinger;
}
location = /.well-known/nodeinfo {
return 301 $client_req_scheme_nc://$host/index.php/.well-known/nodeinfo;
}
location ^~ /.well-known/acme-challenge/ {
default_type "text/plain";
root /web;
}
fastcgi_buffers 64 4K;
gzip on;
gzip_vary on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;
set_real_ip_from fc00::/7;
set_real_ip_from 10.0.0.0/8;
set_real_ip_from 172.16.0.0/12;
set_real_ip_from 192.168.0.0/16;
real_ip_header X-Forwarded-For;
real_ip_recursive on;
location / {
rewrite ^ /index.php$uri;
}
location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
deny all;
}
location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|ocs-provider\/.+)\.php(?:$|\/) {
fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
set $path_info $fastcgi_path_info;
try_files $fastcgi_script_name =404;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $path_info;
fastcgi_param HTTPS on;
# Avoid sending the security headers twice
fastcgi_param modHeadersAvailable true;
# Enable pretty urls
fastcgi_param front_controller_active true;
fastcgi_pass phpfpm:9002;
fastcgi_intercept_errors on;
fastcgi_request_buffering off;
client_max_body_size 0;
fastcgi_read_timeout 1200;
}
location ~ ^\/(?:updater|ocs-provider)(?:$|\/) {
try_files $uri/ =404;
index index.php;
}
location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
try_files $uri /index.php$request_uri;
add_header Cache-Control "public, max-age=15778463";
add_header Referrer-Policy "no-referrer" always;
add_header X-Content-Type-Options "nosniff" always;
add_header X-Download-Options "noopen" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Permitted-Cross-Domain-Policies "none" always;
add_header X-Robots-Tag "none" always;
add_header X-XSS-Protection "1; mode=block" always;
access_log off;
}
location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
try_files $uri /index.php$request_uri;
access_log off;
}
}

View File

@@ -1,2 +0,0 @@
#!/bin/bash
docker exec -it -u www-data $(docker ps -f name=php-fpm-mailcow -q) php /web/nextcloud/occ ${@}

View File

@@ -1,8 +1,8 @@
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEA9iHB0CRDhV8wfBgqnmvuJpl0fzL3qL75R4ZvQHlfMNLrxuIz2x9D
9zcDhPcBTVzV5Ay0AAkke4wP6r6wDQqXqBP4Y8IOkYAyLh3jM40jfHQzQt+5JdQl
ond3kiscBsFOch/vMfSLMu3lAb0YhPNTvrxhMz7LcVAWYl82swASupdiKR+MgaQr
XsugpmDKsHW60VmIM9B7K9Y+rNHwvMWkmISd0KxA8oOy1WJvsVEissMALZDE3c4w
2xHmO2lXxgEx3aez28736t4m/KW3g9Zr31a1M0KusmfY//fGkPk4NUrLBOS2xrgp
Y/rG1qSBdcVyerM0Ki93qCyHKYu4ene0OwIBAg==
MIIBCAKCAQEA//////////+t+FRYortKmq/cViAnPTzx2LnFg84tNpWp4TZBFGQz
+8yTnc4kmz75fS/jY2MMddj2gbICrsRhetPfHtXV/WVhJDP1H18GbtCFY2VVPe0a
87VXE15/V8k1mE8McODmi3fipona8+/och3xWKE2rec1MKzKT0g6eXq8CrGCsyT7
YdEIqUuyyOP7uWrat2DX9GgdT0Kj3jlN9K5W7edjcrsZCwenyO4KbXCeAvzhzffi
7MA0BM0oNC9hkXL+nOmFg/+OTxIy7vKBg8P+OxtMb61zO7X8vC7CIAXFjvGDfRaD
ssbzSibBsu/6iGtCOGEoXJf//////////wIBAg==
-----END DH PARAMETERS-----

View File

@@ -0,0 +1,15 @@
[
{
"template": "whitelist.ign2.j2",
"output": "/var/lib/clamav/whitelist.ign2",
"clean_blank_lines": true
},
{
"template": "clamd.conf.j2",
"output": "/etc/clamav/clamd.conf"
},
{
"template": "freshclam.conf.j2",
"output": "/etc/clamav/freshclam.conf"
}
]
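
The config.json above maps Jinja2 templates to rendered output paths (with an optional clean_blank_lines flag). As a minimal sketch of how such a mapping could be consumed — this loop is only an illustration and is not the actual /bootstrap/main.py — using the jinja2 package:

#!/usr/bin/env python3
# Hypothetical illustration: render every template listed in a config.json
# (template -> output mapping as shown above). Not the real bootstrapper.
import json
import os
from jinja2 import Environment, FileSystemLoader

def render_all(config_path="config.json", template_dir="templates"):
    env = Environment(loader=FileSystemLoader(template_dir))
    with open(config_path) as f:
        entries = json.load(f)
    for entry in entries:
        template = env.get_template(entry["template"])
        # Assumption: environment variables serve as the template context.
        rendered = template.render(**os.environ)
        if entry.get("clean_blank_lines"):
            rendered = "\n".join(l for l in rendered.splitlines() if l.strip()) + "\n"
        with open(entry["output"], "w") as out:
            out.write(rendered)

if __name__ == "__main__":
    render_all()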

View File

@@ -0,0 +1,5 @@
# Please restart ClamAV after changing signatures
Example-Signature.Ignore-1
PUA.Win.Trojan.EmbeddedPDF-1
PUA.Pdf.Trojan.EmbeddedJavaScript-1
PUA.Pdf.Trojan.OpenActionObjectwithJavascript-1

View File

@@ -0,0 +1,115 @@
<?php
ini_set('error_reporting', 0);
header('Content-Type: application/json');
$post = trim(file_get_contents('php://input'));
if ($post) {
$post = json_decode($post, true);
}
$return = array("success" => false);
if(!isset($post['username']) || !isset($post['password']) || !isset($post['real_rip'])){
error_log("MAILCOWAUTH: Bad Request");
http_response_code(400); // Bad Request
echo json_encode($return);
exit();
}
require_once('../../../web/inc/vars.inc.php');
if (file_exists('../../../web/inc/vars.local.inc.php')) {
include_once('../../../web/inc/vars.local.inc.php');
}
require_once '../../../web/inc/lib/vendor/autoload.php';
// Init Redis
$redis = new Redis();
try {
if (!empty(getenv('REDIS_SLAVEOF_IP'))) {
$redis->connect(getenv('REDIS_SLAVEOF_IP'), getenv('REDIS_SLAVEOF_PORT'));
}
else {
$redis->connect('redis-mailcow', 6379);
}
$redis->auth(getenv("REDISPASS"));
}
catch (Exception $e) {
error_log("MAILCOWAUTH: " . $e . PHP_EOL);
http_response_code(500); // Internal Server Error
echo json_encode($return);
exit;
}
// Init database
$dsn = $database_type . ":unix_socket=" . $database_sock . ";dbname=" . $database_name;
$opt = [
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
PDO::ATTR_EMULATE_PREPARES => false,
];
try {
$pdo = new PDO($dsn, $database_user, $database_pass, $opt);
}
catch (PDOException $e) {
error_log("MAILCOWAUTH: " . $e . PHP_EOL);
http_response_code(500); // Internal Server Error
echo json_encode($return);
exit;
}
// Load core functions first
require_once 'functions.inc.php';
require_once 'functions.auth.inc.php';
require_once 'sessions.inc.php';
require_once 'functions.mailbox.inc.php';
require_once 'functions.ratelimit.inc.php';
require_once 'functions.acl.inc.php';
$isSOGoRequest = $post['real_rip'] == getenv('SOGO_HOST');
$result = false;
if ($isSOGoRequest) {
// This is a SOGo Auth request. First check for SSO password.
$sogo_sso_pass = file_get_contents("/etc/sogo-sso/sogo-sso.pass");
if ($sogo_sso_pass === $post['password']){
error_log('MAILCOWAUTH: SOGo SSO auth for user ' . $post['username']);
set_sasl_log($post['username'], $post['real_rip'], "SOGO");
$result = true;
}
}
if ($result === false){
// If it's a SOGo Request, don't check for protocol access
$service = ($isSOGoRequest) ? false : array($post['service'] => true);
$result = apppass_login($post['username'], $post['password'], $service, array(
'is_internal' => true,
'remote_addr' => $post['real_rip']
));
if ($result) {
error_log('MAILCOWAUTH: App auth for user ' . $post['username']);
set_sasl_log($post['username'], $post['real_rip'], $post['service']);
}
}
if ($result === false){
// Init Identity Provider
$iam_provider = identity_provider('init');
$iam_settings = identity_provider('get');
$result = user_login($post['username'], $post['password'], array('is_internal' => true));
if ($result) {
error_log('MAILCOWAUTH: User auth for user ' . $post['username']);
set_sasl_log($post['username'], $post['real_rip'], $post['service']);
}
}
if ($result) {
http_response_code(200); // OK
$return['success'] = true;
} else {
error_log("MAILCOWAUTH: Login failed for user " . $post['username']);
http_response_code(401); // Unauthorized
}
echo json_encode($return);
session_destroy();
exit;

View File

@@ -0,0 +1,57 @@
function auth_password_verify(request, password)
if request.domain == nil then
return dovecot.auth.PASSDB_RESULT_USER_UNKNOWN, "No such user"
end
local json = require "cjson"
local ltn12 = require "ltn12"
local https = require "ssl.https"
https.TIMEOUT = 30
local req = {
username = request.user,
password = password,
real_rip = request.real_rip,
service = request.service
}
local req_json = json.encode(req)
local res = {}
local b, c = https.request {
method = "POST",
url = "https://nginx:9082",
source = ltn12.source.string(req_json),
headers = {
["content-type"] = "application/json",
["content-length"] = tostring(#req_json)
},
sink = ltn12.sink.table(res),
insecure = true
}
-- Returning PASSDB_RESULT_PASSWORD_MISMATCH will reset the user's auth cache entry.
-- Returning PASSDB_RESULT_INTERNAL_FAILURE keeps the existing cache entry,
-- even if the TTL has expired. Useful to avoid cache eviction during backend issues.
if c ~= 200 and c ~= 401 then
dovecot.i_info("HTTP request failed with " .. c .. " for user " .. request.user)
return dovecot.auth.PASSDB_RESULT_PASSWORD_MISMATCH, "Upstream error"
end
local response_str = table.concat(res)
local is_response_valid, response_json = pcall(json.decode, response_str)
if not is_response_valid then
dovecot.i_info("Invalid JSON received: " .. response_str)
return dovecot.auth.PASSDB_RESULT_PASSWORD_MISMATCH, "Invalid response format"
end
if response_json.success == true then
return dovecot.auth.PASSDB_RESULT_OK, ""
end
return dovecot.auth.PASSDB_RESULT_PASSWORD_MISMATCH, "Failed to authenticate"
end
function auth_passdb_lookup(req)
return dovecot.auth.PASSDB_RESULT_USER_UNKNOWN, ""
end
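
The Lua passdb above and the PHP endpoint added earlier share a small JSON contract: a POST carrying username, password, real_rip and service, answered with {"success": bool} and HTTP 200 or 401, while any other status is treated as an upstream error. A minimal client-side sketch of that exchange (the requests usage here is illustrative only; verify=False mirrors the Lua "insecure = true"):

# Illustrative client for the internal auth endpoint used by passwd-verify.lua.
import requests

def check_auth(username, password, real_rip, service, url="https://nginx:9082"):
    payload = {
        "username": username,
        "password": password,
        "real_rip": real_rip,
        "service": service,
    }
    resp = requests.post(url, json=payload, timeout=30, verify=False)
    if resp.status_code not in (200, 401):
        # Mirrors the Lua handling: anything but 200/401 is an upstream failure.
        raise RuntimeError(f"Upstream error: HTTP {resp.status_code}")
    return resp.json().get("success", False)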

View File

@@ -0,0 +1,70 @@
[
{
"template": "dovecot-dict-sql-quota.conf.j2",
"output": "/etc/dovecot/sql/dovecot-dict-sql-quota.conf"
},
{
"template": "dovecot-dict-sql-userdb.conf.j2",
"output": "/etc/dovecot/sql/dovecot-dict-sql-userdb.conf"
},
{
"template": "dovecot-dict-sql-sieve_before.conf.j2",
"output": "/etc/dovecot/sql/dovecot-dict-sql-sieve_before.conf"
},
{
"template": "dovecot-dict-sql-sieve_after.conf.j2",
"output": "/etc/dovecot/sql/dovecot-dict-sql-sieve_after.conf"
},
{
"template": "mail_plugins.j2",
"output": "/etc/dovecot/mail_plugins"
},
{
"template": "mail_plugins_imap.j2",
"output": "/etc/dovecot/mail_plugins_imap"
},
{
"template": "mail_plugins_lmtp.j2",
"output": "/etc/dovecot/mail_plugins_lmtp"
},
{
"template": "global_sieve_after.sieve.j2",
"output": "/var/vmail/sieve/global_sieve_after.sieve"
},
{
"template": "global_sieve_before.sieve.j2",
"output": "/var/vmail/sieve/global_sieve_before.sieve"
},
{
"template": "dovecot-master.passwd.j2",
"output": "/etc/dovecot/dovecot-master.passwd"
},
{
"template": "dovecot-master.userdb.j2",
"output": "/etc/dovecot/dovecot-master.userdb"
},
{
"template": "sieve.creds.j2",
"output": "/etc/sogo/sieve.creds"
},
{
"template": "sogo-sso.pass.j2",
"output": "/etc/phpfpm/sogo-sso.pass"
},
{
"template": "cron.creds.j2",
"output": "/etc/sogo/cron.creds"
},
{
"template": "source_env.sh.j2",
"output": "/source_env.sh"
},
{
"template": "maildir_gc.sh.j2",
"output": "/usr/local/bin/maildir_gc.sh"
},
{
"template": "dovecot.conf.j2",
"output": "/etc/dovecot/dovecot.conf"
}
]

View File

@@ -0,0 +1 @@
{{ RAND_USER }}@mailcow.local:{{ RAND_PASS2 }}

View File

@@ -0,0 +1,14 @@
{% set QUOTA_TABLE = "quota2" if MASTER|lower in ["y", "yes"] else "quota2replica" %}
connect = "host=/var/run/mysqld/mysqld.sock dbname={{ DBNAME }} user={{ DBUSER }} password={{ DBPASS | escape_quotes }}"
map {
pattern = priv/quota/storage
table = {{ QUOTA_TABLE }}
username_field = username
value_field = bytes
}
map {
pattern = priv/quota/messages
table = {{ QUOTA_TABLE }}
username_field = username
value_field = messages
}

View File

@@ -0,0 +1,21 @@
connect = "host=/var/run/mysqld/mysqld.sock dbname={{ DBNAME }} user={{ DBUSER }} password={{ DBPASS | escape_quotes }}"
map {
pattern = priv/sieve/name/$script_name
table = sieve_after
username_field = username
value_field = id
fields {
script_name = $script_name
}
}
map {
pattern = priv/sieve/data/$id
table = sieve_after
username_field = username
value_field = script_data
fields {
id = $id
}
}

View File

@@ -0,0 +1,19 @@
connect = "host=/var/run/mysqld/mysqld.sock dbname={{ DBNAME }} user={{ DBUSER }} password={{ DBPASS | escape_quotes }}"
map {
pattern = priv/sieve/name/$script_name
table = sieve_before
username_field = username
value_field = id
fields {
script_name = $script_name
}
}
map {
pattern = priv/sieve/data/$id
table = sieve_before
username_field = username
value_field = script_data
fields {
id = $id
}
}

View File

@@ -0,0 +1,4 @@
driver = mysql
connect = "host=/var/run/mysqld/mysqld.sock dbname={{ DBNAME }} user={{ DBUSER }} password={{ DBPASS | escape_quotes }}"
user_query = SELECT CONCAT(JSON_UNQUOTE(JSON_VALUE(attributes, '$.mailbox_format')), mailbox_path_prefix, '%d/%n/{{ MAILDIR_SUB }}:VOLATILEDIR=/var/volatile/%u:INDEX=/var/vmail_index/%u') AS mail, '%s' AS protocol, 5000 AS uid, 5000 AS gid, concat('*:bytes=', quota) AS quota_rule FROM mailbox WHERE username = '%u' AND (active = '1' OR active = '2')
iterate_query = SELECT username FROM mailbox WHERE active = '1' OR active = '2';

View File

@@ -0,0 +1,3 @@
{%- set master_user = DOVECOT_MASTER_USER or RAND_USER %}
{%- set master_pass = DOVECOT_MASTER_PASS or RAND_PASS %}
{{ master_user }}@mailcow.local:{SHA1}{{ master_pass | sha1 }}::::::
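
The templates in this changeset rely on custom Jinja2 filters such as escape_quotes and sha1 (used above for the master password hash). A minimal sketch of how filters with those names could be registered — the implementations are assumptions for illustration, not the bootstrapper's code:

# Hypothetical registration of the custom Jinja2 filters referenced by the templates.
import base64
import hashlib
from jinja2 import Environment

def sha1_filter(value):
    # Assumption: base64-encoded SHA-1 digest, matching the {SHA1}-prefixed
    # entry written by dovecot-master.passwd.j2.
    return base64.b64encode(hashlib.sha1(str(value).encode("utf-8")).digest()).decode()

def escape_quotes_filter(value):
    # Backslash-escape double quotes so values can sit inside quoted config strings.
    return str(value).replace('"', '\\"')

env = Environment()
env.filters["sha1"] = sha1_filter
env.filters["escape_quotes"] = escape_quotes_filter

print(env.from_string('{{ "secret" | sha1 }}').render())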

View File

@@ -0,0 +1 @@
{{ DOVECOT_MASTER_USER or RAND_USER }}@mailcow.local::5000:5000::::

View File

@@ -1,23 +1,16 @@
# --------------------------------------------------------------------------
# Please create a file "extra.conf" for persistent overrides to dovecot.conf
# --------------------------------------------------------------------------
# LDAP example:
#passdb {
# args = /etc/dovecot/ldap/passdb.conf
# driver = ldap
#}
auth_mechanisms = plain login
#mail_debug = yes
#auth_debug = yes
#log_debug = category=fts-flatcurve # Activate Logging for Flatcurve FTS Searchings
log_path = syslog
disable_plaintext_auth = yes
# Uncomment on NFS share
#mmap_disable = yes
#mail_fsync = always
#mail_nfs_index = yes
#mail_nfs_storage = yes
login_log_format_elements = "user=<%u> method=%m rip=%r lip=%l mpid=%e %c %k"
mail_home = /var/vmail/%d/%n
mail_location = maildir:~/
@@ -53,7 +46,7 @@ mail_shared_explicit_inbox = yes
mail_prefetch_count = 30
passdb {
driver = lua
args = file=/etc/dovecot/lua/passwd-verify.lua blocking=yes
args = file=/etc/dovecot/auth/passwd-verify.lua blocking=yes cache_key=%s:%u:%w
result_success = return-ok
result_failure = continue
result_internalfail = continue
@@ -69,7 +62,7 @@ passdb {
# a return of the following passdb is mandatory
passdb {
driver = lua
args = file=/etc/dovecot/lua/passwd-verify.lua blocking=yes
args = file=/etc/dovecot/auth/passwd-verify.lua blocking=yes
}
# Set doveadm_password=your-secret-password in data/conf/dovecot/extra.conf (create if missing)
service doveadm {
@@ -78,7 +71,9 @@ service doveadm {
}
vsz_limit=2048 MB
}
!include /etc/dovecot/dovecot.folders.conf
{% include 'dovecot.folders.conf.j2' %}
protocols = imap sieve lmtp pop3
service dict {
unix_listener dict {
@@ -125,6 +120,7 @@ service managesieve-login {
}
service imap-login {
service_count = 1
process_min_avail = 2
process_limit = 10000
vsz_limit = 1G
user = dovenull
@@ -140,6 +136,7 @@ service imap-login {
}
service pop3-login {
service_count = 1
process_min_avail = 1
vsz_limit = 1G
inet_listener pop3_haproxy {
port = 10110
@@ -191,7 +188,7 @@ protocol sieve {
}
plugin {
# Allow "any" or "authenticated" to be used in ACLs
acl_anyone = </etc/dovecot/acl_anyone
acl_anyone = {{ ACL_ANYONE }}
acl_shared_dict = file:/var/vmail/shared-mailboxes.db
acl = vfile
acl_user = %u
@@ -239,7 +236,7 @@ plugin {
mail_crypt_global_public_key = </mail_crypt/ecpubkey.pem
mail_crypt_save_version = 2
# Enable compression while saving, lz4 Dovecot v2.2.11+
# Enable compression while saving, lz4 Dovecot v2.3.17+
zlib_save = lz4
mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
@@ -247,7 +244,7 @@ plugin {
mail_log_cached_only = yes
# Try set mail_replica
!include_try /etc/dovecot/mail_replica.conf
{% include 'mail_replica.conf.j2' %}
}
service quota-warning {
executable = script /usr/local/bin/quota_notify.py
@@ -274,10 +271,11 @@ service stats {
}
}
imap_max_line_length = 2 M
#auth_cache_verify_password_with_worker = yes
#auth_cache_negative_ttl = 0
#auth_cache_ttl = 30 s
#auth_cache_size = 2 M
auth_cache_verify_password_with_worker = yes
auth_cache_negative_ttl = 60s
auth_cache_ttl = 300s
auth_cache_size = 10M
auth_verbose_passwords = sha1:6
service replicator {
process_min_avail = 1
}
@@ -297,13 +295,15 @@ service replicator {
replication_max_conns = 10
doveadm_port = 12345
replication_dsync_parameters = -d -l 30 -U -n INBOX
{% include 'sogo_trusted_ip.conf.j2' %}
{% include 'shared_namespace.conf.j2' %}
{% include 'fts.conf.j2' %}
{% include 'sni.conf.j2' %}
# <Includes>
!include_try /etc/dovecot/sni.conf
!include_try /etc/dovecot/sogo_trusted_ip.conf
!include_try /etc/dovecot/extra.conf
!include_try /etc/dovecot/sogo-sso.conf
!include_try /etc/dovecot/shared_namespace.conf
!include_try /etc/dovecot/conf.d/fts.conf
# </Includes>
default_client_limit = 10400
default_vsz_limit = 1024 M

View File

@@ -1,308 +1,308 @@
namespace inbox {
inbox = yes
location =
separator = /
mailbox "Trash" {
auto = subscribe
special_use = \Trash
}
mailbox "Deleted Messages" {
special_use = \Trash
}
mailbox "Deleted Items" {
special_use = \Trash
}
mailbox "Rubbish" {
special_use = \Trash
}
mailbox "Gelöschte Objekte" {
special_use = \Trash
}
mailbox "Gelöschte Elemente" {
special_use = \Trash
}
mailbox "Papierkorb" {
special_use = \Trash
}
mailbox "Itens Excluidos" {
special_use = \Trash
}
mailbox "Itens Excluídos" {
special_use = \Trash
}
mailbox "Lixeira" {
special_use = \Trash
}
mailbox "Prullenbak" {
special_use = \Trash
}
mailbox "Odstránené položky" {
special_use = \Trash
}
mailbox "Koš" {
special_use = \Trash
}
mailbox "Verwijderde items" {
special_use = \Trash
}
mailbox "Удаленные" {
special_use = \Trash
}
mailbox "Удаленные элементы" {
special_use = \Trash
}
mailbox "Корзина" {
special_use = \Trash
}
mailbox "Видалені" {
special_use = \Trash
}
mailbox "Видалені елементи" {
special_use = \Trash
}
mailbox "Кошик" {
special_use = \Trash
}
mailbox "废件箱" {
special_use = \Trash
}
mailbox "已删除消息" {
special_use = \Trash
}
mailbox "已删除邮件" {
special_use = \Trash
}
mailbox "Archive" {
auto = subscribe
special_use = \Archive
}
mailbox "Archiv" {
special_use = \Archive
}
mailbox "Archives" {
special_use = \Archive
}
mailbox "Arquivo" {
special_use = \Archive
}
mailbox "Arquivos" {
special_use = \Archive
}
mailbox "Archief" {
special_use = \Archive
}
mailbox "Archív" {
special_use = \Archive
}
mailbox "Archivovať" {
special_use = \Archive
}
mailbox "归档" {
special_use = \Archive
}
mailbox "Архив" {
special_use = \Archive
}
mailbox "Архів" {
special_use = \Archive
}
mailbox "Sent" {
auto = subscribe
special_use = \Sent
}
mailbox "Sent Messages" {
special_use = \Sent
}
mailbox "Sent Items" {
special_use = \Sent
}
mailbox "已发送" {
special_use = \Sent
}
mailbox "已发送消息" {
special_use = \Sent
}
mailbox "已发送邮件" {
special_use = \Sent
}
mailbox "Отправленные" {
special_use = \Sent
}
mailbox "Отправленные элементы" {
special_use = \Sent
}
mailbox "Надіслані" {
special_use = \Sent
}
mailbox "Надіслані елементи" {
special_use = \Sent
}
mailbox "Gesendet" {
special_use = \Sent
}
mailbox "Gesendete Objekte" {
special_use = \Sent
}
mailbox "Gesendete Elemente" {
special_use = \Sent
}
mailbox "Itens Enviados" {
special_use = \Sent
}
mailbox "Enviados" {
special_use = \Sent
}
mailbox "Verzonden items" {
special_use = \Sent
}
mailbox "Verzonden" {
special_use = \Sent
}
mailbox "Odoslaná pošta" {
special_use = \Sent
}
mailbox "Odoslané" {
special_use = \Sent
}
mailbox "Drafts" {
auto = subscribe
special_use = \Drafts
}
mailbox "Entwürfe" {
special_use = \Drafts
}
mailbox "Rascunhos" {
special_use = \Drafts
}
mailbox "Concepten" {
special_use = \Drafts
}
mailbox "Koncepty" {
special_use = \Drafts
}
mailbox "草稿" {
special_use = \Drafts
}
mailbox "草稿箱" {
special_use = \Drafts
}
mailbox "Черновики" {
special_use = \Drafts
}
mailbox "Чернетки" {
special_use = \Drafts
}
mailbox "Junk" {
auto = subscribe
special_use = \Junk
}
mailbox "Junk-E-Mail" {
special_use = \Junk
}
mailbox "Junk E-Mail" {
special_use = \Junk
}
mailbox "Spam" {
special_use = \Junk
}
mailbox "Lixo Eletrônico" {
special_use = \Junk
}
mailbox "Nevyžiadaná pošta" {
special_use = \Junk
}
mailbox "Infikované položky" {
special_use = \Junk
}
mailbox "Ongewenste e-mail" {
special_use = \Junk
}
mailbox "垃圾" {
special_use = \Junk
}
mailbox "垃圾箱" {
special_use = \Junk
}
mailbox "Нежелательная почта" {
special_use = \Junk
}
mailbox "Спам" {
special_use = \Junk
}
mailbox "Небажана пошта" {
special_use = \Junk
}
mailbox "Koncepty" {
special_use = \Drafts
}
mailbox "Nevyžádaná pošta" {
special_use = \Junk
}
mailbox "Odstraněná pošta" {
special_use = \Trash
}
mailbox "Odeslaná pošta" {
special_use = \Sent
}
mailbox "Skräp" {
special_use = \Trash
}
mailbox "Borttagna Meddelanden" {
special_use = \Trash
}
mailbox "Arkiv" {
special_use = \Archive
}
mailbox "Arkeverat" {
special_use = \Archive
}
mailbox "Skickat" {
special_use = \Sent
}
mailbox "Skickade Meddelanden" {
special_use = \Sent
}
mailbox "Utkast" {
special_use = \Drafts
}
mailbox "Skraldespand" {
special_use = \Trash
}
mailbox "Slettet mails" {
special_use = \Trash
}
mailbox "Arkiv" {
special_use = \Archive
}
mailbox "Arkiveret mails" {
special_use = \Archive
}
mailbox "Sendt" {
special_use = \Sent
}
mailbox "Sendte mails" {
special_use = \Sent
}
mailbox "Udkast" {
special_use = \Drafts
}
mailbox "Kladde" {
special_use = \Drafts
}
mailbox "Πρόχειρα" {
special_use = \Drafts
}
mailbox "Απεσταλμένα" {
special_use = \Sent
}
mailbox "Κάδος απορριμάτων" {
special_use = \Trash
}
mailbox "Ανεπιθύμητα" {
special_use = \Junk
}
mailbox "Αρχειοθετημένα" {
special_use = \Archive
}
prefix =
}
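The folders partial above only declares localized mailbox names with their RFC 6154 special-use flags; the canonical English mailboxes additionally carry auto = subscribe. Whether a rendered and reloaded configuration actually exposes those flags can be checked from any IMAP client. The short sketch below uses Python's imaplib with placeholder host and credentials, and assumes Dovecot includes the special-use attributes in a plain LIST response for mailboxes that have special_use set.

# Minimal sketch: log in over IMAPS and print the LIST response, which should
# show attributes such as \Trash or \Sent next to the special-use mailboxes.
# Host and credentials are placeholders.
import imaplib

HOST = "mail.example.org"
USER = "user@example.org"
PASSWORD = "secret"

with imaplib.IMAP4_SSL(HOST) as conn:
    conn.login(USER, PASSWORD)
    typ, listing = conn.list()   # e.g. ('OK', [b'(\\HasNoChildren \\Trash) "/" Trash', ...])
    for line in listing:
        print(line.decode())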

Some files were not shown because too many files have changed in this diff.