Merge remote-tracking branch 'origin/stable/14' into dev

Oliver Günther
2024-09-03 10:08:16 +02:00
25 changed files with 85 additions and 522 deletions

.gitignore

@@ -1,2 +1,5 @@
# Jetbrains IDE
.idea/
*.swp
*.tar.gz


@@ -1,5 +1,6 @@
# OpenProject Deploy
Recipes and examples for deploying OpenProject using Docker, Docker Compose, Kubernetes, etc.
Recipes and examples for deploying OpenProject.
* [Docker Compose](./compose/)
* [Kubernetes](./kubernetes/)


@@ -6,7 +6,7 @@
# Please refer to our documentation to see all possible variables:
# https://www.openproject.org/docs/installation-and-operations/configuration/environment/
#
TAG=12
TAG=14-slim
OPENPROJECT_HTTPS=false
OPENPROJECT_HOST__NAME=localhost
PORT=127.0.0.1:8080

compose/.gitignore

@@ -1 +1,3 @@
.env
docker-compose.override.yml


@@ -4,36 +4,57 @@
Clone this repository:
git clone https://github.com/opf/openproject-deploy --depth=1 --branch=stable/12 openproject
```shell
git clone https://github.com/opf/openproject-deploy --depth=1 --branch=stable/14 openproject
```
Go to the compose folder:
Make sure you are using the latest version of the Docker images:
docker-compose pull
```shell
cd openproject/compose
```
Copy the example `.env` file and edit any values you want to change:
```shell
cp .env.example .env
vim .env
```
Launch the containers:
Next, start the containers in the background, making sure to pull the latest versions of all the images used.
docker-compose up -d
```shell
docker compose up -d --build --pull always
```
After a while, OpenProject should be up and running on <http://localhost:8080>.
### Troubleshooting
**pull access denied for openproject/proxy, repository does not exist or may require 'docker login': denied: requested access to the resource is denied**
If you encounter this after `docker compose up`, it is merely a warning and can be ignored.
If it happens during `docker compose pull`, it is likewise only a warning, but it causes the command to exit with a failure code even though all images are pulled.
To prevent this, add the `--ignore-buildable` option: `docker compose pull --ignore-buildable`.
### HTTPS/SSL
By default OpenProject starts with the HTTPS option **enabled**, but it **does not** handle SSL termination itself. This
is usually done separately via a [reverse proxy
setup](https://www.openproject.org/docs/installation-and-operations/installation/docker/#apache-reverse-proxy-setup).
Without this you will run into an `ERR_SSL_PROTOCOL_ERROR` when accessing OpenProject.
See below how to disable HTTPS.
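OpenProject's HTTPS mode is toggled via the environment. A minimal sketch, matching the `.env.example` shown above:

```shell
# .env – run without HTTPS when no SSL termination is in place (local testing only)
OPENPROJECT_HTTPS=false
```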
Be aware that if you want to use the integrated Caddy proxy as a proxy with outbound connections, you need to rewrite the
`Caddyfile`. In its default state, it is configured to forward the `X-Forwarded-*` headers from the reverse proxy in
front of it rather than setting them itself. Forwarding these headers from untrusted sources is considered a security
flaw and should instead be solved by configuring `trusted_proxies` inside the `Caddyfile`. For more information read
the [Caddy documentation](https://caddyserver.com/docs/caddyfile/directives/reverse_proxy).
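As an illustrative sketch only (not part of this repository), when Caddy itself is the first point of contact, the header-forwarding directives could be dropped in favor of Caddy's global `trusted_proxies` option; `web` here stands in for the upstream app host:

```
{
    servers {
        # trust only private ranges when deciding which X-Forwarded-* values to honor
        trusted_proxies static private_ranges
    }
}

:80 {
    reverse_proxy * http://web:8080
}
```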
### PORT
By default the port is bound to `0.0.0.0`, which means OpenProject will be publicly accessible.
See below how to change that.
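To avoid a public binding, the host address can be included in the `PORT` variable, as the `.env.example` above already does:

```shell
# .env – bind the proxy to localhost only instead of all interfaces
PORT=127.0.0.1:8080
```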
@@ -117,15 +138,18 @@ For the complete documentation, please refer to https://docs.openproject.org/ins
### Network issues
If you're running into weird network issues and timeouts such as the one described in
[OP#42802](https://community.openproject.org/work_packages/42802), you might have success in removing the two separate
frontend and backend networks. This might be connected to using podman for orchestration, although we haven't been able
to confirm this.
### SMTP setup fails: Network is unreachable.
Make sure your container has DNS resolution to access the external SMTP server when set up as described in
[OP#44515](https://community.openproject.org/work_packages/44515).
```yml
worker:
  dns:
    - "Your DNS IP" # OR add a public DNS resolver like 8.8.8.8
```


@@ -1,5 +1,3 @@
version: "3.7"
networks:
  frontend:
  backend:
@@ -11,7 +9,7 @@ volumes:
x-op-restart-policy: &restart_policy
  restart: unless-stopped
x-op-image: &image
  image: openproject/community:${TAG:-12}
  image: openproject/openproject:${TAG:-14-slim}
x-op-app: &app
  <<: [*image, *restart_policy]
  environment:
@@ -49,13 +47,14 @@ services:
      - backend
  proxy:
    <<: [*image, *restart_policy]
    command: "./docker/prod/proxy"
    build:
      context: ./proxy
      args:
        APP_HOST: web
    image: openproject/proxy
    <<: *restart_policy
    ports:
      - "${PORT:-8080}:80"
    environment:
      APP_HOST: web
      OPENPROJECT_RAILS__RELATIVE__URL__ROOT: "${OPENPROJECT_RAILS__RELATIVE__URL__ROOT:-}"
    depends_on:
      - web
    networks:
@@ -115,4 +114,3 @@ services:
    restart: on-failure
    networks:
      - backend


@@ -0,0 +1,17 @@
:80 {
    reverse_proxy * http://${APP_HOST}:8080 {
        # The following directives are needed to make the proxy explicitly forward the X-Forwarded-* headers.
        # If unset, Caddy will reset them. See: https://caddyserver.com/docs/caddyfile/directives/reverse_proxy#defaults
        # This is needed if you are using a reverse proxy in front of the compose stack and Caddy is NOT your
        # first point of contact.
        # When Caddy is reachable as the first point of contact, it is highly recommended to configure the
        # server's global `trusted_proxies` directive. See: https://caddyserver.com/docs/caddyfile/options#trusted-proxies
        header_up X-Forwarded-Proto {header.X-Forwarded-Proto}
        header_up X-Forwarded-For {header.X-Forwarded-For}
    }
    file_server
    log
}

compose/proxy/Dockerfile

@@ -0,0 +1,8 @@
FROM caddy:2
COPY ./Caddyfile.template /etc/caddy/Caddyfile.template
ARG APP_HOST
RUN sed 's|${APP_HOST}|'"$APP_HOST"'|g' /etc/caddy/Caddyfile.template > /etc/caddy/Caddyfile
ENTRYPOINT ["caddy", "run", "--config", "/etc/caddy/Caddyfile"]


@@ -1,13 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          openproject.network/backend: "true"
  podSelector:
    matchLabels:
      openproject.network/backend: "true"


@@ -1,13 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend
spec:
  ingress:
  - from:
    - podSelector:
        matchLabels:
          openproject.network/frontend: "true"
  podSelector:
    matchLabels:
      openproject.network/frontend: "true"


@@ -1,14 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    openproject.service: opdata
  name: opdata
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}


@@ -1,14 +0,0 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    openproject.service: pgdata
  name: pgdata
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}


@@ -1,24 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    openproject.service: cache
  name: cache
spec:
  replicas: 1
  selector:
    matchLabels:
      openproject.service: cache
  strategy: {}
  template:
    metadata:
      labels:
        openproject.network/backend: "true"
        openproject.service: cache
    spec:
      containers:
      - image: memcached
        name: cache
        resources: {}
      restartPolicy: Always
status: {}


@@ -1,41 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    openproject.service: db
  name: db
spec:
  replicas: 1
  selector:
    matchLabels:
      openproject.service: db
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        openproject.network/backend: "true"
        openproject.service: db
    spec:
      containers:
      - env:
        - name: POSTGRES_DB
          value: openproject
        - name: POSTGRES_PASSWORD
          value: p4ssw0rd
        image: postgres:13
        ports:
        - containerPort: 5432
          name: psql
        name: db
        resources: {}
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: pgdata
      restartPolicy: Always
      terminationGracePeriodSeconds: 3
      volumes:
      - name: pgdata
        persistentVolumeClaim:
          claimName: pgdata
status: {}


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: cache
  labels:
    openproject.service: cache
spec:
  type: NodePort
  selector:
    openproject.service: cache
  ports:
  - name: cache
    protocol: TCP
    port: 11211
    targetPort: 11211


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: db
  labels:
    openproject.service: db
spec:
  type: NodePort
  selector:
    openproject.service: db
  ports:
  - name: db
    protocol: TCP
    port: 5432
    targetPort: 5432


@@ -1,39 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  labels:
    openproject.network/backend: "true"
    openproject.service: seeder
  name: seeder
spec:
  containers:
  - args:
    - ./docker/prod/seeder
    env:
    - name: DATABASE_URL
      value: postgres://postgres:p4ssw0rd@$(DB_SERVICE_HOST):$(DB_SERVICE_PORT)/openproject?pool=20&encoding=unicode&reconnect=true
    - name: IMAP_ENABLED
      value: "false"
    - name: OPENPROJECT_CACHE__MEMCACHE__SERVER
      value: $(CACHE_SERVICE_HOST):$(CACHE_SERVICE_PORT)
    - name: OPENPROJECT_RAILS__RELATIVE__URL__ROOT
    - name: RAILS_CACHE_STORE
      value: memcache
    - name: RAILS_MAX_THREADS
      value: "16"
    - name: RAILS_MIN_THREADS
      value: "4"
    - name: OPENPROJECT_EDITION
      value: standard
    image: openproject/community:12
    name: seeder
    resources: {}
    volumeMounts:
    - mountPath: /var/openproject/assets
      name: opdata
  restartPolicy: OnFailure
  volumes:
  - name: opdata
    persistentVolumeClaim:
      claimName: opdata
status: {}


@@ -1,48 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    openproject.service: cron
  name: cron
spec:
  replicas: 1
  selector:
    matchLabels:
      openproject.service: cron
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        openproject.network/backend: "true"
        openproject.service: cron
    spec:
      containers:
      - args:
        - ./docker/prod/cron
        env:
        - name: DATABASE_URL
          value: postgres://postgres:p4ssw0rd@db/openproject?pool=20&encoding=unicode&reconnect=true
        - name: IMAP_ENABLED
          value: "false"
        - name: OPENPROJECT_CACHE__MEMCACHE__SERVER
          value: cache:11211
        - name: OPENPROJECT_RAILS__RELATIVE__URL__ROOT
        - name: RAILS_CACHE_STORE
          value: memcache
        - name: RAILS_MAX_THREADS
          value: "16"
        - name: RAILS_MIN_THREADS
          value: "4"
        image: openproject/community:12
        name: cron
        resources: {}
        volumeMounts:
        - mountPath: /var/openproject/assets
          name: opdata
      restartPolicy: Always
      volumes:
      - name: opdata
        persistentVolumeClaim:
          claimName: opdata
status: {}


@@ -1,64 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    openproject.service: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      openproject.service: web
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        openproject.network/backend: "true"
        openproject.network/frontend: "true"
        openproject.service: web
    spec:
      containers:
      - args:
        - ./docker/prod/web
        env:
        - name: DATABASE_URL
          value: postgres://postgres:p4ssw0rd@$(DB_SERVICE_HOST):$(DB_SERVICE_PORT)/openproject?pool=20&encoding=unicode&reconnect=true
        - name: IMAP_ENABLED
          value: "false"
        - name: OPENPROJECT_CACHE__MEMCACHE__SERVER
          value: $(CACHE_SERVICE_HOST):$(CACHE_SERVICE_PORT)
        - name: OPENPROJECT_RAILS__RELATIVE__URL__ROOT
        - name: RAILS_CACHE_STORE
          value: memcache
        - name: RAILS_MAX_THREADS
          value: "16"
        - name: RAILS_MIN_THREADS
          value: "4"
        - name: OPENPROJECT_EDITION
          value: standard
        image: openproject/community:12
        ports:
        - containerPort: 8080
          name: http
        livenessProbe:
          exec:
            command:
            - curl
            - -f
            - http://localhost:8080/health_checks/default
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 3
        name: web
        resources: {}
        volumeMounts:
        - mountPath: /var/openproject/assets
          name: opdata
      restartPolicy: Always
      volumes:
      - name: opdata
        persistentVolumeClaim:
          claimName: opdata
status: {}


@@ -1,50 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    openproject.service: worker
  name: worker
spec:
  replicas: 1
  selector:
    matchLabels:
      openproject.service: worker
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        openproject.network/backend: "true"
        openproject.service: worker
    spec:
      containers:
      - args:
        - ./docker/prod/worker
        env:
        - name: DATABASE_URL
          value: postgres://postgres:p4ssw0rd@$(DB_SERVICE_HOST):$(DB_SERVICE_PORT)/openproject?pool=20&encoding=unicode&reconnect=true
        - name: IMAP_ENABLED
          value: "false"
        - name: OPENPROJECT_CACHE__MEMCACHE__SERVER
          value: $(CACHE_SERVICE_HOST):$(CACHE_SERVICE_PORT)
        - name: OPENPROJECT_RAILS__RELATIVE__URL__ROOT
        - name: RAILS_CACHE_STORE
          value: memcache
        - name: RAILS_MAX_THREADS
          value: "16"
        - name: RAILS_MIN_THREADS
          value: "4"
        - name: OPENPROJECT_EDITION
          value: standard
        image: openproject/community:12
        name: worker
        resources: {}
        volumeMounts:
        - mountPath: /var/openproject/assets
          name: opdata
      restartPolicy: Always
      volumes:
      - name: opdata
        persistentVolumeClaim:
          claimName: opdata
status: {}


@@ -1,15 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    openproject.service: web
spec:
  type: NodePort
  selector:
    openproject.service: web
  ports:
  - name: web
    protocol: TCP
    port: 8080
    targetPort: 8080


@@ -1,32 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    openproject.service: proxy
  name: proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      openproject.service: proxy
  strategy: {}
  template:
    metadata:
      labels:
        openproject.network/frontend: "true"
        openproject.service: proxy
    spec:
      containers:
      - args:
        - ./docker/prod/proxy
        env:
        - name: APP_HOST
          value: $(WEB_SERVICE_HOST)
        - name: OPENPROJECT_RAILS__RELATIVE__URL__ROOT
        image: openproject/community:12
        name: proxy
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
status: {}


@@ -1,14 +0,0 @@
apiVersion: v1
kind: Service
metadata:
  labels:
    openproject.service: proxy
  name: proxy
spec:
  type: NodePort
  selector:
    openproject.service: proxy
  ports:
  - name: http
    port: 80
    targetPort: 80


@@ -1,17 +0,0 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: proxy-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: k8s.openproject-dev.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: proxy
            port:
              number: 80


@@ -1,65 +1,3 @@
# OpenProject installation using Kubernetes
This is an example setup of OpenProject on Kubernetes.
## Install
Clone this repository:
```
git clone https://github.com/opf/openproject-deploy --depth=1 --branch=dev openproject
```
Go to the kubernetes folder:
```
cd openproject/kubernetes
```
Adjust the host name for the ingress in [09-proxy-ingress.yaml](./09-proxy-ingress.yaml).
The default value `k8s.openproject-dev.com` simply points to `127.0.0.1`.
You will have to insert the actual host name here and set up the DNS so that it points to the cluster IP.
Next, apply the definitions:
```
kubectl apply -f .
```
## Ingress
For the ingress to work you will need to enable an ingress addon in your cluster.
If you already have a load balancer you want to use to expose the service,
simply delete [09-proxy-ingress.yaml](./09-proxy-ingress.yaml) and integrate
the [proxy service](./08-proxy-service.yaml) in your existing ingress or load balancer.
## SSL Termination
This setup does not include SSL termination.
The ingress simply listens on port 80 and serves HTTP requests.
You will have to set up HTTPS yourself.
You can find more information on this in the [kubernetes docs](https://kubernetes.github.io/ingress-nginx/user-guide/tls/).
## Scaling
You can adjust the `replica` specs in the [web](./05-web-deployment.yaml) and [worker](./05-worker-deployment.yaml) deployments
to scale up the respective processes.
## Troubleshooting
### The **db** deployment fails due to the data directory not being empty
This can happen if your cluster creates the `opdata` PVC (persistent volume claim) with an ext4 file system,
which will automatically contain a `lost+found` folder.
To fix the issue you can add the following to the [db-deployment](./02-db-deployment.yaml)'s env next to
`POSTGRES_USER` and `POSTGRES_PASSWORD`:
```
- name: PGDATA
  value: /var/lib/postgresql/data/pgdata
```
This makes the postgres container use a subfolder of the mount path (`/var/lib/postgresql/data`) as the data directory.
Please use the [OpenProject Helm chart](https://charts.openproject.org) to install OpenProject on Kubernetes.
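A minimal sketch of a chart-based install, assuming the repository alias `openproject` and the chart name `openproject` (check the chart page for the current names and values):

```
helm repo add openproject https://charts.openproject.org
helm repo update
helm install openproject openproject/openproject
```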