Compare commits

...

48 Commits

Author SHA1 Message Date
Jan Novak
0eab64c954 hosting: some config files for host: shadow, some named conf for utility-101-shadow vm 2026-02-20 02:16:16 +01:00
Jan Novak
be362a5ab7 gitops/cilium: configure gateway and wildcard certificate it needs 2026-02-20 02:15:02 +01:00
Jan Novak
bb9f2ae3ce docker-30: several new and forgotten config files relevant to services running in docker 2026-02-20 02:13:55 +01:00
Jan Novak
dc947165a4 gitops/ghost: add httproute resource aka gatewayApi instead of ingress 2026-02-20 02:13:09 +01:00
Jan Novak
1cd7625220 gitops/cert-manager: add dns challenger cluster issuer, add deployment/service with socat proxy that works around my internet provider's meddling with dns traffic on port 53. 2026-02-20 02:11:50 +01:00
Jan Novak
409f8247e6 gitops/cert-manager: enable Gateway API support
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 01:43:04 +01:00
Jan Novak
8608696909 gitops/cilium: fix gateway.yaml indentation
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 01:04:18 +01:00
Jan Novak
6454c893cb gitops/cilium: move gateway listeners from helm values to Gateway resource
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 01:02:14 +01:00
Jan Novak
b2daa822a6 gitops/cilium: configure gateway listeners and allow routes from all namespaces
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-19 00:51:37 +01:00
Jan Novak
8ae7b086a5 gitops/00-crds: add Gateway API v1.2.0 CRDs for Cilium gateway support
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 12:17:46 +01:00
Jan Novak
4b7ed6085b gitops/cilium: enable Gateway API and add HTTPRoute for ghost
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-17 11:55:49 +01:00
Jan Novak
0d97a796e9 gitops/velero: add manifests and runbook - kustomization is yet to be created 2026-01-17 00:07:03 +01:00
Jan Novak
b9f99c2950 gitops/plane: fix issuer on ingress 2026-01-16 13:21:15 +01:00
Jan Novak
a20ae55b8f gitops/cilium: specify which interfaces it handles to not clash with tailscaled 2026-01-15 01:24:49 +01:00
Jan Novak
36f447c39c gitops: assorted leftovers and fixes 2026-01-14 14:49:54 +01:00
Jan Novak
76e3ff9d03 kubernetes/terraform: several updates 2026-01-14 14:49:19 +01:00
Jan Novak
90a44bd59f vault: deployment manifest, some docs, backup script - expected to run on docker host 2026-01-14 14:48:09 +01:00
Jan Novak
b5e1f4b737 gitops/external-secrets: change roleid 2026-01-13 10:28:43 +01:00
Jan Novak
099734fb6b gitops/ghost: prepare initial deployment with secrets in vault 2026-01-08 10:40:13 +01:00
Jan Novak
b081e947f5 gitops/plane: remove doc_upload_size_limit which seems to be causing crashes 2026-01-07 22:42:26 +01:00
Jan Novak
d908e788af gitops/external-secrets: fix cloudsecretstore location where to look for approle secret_id 2026-01-07 22:16:13 +01:00
Jan Novak
81f2e754ed gitops/external-secrets: set deployment replicas to 1 and add cloudsecretstore 2026-01-07 22:05:31 +01:00
Jan Novak
a3a6ef79fe gitops/external-secrets: do not use outdated api version of secretstore 2026-01-07 20:19:34 +01:00
Jan Novak
52089bc1b4 gitops: fix external secrets CRDs helm release 2026-01-07 20:02:57 +01:00
Jan Novak
a3c8cc9e47 gitops: move external-secrets helmrepo to 00-crds 2026-01-07 19:54:24 +01:00
Jan Novak
b6f775fd2b gitops/external-secrets: deploy CRDs first in another kustomization 2026-01-07 19:52:16 +01:00
Jan Novak
ed14d74738 gitops/external-secrets: add helmrelease + some coredns config for vault resolving 2026-01-07 19:43:39 +01:00
Jan Novak
060a24437b gitops/plane: fix ingress 2026-01-06 10:57:11 +01:00
Jan Novak
c8011579c9 gitops: fix grafana ingress 2026-01-06 10:39:52 +01:00
Jan Novak
5bfc1f5fe5 gitops: add kube-prometheus 2026-01-06 09:57:26 +01:00
Jan Novak
7be7e0871c gitops: fix oauth kustomization 2026-01-05 22:21:12 +01:00
Jan Novak
437c94f2e1 gitops: add oauth-proxy + some changes in plane helmrelease 2026-01-05 22:19:31 +01:00
Jan Novak
edd945b709 gitops/plane: use app version v1.2.1 2026-01-05 11:48:57 +01:00
Jan Novak
1e9e981642 gitops/plane: use existing version of helm chart 2026-01-05 11:44:20 +01:00
Jan Novak
e4bc0424a7 gitops: add plane kustomization 2026-01-05 11:34:46 +01:00
Jan Novak
1096c7b603 gitops: plane - project management 2026-01-05 11:32:55 +01:00
Jan Novak
d3697c8132 terraform: extend kubernetes a little bit 2026-01-02 23:17:43 +01:00
Jan Novak
bdf82c7e49 gitops: cert-manager (semi manual deployment / incomplete) 2026-01-02 23:16:41 +01:00
Jan Novak
777772019c docker-30: kanidm deployment 2026-01-02 23:15:30 +01:00
Jan Novak
0e72629197 gitops: add cert-manager 2026-01-01 23:10:56 +01:00
Jan Novak
01fe056584 gitops/cilium: configure l2 ip address announcement for external loadbalancer ips 2026-01-01 20:21:37 +01:00
Jan Novak
6447e39163 gitops/podinfo: remove values.yaml 2025-12-30 23:37:37 +01:00
Jan Novak
dd9a90e8b2 gitops: add podinfo kustomization, remove everything related to kuard which has no available image anyway 2025-12-30 23:36:01 +01:00
Jan Novak
817a3c8335 gitops: add podinfo deployment 2025-12-30 23:33:27 +01:00
Jan Novak
d275ec09a4 gitops: fix repo path for home-kubernetes and kuard image version 2025-12-30 23:22:58 +01:00
Flux
f3c1e5c635 Add Flux v2.7.5 component manifests 2025-12-30 23:16:55 +01:00
Jan Novak
fcdafc32d6 terraform/kube: make sure secrets relevant to kube deployment are not committed to the repo 2025-12-29 14:36:09 +01:00
Jan Novak
6ce7c3a530 remove unwanted secrets (expired already anyway) in the repo 2025-12-29 14:35:34 +01:00
130 changed files with 29098 additions and 3688 deletions

.gitignore

@@ -1 +1,7 @@
.DS_Store
.terraform/
.terraform.lock.hcl
kubernetes-kvm-terraform/join-command.txt
kubernetes-kvm-terraform/kubeconfig


@@ -57,6 +57,15 @@ services:
- GITEA__server__ROOT_URL=https://gitea.home.hrajfrisbee.cz
- GITEA__security__SECRET_KEY=${GITEA_SECRET_KEY}
- GITEA__security__INTERNAL_TOKEN=${INTERNAL_TOKEN}
- GITEA__mailer__ENABLED=true
- GITEA__mailer__PROTOCOL=smtps
- GITEA__mailer__SMTP_ADDR=smtp.gmail.com
- GITEA__mailer__SMTP_PORT=465
- GITEA__mailer__USER=kacerr.cz@gmail.com
- GITEA__mailer__PASSWD=${GMAIL_GITEA_APP_PASSWORD}
- GITEA__mailer__FROM=kacerr.cz+gitea@gmail.com
- GITEA__packages__ENABLED=true
#- GITEA__storage__STORAGE_TYPE=minio
#- GITEA__storage__MINIO_ENDPOINT=minio:9000
#- GITEA__storage__MINIO_ACCESS_KEY_ID=gitea
@@ -83,7 +92,7 @@ services:
depends_on:
- gitea
environment:
-      GITEA_INSTANCE_URL: http://gitea:3000
+      GITEA_INSTANCE_URL: https://gitea.home.hrajfrisbee.cz/
GITEA_RUNNER_REGISTRATION_TOKEN: ${RUNNER_TOKEN}
volumes:
- ./runner-data:/data

docker-30/kanidm/readme.md

@@ -0,0 +1,118 @@
## add user to k8s group
based on: https://blog.kammel.dev/post/k8s_home_lab_2025_06/
```bash
export GROUP_NAME=k8s_users
kanidm group create ${GROUP_NAME}
kanidm group add-members ${GROUP_NAME} novakj
export OAUTH2_NAME=k8s
kanidm system oauth2 create-public ${OAUTH2_NAME} ${OAUTH2_NAME} http://localhost:8000
kanidm system oauth2 add-redirect-url ${OAUTH2_NAME} http://localhost:8000
kanidm system oauth2 update-scope-map ${OAUTH2_NAME} ${GROUP_NAME} email openid profile groups
kanidm system oauth2 enable-localhost-redirects ${OAUTH2_NAME}
kubectl oidc-login setup \
--oidc-issuer-url=https://idm.home.hrajfrisbee.cz/oauth2/openid/k8s \
--oidc-client-id=k8s
kubectl config set-credentials oidc \
--exec-api-version=client.authentication.k8s.io/v1 \
--exec-interactive-mode=Never \
--exec-command=kubectl \
--exec-arg=oidc-login \
--exec-arg=get-token \
--exec-arg="--oidc-issuer-url=https://idm.home.hrajfrisbee.cz/oauth2/openid/k8s" \
--exec-arg="--oidc-client-id=k8s"
kubectl create clusterrolebinding oidc-cluster-admin \
--clusterrole=cluster-admin \
--user='https://idm.home.hrajfrisbee.cz/oauth2/openid/k8s#35842461-a1c4-4ad6-8b29-697c5ddbfe84'
```
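The setup above can be sanity-checked with a first OIDC-authenticated request; a hedged sketch (the cache path is kubelogin's documented default, not something from this repo):

```bash
# First call opens a browser for the kanidm login flow via oidc-login.
kubectl --user=oidc get nodes
# kubelogin caches the obtained token under its default cache directory:
ls ~/.kube/cache/oidc-login/
```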
## commands
```bash
# recover admin password
# on the docker host
docker exec -i -t kanidmd kanidmd recover-account admin
docker exec -i -t kanidmd kanidmd recover-account idm_admin
# kanidm management commands (can be run from any logged-in client)
kanidm person credential create-reset-token novakj
kanidm person get novakj | grep memberof
kanidm group get k8s_users
kanidm group get idm_all_accounts
kanidm group get idm_all_persons
kanidm group account-policy credential-type-minimum idm_all_accounts any
kanidm person get novakj | grep memberof
kanidm group get idm_people_self_name_write
```
## configure oauth proxy
```bash
kanidm system oauth2 create oauth2-proxy "OAuth2 Proxy" https://oauth2-proxy.lab.home.hrajfrisbee.cz/oauth2/callback
kanidm system oauth2 set-landing-url oauth2-proxy https://oauth2-proxy.lab.home.hrajfrisbee.cz
kanidm system oauth2 enable-pkce oauth2-proxy
kanidm system oauth2 warning-insecure-client-disable-pkce oauth2-proxy # if proxy doesn't support PKCE
kanidm system oauth2 get oauth2-proxy # note the client secret
# update incorrect urls if needed
kanidm system oauth2 remove-redirect-url oauth2-proxy <incorrect-url>
kanidm system oauth2 add-redirect-url oauth2-proxy https://oauth2-proxy.lab.home.hrajfrisbee.cz/oauth2/callback
kanidm system oauth2 set-landing-url oauth2-proxy https://oauth2-proxy.lab.home.hrajfrisbee.cz
# output
✔ Multiple authentication tokens exist. Please select one · idm_admin@idm.home.hrajfrisbee.cz
---
class: account
class: key_object
class: key_object_internal
class: key_object_jwe_a128gcm
class: key_object_jwt_es256
class: memberof
class: oauth2_resource_server
class: oauth2_resource_server_basic
class: object
displayname: OAuth2 Proxy
key_internal_data: 69df0a387991455f7c9800f13b881803: valid jwe_a128gcm 0
key_internal_data: c5f61c48a9c0eb61ba993a36748826cc: valid jws_es256 0
name: oauth2-proxy
oauth2_allow_insecure_client_disable_pkce: true
oauth2_rs_basic_secret: hidden
oauth2_rs_origin_landing: https://oauth2-proxylab.home.hrajfrisbee.cz/
oauth2_strict_redirect_uri: true
spn: oauth2-proxy@idm.home.hrajfrisbee.cz
uuid: d0dcbad5-90e4-4e36-a51b-653624069009
secret: 7KJbUe5x35NVCT1VbzZfhYBU19cz9Xe9Z1fvw4WazrkHX2c8
kanidm system oauth2 update-scope-map oauth2-proxy k8s_users openid profile email
```
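For completeness, a hedged sketch of how the proxy side could consume this client: the flag names are standard oauth2-proxy options, but the image tag, issuer path, and secret value are assumptions (the secret is the one printed by `kanidm system oauth2 get oauth2-proxy` above):

```bash
docker run -d --name oauth2-proxy \
  -p 4180:4180 \
  quay.io/oauth2-proxy/oauth2-proxy:latest \
  --provider=oidc \
  --oidc-issuer-url=https://idm.home.hrajfrisbee.cz/oauth2/openid/oauth2-proxy \
  --client-id=oauth2-proxy \
  --client-secret='<secret from kanidm system oauth2 get>' \
  --redirect-url=https://oauth2-proxy.lab.home.hrajfrisbee.cz/oauth2/callback \
  --email-domain='*' \
  --cookie-secret="$(openssl rand -base64 32 | head -c 32)"
```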
```bash
docker run -d --name=kanidmd --restart=always \
-p '8443:8443' \
-p '3636:3636' \
--volume /srv/docker/kanidm/data:/data \
docker.io/kanidm/server:latest
# one-shot self-signed certificate generation (no ports or restart policy needed)
docker run --rm -i -t \
  --volume /srv/docker/kanidm/data:/data \
  docker.io/kanidm/server:latest \
  kanidmd cert-generate
```


@@ -0,0 +1,136 @@
# The server configuration file version.
version = "2"
# The webserver bind address. Requires TLS certificates.
# If the port is set to 443 you may require the
# NET_BIND_SERVICE capability. This accepts a single address
# or an array of addresses to listen on.
# Defaults to "127.0.0.1:8443"
bindaddress = "0.0.0.0:8443"
#
# The read-only ldap server bind address. Requires
# TLS certificates. If set to 636 you may require the
# NET_BIND_SERVICE capability. This accepts a single address
# or an array of addresses to listen on.
# Defaults to "" (disabled)
# ldapbindaddress = "0.0.0.0:3636"
#
# The path to the kanidm database.
db_path = "/data/kanidm.db"
#
# If you have a known filesystem, kanidm can tune the
# database page size to match. Valid choices are:
# [zfs, other]
# If you are unsure about this leave it as the default
# (other). After changing this
# value you must run a vacuum task.
# - zfs:
# * sets database pagesize to 64k. You must set
# recordsize=64k on the zfs filesystem.
# - other:
# * sets database pagesize to 4k, matching most
# filesystems block sizes.
# db_fs_type = "zfs"
#
# The number of entries to store in the in-memory cache.
# Minimum value is 256. If unset
# an automatic heuristic is used to scale this.
# You should only adjust this value if you experience
# memory pressure on your system.
# db_arc_size = 2048
#
# TLS chain and key in pem format. Both must be present.
# If the server receives a SIGHUP, these files will be
# re-read and reloaded if their content is valid.
tls_chain = "/data/chain.pem"
tls_key = "/data/key.pem"
#
# The log level of the server. May be one of info, debug, trace
#
# NOTE: this can be overridden by the environment variable
# `KANIDM_LOG_LEVEL` at runtime
# Defaults to "info"
# log_level = "info"
#
# The DNS domain name of the server. This is used in a
# number of security-critical contexts
# such as webauthn, so it *must* match your DNS
# hostname. It is used to create
# security principal names such as `william@idm.example.com`
# so that in a (future) trust configuration it is possible
# to have unique Security Principal Names (spns) throughout
# the topology.
#
# ⚠️ WARNING ⚠️
#
# Changing this value WILL break many types of registered
# credentials for accounts including but not limited to
# webauthn, oauth tokens, and more.
# If you change this value you *must* run
# `kanidmd domain rename` immediately after.
domain = "idm.home.hrajfrisbee.cz"
#
# The origin for webauthn. This is the url to the server,
# with the port included if it is non-standard (any port
# except 443). This must match or be a descendent of the
# domain name you configure above. If these two items are
# not consistent, the server WILL refuse to start!
# origin = "https://idm.example.com"
# # OR
# origin = "https://idm.example.com:8443"
origin = "https://idm.home.hrajfrisbee.cz"
# HTTPS requests can be reverse proxied by a loadbalancer.
# To preserve the original IP of the caller, these systems
# will often add a header such as "Forwarded" or
# "X-Forwarded-For". Some other proxies can use the PROXY
# protocol v2 header. While we support the PROXY protocol
# v1 header, we STRONGLY discourage its use as it has
# significantly greater overheads compared to v2 during
# processing.
# This setting allows configuration of the list of trusted
# IPs or IP ranges which can supply this header information,
# and which format the information is provided in.
# Defaults to "none" (no trusted sources)
# Only one option can be used at a time.
# [http_client_address_info]
# proxy-v2 = ["127.0.0.1", "127.0.0.0/8"]
# # OR
# [http_client_address_info]
# x-forward-for = ["127.0.0.1", "127.0.0.0/8"]
# # OR
# [http_client_address_info]
# # AVOID IF POSSIBLE!!!
# proxy-v1 = ["127.0.0.1", "127.0.0.0/8"]
# LDAPS requests can be reverse proxied by a loadbalancer.
# To preserve the original IP of the caller, these systems
# can add a header such as the PROXY protocol v2 header.
# While we support the PROXY protocol v1 header, we STRONGLY
# discourage its use as it has significantly greater
# overheads compared to v2 during processing.
# This setting allows configuration of the list of trusted
# IPs or IP ranges which can supply this header information,
# and which format the information is provided in.
# Defaults to "none" (no trusted sources)
# [ldap_client_address_info]
# proxy-v2 = ["127.0.0.1", "127.0.0.0/8"]
# # OR
# [ldap_client_address_info]
# # AVOID IF POSSIBLE!!!
# proxy-v1 = ["127.0.0.1", "127.0.0.0/8"]
[online_backup]
# The path to the output folder for online backups
path = "/data/kanidm/backups/"
# The schedule to run online backups (see https://crontab.guru/)
# every day at 22:00 UTC (default)
schedule = "00 22 * * *"
# four times a day at 3 minutes past the hour, every 6th hours
# schedule = "03 */6 * * *"
# We also support non standard cron syntax, with the following format:
# sec min hour day of month month day of week year
# (it's very similar to the standard cron syntax, it just allows to specify the seconds
# at the beginning and the year at the end)
# Number of backups to keep (default 7)
# versions = 7


@@ -0,0 +1,46 @@
# nginx.conf
error_log /dev/stderr;
http {
server {
listen 9080;
location / {
proxy_pass http://192.168.0.35:80;
proxy_set_header Host $host;
}
}
log_format detailed '$remote_addr - [$time_local] '
'"$request_method $host$request_uri" '
'$status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log /dev/stdout detailed;
}
stream {
# Stream doesn't log by default, enable explicitly:
log_format stream_log '$remote_addr [$time_local] '
'$protocol $ssl_preread_server_name '
'$status $bytes_sent $bytes_received $session_time';
access_log /dev/stdout stream_log;
# Nginx ingress in kubernetes
server {
listen 9443;
proxy_pass 192.168.0.35:443;
}
# Gateway provided by cilium/envoy
server {
listen 9444;
proxy_pass 192.168.0.36:443;
}
}
events {}
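The two TLS passthrough listeners above can be probed from another machine; a sketch assuming the container is published on this host (`docker-30`), with SNI hostnames taken as examples from this repo's configs:

```bash
# Port 9443 -> nginx ingress at 192.168.0.35 (SNI passed through untouched)
openssl s_client -connect docker-30:9443 -servername oauth2-proxy.lab.home.hrajfrisbee.cz </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
# Port 9444 -> gateway provided by cilium/envoy at 192.168.0.36
openssl s_client -connect docker-30:9444 -servername ghost.lab.home.hrajfrisbee.cz </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```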


@@ -0,0 +1,9 @@
docker rm -f lab-proxy || /usr/bin/true
docker run -d --name lab-proxy \
--restart unless-stopped \
-v /srv/docker/lab-proxy/nginx.conf:/etc/nginx/nginx.conf:ro \
-p 9443:9443 \
-p 9444:9444 \
-p 9080:9080 \
nginx:alpine


@@ -0,0 +1,9 @@
#!/bin/bash
docker rm -f maru-hleda-byt
# gitea registry login with kacerr / token
docker run -d --name maru-hleda-byt \
-p 8080:8080 \
-v /srv/maru-hleda-byt/data:/app/data \
gitea.home.hrajfrisbee.cz/littlemeat/maru-hleda-byt:0.01


@@ -0,0 +1,22 @@
server {
listen 443 ssl http2;
server_name gitea.home.hrajfrisbee.cz;
ssl_certificate /etc/letsencrypt/live/gitea.home.hrajfrisbee.cz/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/gitea.home.hrajfrisbee.cz/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
location / {
proxy_pass http://192.168.0.30:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
# Gitea Git over HTTP
client_max_body_size 512m;
}


@@ -0,0 +1,35 @@
server {
listen 443 ssl http2;
server_name jellyfin.home.hrajfrisbee.cz;
ssl_certificate /etc/letsencrypt/live/gitea.home.hrajfrisbee.cz/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/gitea.home.hrajfrisbee.cz/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# Security headers for media streaming
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";
# Increase body size for high-res movie posters
client_max_body_size 20M;
location / {
# Proxy to your Synology or VM IP and Jellyfin port (default 8096)
proxy_pass http://192.168.0.2:8096;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-Host $http_host;
# Disable buffering for smoother streaming
proxy_buffering off;
}
}


@@ -0,0 +1,17 @@
# docker-30
## tailscale
```bash
# Add signing key
curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/$(lsb_release -cs).noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null
# Add repo
echo "deb [signed-by=/usr/share/keyrings/tailscale-archive-keyring.gpg] https://pkgs.tailscale.com/stable/ubuntu $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/tailscale.list
# Install
sudo apt update && sudo apt install tailscale
# Start
sudo tailscale up
```
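A couple of quick post-install checks:

```bash
tailscale status   # list peers and confirm this node joined the tailnet
tailscale ip -4    # print the node's tailnet IPv4 address
```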

docker-30/vault/backup.md

@@ -0,0 +1,35 @@
## vault-cli install
```bash
VAULT_VERSION="1.21.2"
wget https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip
unzip vault_${VAULT_VERSION}_linux_amd64.zip
sudo mv vault /usr/local/bin/
```
## minio-cli
```bash
wget https://dl.min.io/client/mc/release/linux-amd64/mc -O /tmp/minio-cli
chmod +x /tmp/minio-cli
sudo mv /tmp/minio-cli /usr/local/bin/minio-cli
minio-cli alias set synology http://192.168.0.2:9000 k8s ----proper secret here----
```
## backup token
```bash
mkdir -p /etc/vault.d/
vault policy write backup - <<EOF
path "sys/storage/raft/snapshot" {
capabilities = ["read"]
}
EOF
vault token create -policy=backup -period=8760h -orphan -field=token > /etc/vault.d/backup-token  # -field=token writes only the raw token
chmod 600 /etc/vault.d/backup-token
```
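The policy above grants read on the raft snapshot endpoint, so a snapshot can be taken with that token. Note two assumptions in this sketch: raft storage (the compose file in this repo configures the `file` backend, where the tar-based backup script applies instead), and a token file holding the bare token (e.g. created with `-field=token`):

```bash
VAULT_TOKEN="$(cat /etc/vault.d/backup-token)" \
  vault operator raft snapshot save "/tmp/vault-$(date +%Y%m%d).snap"
```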


@@ -0,0 +1,20 @@
{
"ui": true,
"listener": {
"tcp": {
"address": "0.0.0.0:8200",
"tls_disable": "1",
"tls_cert_file": "/vault/certs/fullchain.pem",
"tls_key_file": "/vault/certs/privkey.pem"
}
},
"backend": {
"file": {
"path": "/vault/data/file"
}
},
"default_lease_ttl": "168h",
"max_lease_ttl": "0h",
"api_addr": "https://vault.hrajfrisbee.cz"
// "api_addr": "http://0.0.0.0:8200"
}


@@ -0,0 +1,20 @@
services:
vault:
image: hashicorp/vault:1.21.1
container_name: vault
restart: unless-stopped
cap_add:
- IPC_LOCK
ports:
- 8200:8200
environment:
- VAULT_ADDR=http://0.0.0.0:8200
- VAULT_API_ADDR=http://0.0.0.0:8200
- VAULT_ADDRESS=http://0.0.0.0:8200
volumes:
- ./data:/vault/data
- ./config:/vault/config
- ./logs:/vault/logs
- ./certs:/vault/certs
entrypoint: vault
command: server -config=/vault/config/vault.json -log-level=debug
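To bring the container up and confirm it is serving, a minimal sketch (hostname assumed from this repo's aliases):

```bash
docker compose up -d
docker logs -f vault                            # watch startup; Ctrl-C to detach
VAULT_ADDR=http://docker-30:8200 vault status   # exits non-zero while sealed
```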

docker-30/vault/readme.md

@@ -0,0 +1,38 @@
## deployment notes
There was a problem with a "production" deployment of Vault as a docker container: the default `docker-entrypoint.sh` appends an argument telling the dev instance where to listen, and Vault then crashes because it tries to listen on the same port twice.
Solution: override the default entrypoint.
```bash
# vault helpers
alias set-vault="export VAULT_ADDR=https://docker-30:8200"
alias set-vault-ignore-tls="export VAULT_ADDR=https://docker-30:8200; export VAULT_SKIP_VERIFY=true"
export VAULT_ADDR="https://vault.hrajfrisbee.cz"
export VAULT_SKIP_VERIFY=true
```
## backup
Simple file copy initiated by cron; the backend storage is MinIO (S3) running on Synology.
```bash
echo '30 2 * * * root /root/bin/vault-backup.sh >> /var/log/vault-backup.log 2>&1' > /etc/cron.d/vault-backup
```
```bash
# output role info
tofu output -raw role_id
tofu output -raw secret_id
```
## vault initialization
```bash
vault operator init -key-shares=1 -key-threshold=1
```
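With a single key share and threshold of 1 (as above), the follow-up is one unseal plus a root login; a hedged sketch:

```bash
vault operator unseal   # prompts for the single unseal key printed by init
vault login             # prompts for the initial root token
vault status            # "Sealed" should now read false
```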


@@ -0,0 +1,49 @@
resource "vault_mount" "kv" {
path = "secret"
type = "kv-v2"
description = "KV v2 secrets engine"
}
resource "vault_policy" "eso_read" {
name = "external-secrets-read"
policy = <<-EOT
path "${vault_mount.kv.path}/data/*" {
capabilities = ["read"]
}
path "${vault_mount.kv.path}/metadata/*" {
capabilities = ["read", "list"]
}
EOT
}
resource "vault_auth_backend" "approle" {
type = "approle"
}
resource "vault_approle_auth_backend_role" "eso" {
backend = vault_auth_backend.approle.path
role_name = "external-secrets"
token_policies = [vault_policy.eso_read.name]
token_ttl = 3600
token_max_ttl = 14400
}
data "vault_approle_auth_backend_role_id" "eso" {
backend = vault_auth_backend.approle.path
role_name = vault_approle_auth_backend_role.eso.role_name
}
resource "vault_approle_auth_backend_role_secret_id" "eso" {
backend = vault_auth_backend.approle.path
role_name = vault_approle_auth_backend_role.eso.role_name
}
output "role_id" {
value = data.vault_approle_auth_backend_role_id.eso.role_id
sensitive = true
}
output "secret_id" {
value = vault_approle_auth_backend_role_secret_id.eso.secret_id
sensitive = true
}
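An end-to-end check of this role: log in with the generated credentials, using the output names defined above (`tofu` per this repo; `terraform` works the same):

```bash
vault write auth/approle/login \
  role_id="$(tofu output -raw role_id)" \
  secret_id="$(tofu output -raw secret_id)"
# On success, vault prints a client token carrying the external-secrets-read policy.
```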


@@ -0,0 +1 @@
{"version":4,"terraform_version":"1.11.2","serial":2,"lineage":"88d0da45-267c-24b8-34e1-c9a1c58ab70f","outputs":{"role_id":{"value":"864e352d-2064-2bf9-2c73-dbd676a95368","type":"string","sensitive":true},"secret_id":{"value":"8dd0e675-f4dc-50ba-6665-3db5ae423702","type":"string","sensitive":true}},"resources":[{"mode":"data","type":"vault_approle_auth_backend_role_id","name":"eso","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":0,"attributes":{"backend":"approle","id":"auth/approle/role/external-secrets/role-id","namespace":null,"role_id":"864e352d-2064-2bf9-2c73-dbd676a95368","role_name":"external-secrets"},"sensitive_attributes":[]}]},{"mode":"managed","type":"vault_approle_auth_backend_role","name":"eso","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":0,"attributes":{"backend":"approle","bind_secret_id":true,"id":"auth/approle/role/external-secrets","namespace":null,"role_id":"864e352d-2064-2bf9-2c73-dbd676a95368","role_name":"external-secrets","secret_id_bound_cidrs":null,"secret_id_num_uses":0,"secret_id_ttl":0,"token_bound_cidrs":null,"token_explicit_max_ttl":0,"token_max_ttl":14400,"token_no_default_policy":false,"token_num_uses":0,"token_period":0,"token_policies":["external-secrets-read"],"token_ttl":3600,"token_type":"default"},"sensitive_attributes":[],"private":"bnVsbA==","dependencies":["vault_auth_backend.approle","vault_mount.kv","vault_policy.eso_read"]}]},{"mode":"managed","type":"vault_approle_auth_backend_role_secret_id","name":"eso","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":0,"attributes":{"accessor":"f20ef8a0-f21f-8c9b-fc38-887a005af763","backend":"approle","cidr_list":null,"id":"backend=approle::role=external-secrets::accessor=f20ef8a0-f21f-8c9b-fc38-887a005af763","metadata":"{}","namespace":null,"num_uses":0,"role_name":"external-secrets","secret_id":"8dd0e675-f4dc-50ba-6665-3db5ae423702","ttl":
0,"with_wrapped_accessor":null,"wrapping_accessor":null,"wrapping_token":null,"wrapping_ttl":null},"sensitive_attributes":[[{"type":"get_attr","value":"secret_id"}],[{"type":"get_attr","value":"wrapping_token"}]],"private":"bnVsbA==","dependencies":["vault_approle_auth_backend_role.eso","vault_auth_backend.approle","vault_mount.kv","vault_policy.eso_read"]}]},{"mode":"managed","type":"vault_auth_backend","name":"approle","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":1,"attributes":{"accessor":"auth_approle_409190cb","description":"","disable_remount":false,"id":"approle","identity_token_key":null,"local":false,"namespace":null,"path":"approle","tune":[],"type":"approle"},"sensitive_attributes":[],"private":"eyJzY2hlbWFfdmVyc2lvbiI6IjEifQ=="}]},{"mode":"managed","type":"vault_mount","name":"kv","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":0,"attributes":{"accessor":"kv_d207dd40","allowed_managed_keys":null,"allowed_response_headers":null,"audit_non_hmac_request_keys":[],"audit_non_hmac_response_keys":[],"default_lease_ttl_seconds":0,"delegated_auth_accessors":null,"description":"KV v2 secrets engine","external_entropy_access":false,"id":"secret","identity_token_key":"","listing_visibility":"","local":false,"max_lease_ttl_seconds":0,"namespace":null,"options":null,"passthrough_request_headers":null,"path":"secret","plugin_version":null,"seal_wrap":false,"type":"kv-v2"},"sensitive_attributes":[],"private":"bnVsbA=="}]},{"mode":"managed","type":"vault_policy","name":"eso_read","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":0,"attributes":{"id":"external-secrets-read","name":"external-secrets-read","namespace":null,"policy":"path \"secret/data/*\" {\n capabilities = [\"read\"]\n}\npath \"secret/metadata/*\" {\n capabilities = [\"read\", 
\"list\"]\n}\n"},"sensitive_attributes":[],"private":"bnVsbA==","dependencies":["vault_mount.kv"]}]}],"check_results":null}


@@ -0,0 +1 @@
{"version":4,"terraform_version":"1.11.2","serial":1,"lineage":"88d0da45-267c-24b8-34e1-c9a1c58ab70f","outputs":{"role_id":{"value":"8833d0f8-d35d-d7ea-658b-c27837d121ab","type":"string","sensitive":true},"secret_id":{"value":"1791bfd9-5dc6-406a-3960-ba8fcad4a5a9","type":"string","sensitive":true}},"resources":[{"mode":"data","type":"vault_approle_auth_backend_role_id","name":"eso","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":0,"attributes":{"backend":"approle","id":"auth/approle/role/external-secrets/role-id","namespace":null,"role_id":"8833d0f8-d35d-d7ea-658b-c27837d121ab","role_name":"external-secrets"},"sensitive_attributes":[]}]},{"mode":"managed","type":"vault_approle_auth_backend_role","name":"eso","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":0,"attributes":{"backend":"approle","bind_secret_id":true,"id":"auth/approle/role/external-secrets","namespace":null,"role_id":"8833d0f8-d35d-d7ea-658b-c27837d121ab","role_name":"external-secrets","secret_id_bound_cidrs":null,"secret_id_num_uses":0,"secret_id_ttl":0,"token_bound_cidrs":null,"token_explicit_max_ttl":0,"token_max_ttl":14400,"token_no_default_policy":false,"token_num_uses":0,"token_period":0,"token_policies":["external-secrets-read"],"token_ttl":3600,"token_type":"default"},"sensitive_attributes":[],"private":"bnVsbA==","dependencies":["vault_auth_backend.approle","vault_mount.kv","vault_policy.eso_read"]}]},{"mode":"managed","type":"vault_approle_auth_backend_role_secret_id","name":"eso","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":0,"attributes":{"accessor":"bcc08746-6bea-8df2-02da-f6a697bceb59","backend":"approle","cidr_list":null,"id":"backend=approle::role=external-secrets::accessor=bcc08746-6bea-8df2-02da-f6a697bceb59","metadata":"{}","namespace":null,"num_uses":0,"role_name":"external-secrets","secret_id":"1791bfd9-5dc6-406a-3960-ba8fcad4a5a9","ttl":
0,"with_wrapped_accessor":null,"wrapping_accessor":null,"wrapping_token":null,"wrapping_ttl":null},"sensitive_attributes":[[{"type":"get_attr","value":"secret_id"}],[{"type":"get_attr","value":"wrapping_token"}]],"private":"bnVsbA==","dependencies":["vault_approle_auth_backend_role.eso","vault_auth_backend.approle","vault_mount.kv","vault_policy.eso_read"]}]},{"mode":"managed","type":"vault_auth_backend","name":"approle","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":1,"attributes":{"accessor":"auth_approle_c6cd7bc1","description":"","disable_remount":false,"id":"approle","identity_token_key":null,"local":false,"namespace":null,"path":"approle","tune":[],"type":"approle"},"sensitive_attributes":[],"private":"eyJzY2hlbWFfdmVyc2lvbiI6IjEifQ=="}]},{"mode":"managed","type":"vault_mount","name":"kv","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":0,"attributes":{"accessor":"kv_8285fbfc","allowed_managed_keys":null,"allowed_response_headers":null,"audit_non_hmac_request_keys":[],"audit_non_hmac_response_keys":[],"default_lease_ttl_seconds":0,"delegated_auth_accessors":null,"description":"KV v2 secrets engine","external_entropy_access":false,"id":"secret","identity_token_key":"","listing_visibility":"","local":false,"max_lease_ttl_seconds":0,"namespace":null,"options":null,"passthrough_request_headers":null,"path":"secret","plugin_version":null,"seal_wrap":false,"type":"kv-v2"},"sensitive_attributes":[],"private":"bnVsbA=="}]},{"mode":"managed","type":"vault_policy","name":"eso_read","provider":"provider[\"registry.opentofu.org/hashicorp/vault\"]","instances":[{"schema_version":0,"attributes":{"id":"external-secrets-read","name":"external-secrets-read","namespace":null,"policy":"path \"secret/data/*\" {\n capabilities = [\"read\"]\n}\npath \"secret/metadata/*\" {\n capabilities = [\"read\", 
\"list\"]\n}\n"},"sensitive_attributes":[],"private":"bnVsbA==","dependencies":["vault_mount.kv"]}]}],"check_results":null}


@@ -0,0 +1,12 @@
terraform {
required_providers {
vault = {
source = "hashicorp/vault"
version = "~> 4.0"
}
}
}
provider "vault" {
# Uses VAULT_ADDR and VAULT_TOKEN from env
}


@@ -0,0 +1,38 @@
#!/bin/bash
set -euo pipefail
# set -x # Enable debug output
# --- Configuration ---
VAULT_DATA_DIR="${VAULT_DATA_DIR:-/srv/docker/vault/data/}"
S3_BUCKET="${S3_BUCKET:-vault-backup}"
MC_ALIAS="${MC_ALIAS:-synology}" # Pre-configured mc alias
RETENTION_DAYS="${RETENTION_DAYS:-60}"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_FILE="/tmp/vault-backup-${TIMESTAMP}.tar.gz"
log() { echo "[$(date -Iseconds)] $*"; }
cleanup() {
rm -f "${BACKUP_FILE}"
}
trap cleanup EXIT
# --- Create backup ---
log "Backing up ${VAULT_DATA_DIR}..."
tar -czf "${BACKUP_FILE}" -C "$(dirname "${VAULT_DATA_DIR}")" "$(basename "${VAULT_DATA_DIR}")"
BACKUP_SIZE=$(stat -c%s "${BACKUP_FILE}")
log "Backup size: ${BACKUP_SIZE} bytes"
# --- Upload to MinIO ---
log "Uploading to ${MC_ALIAS}/${S3_BUCKET}..."
minio-cli cp --quiet "${BACKUP_FILE}" "${MC_ALIAS}/${S3_BUCKET}/vault-backup-${TIMESTAMP}.tar.gz"
# --- Prune old backups ---
log "Pruning backups older than ${RETENTION_DAYS} days..."
minio-cli rm --quiet --recursive --force --older-than "${RETENTION_DAYS}d" "${MC_ALIAS}/${S3_BUCKET}/"
log "Backup complete: vault-backup-${TIMESTAMP}.tar.gz"
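The `dirname`/`basename` split in the tar call above makes the archive rooted at `data/` rather than the full host path, which keeps restores location-independent. A quick local dry run (throwaway temp paths, no Vault or MinIO involved) shows the resulting layout:

```shell
# Local dry run of the tar invocation used by the backup script
VAULT_DATA_DIR="$(mktemp -d)/data/"
mkdir -p "${VAULT_DATA_DIR}"
echo hello > "${VAULT_DATA_DIR}/file"
BACKUP_FILE="$(mktemp -u).tar.gz"
tar -czf "${BACKUP_FILE}" -C "$(dirname "${VAULT_DATA_DIR}")" "$(basename "${VAULT_DATA_DIR}")"
tar -tzf "${BACKUP_FILE}"   # entries are rooted at "data/", not the absolute host path
```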

File diff suppressed because it is too large

View File

@@ -0,0 +1,28 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: external-secrets-crds
namespace: external-secrets
spec:
interval: 1h
chart:
spec:
chart: external-secrets
sourceRef:
kind: HelmRepository
name: external-secrets
namespace: flux-system
version: "1.2.1"
values:
installCRDs: true
webhook:
create: false
certController:
create: false
serviceAccount:
create: false
resources: {}
crds:
createClusterExternalSecret: true
createClusterSecretStore: true
createPushSecret: true

View File

@@ -0,0 +1,8 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: external-secrets
namespace: flux-system
spec:
interval: 1h
url: https://charts.external-secrets.io

File diff suppressed because it is too large

View File

@@ -0,0 +1,22 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod-dns
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: kacerr.cz@gmail.com
privateKeySecretRef:
name: letsencrypt-dns-account-key
solvers:
- dns01:
rfc2136:
nameserver: dns-update-proxy.cert-manager.svc.cluster.local:53
tsigKeyName: acme-update-key
tsigAlgorithm: HMACSHA512
tsigSecretSecretRef:
name: acme-update-key
key: acme-update-key
selector:
dnsZones:
- "lab.home.hrajfrisbee.cz"
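For the rfc2136 solver above to work, the authoritative server (the "named conf" mentioned in the commit messages, not part of this diff) needs a matching TSIG key and an update-policy on the zone. A sketch of the BIND side, with the secret value as a placeholder:

```
key "acme-update-key" {
    algorithm hmac-sha512;
    secret "PLACEHOLDER-base64-key-material==";
};
zone "lab.home.hrajfrisbee.cz" {
    type master;
    file "lab.home.hrajfrisbee.cz.zone";
    update-policy {
        grant acme-update-key zonesub TXT;
    };
};
```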

View File

@@ -0,0 +1,33 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: dns-update-proxy
namespace: cert-manager
spec:
replicas: 1
selector:
matchLabels:
app: dns-update-proxy
template:
metadata:
labels:
app: dns-update-proxy
spec:
containers:
- name: socat-tcp
image: alpine/socat
args:
- TCP-LISTEN:53,fork,reuseaddr
- TCP:87.236.195.209:5353
ports:
- containerPort: 53
protocol: TCP
- name: socat-udp
image: alpine/socat
args:
- -T5
- UDP-RECVFROM:53,fork,reuseaddr
- UDP:87.236.195.209:5353
ports:
- containerPort: 53
protocol: UDP

View File

@@ -0,0 +1,18 @@
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: acme-update-key
namespace: cert-manager
spec:
refreshInterval: 1h
secretStoreRef:
name: vault-backend # or your store
kind: ClusterSecretStore
target:
name: acme-update-key
creationPolicy: Owner
data:
- secretKey: acme-update-key
remoteRef:
key: k8s_home/cert-manager
property: acme-update-key
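The ExternalSecret above expects the key material to already exist in Vault's kv-v2 mount (`secret`, per the tfstate in this change). Seeding it might look like this; the value is a placeholder:

```
vault kv put secret/k8s_home/cert-manager acme-update-key="PLACEHOLDER-base64-tsig-secret=="
```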

View File

@@ -0,0 +1,62 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: cert-manager
namespace: cert-manager
spec:
interval: 1h
chart:
spec:
chart: cert-manager
version: "v1.17.2"
sourceRef:
kind: HelmRepository
name: cert-manager
namespace: flux-system
install:
createNamespace: true
crds: CreateReplace
upgrade:
crds: CreateReplace
values:
global:
logLevel: 6
crds:
enabled: false
config:
apiVersion: controller.config.cert-manager.io/v1alpha1
kind: ControllerConfiguration
enableGatewayAPI: true
prometheus:
enabled: true
extraObjects:
- |
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: kacerr.cz+lets-encrypt@gmail.com
privateKeySecretRef:
name: letsencrypt-staging-account-key
solvers:
- http01:
ingress:
ingressClassName: nginx
- |
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: kacerr.cz+lets-encrypt@gmail.com
privateKeySecretRef:
name: letsencrypt-prod-account-key
solvers:
- http01:
ingress:
ingressClassName: nginx

View File

@@ -0,0 +1,8 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: cert-manager
namespace: flux-system
spec:
interval: 1h
url: https://charts.jetstack.io

View File

@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: cert-manager

View File

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
name: dns-update-proxy
namespace: cert-manager
spec:
selector:
app: dns-update-proxy
ports:
- name: dns-tcp
port: 53
targetPort: 53
protocol: TCP
- name: dns-udp
port: 53
targetPort: 53
protocol: UDP

View File

@@ -0,0 +1,10 @@
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
name: default
spec:
interfaces:
- ^en.* # Match your interfaces
loadBalancerIPs: true
serviceSelector:
matchLabels: {}

View File

@@ -0,0 +1,12 @@
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: wildcard-lab-home-hrajfrisbee
namespace: kube-system
spec:
secretName: wildcard-lab-home-hrajfrisbee-tls
issuerRef:
name: letsencrypt-prod-dns
kind: ClusterIssuer
dnsNames:
- "*.lab.home.hrajfrisbee.cz"

View File

@@ -0,0 +1,27 @@
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: cilium-gateway
namespace: kube-system
spec:
gatewayClassName: cilium
listeners:
- name: http
port: 80
protocol: HTTP
allowedRoutes:
namespaces:
from: All
- name: lab-home-hrajfrisbee-https-wildcard
hostname: "*.lab.home.hrajfrisbee.cz"
port: 443
protocol: HTTPS
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: wildcard-lab-home-hrajfrisbee-tls
allowedRoutes:
namespaces:
from: All

View File

@@ -0,0 +1,43 @@
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: cilium
namespace: kube-system
spec:
chart:
spec:
chart: cilium
reconcileStrategy: ChartVersion
sourceRef:
kind: HelmRepository
name: cilium
namespace: flux-system
version: 1.18.5
interval: 5m0s
values:
cluster:
name: "home-kube"
devices: "eth+ bond+ en+"
hubble:
relay:
enabled: true
ui:
enabled: true
ipam:
mode: cluster-pool
operator:
clusterPoolIPv4MaskSize: 24
clusterPoolIPv4PodCIDRList: "10.96.0.0/16"
l2announcements:
enabled: true
gatewayAPI:
enabled: true
kubeProxyReplacement: true
k8sServiceHost: 192.168.0.31 # or LB IP
k8sServicePort: 6443
# Disabling the Envoy DaemonSet would break Gateway API and other L7 features, so it stays enabled:
# envoy:
# enabled: false
# l7Proxy: false

View File

@@ -0,0 +1,9 @@
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
name: cilium-lb-ipam
namespace: kube-system
spec:
blocks:
- start: "192.168.0.35"
stop: "192.168.0.39"

View File

@@ -1,16 +0,0 @@
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: cilium
namespace: kube-system
spec:
chart:
spec:
chart: cilium
reconcileStrategy: ChartVersion
sourceRef:
kind: HelmRepository
name: cilium
version: 1.16.5
interval: 5m0s

View File

@@ -0,0 +1,19 @@
apiVersion: external-secrets.io/v1
kind: ClusterSecretStore
metadata:
name: vault-backend
namespace: external-secrets
spec:
provider:
vault:
server: "https://vault.hrajfrisbee.cz"
path: "secret"
version: "v2"
auth:
appRole:
path: "approle"
roleId: "864e352d-2064-2bf9-2c73-dbd676a95368" # or reference a secret
secretRef:
name: vault-approle
key: secret-id
namespace: external-secrets

View File

@@ -0,0 +1,63 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: external-secrets
namespace: external-secrets
spec:
interval: 30m
chart:
spec:
chart: external-secrets
version: "1.2.1" # latest stable 1.x
sourceRef:
kind: HelmRepository
name: external-secrets
namespace: flux-system
install:
createNamespace: true
remediation:
retries: 3
upgrade:
remediation:
retries: 3
values:
replicaCount: 1
leaderElect: true
# Resources (adjust to your cluster)
resources:
requests:
cpu: 50m
memory: 128Mi
limits:
memory: 256Mi
webhook:
replicaCount: 1
resources:
requests:
cpu: 25m
memory: 64Mi
limits:
memory: 128Mi
podDisruptionBudget:
enabled: true
minAvailable: 1
certController:
replicaCount: 1
resources:
requests:
cpu: 25m
memory: 64Mi
limits:
memory: 128Mi
# Metrics (enable if prometheus-operator is present)
serviceMonitor:
enabled: false
# Pod disruption budgets
podDisruptionBudget:
enabled: true
minAvailable: 1

View File

@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: external-secrets

View File

@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
name: vault-approle
namespace: external-secrets
annotations:
kustomize.toolkit.fluxcd.io/reconcile: disabled
type: Opaque
stringData:
secret-id: --- fill in the secret_id ---

View File

@@ -1,6 +1,19 @@
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: 00-crds
namespace: flux-system
spec:
interval: 10m0s
path: ./gitops/home-kubernetes/00-crds
prune: true
sourceRef:
kind: GitRepository
name: flux-system
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: cilium
namespace: flux-system
@@ -14,6 +27,47 @@ spec:
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: cert-manager
namespace: flux-system
spec:
interval: 10m0s
path: ./gitops/home-kubernetes/cert-manager
prune: true
sourceRef:
kind: GitRepository
name: flux-system
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: external-secrets
namespace: flux-system
spec:
interval: 10m0s
path: ./gitops/home-kubernetes/external-secrets
prune: true
sourceRef:
kind: GitRepository
name: flux-system
dependsOn:
- name: 00-crds
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: kube-prometheus
namespace: flux-system
spec:
interval: 10m0s
path: ./gitops/home-kubernetes/kube-prometheus
prune: true
sourceRef:
kind: GitRepository
name: flux-system
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: ingress-nginx
namespace: flux-system
@@ -28,11 +82,37 @@ spec:
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: kuard
name: oauth-proxy
namespace: flux-system
spec:
interval: 10m0s
path: ./gitops/home-kubernetes/kuard
path: ./gitops/home-kubernetes/oauth-proxy
prune: true
sourceRef:
kind: GitRepository
name: flux-system
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: podinfo
namespace: flux-system
spec:
interval: 10m0s
path: ./gitops/home-kubernetes/podinfo
prune: true
sourceRef:
kind: GitRepository
name: flux-system
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
name: plane
namespace: flux-system
spec:
interval: 10m0s
path: ./gitops/home-kubernetes/plane
prune: true
sourceRef:
kind: GitRepository

File diff suppressed because it is too large

View File

@@ -11,7 +11,7 @@ spec:
branch: main
secretRef:
name: flux-system
url: https://gitlab.hrajfrisbee.cz/infrastructure/home-kubernetes.git
url: https://gitea.home.hrajfrisbee.cz/kacerr/home-kubernetes
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization

View File

@@ -0,0 +1,11 @@
apiVersion: v1
kind: Namespace
metadata:
name: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes
app.kubernetes.io/name: ghost-on-kubernetes
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: namespace
app.kubernetes.io/part-of: ghost-on-kubernetes

View File

@@ -0,0 +1,17 @@
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: ghost-config
namespace: ghost-on-kubernetes
spec:
refreshInterval: 1h
secretStoreRef:
name: vault-backend
kind: ClusterSecretStore
target:
name: ghost-config
data:
- secretKey: gmail-app-password
remoteRef:
key: k8s_home/ghost # Vault path (without 'data/' prefix)
property: gmail-app-password

View File

@@ -0,0 +1,42 @@
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: ghost-on-kubernetes-mysql-env
namespace: ghost-on-kubernetes
spec:
refreshInterval: 1h
secretStoreRef:
name: vault-backend
kind: ClusterSecretStore
target:
name: ghost-on-kubernetes-mysql-env # resulting K8s secret name
data:
- secretKey: MYSQL_DATABASE # key in K8s secret
remoteRef:
key: k8s_home/ghost # Vault path (without 'data/' prefix)
property: mysql-db-name # field within Vault secret
- secretKey: MYSQL_USER # key in K8s secret
remoteRef:
key: k8s_home/ghost
property: mysql-db-user
- secretKey: MYSQL_PASSWORD
remoteRef:
key: k8s_home/ghost
property: mysql-db-password
- secretKey: MYSQL_ROOT_PASSWORD
remoteRef:
key: k8s_home/ghost
property: mysql-db-root-password
- secretKey: MYSQL_HOST
remoteRef:
key: k8s_home/ghost
property: mysql-host
# type: Opaque
# stringData:
# MYSQL_DATABASE: mysql-db-name # Same as in config.production.json
# MYSQL_USER: mysql-db-user # Same as in config.production.json
# MYSQL_PASSWORD: mysql-db-password # Same as in config.production.json
# MYSQL_ROOT_PASSWORD: mysql-db-root-password # Same as in config.production.json
# MYSQL_HOST: '%' # Same as in config.production.json

View File

@@ -0,0 +1,21 @@
apiVersion: v1
kind: Secret
metadata:
name: ghost-on-kubernetes-mysql-env
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes-mysql
app.kubernetes.io/name: ghost-on-kubernetes-mysql-env
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: database-secret
app.kubernetes.io/part-of: ghost-on-kubernetes
type: Opaque
stringData:
MYSQL_DATABASE: mysql-db-name # Same as in config.production.json
MYSQL_USER: mysql-db-user # Same as in config.production.json
MYSQL_PASSWORD: mysql-db-password # Same as in config.production.json
MYSQL_ROOT_PASSWORD: mysql-db-root-password # Same as in config.production.json
MYSQL_HOST: '%' # Same as in config.production.json

View File

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Secret
metadata:
name: tls-secret
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes
app.kubernetes.io/name: tls-secret
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: tls-secret
app.kubernetes.io/part-of: ghost-on-kubernetes
type: kubernetes.io/tls
stringData:
tls.crt: content-tls-crt-base64 # Optional, if you want to use your own TLS certificate
tls.key: content-tls-key-base64 # Optional, if you want to use your own TLS certificate

View File

@@ -0,0 +1,49 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: k8s-ghost-content
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes
app.kubernetes.io/name: k8s-ghost-content
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: storage
app.kubernetes.io/part-of: ghost-on-kubernetes
spec:
# Change this to your storageClassName, we suggest using a storageClassName that supports ReadWriteMany for production.
storageClassName: freenas-iscsi
volumeMode: Filesystem
# Change this to your accessModes. We suggest ReadWriteMany for production, ReadWriteOnce for development.
# With ReadWriteMany, you can have multiple replicas of Ghost, so you can achieve high availability.
# Note that ReadWriteMany is not supported by all storage providers and may require additional configuration.
# Ghost officially doesn't support HA, they suggest using a CDN or caching. Info: https://ghost.org/docs/faq/clustering-sharding-multi-server/
accessModes:
- ReadWriteOnce # Change this to your accessModes if needed, we suggest ReadWriteMany so we can scale the deployment later.
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ghost-on-kubernetes-mysql-pvc
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes-mysql
app.kubernetes.io/name: ghost-on-kubernetes-mysql-pvc
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: database-storage
app.kubernetes.io/part-of: ghost-on-kubernetes
spec:
storageClassName: freenas-iscsi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi

View File

@@ -0,0 +1,49 @@
apiVersion: v1
kind: Service
metadata:
name: ghost-on-kubernetes-service
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes
app.kubernetes.io/name: ghost-on-kubernetes-service
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: service-frontend
app.kubernetes.io/part-of: ghost-on-kubernetes
spec:
ports:
- port: 2368
protocol: TCP
targetPort: ghk8s
name: ghk8s
type: ClusterIP
selector:
app: ghost-on-kubernetes
---
apiVersion: v1
kind: Service
metadata:
name: ghost-on-kubernetes-mysql-service
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes-mysql
app.kubernetes.io/name: ghost-on-kubernetes-mysql-service
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: service-database
app.kubernetes.io/part-of: ghost-on-kubernetes
spec:
ports:
- port: 3306
protocol: TCP
targetPort: mysqlgh
name: mysqlgh
type: ClusterIP
clusterIP: None
selector:
app: ghost-on-kubernetes-mysql

View File

@@ -0,0 +1,59 @@
apiVersion: v1
kind: Secret
metadata:
name: ghost-config-prod
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes
app.kubernetes.io/name: ghost-config-prod
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: ghost-config
app.kubernetes.io/part-of: ghost-on-kubernetes
type: Opaque
stringData:
config.production.json: |-
{
"url": "https://ghost.lab.home.hrajfrisbee.cz",
"admin": {
"url": "https://ghost.lab.home.hrajfrisbee.cz"
},
"server": {
"port": 2368,
"host": "0.0.0.0"
},
"mail": {
"transport": "SMTP",
"from": "user@server.com",
"options": {
"service": "Google",
"host": "smtp.gmail.com",
"port": 465,
"secure": true,
"auth": {
"user": "user@server.com",
"pass": "password"
}
}
},
"logging": {
"transports": [
"stdout"
]
},
"database": {
"client": "mysql",
"connection":
{
"host": "ghost-on-kubernetes-mysql-service",
"user": "mysql-db-user",
"password": "mysql-db-password",
"database": "mysql-db-name",
"port": "3306"
}
},
"process": "local",
"paths": {
"contentPath": "/home/nonroot/app/ghost/content"
}
}

View File

@@ -0,0 +1,134 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ghost-on-kubernetes-mysql
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes-mysql
app.kubernetes.io/name: ghost-on-kubernetes-mysql
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: database
app.kubernetes.io/part-of: ghost-on-kubernetes
spec:
serviceName: ghost-on-kubernetes-mysql-service
replicas: 1
selector:
matchLabels:
app: ghost-on-kubernetes-mysql
template:
metadata:
labels:
app: ghost-on-kubernetes-mysql
spec:
initContainers:
- name: ghost-on-kubernetes-mysql-init
securityContext:
allowPrivilegeEscalation: false
privileged: false
readOnlyRootFilesystem: true
image: docker.io/busybox:stable-musl
imagePullPolicy: Always # You can change this value according to your needs
command:
- /bin/sh
- -c
- |
set -e
echo 'Changing ownership of mysql mount directory to 65534:65534'
chown -Rfv 65534:65534 /mnt/mysql || echo 'Error changing ownership of mysql mount directory to 65534:65534'
echo 'Changing ownership of tmp mount directory to 65534:65534'
chown -Rfv 65534:65534 /mnt/tmp || echo 'Error changing ownership of tmp mount directory to 65534:65534'
echo 'Changing ownership of socket mount directory to 65534:65534'
chown -Rfv 65534:65534 /mnt/var/run/mysqld || echo 'Error changing ownership of socket mount directory to 65534:65534'
volumeMounts:
- name: ghost-on-kubernetes-mysql-volume
mountPath: /mnt/mysql
subPath: mysql-empty-subdir
readOnly: false
- name: ghost-on-kubernetes-mysql-tmp
mountPath: /mnt/tmp
readOnly: false
- name: ghost-on-kubernetes-mysql-socket
mountPath: /mnt/var/run/mysqld
readOnly: false
# You can adjust the resources according to your needs
resources:
requests:
memory: 0Mi
cpu: 0m
limits:
memory: 1Gi
cpu: 900m
containers:
- name: ghost-on-kubernetes-mysql
securityContext:
allowPrivilegeEscalation: false
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65534
image: docker.io/mysql:8.4
imagePullPolicy: Always # You can change this value according to your needs
envFrom:
- secretRef:
name: ghost-on-kubernetes-mysql-env
resources:
requests:
memory: 500Mi # You can change this value according to your needs
cpu: 300m # You can change this value according to your needs
limits:
memory: 1Gi # You can change this value according to your needs
cpu: 900m # You can change this value according to your needs
ports:
- containerPort: 3306
protocol: TCP
name: mysqlgh
volumeMounts:
- name: ghost-on-kubernetes-mysql-volume
mountPath: /var/lib/mysql
subPath: mysql-empty-subdir
readOnly: false
- name: ghost-on-kubernetes-mysql-tmp
mountPath: /tmp
readOnly: false
- name: ghost-on-kubernetes-mysql-socket
mountPath: /var/run/mysqld
readOnly: false
automountServiceAccountToken: false
# Optional: Uncomment the following to specify node selectors
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: node-role.kubernetes.io/worker
# operator: In
# values:
# - 'true'
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- name: ghost-on-kubernetes-mysql-volume
persistentVolumeClaim:
claimName: ghost-on-kubernetes-mysql-pvc
- name: ghost-on-kubernetes-mysql-tmp
emptyDir:
sizeLimit: 128Mi
- name: ghost-on-kubernetes-mysql-socket
emptyDir:
sizeLimit: 128Mi

View File

@@ -0,0 +1,214 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ghost-on-kubernetes
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes
app.kubernetes.io/name: ghost-on-kubernetes
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: ghost
app.kubernetes.io/part-of: ghost-on-kubernetes
spec:
# If you want HA for your Ghost instance, you can increase the number of replicas AFTER creation and you need to adjust the storage class. See 02-pvc.yaml for more information.
replicas: 1
selector:
matchLabels:
app: ghost-on-kubernetes
minReadySeconds: 5
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 3
revisionHistoryLimit: 4
progressDeadlineSeconds: 600
template:
metadata:
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes
spec:
automountServiceAccountToken: false # Disable automounting of service account token
volumes:
- name: k8s-ghost-content
persistentVolumeClaim:
claimName: k8s-ghost-content
- name: ghost-config-prod
secret:
secretName: ghost-config-prod
defaultMode: 420
- name: tmp
emptyDir:
sizeLimit: 64Mi
initContainers:
- name: permissions-fix
imagePullPolicy: Always
image: docker.io/busybox:stable-musl
env:
- name: GHOST_INSTALL
value: /home/nonroot/app/ghost
- name: GHOST_CONTENT
value: /home/nonroot/app/ghost/content
- name: NODE_ENV
value: production
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
resources:
limits:
cpu: 900m
memory: 1000Mi
requests:
cpu: 100m
memory: 128Mi
command:
- /bin/sh
- '-c'
- |
set -e
export DIRS='files logs apps themes data public settings images media'
echo 'Check if base dirs exists, if not, create them'
echo "Directories to check: $DIRS"
for dir in $DIRS; do
if [ ! -d $GHOST_CONTENT/$dir ]; then
echo "Creating $GHOST_CONTENT/$dir directory"
mkdir -pv $GHOST_CONTENT/$dir || echo "Error creating $GHOST_CONTENT/$dir directory"
fi
chown -Rfv 65532:65532 $GHOST_CONTENT/$dir && echo "chown ok on $dir" || echo "Error changing ownership of $GHOST_CONTENT/$dir directory"
done
exit 0
volumeMounts:
- name: k8s-ghost-content
mountPath: /home/nonroot/app/ghost/content
readOnly: false
containers:
- name: ghost-on-kubernetes
# For development, you can use the following image:
# image: ghcr.io/sredevopsorg/ghost-on-kubernetes:latest-dev
# image: ghcr.io/sredevopsorg/ghost-on-kubernetes:main
image: ghost:bookworm
imagePullPolicy: Always
ports:
- name: ghk8s
containerPort: 2368
protocol: TCP
# You should uncomment the following lines in production. Change the values according to your environment.
readinessProbe:
httpGet:
path: /ghost/api/v4/admin/site/
port: ghk8s
httpHeaders:
- name: X-Forwarded-Proto
value: https
- name: Host
value: ghost.lab.home.hrajfrisbee.cz
periodSeconds: 10
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 3
initialDelaySeconds: 10
livenessProbe:
httpGet:
path: /ghost/api/v4/admin/site/
port: ghk8s
httpHeaders:
- name: X-Forwarded-Proto
value: https
- name: Host
value: ghost.lab.home.hrajfrisbee.cz
periodSeconds: 300
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 1
initialDelaySeconds: 30
env:
- name: NODE_ENV
value: production
- name: url
value: "https://ghost.lab.home.hrajfrisbee.cz"
- name: database__client
value: "mysql"
- name: database__connection__host
value: "ghost-on-kubernetes-mysql-service"
- name: database__connection__port
value: "3306"
- name: database__connection__user
valueFrom:
secretKeyRef:
name: ghost-on-kubernetes-mysql-env
key: MYSQL_USER
- name: database__connection__password
valueFrom:
secretKeyRef:
name: ghost-on-kubernetes-mysql-env
key: MYSQL_PASSWORD
- name: database__connection__database
value: "ghost"
- name: mail__transport
value: "SMTP"
- name: mail__options__service
value: "Gmail"
- name: mail__options__auth__user
value: "kacerr.cz@gmail.com"
- name: mail__options__auth__pass
valueFrom:
secretKeyRef:
name: ghost-config
key: gmail-app-password
- name: mail__from
value: "'Kacerr's Blog' <kacerr.cz@gmail.com>"
resources:
limits:
cpu: 800m
memory: 800Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- name: k8s-ghost-content
mountPath: /home/nonroot/app/ghost/content
readOnly: false
- name: ghost-config-prod
readOnly: true
mountPath: /home/nonroot/app/ghost/config.production.json
subPath: config.production.json
- name: tmp # This is the temporary volume mount to allow loading themes
mountPath: /tmp
readOnly: false
securityContext:
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
runAsNonRoot: true
runAsUser: 65532
restartPolicy: Always
terminationGracePeriodSeconds: 15
dnsPolicy: ClusterFirst
# Optional: Uncomment the following to specify node selectors
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: node-role.kubernetes.io/worker
# operator: In
# values:
# - 'true'
securityContext: {}

View File

@@ -0,0 +1,30 @@
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: ghost-on-kubernetes-redirect
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes
app.kubernetes.io/name: ghost-on-kubernetes-httproute
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: httproute
app.kubernetes.io/part-of: ghost-on-kubernetes
spec:
parentRefs:
- name: cilium-gateway
namespace: kube-system
sectionName: http
hostnames:
- ghost.lab.home.hrajfrisbee.cz
rules:
- matches:
- path:
type: PathPrefix
value: /
filters:
- type: RequestRedirect
requestRedirect:
scheme: https
statusCode: 301

View File

@@ -0,0 +1,29 @@
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: ghost-on-kubernetes
namespace: ghost-on-kubernetes
labels:
app: ghost-on-kubernetes
app.kubernetes.io/name: ghost-on-kubernetes-httproute
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: httproute
app.kubernetes.io/part-of: ghost-on-kubernetes
spec:
parentRefs:
- name: cilium-gateway
namespace: kube-system
sectionName: lab-home-hrajfrisbee-https-wildcard
hostnames:
- ghost.lab.home.hrajfrisbee.cz
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: ghost-on-kubernetes-service
namespace: ghost-on-kubernetes
port: 2368

View File

@@ -0,0 +1,33 @@
# Optional: If you have a domain name, you can create an Ingress resource to expose your Ghost blog to the internet.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ghost-on-kubernetes-ingress
namespace: ghost-on-kubernetes
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
labels:
app: ghost-on-kubernetes
app.kubernetes.io/name: ghost-on-kubernetes-ingress
app.kubernetes.io/instance: ghost-on-kubernetes
app.kubernetes.io/version: '6.0'
app.kubernetes.io/component: ingress
app.kubernetes.io/part-of: ghost-on-kubernetes
spec:
ingressClassName: nginx
tls:
- hosts:
- ghost.lab.home.hrajfrisbee.cz
secretName: tls-secret
rules:
- host: ghost.lab.home.hrajfrisbee.cz
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ghost-on-kubernetes-service
port:
name: ghk8s

View File

@@ -11,11 +11,13 @@ spec:
sourceRef:
kind: HelmRepository
name: ingress-nginx
version: 4.12.0
version: 4.14.1
values:
controller:
admissionWebhooks:
enabled: false
patch:
enabled: false
config:
annotations-risk-level: "Critical"
interval: 5m0s

View File

@@ -1,29 +0,0 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kuard
namespace: kuard
spec:
selector:
matchLabels:
app: kuard
replicas: 1
template:
metadata:
labels:
app: kuard
spec:
containers:
- image: gcr.io/kuar-demo/kuard-amd64:1
imagePullPolicy: Always
name: kuard
ports:
- containerPort: 8080
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi

View File

@@ -1,29 +0,0 @@
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kuard
namespace: kuard
spec:
ingressClassName: nginx
rules:
- host: test.kuard.dev
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kuard
port:
number: 80
- host: kuard.home.lab
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: kuard
port:
number: 80

View File

@@ -1,13 +0,0 @@
---
apiVersion: v1
kind: Service
metadata:
name: kuard
namespace: kuard
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app: kuard

View File

@@ -0,0 +1,79 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: kube-prometheus-stack
namespace: monitoring
spec:
interval: 30m
chart:
spec:
chart: kube-prometheus-stack
sourceRef:
kind: HelmRepository
name: prometheus-community
namespace: flux-system
install:
createNamespace: true
crds: CreateReplace
remediation:
retries: 3
upgrade:
crds: CreateReplace
remediation:
retries: 3
values:
prometheus:
prometheusSpec:
retention: 60d
storageSpec:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 20Gi
resources:
requests:
memory: 0.5Gi
cpu: 500m
limits:
memory: 4Gi
cpu: 2
serviceMonitorSelectorNilUsesHelmValues: false
podMonitorSelectorNilUsesHelmValues: false
ruleSelectorNilUsesHelmValues: false
alertmanager:
alertmanagerSpec:
storage:
volumeClaimTemplate:
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 3Gi
grafana:
persistence:
enabled: true
size: 10Gi
adminPassword: admin
ingress:
enabled: true
ingressClassName: nginx # adjust if using traefik/contour/etc
hosts:
- grafana.lab.home.hrajfrisbee.cz
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Email,Authorization
nginx.ingress.kubernetes.io/auth-signin: https://oauth2-proxy.lab.home.hrajfrisbee.cz/oauth2/start?rd=$scheme://$host$escaped_request_uri
nginx.ingress.kubernetes.io/auth-url: https://oauth2-proxy.lab.home.hrajfrisbee.cz/oauth2/auth
tls:
- secretName: grafana-tls
hosts:
- grafana.lab.home.hrajfrisbee.cz
prometheusOperator:
admissionWebhooks:
certManager:
enabled: false

View File

@@ -0,0 +1,8 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: prometheus-community
namespace: flux-system
spec:
interval: 1h
url: https://prometheus-community.github.io/helm-charts

View File

@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: monitoring

View File

@@ -0,0 +1,34 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
hosts {
# 192.168.0.30 vault.hrajfrisbee.cz
fallthrough
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30 {
disable success cluster.local
disable denial cluster.local
}
loop
reload
loadbalance
}

@@ -0,0 +1,19 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: mariadb-operator-crds
namespace: mariadb-operator
spec:
interval: 1h
chart:
spec:
chart: mariadb-operator-crds
version: "25.10.*"
sourceRef:
kind: HelmRepository
name: mariadb-operator
namespace: flux-system
install:
crds: Create
upgrade:
crds: CreateReplace

@@ -0,0 +1,31 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: mariadb-operator
namespace: mariadb-operator
spec:
interval: 1h
dependsOn:
- name: mariadb-operator-crds
chart:
spec:
chart: mariadb-operator
version: "25.10.*"
sourceRef:
kind: HelmRepository
name: mariadb-operator
namespace: flux-system
values:
# uses built-in cert-controller for webhook TLS (no cert-manager dep)
webhook:
cert:
certManager:
enabled: false
# disable HA for operator itself (fine for testing)
ha:
enabled: false
# optional: enable metrics
metrics:
enabled: false
serviceMonitor:
enabled: false

@@ -0,0 +1,8 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: mariadb-operator
namespace: flux-system
spec:
interval: 1h
url: https://helm.mariadb.com/mariadb-operator

@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: mariadb-operator

@@ -0,0 +1,34 @@
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: nextcloud-secrets
namespace: nextcloud
spec:
refreshInterval: 1h
secretStoreRef:
name: vault-backend # or your store
kind: ClusterSecretStore
target:
name: nextcloud-secrets
creationPolicy: Owner
data:
- secretKey: nextcloud-password
remoteRef:
key: k8s_home/nextcloud/admin
property: password
- secretKey: nextcloud-username
remoteRef:
key: k8s_home/nextcloud/admin
property: username
- secretKey: db-username
remoteRef:
key: k8s_home/nextcloud/postgres
property: db-username
- secretKey: postgres-password
remoteRef:
key: k8s_home/nextcloud/postgres
property: password
- secretKey: redis-password
remoteRef:
key: k8s_home/nextcloud/redis
property: password

@@ -0,0 +1,263 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: nextcloud
namespace: nextcloud
spec:
interval: 30m
timeout: 15m # Nextcloud init can be slow
chart:
spec:
chart: nextcloud
version: "8.6.0" # Latest as of Jan 2025
sourceRef:
kind: HelmRepository
name: nextcloud
namespace: flux-system
interval: 12h
install:
crds: CreateReplace
remediation:
retries: 3
upgrade:
crds: CreateReplace
cleanupOnFail: true
remediation:
retries: 3
remediateLastFailure: true
# CRITICAL: Suspend during major version upgrades to prevent restart loops
# suspend: true
values:
image:
repository: nextcloud
tag: 32.0.3-apache # Latest as of Jan 2025. For fresh installs only.
# UPGRADE PATH: If upgrading from older version, go sequentially:
# 29.x → 30.0.x → 31.0.x → 32.0.x (one major at a time)
pullPolicy: IfNotPresent
replicaCount: 1 # >1 requires Redis, see below
nextcloud:
host: nextcloud.lab.home.hrajfrisbee.cz # Substitute or hardcode
# existingSecret: nextcloud-admin # Alternative to inline credentials
existingSecret:
enabled: true
secretName: nextcloud-secrets
# usernameKey: username
passwordKey: nextcloud-password
username: admin
# password set via valuesFrom secret
# PHP tuning - critical for stability
phpConfigs:
uploadLimit.ini: |
upload_max_filesize = 16G
post_max_size = 16G
max_input_time = 3600
max_execution_time = 3600
www-conf.ini: |
[www]
pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
pm.max_requests = 500
memory.ini: |
memory_limit = 1G
opcache.ini: |
opcache.enable = 1
opcache.interned_strings_buffer = 32
opcache.max_accelerated_files = 10000
opcache.memory_consumption = 256
opcache.save_comments = 1
opcache.revalidate_freq = 60
; Set to 0 if using ConfigMap-mounted configs
configs:
# Proxy and overwrite settings - CRITICAL for ingress
proxy.config.php: |-
<?php
$CONFIG = array (
'trusted_proxies' => array(
0 => '127.0.0.1',
1 => '10.0.0.0/8',
2 => '172.16.0.0/12',
3 => '192.168.0.0/16',
),
'forwarded_for_headers' => array('HTTP_X_FORWARDED_FOR'),
'overwriteprotocol' => 'https',
);
# Performance and maintenance
custom.config.php: |-
<?php
$CONFIG = array (
'default_phone_region' => 'US',
'maintenance_window_start' => 1,
'filelocking.enabled' => true,
'memcache.local' => '\\OC\\Memcache\\APCu',
'memcache.distributed' => '\\OC\\Memcache\\Redis',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' => array(
'host' => 'nextcloud-redis-master',
'port' => 6379,
'password' => getenv('REDIS_PASSWORD'),
),
);
extraEnv:
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: nextcloud-secrets
key: redis-password
# Ingress - adjust for your ingress controller
ingress:
enabled: true
className: nginx # or traefik, etc.
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: "16G"
nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/server-snippet: |
server_tokens off;
proxy_hide_header X-Powered-By;
rewrite ^/.well-known/webfinger /index.php/.well-known/webfinger last;
rewrite ^/.well-known/nodeinfo /index.php/.well-known/nodeinfo last;
rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
location = /.well-known/carddav {
return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 301 $scheme://$host/remote.php/dav;
}
cert-manager.io/cluster-issuer: letsencrypt-prod
tls:
- secretName: nextcloud-tls
hosts:
- nextcloud.lab.home.hrajfrisbee.cz
# PostgreSQL - strongly recommended over MariaDB for Nextcloud
postgresql:
enabled: true
global:
postgresql:
auth:
username: nextcloud
database: nextcloud
existingSecret: nextcloud-secrets
secretKeys:
userPasswordKey: postgres-password
primary:
persistence:
enabled: true
size: 8Gi
storageClass: "" # Use default or specify
resources:
requests:
memory: 256Mi
cpu: 100m
limits:
memory: 512Mi
# Redis - required for file locking and sessions
redis:
enabled: true
auth:
enabled: true
existingSecret: nextcloud-secrets
existingSecretPasswordKey: redis-password
architecture: standalone
master:
persistence:
enabled: true
size: 1Gi
# Disable built-in databases we're not using
mariadb:
enabled: false
internalDatabase:
enabled: false
externalDatabase:
enabled: true
type: postgresql
host: nextcloud-postgresql # Service name created by subchart
user: nextcloud
database: nextcloud
existingSecret:
enabled: true
secretName: nextcloud-secrets
passwordKey: postgres-password
# Cron job - CRITICAL: never use AJAX cron
cronjob:
enabled: true
schedule: "*/5 * * * *"
resources:
requests:
memory: 256Mi
cpu: 50m
limits:
memory: 512Mi
# Main persistence
persistence:
enabled: true
storageClass: "" # Specify your storage class
size: 100Gi
accessMode: ReadWriteOnce
# nextcloudData - separate PVC for user data (recommended)
nextcloudData:
enabled: true
storageClass: ""
size: 500Gi
accessMode: ReadWriteOnce
# Resource limits - tune based on usage
resources:
requests:
cpu: 200m
memory: 512Mi
limits:
memory: 2Gi
# Liveness/Readiness - tuned to prevent upgrade restart loops
livenessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 6
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 6
successThreshold: 1
startupProbe:
enabled: true
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 10
failureThreshold: 30 # 15 minutes for upgrades
# Security context - avoid fsGroup recursive chown
securityContext:
fsGroupChangePolicy: OnRootMismatch
podSecurityContext:
fsGroup: 33 # www-data
# Metrics - optional but recommended
metrics:
enabled: false # Enable if you have Prometheus
# serviceMonitor:
# enabled: true

@@ -0,0 +1,8 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: nextcloud
namespace: flux-system
spec:
interval: 24h
url: https://nextcloud.github.io/helm/

@@ -0,0 +1,7 @@
apiVersion: v1
kind: Namespace
metadata:
name: nextcloud
labels:
pod-security.kubernetes.io/enforce: baseline
pod-security.kubernetes.io/warn: restricted

@@ -0,0 +1,60 @@
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: oauth2-proxy
namespace: oauth2-proxy
spec:
interval: 30m
chart:
spec:
chart: oauth2-proxy
version: ">=7.0.0 <8.0.0"
sourceRef:
kind: HelmRepository
name: oauth2-proxy
namespace: oauth2-proxy
interval: 12h
values:
replicaCount: 2
config:
existingSecret: oauth2-proxy-secrets
configFile: |-
provider = "oidc"
oidc_issuer_url = "https://idm.home.hrajfrisbee.cz/oauth2/openid/oauth2-proxy"
email_domains = ["*"]
cookie_secure = true
cookie_domains = [".lab.home.hrajfrisbee.cz"]
whitelist_domains = [".lab.home.hrajfrisbee.cz"]
set_xauthrequest = true
set_authorization_header = true
pass_access_token = true
skip_provider_button = true
upstreams = ["static://202"]
skip_auth_routes = ["PUT=^/uploads/.*", "POST=^/uploads/.*"]
extraArgs:
- --reverse-proxy=true
ingress:
enabled: true
className: nginx
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
hosts:
- oauth2-proxy.lab.home.hrajfrisbee.cz
tls:
- secretName: oauth2-proxy-tls
hosts:
- oauth2-proxy.lab.home.hrajfrisbee.cz
resources:
limits:
memory: 128Mi
requests:
cpu: 10m
memory: 64Mi
podDisruptionBudget:
enabled: true
minAvailable: 1

@@ -0,0 +1,8 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: oauth2-proxy
namespace: oauth2-proxy
spec:
interval: 1h
url: https://oauth2-proxy.github.io/manifests

@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: oauth2-proxy

@@ -0,0 +1,8 @@
```yaml
annotations:
nginx.ingress.kubernetes.io/auth-url: "http://oauth2-proxy.oauth2-proxy.svc.cluster.local:4180/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2-proxy.lab.home.hrajfrisbee.cz/oauth2/start?rd=$scheme://$host$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-User,X-Auth-Request-Email,Authorization"
```

@@ -0,0 +1,11 @@
apiVersion: v1
kind: Secret
metadata:
name: oauth2-proxy-secrets
namespace: oauth2-proxy
annotations:
kustomize.toolkit.fluxcd.io/reconcile: disabled
stringData:
client-id: oauth2-proxy
client-secret: <REPLACE_WITH_KANIDM_SECRET>
cookie-secret: a1f522c2394696c76e88eea54769d9e1

@@ -0,0 +1,142 @@
# helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: plane
namespace: plane
spec:
interval: 30m
chart:
spec:
chart: plane-ce
version: "1.4.1" # pin version, avoid 'stable'
sourceRef:
kind: HelmRepository
name: plane
namespace: flux-system
interval: 12h
timeout: 10m
install:
createNamespace: true
remediation:
retries: 3
upgrade:
remediation:
retries: 3
values:
planeVersion: "v1.2.1"
ingress:
enabled: true
appHost: "plane.lab.home.hrajfrisbee.cz"
minioHost: "plane-minio.lab.home.hrajfrisbee.cz"
rabbitmqHost: "plane-mq.lab.home.hrajfrisbee.cz" # optional
ingressClass: nginx
ingress_annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
nginx.ingress.kubernetes.io/auth-url: "https://oauth2-proxy.lab.home.hrajfrisbee.cz/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://oauth2-proxy.lab.home.hrajfrisbee.cz/oauth2/start?rd=$scheme://$host$escaped_request_uri"
nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-User,X-Auth-Request-Email,Authorization"
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($request_uri ~* "^/uploads/") {
set $auth_request_uri "";
}
# nginx.ingress.kubernetes.io/proxy-body-size: "10m"
# PostgreSQL - local stateful or external
postgres:
local_setup: true
storageClass: freenas-iscsi
volumeSize: 10Gi
# assign_cluster_ip: false
# nodeSelector: {}
# tolerations: []
# affinity: {}
# Redis/Valkey
redis:
local_setup: true
storageClass: freenas-iscsi
volumeSize: 2Gi
# RabbitMQ
rabbitmq:
local_setup: true
storageClass: freenas-iscsi
volumeSize: 1Gi
# MinIO (S3-compatible storage)
minio:
local_setup: true
storageClass: freenas-iscsi
volumeSize: 10Gi
env:
# Database credentials (change these!)
pgdb_username: plane
pgdb_password: plane-not-so-secret # TODO: do this properly
pgdb_name: plane
# Application secret (MUST change - used for encryption)
secret_key: 6u8w9T8P9zolcTMTC1DnErasyHnE6QGyB77tCPPFC/mnbPykb6DfiMW6id3Qy+Ly
# Storage
docstore_bucket: uploads
# doc_upload_size_limit: 5242880 -- this seems to be only causing errors
# Optional: External services (when local_setup: false)
# pgdb_remote_url: "postgresql://user:pass@host:5432/plane"
# remote_redis_url: "redis://host:6379/"
# aws_access_key: ""
# aws_secret_access_key: ""
# aws_region: ""
# aws_s3_endpoint_url: ""
# Workload resources (adjust based on cluster capacity)
web:
replicas: 2
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 128Mi
cpuRequest: 100m
api:
replicas: 2
memoryLimit: 1000Mi
cpuLimit: 500m
memoryRequest: 128Mi
cpuRequest: 100m
worker:
replicas: 1
memoryLimit: 1000Mi
cpuLimit: 500m
beatworker:
replicas: 1
memoryLimit: 500Mi
cpuLimit: 250m
space:
replicas: 1
memoryLimit: 500Mi
cpuLimit: 250m
admin:
replicas: 1
memoryLimit: 500Mi
cpuLimit: 250m
live:
replicas: 1
memoryLimit: 500Mi
cpuLimit: 250m
# TLS (requires cert-manager)
ssl:
createIssuer: false
generateCerts: true
issuer: letsencrypt-prod
# email: admin@example.com
# server: https://acme-v02.api.letsencrypt.org/directory
tls_secret_name: plane-tls # if using existing cert

@@ -0,0 +1,8 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: plane
namespace: flux-system
spec:
interval: 1h
url: https://helm.plane.so/

@@ -1,5 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: plane

@@ -0,0 +1,38 @@
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: podinfo
namespace: podinfo
spec:
chart:
spec:
chart: podinfo
reconcileStrategy: ChartVersion
sourceRef:
kind: HelmRepository
name: podinfo
namespace: flux-system
version: '>5.0.0'
interval: 1m0s
releaseName: podinfo
values:
ingress:
enabled: true
className: nginx
hosts:
- host: podinfo.lab.home.hrajfrisbee.cz
paths:
- path: /
pathType: ImplementationSpecific
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
replicaCount: 2
resources:
limits:
memory: 256Mi
requests:
cpu: 100m
memory: 64Mi

@@ -0,0 +1,9 @@
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: podinfo
namespace: flux-system
spec:
interval: 10m0s
url: https://stefanprodan.github.io/podinfo

@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: podinfo

@@ -0,0 +1,30 @@
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
name: seafile-secret
namespace: seafile
spec:
refreshInterval: 1h
secretStoreRef:
name: vault-backend # or your store
kind: ClusterSecretStore
target:
name: seafile-secret
creationPolicy: Owner
data:
- secretKey: JWT_PRIVATE_KEY
remoteRef:
key: k8s_home/seafile
property: JWT_PRIVATE_KEY
- secretKey: SEAFILE_MYSQL_DB_PASSWORD
remoteRef:
key: k8s_home/seafile
property: SEAFILE_MYSQL_DB_PASSWORD
- secretKey: INIT_SEAFILE_ADMIN_PASSWORD
remoteRef:
key: k8s_home/seafile
property: INIT_SEAFILE_ADMIN_PASSWORD
- secretKey: INIT_SEAFILE_MYSQL_ROOT_PASSWORD
remoteRef:
key: k8s_home/seafile
property: INIT_SEAFILE_MYSQL_ROOT_PASSWORD

@@ -0,0 +1,114 @@
# apps/seafile/helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
name: seafile
namespace: seafile
spec:
interval: 30m
chart:
spec:
chart: ce
version: "13.0.2"
sourceRef:
kind: HelmRepository
name: seafile
namespace: flux-system
install:
createNamespace: true
remediation:
retries: 3
upgrade:
remediation:
retries: 3
# Post-render patches
postRenderers:
- kustomize:
patches:
# Remove imagePullSecrets from all Deployments
- target:
kind: Deployment
patch: |
- op: remove
path: /spec/template/spec/imagePullSecrets
# Remove from StatefulSets (MariaDB, etc.)
- target:
kind: StatefulSet
patch: |
- op: remove
path: /spec/template/spec/imagePullSecrets
# Remove from Pods if any
- target:
kind: Pod
patch: |
- op: remove
path: /spec/imagePullSecrets
values:
seafile:
initMode: true
# The following are the configurations of the seafile container
configs:
image: seafileltd/seafile-mc:13.0-latest
seafileDataVolume:
storage: 10Gi
# The following are environment variables of the seafile services
env:
# for Seafile server
TIME_ZONE: "UTC"
SEAFILE_LOG_TO_STDOUT: "true"
SITE_ROOT: "/"
SEAFILE_SERVER_HOSTNAME: "seafile.lab.home.hrajfrisbee.cz"
SEAFILE_SERVER_PROTOCOL: "https"
# for database
SEAFILE_MYSQL_DB_HOST: "seafile-mariadb"
SEAFILE_MYSQL_DB_PORT: "3306"
SEAFILE_MYSQL_DB_USER: "seafile"
#SEAFILE_MYSQL_DB_CCNET_DB_NAME: "ccnet-db"
#SEAFILE_MYSQL_DB_SEAFILE_DB_NAME: "seafile-db"
#SEAFILE_MYSQL_DB_SEAHUB_DB_NAME: "seahub-db"
# for cache
CACHE_PROVIDER: "redis"
## for redis
REDIS_HOST: "redis"
REDIS_PORT: "6379"
## for memcached
#MEMCACHED_HOST: ""
#MEMCACHED_PORT: "11211"
# for notification
ENABLE_NOTIFICATION_SERVER: "false"
NOTIFICATION_SERVER_URL: ""
# for seadoc
ENABLE_SEADOC: "false"
SEADOC_SERVER_URL: "" # only valid when ENABLE_SEADOC = true
# for Seafile AI
ENABLE_SEAFILE_AI: "false"
SEAFILE_AI_SERVER_URL: ""
# for Metadata server
MD_FILE_COUNT_LIMIT: "100000"
# initialization (only valid on first-time deployment with initMode = true)
## for Seafile admin
INIT_SEAFILE_ADMIN_EMAIL: "kacerr.cz@gmail.com"
# if you are using another secret name / key for seafile or mysql, please correct the following fields:
#secretsMap:
# DB_ROOT_PASSWD: # Env's name
# secret: seafile-secret # secret's name, `seafile-secret` if not specify
# key: INIT_SEAFILE_MYSQL_ROOT_PASSWORD # secret's key, `Env's name` if not specify
# extra configurations
extraResources: {}
extraEnv: []
extraVolumes: []

@@ -0,0 +1,8 @@
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
name: seafile
namespace: flux-system
spec:
interval: 1h
url: https://haiwen.github.io/seafile-helm-chart/repo

@@ -0,0 +1,35 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
meta.helm.sh/release-name: seafile
meta.helm.sh/release-namespace: seafile
nginx.ingress.kubernetes.io/proxy-body-size: "100m" # 0 = unlimited, or "500m"
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
labels:
app.kubernetes.io/component: app
app.kubernetes.io/instance: seafile
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: seafile
name: seafile
namespace: seafile
spec:
ingressClassName: nginx
rules:
- host: seafile.lab.home.hrajfrisbee.cz
http:
paths:
- backend:
service:
name: seafile
port:
number: 80
path: /
pathType: Prefix
tls:
- hosts:
- seafile.lab.home.hrajfrisbee.cz
secretName: seafile-tls

@@ -0,0 +1,10 @@
apiVersion: k8s.mariadb.com/v1alpha1
kind: Database
metadata:
name: ccnet-db
namespace: seafile
spec:
mariaDbRef:
name: seafile-mariadb
characterSet: utf8mb4
collate: utf8mb4_general_ci

@@ -0,0 +1,10 @@
apiVersion: k8s.mariadb.com/v1alpha1
kind: Database
metadata:
name: seafile-db
namespace: seafile
spec:
mariaDbRef:
name: seafile-mariadb
characterSet: utf8mb4
collate: utf8mb4_general_ci

@@ -0,0 +1,10 @@
apiVersion: k8s.mariadb.com/v1alpha1
kind: Database
metadata:
name: seahub-db
namespace: seafile
spec:
mariaDbRef:
name: seafile-mariadb
characterSet: utf8mb4
collate: utf8mb4_general_ci

@@ -0,0 +1,61 @@
apiVersion: k8s.mariadb.com/v1alpha1
kind: Grant
metadata:
name: all-privileges
namespace: seafile
spec:
mariaDbRef:
name: seafile-mariadb
username: seafile
database: "*"
table: "*"
privileges:
- ALL PRIVILEGES
grantOption: true
# ---
# apiVersion: k8s.mariadb.com/v1alpha1
# kind: Grant
# metadata:
# name: seafile-grant
# namespace: seafile
# spec:
# mariaDbRef:
# name: seafile-mariadb
# privileges:
# - ALL PRIVILEGES
# database: seafile-db
# table: "*"
# username: seafile
# host: "%"
# grantOption: false
# ---
# apiVersion: k8s.mariadb.com/v1alpha1
# kind: Grant
# metadata:
# name: seahub-grant
# namespace: seafile
# spec:
# mariaDbRef:
# name: seafile-mariadb
# privileges:
# - ALL PRIVILEGES
# database: seahub-db
# table: "*"
# username: seafile
# host: "%"
# grantOption: false
# ---
# apiVersion: k8s.mariadb.com/v1alpha1
# kind: Grant
# metadata:
# name: ccnet-grant
# namespace: seafile
# spec:
# mariaDbRef:
# name: seafile-mariadb
# privileges:
# - ALL PRIVILEGES
# database: ccnet-db
# table: "*"
# username: seafile
# host: "%"
# grantOption: false

@@ -0,0 +1,13 @@
apiVersion: k8s.mariadb.com/v1alpha1
kind: User
metadata:
name: seafile
namespace: seafile
spec:
mariaDbRef:
name: seafile-mariadb
passwordSecretKeyRef:
name: seafile-secret
key: SEAFILE_MYSQL_DB_PASSWORD
maxUserConnections: 20
host: "%"

@@ -0,0 +1,33 @@
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
name: seafile-mariadb
namespace: seafile
spec:
rootPasswordSecretKeyRef:
name: seafile-secret
key: INIT_SEAFILE_MYSQL_ROOT_PASSWORD
image: mariadb:11.4
port: 3306
storage:
size: 10Gi
# storageClassName: your-storage-class
resources:
requests:
cpu: 100m
memory: 256Mi
limits:
memory: 1Gi
myCnf: |
[mariadb]
bind-address=*
default_storage_engine=InnoDB
binlog_format=row
innodb_autoinc_lock_mode=2
innodb_buffer_pool_size=256M
max_allowed_packet=256M

@@ -0,0 +1,39 @@
# apiVersion: apps/v1
# kind: Deployment
# metadata:
# name: seafile-memcached
# namespace: seafile
# spec:
# replicas: 1
# selector:
# matchLabels:
# app: seafile-memcached
# template:
# metadata:
# labels:
# app: seafile-memcached
# spec:
# containers:
# - name: memcached
# image: memcached:1.6-alpine
# args: ["-m", "128"] # 128MB memory limit
# ports:
# - containerPort: 11211
# resources:
# requests:
# memory: 64Mi
# cpu: 25m
# limits:
# memory: 192Mi
# ---
# apiVersion: v1
# kind: Service
# metadata:
# name: seafile-memcached
# namespace: seafile
# spec:
# selector:
# app: seafile-memcached
# ports:
# - port: 11211
# targetPort: 11211

@@ -0,0 +1,67 @@
seafile:
initMode: true
# The following are the configurations of the seafile container
configs:
image: seafileltd/seafile-mc:13.0-latest
seafileDataVolume:
storage: 10Gi
# The following are environment variables of the seafile services
env:
# for Seafile server
TIME_ZONE: "UTC"
SEAFILE_LOG_TO_STDOUT: "true"
SITE_ROOT: "/"
SEAFILE_SERVER_HOSTNAME: "seafile.lab.home.hrajfrisbee.cz"
SEAFILE_SERVER_PROTOCOL: "https"
# for database
SEAFILE_MYSQL_DB_HOST: "seafile-mariadb"
SEAFILE_MYSQL_DB_PORT: "3306"
SEAFILE_MYSQL_DB_USER: "seafile"
SEAFILE_MYSQL_DB_CCNET_DB_NAME: "ccnet-db"
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME: "seafile-db"
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME: "seahub-db"
# for cache
CACHE_PROVIDER: "redis"
## for redis
REDIS_HOST: "redis"
REDIS_PORT: "6379"
## for memcached
#MEMCACHED_HOST: ""
#MEMCACHED_PORT: "11211"
# for notification
ENABLE_NOTIFICATION_SERVER: "false"
NOTIFICATION_SERVER_URL: ""
# for seadoc
ENABLE_SEADOC: "false"
SEADOC_SERVER_URL: "" # only valid when ENABLE_SEADOC = true
# for Seafile AI
ENABLE_SEAFILE_AI: "false"
SEAFILE_AI_SERVER_URL: ""
# for Metadata server
MD_FILE_COUNT_LIMIT: "100000"
# initialization (only valid on first-time deployment with initMode = true)
## for Seafile admin
INIT_SEAFILE_ADMIN_EMAIL: "kacerr.cz@gmail.com"
# if you are using another secret name / key for seafile or mysql, please correct the following fields:
#secretsMap:
# DB_ROOT_PASSWD: # Env's name
# secret: seafile-secret # secret's name, `seafile-secret` if not specify
# key: INIT_SEAFILE_MYSQL_ROOT_PASSWORD # secret's key, `Env's name` if not specify
# extra configurations
extraResources: {}
extraEnv: []
extraVolumes: []

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
kubernetes.io/metadata.name: seafile
name: seafile

@@ -0,0 +1,4 @@
## deployment
It looks like Seafile deployment is not "straightforward": it first has to be started in initialization mode (`initMode: true`) and, after initialization completes, switched into normal mode.
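
The switch can be sketched as a one-line values change on the HelmRelease (a hedged fragment, assuming the `seafile.initMode` key used in the chart values above):

```yaml
# After the first successful start-up, flip initMode off and let Flux reconcile
seafile:
  initMode: false  # was `true` for the initial deployment
```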

@@ -0,0 +1,84 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
name: redis-config
namespace: seafile
data:
redis.conf: |
maxmemory 128mb
maxmemory-policy allkeys-lru
appendonly yes
appendfsync everysec
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: seafile
labels:
app: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
strategy:
type: Recreate
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:7-alpine
args:
- redis-server
- /etc/redis/redis.conf
ports:
- containerPort: 6379
name: redis
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
memory: 256Mi
volumeMounts:
- name: redis-config
mountPath: /etc/redis
- name: redis-data
mountPath: /data
livenessProbe:
exec:
command: [redis-cli, ping]
initialDelaySeconds: 5
periodSeconds: 10
readinessProbe:
exec:
command: [redis-cli, ping]
initialDelaySeconds: 3
periodSeconds: 5
volumes:
- name: redis-config
configMap:
name: redis-config
- name: redis-data
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: seafile
labels:
app: redis
spec:
selector:
app: redis
ports:
- port: 6379
targetPort: 6379
name: redis
type: ClusterIP

Some files were not shown because too many files have changed in this diff Show More