Helm chart refactoring & automation (#31)

* remove test data

* Create helm chart using the suggested structure from Helm 3

* Fix minor naming consistency in Dockerfile

* Move skaffold to use helm chart

* improve skaffold configuration

* Update chart name to use the naming convention

* update sample path

* Update contribution guideline

* Add helm chart validation rules

* Add chart home since it is a required field

* Add linting action for helm charts

* Add fixes to chart definition

* fix timeout duration

* Update kind cluster

* test CI with minikube

* Add MetalLB to test load balancer feature

* Publish chart when merged on master

* test publishing chart with fake tag

* move charts dir

* finalize charts publishing CI

* reformat skaffold
Marco Vito Moscaritolo authored on 2020-06-20 21:37:46 +02:00, committed by GitHub
parent 2e6dc7962f
commit 20b498f76c
35 changed files with 624 additions and 780 deletions

@@ -2,19 +2,19 @@ name: Docker Image CI
on:
push:
branches: [ master ]
branches: [master]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Build and push Docker images
uses: docker/build-push-action@v1.1.0
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
repository: caddy/ingress
tag_with_ref: true
tag_with_sha: true
- uses: actions/checkout@v2
- name: Build and push Docker images
uses: docker/build-push-action@v1.1.0
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
repository: caddy/ingress
tag_with_ref: true
tag_with_sha: true

.github/workflows/helmchart-release.yml (new file)
@@ -0,0 +1,24 @@
name: Release Charts
on:
push:
branches: [master]
jobs:
release:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v1
- name: Configure Git
run: |
git config user.name "$GITHUB_ACTOR"
git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
- name: Run chart-releaser
uses: helm/chart-releaser-action@master
with:
charts_dir: charts
env:
CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
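
Once chart-releaser has published a release, the chart becomes installable from the repository's GitHub Pages index. A minimal consumption sketch, assuming the default chart-releaser gh-pages setup and a hypothetical index URL:

```bash
# Add the published chart repository (the URL is an assumption based on the
# default chart-releaser gh-pages layout) and install the chart.
helm repo add caddy-ingress https://caddyserver.github.io/ingress
helm repo update
helm install mycaddy caddy-ingress/caddy-ingress-controller --namespace caddy-system
```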

.github/workflows/helmchart.yml (new file)
@@ -0,0 +1,54 @@
name: Lint and Test Charts
on: pull_request
jobs:
lint-test:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Fetch history
run: git fetch --prune --unshallow
- name: Run chart-testing (lint)
id: lint
uses: helm/chart-testing-action@v1.0.0-rc.1
with:
image: quay.io/helmpack/chart-testing:v3.0.0-rc.1
command: lint
- name: Create kind cluster
uses: helm/kind-action@v1.0.0-rc.1
with:
version: "v0.8.1"
# Only build a kind cluster if there are chart changes to test.
if: steps.lint.outputs.changed == 'true'
- name: Install MetalLB to allow LoadBalancer services
run: |
kubectl create ns metallb-system
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/metallb.yaml
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: config
namespace: metallb-system
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 172.17.255.1-172.17.255.200
EOF
if: steps.lint.outputs.changed == 'true'
- name: Run chart-testing (install)
uses: helm/chart-testing-action@v1.0.0-rc.1
with:
image: quay.io/helmpack/chart-testing:v3.0.0-rc.1
command: install
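
The MetalLB step above exists so that the chart's `LoadBalancer` service can actually acquire an external IP inside the kind cluster. A quick local sanity check (not part of the workflow, just a debugging sketch):

```bash
# After chart-testing installs the chart, the LoadBalancer service should be
# assigned an EXTERNAL-IP from the 172.17.255.1-172.17.255.200 pool above.
kubectl get svc --all-namespaces -o wide
```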

@@ -1,54 +1,60 @@
## Requirements
We will explain how to contribute to this project using a Linux machine; in order to be able to easily contribute you need:
- A running kubernetes cluster (if you don't have one see *Setup a local cluster* section)
- [helm 3](https://helm.sh/) installed on your machine
- [skaffold](https://skaffold.dev/) installed on your machine
- A machine with a public IP in order to use Let's Encrypt (you can provision an ad-hoc machine on any cloud provider you use)
- A domain that redirects to the server IP
- [kind](https://github.com/kubernetes-sigs/kind) (to create a development cluster)
- [skaffold](https://skaffold.dev/) (to improve development experience)
- [Docker HUB](https://hub.docker.com) account (to store your docker images)
### Setup a local cluster
## Setup a development cluster
- You need a machine with [docker](https://docker.io) up & running
- You need to install [kind](https://kind.sigs.k8s.io/) on your machine
We create a three-node cluster (a master plus two workers); we start by setting up the configuration:
Then we can create a two-node cluster (one master and one worker):
```bash
cat <<EOF >> cluster.yml
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
```
then we create the cluster
```bash
kind create cluster --config=cluster.yml
```
and activate the `kubectl` config via:
```
kind export kubeconfig
```
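
Before moving on you can verify that the cluster is reachable; a quick check such as the following should list the control-plane and worker nodes:

```bash
# Both nodes should report STATUS "Ready" after a minute or so.
kubectl get nodes
```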
## Configure your docker credentials
Authenticate your Docker instance:
```
docker login
```
## Setup development env
Replace the Docker image you are going to use in `kubernetes/generated/deployment.yaml` and `skaffold.yaml` by replacing `MYACCOUNT` with your Docker Hub account in `docker.io/MYACCOUNT/caddy-ingress-controller`
Also replace the domain name used in `hack/test/example-ingress.yaml`, changing `kubernetes.localhost` to your domain (and ensure that the subdomains `example1` and `example2` resolve to the server's public IP)
Also replace the domain name used in `hack/test/example-ingress.yaml`, changing `MYDOMAIN.TDL` to your domain (and ensure that the subdomains `example1` and `example2` resolve to the server's public IP)
Create a namespace to host the caddy ingress controller:
```
kubectl create ns caddy-system
```
Then we can start skaffold using:
```
skaffold dev --port-forward
```
this will automatically:
- build your docker image every time you change some code
- update kubernetes config every time you change some file
- expose the caddy ingress controller (ports 80 and 443) on a public server
- update the helm release every time you change the helm chart
- expose the caddy ingress controller (ports 8080 and 8443)
You can test that everything works as expected with:
```
curl -H 'Host: example1.kubernetes.localhost' http://127.0.0.1:80/hello1
curl -H 'Host: example1.kubernetes.localhost' http://127.0.0.1:80/hello2
curl -H 'Host: example2.kubernetes.localhost' http://127.0.0.1:80/hello1
curl -H 'Host: example2.kubernetes.localhost' http://127.0.0.1:80/hello2
```
## Notes
- You can change the local ports forwarded by skaffold by changing the `localPort` values in the `portForward` section of the `skaffold.yaml` file. Remember that you can only forward ports greater than 1024 when running as a non-root user
- You can delete your local cluster with the command `kind delete cluster`
- To use TLS, your domain must publicly resolve to your cluster IP so that Let's Encrypt can validate the domain

@@ -10,11 +10,11 @@ COPY ./internal ./internal
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o ./bin/ingress-controller ./cmd/caddy
FROM alpine:latest as certs
FROM alpine:latest AS certs
RUN apk --update add ca-certificates
FROM scratch
COPY --from=builder /app/bin/ingress-controller .
COPY --from=certs /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
EXPOSE 80 443
ENTRYPOINT ["/ingress-controller"]
ENTRYPOINT ["/ingress-controller"]
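
To sanity-check the multi-stage build locally, something like the following should work (the tag is arbitrary and this assumes the Dockerfile sits at the repository root):

```bash
# Build the statically linked controller image from the repo root.
docker build -t caddy/ingress:dev .
```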

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

@@ -0,0 +1,15 @@
apiVersion: v2
name: caddy-ingress-controller
home: https://github.com/caddyserver/ingress
description: A helm chart for the Caddy Kubernetes ingress controller
type: application
version: 0.0.1-rc1
appVersion: v0.1.0
keywords:
- ingress-controller
- caddyserver
sources:
- https://github.com/caddyserver/ingress
maintainers:
- name: mavimo
url: https://github.com/mavimo

@@ -0,0 +1,63 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "caddy-ingress-controller.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "caddy-ingress-controller.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "caddy-ingress-controller.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "caddy-ingress-controller.labels" -}}
helm.sh/chart: {{ include "caddy-ingress-controller.chart" . }}
{{ include "caddy-ingress-controller.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "caddy-ingress-controller.selectorLabels" -}}
app.kubernetes.io/name: {{ include "caddy-ingress-controller.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "caddy-ingress-controller.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "caddy-ingress-controller.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
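
To see what these helpers actually evaluate to, the chart can be rendered locally; this is just an inspection sketch, with `dev` as an arbitrary release name:

```bash
# Render the templates and inspect the generated names and common labels.
helm template dev charts/caddy-ingress-controller | grep -E 'name:|app\.kubernetes\.io'
```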

@@ -1,8 +1,8 @@
{{- if .Values.caddyingresscontroller.rbac.create }}
{{- if .Values.ingressController.rbac.create }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Values.name }}-role
name: {{ include "caddy-ingress-controller.name" . }}-role
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
@@ -25,4 +25,4 @@ rules:
- list
- get
- watch
{{- end }}
{{- end }}

@@ -0,0 +1,15 @@
{{- if .Values.ingressController.rbac.create }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ include "caddy-ingress-controller.name" . }}-role-binding
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ include "caddy-ingress-controller.name" . }}-role
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ include "caddy-ingress-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

@@ -0,0 +1,81 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "caddy-ingress-controller.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "caddy-ingress-controller.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "caddy-ingress-controller.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "caddy-ingress-controller.labels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "caddy-ingress-controller.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
{{- if .Values.minikube }}
hostPort: 80 # optional, required if running in minikube
{{- end }}
- name: https
containerPort: 443
protocol: TCP
{{- if .Values.minikube }}
hostPort: 443 # optional, required if running in minikube
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: tmp
mountPath: /tmp
args:
{{- if .Values.ingressController.autotls }}
- -tls
- -email={{ .Values.ingressController.email }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
volumes:
- name: tmp
emptyDir: {}

@@ -4,11 +4,12 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Values.name }}
name: {{ include "caddy-ingress-controller.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.name }}
{{- include "caddy-ingress-controller.labels" . | nindent 4 }}
spec:
type: "LoadBalancer"
ports:
- name: http
port: 80
@@ -19,6 +20,5 @@ spec:
protocol: TCP
targetPort: https
selector:
app: {{ .Values.name }}
type: "LoadBalancer"
{{- end }}
{{- include "caddy-ingress-controller.selectorLabels" . | nindent 4 }}
{{- end }}

@@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "caddy-ingress-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "caddy-ingress-controller.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}

@@ -0,0 +1,154 @@
{
"definitions": {},
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"required": [
"replicaCount",
"minikube",
"image",
"imagePullSecrets",
"nameOverride",
"fullnameOverride",
"ingressController",
"serviceAccount",
"podAnnotations",
"podSecurityContext",
"securityContext",
"resources",
"nodeSelector",
"tolerations",
"affinity"
],
"properties": {
"replicaCount": {
"$id": "#/properties/replicaCount",
"type": "number"
},
"minikube": {
"$id": "#/properties/minikube",
"type": "boolean"
},
"image": {
"$id": "#/properties/image",
"type": "object",
"required": [
"repository",
"tag",
"pullPolicy"
],
"properties": {
"repository": {
"$id": "#/properties/image/properties/repository",
"type": "string"
},
"tag": {
"$id": "#/properties/image/properties/tag",
"type": "string"
},
"pullPolicy": {
"$id": "#/properties/image/properties/pullPolicy",
"type": "string",
"enum": [
"Always",
"IfNotPresent",
"Never"
]
}
}
},
"imagePullSecrets": {
"$id": "#/properties/imagePullSecrets",
"type": "array"
},
"nameOverride": {
"$id": "#/properties/nameOverride",
"type": "string"
},
"fullnameOverride": {
"$id": "#/properties/fullnameOverride",
"type": "string"
},
"ingressController": {
"$id": "#/properties/ingressController",
"type": "object",
"required": [
"rbac",
"autotls",
"email"
],
"properties": {
"rbac": {
"$id": "#/properties/ingressController/properties/rbac",
"type": "object",
"required": [
"create"
],
"properties": {
"create": {
"$id": "#/properties/ingressController/properties/rbac/properties/create",
"type": "boolean"
}
}
},
"autotls": {
"$id": "#/properties/ingressController/properties/autotls",
"type": "boolean"
},
"email": {
"$id": "#/properties/ingressController/properties/email",
"type": "string"
}
}
},
"serviceAccount": {
"$id": "#/properties/serviceAccount",
"type": "object",
"required": [
"create",
"name"
],
"properties": {
"create": {
"$id": "#/properties/serviceAccount/properties/create",
"type": "boolean"
},
"name": {
"$id": "#/properties/serviceAccount/properties/name",
"type": "string"
},
"annotations": {
"$id": "#/properties/serviceAccount/properties/annotations",
"type": "object"
}
}
},
"podAnnotations": {
"$id": "#/properties/podAnnotations",
"type": "object"
},
"podSecurityContext": {
"$id": "#/properties/podSecurityContext",
"type": "object"
},
"securityContext": {
"$id": "#/properties/securityContext",
"type": "object"
},
"resources": {
"$id": "#/properties/resources",
"type": "object"
},
"nodeSelector": {
"$id": "#/properties/nodeSelector",
"type": "object"
},
"tolerations": {
"$id": "#/properties/tolerations",
"type": "array"
},
"affinity": {
"$id": "#/properties/affinity",
"type": "object"
}
}
}
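
With this schema in place, Helm validates user-supplied values at lint and install time. A sketch of the failure mode (the exact error wording varies by Helm version):

```bash
# Should fail schema validation: replicaCount must be a number.
helm lint charts/caddy-ingress-controller --set replicaCount=not-a-number
```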

@@ -0,0 +1,68 @@
# Default values for caddy-ingress-controller.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
# Use to test in minikube context
minikube: false
image:
repository: caddy/ingress
pullPolicy: IfNotPresent
tag: "latest"
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
# Default values for the caddy ingress controller.
ingressController:
rbac:
create: true
# If setting autotls the following email value must be set
# to an email address that you manage
autotls: false
email: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: "caddy-ingress-controller"
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 0
runAsGroup: 0
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
nodeSelector: {}
tolerations: []
affinity: {}
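
As an example of overriding these defaults, an install that enables automatic TLS might look like the following; the release name and e-mail address are placeholders:

```bash
helm install mycaddy charts/caddy-ingress-controller \
  --namespace caddy-system \
  --set ingressController.autotls=true \
  --set ingressController.email=admin@example.com
```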

ct.yaml (new file)
@@ -0,0 +1,10 @@
# See https://github.com/helm/chart-testing#configuration
remote: origin
validate-maintainers: true
validate-chart-schema: true
validate-yaml: true
check-version-increment: true
all: true
chart-dirs:
- charts
helm-extra-args: --timeout 600s
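
The same configuration can be exercised locally before opening a pull request, assuming the `ct` binary from chart-testing is installed:

```bash
# Lint the charts and (against a running cluster) install-test them.
ct lint --config ct.yaml
ct install --config ct.yaml
```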

@@ -1,4 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: caddy-system

@@ -1,307 +0,0 @@
---
apiVersion: v1
kind: Namespace
metadata:
labels:
app: metallb
name: metallb-system
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
labels:
app: metallb
name: speaker
namespace: metallb-system
spec:
allowPrivilegeEscalation: false
allowedCapabilities:
- NET_ADMIN
- NET_RAW
- SYS_ADMIN
fsGroup:
rule: RunAsAny
hostNetwork: true
hostPorts:
- max: 7472
min: 7472
privileged: true
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: metallb
name: controller
namespace: metallb-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: metallb
name: speaker
namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: metallb
name: metallb-system:controller
rules:
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- watch
- update
- apiGroups:
- ''
resources:
- services/status
verbs:
- update
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: metallb
name: metallb-system:speaker
rules:
- apiGroups:
- ''
resources:
- services
- endpoints
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
resourceNames:
- speaker
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app: metallb
name: config-watcher
namespace: metallb-system
rules:
- apiGroups:
- ''
resources:
- configmaps
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app: metallb
name: metallb-system:controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:controller
subjects:
- kind: ServiceAccount
name: controller
namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app: metallb
name: metallb-system:speaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: metallb-system:speaker
subjects:
- kind: ServiceAccount
name: speaker
namespace: metallb-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app: metallb
name: config-watcher
namespace: metallb-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: config-watcher
subjects:
- kind: ServiceAccount
name: controller
- kind: ServiceAccount
name: speaker
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: metallb
component: speaker
name: speaker
namespace: metallb-system
spec:
selector:
matchLabels:
app: metallb
component: speaker
template:
metadata:
annotations:
prometheus.io/port: '7472'
prometheus.io/scrape: 'true'
labels:
app: metallb
component: speaker
spec:
containers:
- args:
- --port=7472
- --config=config
env:
- name: METALLB_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: METALLB_HOST
valueFrom:
fieldRef:
fieldPath: status.hostIP
image: metallb/speaker:v0.8.2
imagePullPolicy: IfNotPresent
name: speaker
ports:
- containerPort: 7472
name: monitoring
resources:
limits:
cpu: 100m
memory: 100Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
- SYS_ADMIN
drop:
- ALL
readOnlyRootFilesystem: true
hostNetwork: true
nodeSelector:
beta.kubernetes.io/os: linux
serviceAccountName: speaker
terminationGracePeriodSeconds: 0
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: metallb
component: controller
name: controller
namespace: metallb-system
spec:
revisionHistoryLimit: 3
selector:
matchLabels:
app: metallb
component: controller
template:
metadata:
annotations:
prometheus.io/port: '7472'
prometheus.io/scrape: 'true'
labels:
app: metallb
component: controller
spec:
containers:
- args:
- --port=7472
- --config=config
image: metallb/controller:v0.8.2
imagePullPolicy: IfNotPresent
name: controller
ports:
- containerPort: 7472
name: monitoring
resources:
limits:
cpu: 100m
memory: 100Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- all
readOnlyRootFilesystem: true
nodeSelector:
beta.kubernetes.io/os: linux
securityContext:
runAsNonRoot: true
runAsUser: 65534
serviceAccountName: controller
terminationGracePeriodSeconds: 0
---
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 172.17.255.1-172.17.255.250

@@ -1,26 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: caddy-ingress-controller-role
namespace: caddy-system
rules:
- apiGroups:
- ""
- "networking.k8s.io"
resources:
- ingresses
- ingresses/status
- secrets
verbs: ["*"]
- apiGroups:
- ""
resources:
- services
- pods
- nodes
- routes
- extensions
verbs:
- list
- get
- watch

@@ -1,13 +0,0 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: caddy-ingress-controller-role-binding
namespace: caddy-system
roleRef:
kind: ClusterRole
name: caddy-ingress-controller-role
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: caddy-ingress-controller
namespace: caddy-system

@@ -1,69 +0,0 @@
# this is an example config map for the caddy ingress controller
apiVersion: v1
kind: ConfigMap
metadata:
name: caddy-config
namespace: caddy-system
data:
config.json: '
{
"storage": {
"system": "secret_store",
"namespace": "caddy-system"
},
"apps": {
"http": {
"servers": {
"ingress_server": {
"listen": [
":80",
":443"
],
"routes": [
{
"match": [
{
"host": [
"danny2.kubed.co"
],
"path": [
"/hello2"
]
}
],
"handle": [
{
"handler": "log",
"filename": "/etc/caddy/access.log"
},
{
"handler": "reverse_proxy",
"load_balance_type": "random",
"upstreams": [
{
"host": "http://example2.default.svc.cluster.local"
}
]
}
],
}
]
}
}
},
"tls": {
"automation": {
"policies": [
{
"management": {
"module": "acme",
"email": "test@test.com"
}
}
]
},
"session_tickets": {}
}
}
}
'

@@ -1,80 +0,0 @@
# uncomment the config map below
# if configuring caddy with a config map
# ensure that you update ./configmap.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: caddy-ingress-controller
namespace: caddy-system
labels:
app: caddy-ingress-controller
chart: "caddy-ingress-controller-v0.1.0"
release: "release-name"
heritage: "Tiller"
version: v0.1.0
spec:
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
app: caddy-ingress-controller
release: "release-name"
template:
metadata:
labels:
app: caddy-ingress-controller
chart: "caddy-ingress-controller-v0.1.2"
release: "release-name"
heritage: "Tiller"
version: v0.1.0
spec:
serviceAccountName: caddy-ingress-controller
volumes:
- name: tmp
emptyDir: {}
# - name: config-volume
# configMap:
# name: caddy-config
containers:
- name: caddy-ingress-controller
image: docker.io/MYACCOUNT/caddy-ingress-controller
imagePullPolicy: IfNotPresent
volumeMounts:
- name: tmp
mountPath: /tmp
# - name: config-volume
# mountPath: /etc/caddy/config.json
# subPath: config.json
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 0
runAsGroup: 0
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: metrics
containerPort: 9090
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# args:
# - -tls
# - -tls-use-staging
# - -email=test@test.com

@@ -1,20 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: caddy-ingress-controller
namespace: caddy-system
labels:
app: caddy-ingress-controller
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app: caddy-ingress-controller
type: "LoadBalancer"

@@ -1,12 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: caddy-system
labels:
app: caddy-ingress-controller
chart: "caddy-ingress-controller-v0.1.0"
release: "release-name"
heritage: "Tiller"
version: v0.1.0
name: caddy-ingress-controller

@@ -1,4 +0,0 @@
apiVersion: v1
description: A helm chart for the Caddy Kubernetes ingress controller
name: caddy-ingress-controller
version: v0.1.0

@@ -1,15 +0,0 @@
{{- if .Values.caddyingresscontroller.rbac.create }}
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: {{ .Values.name }}-role-binding
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: {{ .Values.name }}-role
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: {{ .Values.serviceAccountName }}
namespace: {{ .Release.Namespace }}
{{- end }}

@@ -1,77 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ .Values.name }}
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.name }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.caddyingresscontroller.deployment.labels }}
{{ toYaml .Values.caddyingresscontroller.deployment.labels | indent 4 }}
{{- end }}
spec:
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
app: {{ .Values.name }}
release: {{ .Release.Name | quote }}
template:
metadata:
labels:
app: {{ .Values.name }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.caddyingresscontroller.deployment.labels }}
{{ toYaml .Values.caddyingresscontroller.deployment.labels | indent 8 }}
{{- end }}
spec:
serviceAccountName: {{ .Values.serviceAccountName }}
containers:
- name: {{ .Values.name }}
image: "{{ .Values.caddyingresscontroller.image.name }}:{{ .Values.caddyingresscontroller.image.tag }}"
imagePullPolicy: {{ .Values.caddyingresscontroller.image.pullPolicy }}
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 0
runAsGroup: 0
ports:
- name: http
containerPort: 80
{{- if .Values.minikube }}
hostPort: 80 # optional, required if running in minikube
{{- end }}
- name: https
containerPort: 443
{{- if .Values.minikube }}
hostPort: 443 # optional, required if running in minikube
{{- end }}
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: tmp
mountPath: /tmp
args:
{{- if .Values.autotls }}
- -tls
- -email={{ .Values.email }}
{{- end }}
volumes:
- name: tmp
emptyDir: {}

@@ -1,18 +0,0 @@
{{- if .Values.caddyingresscontroller.serviceAccount.create }}
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: {{ .Release.Namespace }}
labels:
app: {{ .Values.name }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name | quote }}
heritage: {{ .Release.Service | quote }}
{{- if .Values.caddyingresscontroller.serviceAccount.labels }}
{{ toYaml .Values.caddyingresscontroller.serviceAccount.labels | indent 4 }}
{{- end }}
{{- if .Values.caddyingresscontroller.matchLabels }}
{{ toYaml .Values.caddyingresscontroller.matchLabels | indent 4 }}
{{- end }}
name: {{ .Values.serviceAccountName }}
{{- end }}

@@ -1,36 +0,0 @@
# Default values for the caddy ingress controller.
kubernetes:
host: https://kubernetes.default
caddyingresscontroller:
tolerations: {}
deployment:
labels:
version: "v0.1.0"
config:
labels:
version: "v0.1.0"
rbac:
create: true
# Service account config for the agent pods
serviceAccount:
# Specifies whether a ServiceAccount should be created
create: true
labels:
version: "v0.1.0"
# The name of the ServiceAccount to use.
# If not set and create is true, a name is generated using the fullname template
name: caddy-ingress-controller
image:
name: "gcr.io/danny-239313/ingresscontroller"
tag: "v0.1.0"
pullPolicy: IfNotPresent
name: "caddy-ingress-controller"
serviceAccountName: "caddy-ingress-controller"
minikube: false
# If setting autotls the following email value must be set
# to an email address that you manage
autotls: false
email: ""

@@ -1,24 +1,24 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: example
name: example1
labels:
app: example
app: example1
spec:
replicas: 1
selector:
matchLabels:
app: example
app: example1
template:
metadata:
labels:
app: example
app: example1
spec:
containers:
- name: httpecho
image: hashicorp/http-echo
args:
- "-listen=:8080"
- "-text=hello world"
- "-text=hello world 1"
ports:
- containerPort: 8080
- containerPort: 8080

@@ -6,29 +6,29 @@ metadata:
kubernetes.io/ingress.class: caddy
spec:
rules:
- host: example1.MYDOMAIN.TDL
- host: example1.kubernetes.localhost
http:
paths:
- path: /hello2
backend:
serviceName: example2
servicePort: 8080
- path: /hello
backend:
serviceName: example
servicePort: 8080
- host: example2.MYDOMAIN.TDL
http:
paths:
- path: /hello2
backend:
serviceName: example2
servicePort: 8080
- path: /hello1
backend:
serviceName: example
serviceName: example1
servicePort: 8080
- path: /hello2
backend:
serviceName: example2
servicePort: 8080
- host: example2.kubernetes.localhost
http:
paths:
- path: /hello1
backend:
serviceName: example1
servicePort: 8080
- path: /hello2
backend:
serviceName: example2
servicePort: 8080
# tls:
# - secretName: ssl-example2.MYDOMAIN.TDL
# - secretName: ssl-example2.kubernetes.localhost
# hosts:
# - example2.caddy.dev

@@ -1,12 +1,13 @@
kind: Service
apiVersion: v1
metadata:
name: example
name: example1
spec:
type: ClusterIP
selector:
app: example
app: example1
ports:
- protocol: TCP
- name: http
protocol: TCP
port: 8080
targetPort: 8080

@@ -7,6 +7,7 @@ spec:
selector:
app: example2
ports:
- protocol: TCP
- name: http
protocol: TCP
port: 8080
targetPort: 8080

@@ -1,33 +1,30 @@
apiVersion: skaffold/v2alpha1
apiVersion: skaffold/v2beta3
kind: Config
metadata:
name: caddy-ingress-controller
build:
artifacts:
- image: docker.io/MYACCOUNT/caddy-ingress-controller
- image: caddy/ingress
deploy:
helm:
releases:
- name: caddy-ingress-development
namespace: caddy-system
chartPath: charts/caddy-ingress-controller
recreatePods: true
kubectl:
manifests:
- kubernetes/deploy/00_namespace.yaml
- kubernetes/deploy/01_metallb.yaml
- hack/test/example-deployment.yaml
- hack/test/example-ingress.yaml
- hack/test/example-deployment2.yaml
- hack/test/example-service2.yaml
- hack/test/example-service.yaml
- kubernetes/generated/clusterrole.yaml
- kubernetes/generated/clusterrolebinding.yaml
- kubernetes/generated/deployment.yaml
- kubernetes/generated/serviceaccount.yaml
- kubernetes/generated/loadbalancer.yaml
- kubernetes/sample/*.yaml
portForward:
- resourceType: service
resourceName: caddy-ingress-controller
namespace: caddy-system
address: 0.0.0.0
port: 80
localPort: 80
- resourceType: service
resourceName: caddy-ingress-controller
namespace: caddy-system
address: 0.0.0.0
port: 443
localPort: 443
- resourceType: service
resourceName: caddy-ingress-development-caddy-ingress-controller
namespace: caddy-system
address: 0.0.0.0
port: 80
localPort: 8080
- resourceType: service
resourceName: caddy-ingress-development-caddy-ingress-controller
namespace: caddy-system
address: 0.0.0.0
port: 443
localPort: 8443
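
With these `portForward` entries, the ingress is reachable on the forwarded local ports rather than 80/443, so the smoke tests from the contribution guide become, for example:

```bash
curl -H 'Host: example1.kubernetes.localhost' http://127.0.0.1:8080/hello1
curl -H 'Host: example2.kubernetes.localhost' http://127.0.0.1:8080/hello2
```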