Fixed files
Some checks failed
Gitea Actions Demo Training / Explore-Gitea-Actions (push) Failing after 14s
@@ -32,9 +32,9 @@ jobs:
# steps:
# - name: Run another shell script
# run: |
# set -x
# env | sort
# cd ${{ github.workspace }}
# ls
# cd slides
# ./build.sh once

4 .gitignore vendored
@@ -43,8 +43,8 @@ crash.log
crash.*.log

# Exclude all .tfvars files, which are likely to contain sensitive data, such as
# password, private keys, and other secrets. These should not be part of version
# control as they are data points which are potentially sensitive and subject
# to change depending on the environment.
*.tfvars
*.tfvars.json

11 .pre-commit-config.yaml Normal file
@@ -0,0 +1,11 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: v2.3.0
  hooks:
  #- id: check-yaml
  - id: end-of-file-fixer
  - id: trailing-whitespace
#- repo: https://github.com/psf/black
#  rev: 22.10.0
#  hooks:
#  - id: black
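The two enabled hooks above (end-of-file-fixer and trailing-whitespace) appear to be what produced the whitespace-only changes in the rest of this diff. As a minimal sketch of how such a config is typically exercised locally (assuming the standard pre-commit CLI is installed; these commands are not part of the commit itself):

```bash
# Register the hooks from .pre-commit-config.yaml as a git pre-commit hook
pre-commit install

# Apply all configured hooks to every file in the repository once
pre-commit run --all-files
```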
2 .vscode/settings.json vendored
@@ -1,4 +1,4 @@
{
  "ansible.python.interpreterPath": "/opt/homebrew/bin/python3",
  "GitHooks.hooksDirectory": "/Users/marco/Gitea/training/containers/.git/hooks"
}

@@ -33,4 +33,3 @@ subsets:
ports:
- port: 8000
protocol: TCP

@@ -15,4 +15,3 @@ spec:
- http01:
ingress:
class: traefik

@@ -15,4 +15,3 @@ spec:
kind: Coffee
shortNames:
- cof

@@ -18,4 +18,3 @@ spec:
kind: Coffee
shortNames:
- cof

@@ -25,4 +25,3 @@ spec:
- name: docker-socket
hostPath:
path: /var/run/docker.sock

@@ -28,8 +28,8 @@ spec:
- -Dconfig.file=/conf/application.conf
env:
- name: ELASTICSEARCH_PASSWORD
valueFrom:
secretKeyRef:
name: demo-es-elastic-user
key: elastic

@@ -18,4 +18,3 @@ spec:
use-ssl: false
data-volume-size: 10Gi
java-options: "-Xms512m -Xmx512m"

@@ -26,7 +26,7 @@ rules:
resources: ["storageclasses"]
verbs: ["get", "list", "create", "delete", "deletecollection"]
- apiGroups: [""]
resources: ["persistentvolumes", "persistentvolumeclaims", "services", "secrets", "configmaps"]
verbs: ["create", "get", "update", "delete", "list"]
- apiGroups: ["batch"]
resources: ["cronjobs", "jobs"]
@@ -27,4 +27,3 @@ source:
#host: node1
#reportingComponent: ""
#reportingInstance: ""

@@ -33,4 +33,3 @@ source:
component: gitops-sync
#reportingComponent: ""
#reportingInstance: ""

@@ -31,4 +31,3 @@ spec:
containers:
- name: web
image: nginx

@@ -13,4 +13,3 @@ spec:
volumeMounts:
- name: config
mountPath: /usr/local/etc/haproxy/

@@ -26,4 +26,3 @@ spec:
target:
type: Value
value: 0.1

@@ -7,4 +7,3 @@ spec:
containers:
- name: hello
image: nginx

@@ -7,7 +7,7 @@ spec:
- name: git-clone
image: alpine
command: ["sh", "-c"]
args:
- |
apk add --no-cache git &&
git clone git://github.com/jpetazzo/container.training /workspace

@@ -26,4 +26,3 @@ spec:
mountPath: /workspace
volumes:
- name: workspace

@@ -6,10 +6,10 @@ spec:
rules:
- name: create-ingress
match:
resources:
kinds:
- Service
generate:
kind: Ingress
name: "{{request.object.metadata.name}}"
namespace: "{{request.object.metadata.namespace}}"
@@ -6,14 +6,14 @@ spec:
rules:
- name: create-ingress
match:
resources:
kinds:
- Service
preconditions:
- key: "{{request.object.spec.ports[0].name}}"
operator: Equals
value: http
generate:
kind: Ingress
name: "{{request.object.metadata.name}}"
namespace: "{{request.object.metadata.namespace}}"

@@ -6,14 +6,14 @@ spec:
rules:
- name: create-ingress
match:
resources:
kinds:
- Service
preconditions:
- key: http
operator: In
value: "{{request.object.spec.ports[*].name}}"
generate:
kind: Ingress
name: "{{request.object.metadata.name}}"
namespace: "{{request.object.metadata.namespace}}"

@@ -8,14 +8,14 @@ spec:
rules:
- name: create-ingress
match:
resources:
kinds:
- Service
preconditions:
- key: "{{request.object.spec.ports[*].port}}"
operator: AnyIn
value: [ 80 ]
generate:
kind: Ingress
name: "{{request.object.metadata.name}}"
namespace: "{{request.object.metadata.namespace}}"

@@ -11,14 +11,14 @@ spec:
name: ingress-domain-name
namespace: "{{request.object.metadata.namespace}}"
match:
resources:
kinds:
- Service
preconditions:
- key: "{{request.object.spec.ports[0].name}}"
operator: Equals
value: http
generate:
kind: Ingress
name: "{{request.object.metadata.name}}"
namespace: "{{request.object.metadata.namespace}}"
@@ -6,13 +6,13 @@ spec:
rules:
- name: setup-limitrange
match:
resources:
kinds:
- Namespace
generate:
kind: LimitRange
name: default-limitrange
namespace: "{{request.object.metadata.name}}"
data:
spec:
limits:

@@ -31,13 +31,13 @@ spec:
memory: 250Mi
- name: setup-resourcequota
match:
resources:
kinds:
- Namespace
generate:
kind: ResourceQuota
name: default-resourcequota
namespace: "{{request.object.metadata.name}}"
data:
spec:
hard:

@@ -47,17 +47,16 @@ spec:
limits.memory: 20Gi
- name: setup-networkpolicy
match:
resources:
kinds:
- Namespace
generate:
kind: NetworkPolicy
name: default-networkpolicy
namespace: "{{request.object.metadata.name}}"
data:
spec:
podSelector: {}
ingress:
- from:
- podSelector: {}

@@ -28,4 +28,3 @@ spec:
- key: "{{ request.object.metadata.labels.color }}"
operator: NotEquals
value: "{{ request.oldObject.metadata.labels.color }}"

@@ -25,4 +25,3 @@ spec:
message: "Once label color has been added, it cannot be removed."
deny:
conditions:

@@ -6,10 +6,10 @@ spec:
rules:
- name: create-role
match:
resources:
kinds:
- Certificate
generate:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
name: "{{request.object.metadata.name}}"

@@ -26,10 +26,10 @@ spec:
- "{{request.object.metadata.name}}"
- name: create-rolebinding
match:
resources:
kinds:
- Certificate
generate:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
name: "{{request.object.metadata.name}}"

@@ -43,4 +43,3 @@ spec:
- kind: ServiceAccount
name: default
namespace: "{{request.object.metadata.namespace}}"
@@ -155,6 +155,3 @@ data:
containers:
- name: helper-pod
image: busybox

@@ -11,4 +11,3 @@ spec:
- podSelector:
matchLabels:
run: testcurl

@@ -7,4 +7,3 @@ spec:
matchLabels:
app: testweb
ingress: []

@@ -18,4 +18,3 @@ spec:
app: webui
ingress:
- from: []

@@ -18,4 +18,3 @@ spec:
- name: www
mountPath: /www/
restartPolicy: OnFailure

@@ -21,4 +21,3 @@ spec:
volumeMounts:
- mountPath: /mnt/storage
name: storage

@@ -339,7 +339,7 @@ spec:
image: portworx/oci-monitor:2.5.1
imagePullPolicy: Always
args:
["-c", "px-workshop", "-s", "/dev/loop4", "-secret_type", "k8s", "-j", "auto", "-b",
"-x", "kubernetes"]
env:
- name: "AUTO_NODE_RECOVERY_TIMEOUT_IN_SECS"

@@ -348,7 +348,7 @@ spec:
value: "v4"
- name: CSI_ENDPOINT
value: unix:///var/lib/kubelet/plugins/pxd.portworx.com/csi.sock

livenessProbe:
periodSeconds: 30
initialDelaySeconds: 840 # allow image pull in slow networks

@@ -37,4 +37,3 @@ spec:
resources:
requests:
storage: 1Gi

@@ -36,4 +36,3 @@ rules:
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['privileged']

@@ -35,4 +35,3 @@ rules:
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['restricted']

@@ -17,4 +17,4 @@ spec:
# kind: PersistentVolumeClaim
# apiVersion: v1
# namespace: default
# name: my-pvc-XYZ45

@@ -12,4 +12,3 @@ spec:
configMapKeyRef:
name: registry
key: http.addr

@@ -8,4 +8,3 @@ provisioner: kubernetes.io/portworx-volume
parameters:
repl: "2"
priority_io: "high"
@@ -69,7 +69,7 @@ add_namespace() {
echo ---
kubectl create serviceaccount -n kubernetes-dashboard cluster-admin \
-o yaml --dry-run=client \
#
echo ---
cat <<EOF
apiVersion: v1

@@ -30,4 +30,3 @@ subjects:
- kind: ServiceAccount
name: jean.doe
namespace: users

@@ -61,4 +61,3 @@ spec:
operator: In
values:
- node4

@@ -16,4 +16,4 @@ spec:
selector:
app: #@ data.values.name
type: #@ data.values.type
#@ end

@@ -1,9 +1,9 @@
#@ load("@ytt:data", "data")
#@ load("@ytt:library", "library")
#@ load("@ytt:template", "template")
#@
#@ component = library.get("component")
#@
#@ defaults = {}
#@ for name in data.values:
#@ if name.startswith("_"):

@@ -16,4 +16,4 @@ spec:
selector:
app: #@ data.values.name
type: #@ data.values.type
#@ end

@@ -1,9 +1,9 @@
#@ load("@ytt:data", "data")
#@ load("@ytt:library", "library")
#@ load("@ytt:template", "template")
#@
#@ component = library.get("component")
#@
#@ defaults = {}
#@ for name in data.values:
#@ if name.startswith("_"):

@@ -273,7 +273,7 @@ You should see one or more versions of Python 3. If you don't,
install it with `brew install python`.

2) Verify that `python` points to Python3.

```
ls -la /usr/local/bin/python
```

@@ -5,4 +5,4 @@
"variables": {},
"resources": [],
"outputs": {}
}

@@ -1,4 +1,4 @@
#!/bin/bash

resource_group="workshop-rg"

@@ -1,4 +1,4 @@
#!/bin/bash

# time ./workshopctl start \
# --infra infra/azure \

@@ -1,4 +1,4 @@
#!/bin/bash

resource_group="workshop-rg"

@@ -1,2 +1,2 @@
INFRACLASS=terraform
TERRAFORM=azure

@@ -21,4 +21,3 @@ export OS_FLAVOR=s1-4
export OS_IMAGE=896c5f54-51dc-44f0-8c22-ce99ba7164df
# You can create a key with `openstack keypair create --public-key ~/.ssh/id_rsa.pub containertraining`
export OS_KEY=containertraining
@@ -92,4 +92,4 @@ need_settings() {
need_login_password() {
USER_LOGIN=$(yq -r .user_login < tags/$TAG/settings.yaml)
USER_PASSWORD=$(yq -r .user_password < tags/$TAG/settings.yaml)
}

@@ -59,7 +59,7 @@ _cmd_clean() {
info "Removing $TAG..."
rm -rf "$TAG"
fi
done
}

_cmd createuser "Create the user that students will use"

@@ -291,7 +291,7 @@ EOF
COMPOSE_VERSION=v2.11.1
# shellcheck disable=SC2016
COMPOSE_PLATFORM='linux-$(uname -m)'

# Just in case you need Compose 1.X, you can use the following lines.
# (But it will probably only work for x86_64 machines.)
#COMPOSE_VERSION=1.29.2

@@ -19,14 +19,14 @@ where:
-i <image>
-k <kernel>
<sorting> is one of:
-R by region
-N by name
-V by version
-A by arch
-T by type
-D by date
-I by image
-K by kernel
<options> can be:
-q just show AMI

@@ -27,4 +27,4 @@ infra_opensg() {

infra_disableaddrchecks() {
warning "infra_disableaddrchecks is unsupported on $INFRACLASS."
}

@@ -54,7 +54,7 @@ infra_stop() {
info "Counting instances..."
linode_get_ids_by_tag $TAG | wc -l
info "Deleting instances..."
linode_get_ids_by_tag $TAG |
xargs -n1 -P10 \
linode-cli linodes delete
}

@@ -31,8 +31,8 @@ infra_start() {

infra_stop() {
info "Counting instances..."
oscli_get_instances_json $TAG |
jq -r .[].Name |
wc -l
info "Deleting instances..."
oscli_get_instances_json $TAG |

@@ -35,7 +35,7 @@ infra_stop() {
info "Counting instances..."
scw_get_ids_by_tag $TAG | wc -l
info "Deleting instances..."
scw_get_ids_by_tag $TAG |
xargs -n1 -P10 \
scw instance server delete zone=${SCW_ZONE} force-shutdown=true with-ip=true
}

@@ -39,4 +39,3 @@

</body>
</html>
@@ -146,7 +146,7 @@ div {
*/
/**/
width: 33%;
/**/
}

p {

@@ -146,7 +146,7 @@ div {
*/
/**/
width: 33%;
/**/
}

p {

@@ -10,4 +10,4 @@ terraform {

provider "azurerm" {
features {}
}

@@ -2,4 +2,3 @@ resource "openstack_compute_keypair_v2" "ssh_deploy_key" {
name = var.prefix
public_key = file("~/.ssh/id_rsa.pub")
}

@@ -19,5 +19,3 @@ resource "openstack_networking_router_interface_v2" "router_internal" {
router_id = openstack_networking_router_v2.router.id
subnet_id = openstack_networking_subnet_v2.internal.id
}

@@ -9,4 +9,3 @@ resource "openstack_networking_secgroup_rule_v2" "full_access" {
remote_ip_prefix = "0.0.0.0/0"
security_group_id = openstack_networking_secgroup_v2.full_access.id
}

2 slides/.vscode/settings.json vendored
@@ -1,3 +1,3 @@
{
  "ansible.python.interpreterPath": "/opt/homebrew/bin/python3"
}

@@ -1,4 +1,4 @@
FROM alpine:3.17
RUN apk add --no-cache py3-pip git zip inotify-tools
COPY requirements.txt .
RUN pip3 install -r requirements.txt

@@ -5,4 +5,3 @@ https://www.youtube.com/watch?v=MHv6cWjvQjM&list=PLkA60AVN3hh-biQ6SCtBJ-WVTyBmmY

Cilium: Network and Application Security with BPF and XDP
https://www.youtube.com/watch?v=ilKlmTDdFgk&list=PLkA60AVN3hh-biQ6SCtBJ-WVTyBmmYho8&index=9

@@ -159,7 +159,7 @@ total 919644
```
]

You could also do a `tar tvf python_image.tar`

---

@@ -257,4 +257,4 @@ The push refers to repository [node1:443/python]
974e52a24adf: Waiting
latest: digest: sha256:cbaa654007e0c2f2e2869ae69f9e9924826872d405c02647f65f5a72b597e853 size: 2007
```
]
@@ -8,7 +8,7 @@

var io = require('socket.io-client');
var socket = io('http://localhost:3000');
socket.on('connect_error', function(){
console.log('connection error');
socket.close();
});

@@ -18,4 +18,3 @@ socket.on('slide change', function (n) {
slideshow.gotoSlide(n);
leader = true;
});

@@ -4,4 +4,3 @@
tmux set-option -g status-left ""
tmux set-option -g status-right ""
tmux set-option -g status-style bg=cyan

@@ -4,7 +4,7 @@ set -e
build_slides() {
./index.py
for YAML in *.yml; do
./markmaker.py $YAML > $YAML.html || {
rm $YAML.html
break
}

@@ -30,7 +30,7 @@ forever)
echo >&2 "First install 'inotifywait' with apt, brew, etc."
exit
fi

while true; do
inotifywait -e modify -e delete -e create -r .
build_slides

@@ -40,4 +40,4 @@ forever)
*)
echo "$0 <once|forever>"
;;
esac

@@ -13,4 +13,3 @@ services:
- ..:/repo
working_dir: /repo/slides
command: ./build.sh forever

@@ -235,7 +235,7 @@ instructions.

It also affects `CMD` and `ENTRYPOINT`, since it sets the working
directory used when starting the container.

```dockerfile
WORKDIR /src
```

@@ -5,7 +5,7 @@ In this section, we will create our first container image.
It will be a basic distribution image, but we will pre-install
the package `figlet`.

We will:

* Create a container from a base image.

@@ -124,11 +124,11 @@ Let's run this image:
```bash
$ docker run -it <newImageId>
root@fcfb62f0bfde:/# figlet hello
_ _ _
| |__ ___| | | ___
| '_ \ / _ \ | |/ _ \
| | | | __/ | | (_) |
|_| |_|\___|_|_|\___/
```

It works! 🎉
@@ -284,11 +284,11 @@ The resulting image is not different from the one produced manually.
```bash
$ docker run -ti figlet
root@91f3c974c9a1:/# figlet hello
_ _ _
| |__ ___| | | ___
| '_ \ / _ \ | |/ _ \
| | | | __/ | | (_) |
|_| |_|\___|_|_|\___/
```

@@ -232,7 +232,7 @@ Sometimes, binary releases be like:
Linux_arm64.tar.gz
Linux_ppc64le.tar.gz
Linux_s390x.tar.gz
Linux_x86_64.tar.gz
```

This needs a bit of custom mapping.

@@ -71,11 +71,11 @@ And run it:

```bash
$ docker run figlet
_ _ _
| | | | | |
| | _ | | | | __
|/ \ |/ |/ |/ / \_
| |_/|__/|__/|__/\__/
```

---

@@ -87,7 +87,7 @@ If we want to get a shell into our container (instead of running

```bash
$ docker run -it figlet bash
root@7ac86a641116:/#
```

* We specified `bash`.

@@ -105,10 +105,10 @@ In other words, we would like to be able to do this:

```bash
$ docker run figlet salut
_
| |
, __, | | _|_
/ \_/ | |/ | | |
\/ \_/|_/|__/ \_/|_/|_/
```

@@ -173,10 +173,10 @@ And run it:

```bash
$ docker run figlet salut
_
| |
, __, | | _|_
/ \_/ | |/ | | |
\/ \_/|_/|__/ \_/|_/|_/
```

@@ -232,10 +232,10 @@ Run it without parameters:

```bash
$ docker run myfiglet
_ _ _ _
| | | | | | | | |
| | _ | | | | __ __ ,_ | | __|
|/ \ |/ |/ |/ / \_ | | |_/ \_/ | |/ / |
| |_/|__/|__/|__/\__/ \/ \/ \__/ |_/|__/\_/|_/
```
@@ -247,11 +247,11 @@ Now let's pass extra arguments to the image.

```bash
$ docker run myfiglet hola mundo
_ _
| | | | |
| | __ | | __, _ _ _ _ _ __| __
|/ \ / \_|/ / | / |/ |/ | | | / |/ | / | / \_
| |_/\__/ |__/\_/|_/ | | |_/ \_/|_/ | |_/\_/|_/\__/
```

We overrode `CMD` but still used `ENTRYPOINT`.

@@ -269,7 +269,7 @@ We use the `--entrypoint` parameter:

```bash
$ docker run -it --entrypoint bash myfiglet
root@6027e44e2955:/#
```

---

@@ -278,7 +278,7 @@ For the full list, check: https://docs.docker.com/compose/compose-file/

`frontcopy_www`, `frontcopy_www_1`, `frontcopy_db_1`

- Alternatively, use `docker-compose -p frontcopy`

(to set the `--project-name` of a stack, which default to the dir name)

@@ -292,10 +292,10 @@ We have `ps`, `docker ps`, and similarly, `docker-compose ps`:

```bash
$ docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------
trainingwheels_redis_1 /entrypoint.sh red Up 6379/tcp
trainingwheels_www_1 python counter.py Up 0.0.0.0:8000->5000/tcp
```

Shows the status of all the containers of our stack.

@@ -378,7 +378,7 @@ Use `docker-compose down -v` to remove everything including volumes.
- `docker-compose down -v`/`--volumes` deletes volumes

(but **not** `docker-compose down && docker-compose down -v`!)

---

## Managing volumes explicitly

@@ -220,4 +220,3 @@ We've learned how to:

* Create links between containers.
* Use names and links to communicate across containers.

@@ -30,7 +30,7 @@ Note: strictly speaking, the Docker API is not fully REST.

Some operations (e.g. dealing with interactive containers
and log streaming) don't fit the REST model.

---

class: pic

@@ -92,7 +92,7 @@ $ docker run -d -P nginx
- In other scenarios (`docker-machine`, local VM...):

*use the IP address of the Docker VM*

---

## Connecting to our web server (GUI)

@@ -1,3 +1,3 @@
# Building containers from scratch

(This is a "bonus section" done if time permits.)

@@ -246,7 +246,7 @@ If you see the abbreviation "thinp" it stands for "thin provisioning".

(Instead of the block level for Device Mapper.)

- In practice, we create a "subvolume" and
later take a "snapshot" of that subvolume.

Imagine: `mkdir` with Super Powers and `cp -a` with Super Powers.

@@ -275,7 +275,7 @@ class: extra-details

- You can run out of chunks (and get `No space left on device`)
even though `df` shows space available.

(Because chunks are only partially allocated.)

- Quick fix:

@@ -93,7 +93,7 @@ Success!
* It is possible to do e.g. `COPY . .`

(but it might require some extra precautions to avoid copying too much)

* In older Dockerfiles, you might see the `ADD` command; consider it deprecated

(it is similar to `COPY` but can automatically extract archives)

@@ -252,7 +252,7 @@ class: extra-details
* No re-usable components, APIs, tools.
<br/>(At best: VM abstractions, e.g. libvirt.)

Analogy:

* Shipping containers are not just steel boxes.
* They are steel boxes that are a standard size, with the same hooks and holes.
@@ -308,19 +308,19 @@ That entrypoint will generally be a script, performing any combination of:
```dockerfile
#!/bin/sh
set -e

# first arg is '-f' or '--some-option'
# or first arg is 'something.conf'
if [ "${1#-}" != "$1" ] || [ "${1%.conf}" != "$1" ]; then
set -- redis-server "$@"
fi

# allow the container to be started with '--user'
if [ "$1" = 'redis-server' -a "$(id -u)" = '0' ]; then
chown -R redis .
exec su-exec redis "$0" "$@"
fi

exec "$@"
```

@@ -153,7 +153,7 @@ Would we give the same answers to the questions on the previous slide?

## The CNCF

- Non-profit, part of the Linux Foundation; founded in December 2015.

*The Cloud Native Computing Foundation builds sustainable ecosystems and fosters
a community around a constellation of high-quality projects that orchestrate

@@ -170,4 +170,3 @@ Would we give the same answers to the questions on the previous slide?
class: pic



@@ -99,11 +99,11 @@ The `figlet` program takes a message as parameter.

```bash
root@04c0bb0a6c07:/# figlet hello
_ _ _
| |__ ___| | | ___
| '_ \ / _ \ | |/ _ \
| | | | __/ | | (_) |
|_| |_|\___|_|_|\___/
```

Beautiful! 😍

@@ -192,7 +192,7 @@ Now try to run `figlet`. Does that work?
## Starting another container

What if we start a new container, and try to run `figlet` again?

```bash
$ docker run -it ubuntu
root@b13c164401fb:/# figlet

@@ -248,7 +248,7 @@ We will change our Dockerfile to:

* add the `hello` binary to the second stage

* make sure that `CMD` is in the second stage

The resulting Dockerfile is on the next slide.

@@ -114,7 +114,7 @@ $ docker inspect <containerID> | jq .

## Using `--format`

You can specify a format string, which will be parsed by
Go's text/template package.

```bash

@@ -26,8 +26,8 @@ middleware, and services.*

--

*[...] orchestration is often discussed in the context of
__service-oriented architecture__, __virtualization__, provisioning,
Converged Infrastructure and __dynamic datacenter__ topics.*

--

@@ -53,15 +53,15 @@ What does that really mean?
## Example 1: dynamic cloud instances

- Every night, scale down

(by shutting down extraneous replicated instances)

- Every morning, scale up

(by deploying new copies)

- "Pay for what you use"

(i.e. save big $$$ here)

---
@@ -71,7 +71,7 @@ What does that really mean?
How do we implement this?

- Crontab

- Autoscaling (save even bigger $$$)

That's *relatively* easy.

@@ -113,11 +113,11 @@ Now, how are things for our IAAS provider?
- If only we could turn off unused servers during the night...

- Problem: we can only turn off a server if it's totally empty!

(i.e. all VMs on it are stopped/moved)

- Solution: *migrate* VMs and shutdown empty servers

(e.g. combine two hypervisors with 40% load into 80%+0%,
<br/>and shut down the one at 0%)

@@ -132,11 +132,11 @@ How do we implement this?
- Start hosts again when capacity gets low

- Ability to "live migrate" VMs

(Xen already did this 10+ years ago)

- Rebalance VMs on a regular basis

- what if a VM is stopped while we move it?
- should we allow provisioning on hosts involved in a migration?

@@ -148,7 +148,7 @@ How do we implement this?

According to Wikipedia (again):

*In computing, scheduling is the method by which threads,
processes or data flows are given access to system resources.*

The scheduler is concerned mainly with:

@@ -439,4 +439,4 @@ It depends on:
???

:EN:- Orchestration overview
:FR:- Survol de techniques d'orchestration

@@ -215,7 +215,7 @@ On the other hand, the application will never be slowed down because of swap.
- Most storage drivers do not support limiting the disk usage of containers.

(With the exception of devicemapper, but the limit cannot be set easily.)

- This means that a single container could exhaust disk space for everyone.

- In practice, however, this is not a concern, because:

@@ -31,7 +31,7 @@ Analogy: attaching to a container is like plugging a keyboard and screen to a ph
* The "detach" sequence is `^P^Q`.

* Otherwise you can detach by killing the Docker client.

(But not by hitting `^C`, as this would deliver `SIGINT` to the container.)

What does `-it` stand for?

@@ -23,9 +23,9 @@ At the end of this section, you will be able to:
Remember that a container must run on the kernel of the OS it's on.

- This is both a benefit and a limitation.

(It makes containers lightweight, but limits them to a specific kernel.)

- At its launch in 2013, Docker did only support Linux, and only on amd64 CPUs.

- Since then, many platforms and OS have been added.

@@ -45,10 +45,10 @@ Remember that a container must run on the kernel of the OS it's on.
- Early 2016, Windows 10 gained support for running Windows binaries in containers.

- These are known as "Windows Containers"

- Win 10 expects Docker for Windows to be installed for full features

- These must run in Hyper-V mini-VM's with a Windows Server x64 kernel

- No "scratch" containers, so use "Core" and "Nano" Server OS base layers

@@ -161,4 +161,4 @@ Places to Look:

- Docker Captain [Nicholas Dille](https://dille.name/blog/)

- Docker Captain [Stefan Scherer](https://stefanscherer.github.io/)
Some files were not shown because too many files have changed in this diff.