Namespaces

- We cannot have two resources with the same name

  (Or can we...?)

--

- We cannot have two resources of the same type with the same name

  (But it's OK to have an `rng` service, an `rng` deployment, and an `rng` daemon set!)

--

- We cannot have two resources of the same type with the same name in the same namespace

  (But it's OK to have e.g. two `rng` services in different namespaces!)

--

- In other words: the tuple (type, name, namespace) needs to be unique

  (In the resource YAML, the type is called `Kind`)
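The uniqueness rule can be illustrated with a minimal sketch (hypothetical manifests; the `rng` name and the `blue`/`green` namespaces echo examples used elsewhere in this deck):

```yaml
# Two Services may share the name "rng" because they live in different
# namespaces: (Service, rng, blue) and (Service, rng, green) are distinct tuples.
apiVersion: v1
kind: Service
metadata:
  name: rng
  namespace: blue
spec:
  selector:
    app: rng
  ports:
  - port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: rng
  namespace: green
spec:
  selector:
    app: rng
  ports:
  - port: 80
```

Creating a *second* Service named `rng` in the `blue` namespace, however, would be rejected.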
Pre-existing namespaces

- If we deploy a cluster with `kubeadm`, we have three namespaces:

  - `default` (for our applications)

  - `kube-system` (for the control plane)

  - `kube-public` (contains one secret used for cluster discovery)

--

- If we deploy differently, we may have different namespaces
Creating namespaces

- Creating a namespace is done with the `kubectl create namespace` command:

```bash
kubectl create namespace blue
```

- We can also get fancy and use a very minimal YAML snippet, e.g.:

```bash
kubectl apply -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: blue
EOF
```

- The two methods above are identical

- If we are using a tool like Helm, it will create namespaces automatically
Using namespaces

- We can pass a `-n` or `--namespace` flag to most `kubectl` commands:

```bash
kubectl -n blue get svc
```

- We can also use contexts

- A context is a (user, cluster, namespace) tuple

- We can manipulate contexts with the `kubectl config` command
Creating a context

- We are going to create a context for the `blue` namespace

.exercise[

- View existing contexts to see the cluster name and the current user:

```bash
kubectl config get-contexts
```

- Create a new context:

```bash
kubectl config set-context blue --namespace=blue \
    --cluster=kubernetes --user=kubernetes-admin
```

]

We have created a context; but this is just a set of configuration values.

The namespace doesn't exist yet.
Using a context

- Let's switch to our new context and deploy the DockerCoins chart

.exercise[

- Use the `blue` context:

```bash
kubectl config use-context blue
```

- Deploy DockerCoins:

```bash
helm install dockercoins
```

]

In the last command line, `dockercoins` is just the local path where we created our Helm chart before.
Viewing the deployed app

- Let's see if our Helm chart worked correctly!

.exercise[

- Retrieve the port number allocated to the `webui` service:

```bash
kubectl get svc webui
```

- Point our browser to http://X.X.X.X:3xxxx

]

Note: it might take a minute or two for the app to be up and running.
Namespaces and isolation

- Namespaces do not provide isolation

- A pod in the `green` namespace can communicate with a pod in the `blue` namespace

- A pod in the `default` namespace can communicate with a pod in the `kube-system` namespace

- CoreDNS uses a different subdomain for each namespace

- Example: from any pod in the cluster, you can connect to the Kubernetes API with:

  `https://kubernetes.default.svc.cluster.local:443/`
Isolating pods

- Actual isolation is implemented with network policies

- Network policies are resources (like deployments, services, namespaces...)

- Network policies specify which flows are allowed:

  - between pods

  - from pods to the outside world

  - and vice-versa
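As a minimal sketch of a network policy resource (the name is hypothetical, and the `blue` namespace follows the earlier examples): an empty pod selector targets every pod in the namespace, and since no ingress rules are listed, all inbound flows to those pods are blocked.

```yaml
# Hypothetical "default deny" policy: selects all pods in the namespace
# and allows no ingress traffic (the rule list is empty).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: blue
spec:
  podSelector: {}    # empty selector = every pod in the namespace
  policyTypes:
  - Ingress
```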
Network policies overview

- We can create as many network policies as we want

- Each network policy has:

  - a pod selector: "which pods are targeted by the policy?"

  - lists of ingress and/or egress rules: "which peers and ports are allowed or blocked?"

- If a pod is not targeted by any policy, traffic is allowed by default

- If a pod is targeted by at least one policy, traffic must be allowed explicitly
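To make the pod selector and rule lists concrete, here is a sketch of a policy allowing a single flow (the policy name and the `app=webui` / `name=green` labels are illustrative assumptions, not something this deck defines):

```yaml
# Hypothetical policy: pods labeled app=webui in the blue namespace
# accept TCP/80 traffic only from pods in namespaces labeled name=green.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-green-to-webui
  namespace: blue
spec:
  podSelector:
    matchLabels:
      app: webui
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: green
    ports:
    - protocol: TCP
      port: 80
```

Because the `webui` pods are now targeted by at least one policy, any ingress traffic not matched by this rule is blocked.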
More about network policies

- This remains a high-level overview of network policies

- For more details, check: