🪓 Split "kubectl expose" and "service types"

This commit is contained in:
Jérôme Petazzoni
2023-01-13 17:50:22 +01:00
parent b984049603
commit e6eb157cc6
6 changed files with 629 additions and 506 deletions

View File

@@ -18,6 +18,108 @@
---
## Running containers with open ports
- Since `ping` doesn't have anything to connect to, we'll have to run something else
- We are going to use `jpetazzo/color`, a tiny HTTP server written in Go
- `jpetazzo/color` listens on port 80
- It serves a page showing the pod's name
(this will be useful when checking load balancing behavior)
- We could also use the `nginx` official image instead
(but we wouldn't be able to tell the backends from each other)
---
## Running our HTTP server
- We will create a deployment with `kubectl create deployment`
- This will create a Pod running our HTTP server
.lab[
- Create a deployment named `blue`:
```bash
kubectl create deployment blue --image=jpetazzo/color
```
]
---
## Connecting to the HTTP server
- Let's connect to the HTTP server directly
(just to make sure everything works fine; we'll add the Service later)
.lab[
- Get the IP address of the Pod:
```bash
kubectl get pods -o wide
```
- Send an HTTP request to the Pod:
```bash
curl http://`IP-ADDRESS`
```
]
You should see a response from the Pod.
---
class: extra-details
## Running with a local cluster
If you're running with a local cluster (Docker Desktop, KinD, minikube...),
you might get a connection timeout (or a message like "no route to host")
because the Pod isn't reachable directly from your local machine.
In that case, you can test the connection to the Pod by running a shell
*inside* the cluster:
```bash
kubectl run -it --rm my-test-pod --image=fedora
```
Then run `curl` in that Pod.
---
## The Pod doesn't have a "stable identity"
- The IP address that we used above isn't "stable"
(if the Pod gets deleted, the replacement Pod will have a different address)
.lab[
- Check the IP addresses of running Pods:
```bash
watch kubectl get pods -o wide
```
- Delete the Pod:
```bash
kubectl delete pod `blue-xxxxxxxx-yyyyy`
```
- Check that the replacement Pod has a different IP address
]
---
## Services in a nutshell
- Services give us a *stable endpoint* to connect to a pod or a group of pods
@@ -36,6 +138,164 @@
---
## Exposing our deployment
- Let's create a Service for our Deployment
.lab[
- Expose the HTTP port of our server:
```bash
kubectl expose deployment blue --port=80
```
- Look up which IP address was allocated:
```bash
kubectl get service
```
]
- By default, this created a `ClusterIP` service
(we'll discuss later the different types of services)
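For reference, this is roughly the manifest that `kubectl expose` generated for us (a sketch; `kubectl create deployment` labeled our Pods with `app=blue`, which is what the selector matches):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: ClusterIP    # the default type
  selector:
    app: blue        # label set by "kubectl create deployment"
  ports:
  - port: 80         # port of the Service
    targetPort: 80   # port of the Pods (defaults to "port")
```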
---
class: extra-details
## Services are layer 4 constructs
- Services can have IP addresses, but they are still *layer 4*
(i.e. a service is not just an IP address; it's an IP address + protocol + port)
- As a result: you *have to* indicate the port number for your service
(with some exceptions, like `ExternalName` or headless services, covered later)
---
## Testing our service
- We will now send a few HTTP requests to our Pod
.lab[
- Let's obtain the IP address that was allocated for our service, *programmatically:*
```bash
CLUSTER_IP=$(kubectl get svc blue -o go-template='{{ .spec.clusterIP }}')
```
<!--
```hide kubectl wait deploy blue --for condition=available```
```key ^D```
```key ^C```
-->
- Send a few requests:
```bash
for i in $(seq 10); do curl http://$CLUSTER_IP; done
```
]
---
## A *stable* endpoint
- Let's see what happens when the Pod has a problem
.lab[
- Keep sending requests to the Service address:
```bash
while sleep 0.3; do curl http://$CLUSTER_IP; done
```
- Meanwhile, delete the Pod:
```bash
kubectl delete pod `blue-xxxxxxxx-yyyyy`
```
]
- There might be a short interruption when we delete the pod...
- ...But requests will keep flowing after that (without requiring manual intervention)
---
## Load balancing
- The Service will also act as a load balancer
(if there are multiple Pods in the Deployment)
.lab[
- Scale up the Deployment:
```bash
kubectl scale deployment blue --replicas=3
```
- Send a bunch of requests to the Service:
```bash
for i in $(seq 20); do curl http://$CLUSTER_IP; done
```
]
- Our requests are load balanced across the Pods!
---
## DNS integration
- Kubernetes provides an internal DNS resolver
- The resolver maps service names to their internal addresses
- By default, this only works *inside Pods* (not from the nodes themselves)
.lab[
- Get a shell in a Pod:
```bash
kubectl run --rm -it --image=fedora test-dns-integration
```
- Try to resolve the `blue` Service from the Pod:
```bash
curl blue
```
]
---
class: extra-details
## Under the hood...
- Check the content of `/etc/resolv.conf` inside a Pod
- It will have `nameserver X.X.X.X` (e.g. 10.96.0.10)
- Now check `kubectl get service kube-dns --namespace=kube-system`
- ...It's the same address! 😉
- The FQDN of a service is actually:
`<service-name>.<namespace>.svc.<cluster-domain>`
- `<cluster-domain>` defaults to `cluster.local`
- And the `search` includes `<namespace>.svc.<cluster-domain>`
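Concretely, `/etc/resolv.conf` in a Pod of the `default` namespace will look something like this (the nameserver address varies from cluster to cluster):
```
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```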
---
## Advantages of services
- We don't need to look up the IP address of the pod(s)
@@ -54,510 +314,10 @@
(when a pod fails, the service seamlessly sends traffic to its replacement)
---
## Many kinds and flavors of service
- There are different types of services:
`ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName`
- There are also *headless services*
- Services can also have optional *external IPs*
- There is also another resource type called *Ingress*
(specifically for HTTP services)
- Wow, that's a lot! Let's start with the basics ...
---
## `ClusterIP`
- It's the default service type
- A virtual IP address is allocated for the service
(in an internal, private range; e.g. 10.96.0.0/12)
- This IP address is reachable only from within the cluster (nodes and pods)
- Our code can connect to the service using the original port number
- Perfect for internal communication, within the cluster
---
class: pic
![](images/kubernetes-services/11-CIP-by-addr.png)
---
class: pic
![](images/kubernetes-services/12-CIP-by-name.png)
---
class: pic
![](images/kubernetes-services/13-CIP-both.png)
---
class: pic
![](images/kubernetes-services/14-CIP-headless.png)
---
## `LoadBalancer`
- An external load balancer is allocated for the service
(typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...)
- This is available only when the underlying infrastructure provides some kind of
"load balancer as a service"
- Each service of that type will typically cost a little bit of money
(e.g. a few cents per hour on AWS or GCE)
- Ideally, traffic would flow directly from the load balancer to the pods
- In practice, it will often flow through a `NodePort` first
---
class: pic
![](images/kubernetes-services/31-LB-no-service.png)
---
class: pic
![](images/kubernetes-services/32-LB-plus-cip.png)
---
class: pic
![](images/kubernetes-services/33-LB-plus-lb.png)
---
class: pic
![](images/kubernetes-services/34-LB-internal-traffic.png)
---
class: pic
![](images/kubernetes-services/35-LB-pending.png)
---
class: pic
![](images/kubernetes-services/36-LB-ccm.png)
---
class: pic
![](images/kubernetes-services/37-LB-externalip.png)
---
class: pic
![](images/kubernetes-services/38-LB-external-traffic.png)
---
class: pic
![](images/kubernetes-services/39-LB-all-traffic.png)
---
class: pic
![](images/kubernetes-services/41-NP-why.png)
---
class: pic
![](images/kubernetes-services/42-NP-how-1.png)
---
class: pic
![](images/kubernetes-services/43-NP-how-2.png)
---
class: pic
![](images/kubernetes-services/44-NP-how-3.png)
---
class: pic
![](images/kubernetes-services/45-NP-how-4.png)
---
class: pic
![](images/kubernetes-services/46-NP-how-5.png)
---
class: pic
![](images/kubernetes-services/47-NP-only.png)
---
## `NodePort`
- A port number is allocated for the service
(by default, in the 30000-32767 range)
- That port is made available *on all our nodes* and anybody can connect to it
(we can connect to any node on that port to reach the service)
- Our code needs to be changed to connect to that new port number
- Under the hood: `kube-proxy` sets up a bunch of `iptables` rules on our nodes
- Sometimes, it's the only available option for external traffic
(e.g. most clusters deployed with kubeadm or on-premises)
---
## Running containers with open ports
- Since `ping` doesn't have anything to connect to, we'll have to run something else
- We could use the `nginx` official image, but ...
... we wouldn't be able to tell the backends from each other!
- We are going to use `jpetazzo/color`, a tiny HTTP server written in Go
- `jpetazzo/color` listens on port 80
- It serves a page showing the pod's name
(this will be useful when checking load balancing behavior)
---
## Creating a deployment for our HTTP server
- We will create a deployment with `kubectl create deployment`
- Then we will scale it with `kubectl scale`
.lab[
- In another window, watch the pods (to see when they are created):
```bash
kubectl get pods -w
```
<!--
```wait NAME```
```tmux split-pane -h```
-->
- Create a deployment for this very lightweight HTTP server:
```bash
kubectl create deployment blue --image=jpetazzo/color
```
- Scale it to 10 replicas:
```bash
kubectl scale deployment blue --replicas=10
```
]
---
## Exposing our deployment
- We'll create a default `ClusterIP` service
.lab[
- Expose the HTTP port of our server:
```bash
kubectl expose deployment blue --port=80
```
- Look up which IP address was allocated:
```bash
kubectl get service
```
]
---
## Services are layer 4 constructs
- You can assign IP addresses to services, but they are still *layer 4*
(i.e. a service is not an IP address; it's an IP address + protocol + port)
- This is caused by the current implementation of `kube-proxy`
(it relies on mechanisms that don't support layer 3)
- As a result: you *have to* indicate the port number for your service
(with some exceptions, like `ExternalName` or headless services, covered later)
---
## Testing our service
- We will now send a few HTTP requests to our pods
.lab[
- Let's obtain the IP address that was allocated for our service, *programmatically:*
```bash
IP=$(kubectl get svc blue -o go-template='{{ .spec.clusterIP }}')
```
<!--
```hide kubectl wait deploy blue --for condition=available```
```key ^D```
```key ^C```
-->
- Send a few requests:
```bash
curl http://$IP:80/
```
]
--
Try it a few times! Our requests are load balanced across multiple pods.
---
class: extra-details
## `ExternalName`
- Services of type `ExternalName` are quite different
- No load balancer (internal or external) is created
- Only a DNS entry gets added to the DNS managed by Kubernetes
- That DNS entry will just be a `CNAME` to a provided record
Example:
```bash
kubectl create service externalname k8s --external-name kubernetes.io
```
*Creates a CNAME `k8s` pointing to `kubernetes.io`*
---
class: extra-details
## External IPs
- We can add an External IP to a service, e.g.:
```bash
kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4
```
- `1.2.3.4` should be the address of one of our nodes
(it could also be a virtual address, service address, or VIP, shared by multiple nodes)
- Connections to `1.2.3.4:80` will be sent to our service
- External IPs will also show up on services of type `LoadBalancer`
(they will be added automatically by the process provisioning the load balancer)
---
class: extra-details
## Headless services
- Sometimes, we want to access our scaled services directly:
- if we want to save a tiny little bit of latency (typically less than 1ms)
- if we need to connect over arbitrary ports (instead of a few fixed ones)
- if we need to communicate over a protocol other than TCP or UDP
- if we want to decide how to balance the requests client-side
- ...
- In that case, we can use a "headless service"
---
class: extra-details
## Creating a headless service
- A headless service is obtained by setting the `clusterIP` field to `None`
(Either with `--cluster-ip=None`, or by providing a custom YAML)
- As a result, the service doesn't have a virtual IP address
- Since there is no virtual IP address, there is no load balancer either
- CoreDNS will return the pods' IP addresses as multiple `A` records
- This gives us an easy way to discover all the replicas for a deployment
---
class: extra-details
## Services and endpoints
- A service has a number of "endpoints"
- Each endpoint is a host + port where the service is available
- The endpoints are maintained and updated automatically by Kubernetes
.lab[
- Check the endpoints that Kubernetes has associated with our `blue` service:
```bash
kubectl describe service blue
```
]
In the output, there will be a line starting with `Endpoints:`.
That line will list a bunch of addresses in `host:port` format.
---
class: extra-details
## Viewing endpoint details
- When we have many endpoints, our display commands truncate the list
```bash
kubectl get endpoints
```
- If we want to see the full list, we can use one of the following commands:
```bash
kubectl describe endpoints blue
kubectl get endpoints blue -o yaml
```
- These commands will show us a list of IP addresses
- These IP addresses should match the addresses of the corresponding pods:
```bash
kubectl get pods -l app=blue -o wide
```
---
class: extra-details
## `endpoints` not `endpoint`
- `endpoints` is the only resource that cannot be singular
```bash
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
```
- This is because the type itself is plural (unlike every other resource)
- There is no `endpoint` object: `type Endpoints struct`
- The type doesn't represent a single endpoint, but a list of endpoints
---
class: extra-details
## The DNS zone
- In the `kube-system` namespace, there should be a service named `kube-dns`
- This is the internal DNS server that can resolve service names
- The service we created lives in the `default.svc.cluster.local` domain
.lab[
- Get the IP address of the internal DNS server:
```bash
IP=$(kubectl -n kube-system get svc kube-dns -o jsonpath={.spec.clusterIP})
```
- Resolve the cluster IP for the `blue` service:
```bash
host blue.default.svc.cluster.local $IP
```
]
---
class: extra-details
## `Ingress`
- Ingresses are another type (kind) of resource
- They are specifically for HTTP services
(not TCP or UDP)
- They can also handle TLS certificates, URL rewriting ...
- They require an *Ingress Controller* to function
---
class: pic
![](images/kubernetes-services/61-ING.png)
---
class: pic
![](images/kubernetes-services/62-ING-path.png)
---
class: pic
![](images/kubernetes-services/63-ING-policy.png)
---
class: pic
![](images/kubernetes-services/64-ING-nolocal.png)
???
:EN:- Service discovery and load balancing
:EN:- Accessing pods through services
:EN:- Service types: ClusterIP, NodePort, LoadBalancer
:FR:- Exposer un service
:FR:- Différents types de services : ClusterIP, NodePort, LoadBalancer
:FR:- Utiliser CoreDNS pour la *service discovery*
:FR:- Le DNS interne de Kubernetes et la *service discovery*

slides/k8s/service-types.md Normal file
View File

@@ -0,0 +1,359 @@
# Service Types
- There are different types of services:
`ClusterIP`, `NodePort`, `LoadBalancer`, `ExternalName`
- There are also *headless services*
- Services can also have optional *external IPs*
- There is also another resource type called *Ingress*
(specifically for HTTP services)
- Wow, that's a lot! Let's start with the basics ...
---
## `ClusterIP`
- It's the default service type
- A virtual IP address is allocated for the service
(in an internal, private range; e.g. 10.96.0.0/12)
- This IP address is reachable only from within the cluster (nodes and pods)
- Our code can connect to the service using the original port number
- Perfect for internal communication, within the cluster
---
class: pic
![](images/kubernetes-services/11-CIP-by-addr.png)
---
class: pic
![](images/kubernetes-services/12-CIP-by-name.png)
---
class: pic
![](images/kubernetes-services/13-CIP-both.png)
---
class: pic
![](images/kubernetes-services/14-CIP-headless.png)
---
## `LoadBalancer`
- An external load balancer is allocated for the service
(typically a cloud load balancer, e.g. ELB on AWS, GLB on GCE ...)
- This is available only when the underlying infrastructure provides some kind of
"load balancer as a service"
- Each service of that type will typically cost a little bit of money
(e.g. a few cents per hour on AWS or GCE)
- Ideally, traffic would flow directly from the load balancer to the pods
- In practice, it will often flow through a `NodePort` first
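As a sketch, a `LoadBalancer` Service manifest differs from a `ClusterIP` one only by its `type` (here assuming Pods labeled `app=blue`):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: LoadBalancer
  selector:
    app: blue
  ports:
  - port: 80
```
(While the cloud load balancer is being provisioned, `kubectl get svc` shows the `EXTERNAL-IP` as `<pending>`.)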
---
class: pic
![](images/kubernetes-services/31-LB-no-service.png)
---
class: pic
![](images/kubernetes-services/32-LB-plus-cip.png)
---
class: pic
![](images/kubernetes-services/33-LB-plus-lb.png)
---
class: pic
![](images/kubernetes-services/34-LB-internal-traffic.png)
---
class: pic
![](images/kubernetes-services/35-LB-pending.png)
---
class: pic
![](images/kubernetes-services/36-LB-ccm.png)
---
class: pic
![](images/kubernetes-services/37-LB-externalip.png)
---
class: pic
![](images/kubernetes-services/38-LB-external-traffic.png)
---
class: pic
![](images/kubernetes-services/39-LB-all-traffic.png)
---
class: pic
![](images/kubernetes-services/41-NP-why.png)
---
class: pic
![](images/kubernetes-services/42-NP-how-1.png)
---
class: pic
![](images/kubernetes-services/43-NP-how-2.png)
---
class: pic
![](images/kubernetes-services/44-NP-how-3.png)
---
class: pic
![](images/kubernetes-services/45-NP-how-4.png)
---
class: pic
![](images/kubernetes-services/46-NP-how-5.png)
---
class: pic
![](images/kubernetes-services/47-NP-only.png)
---
## `NodePort`
- A port number is allocated for the service
(by default, in the 30000-32767 range)
- That port is made available *on all our nodes* and anybody can connect to it
(we can connect to any node on that port to reach the service)
- Our code needs to be changed to connect to that new port number
- Under the hood: `kube-proxy` sets up a bunch of `iptables` rules on our nodes
- Sometimes, it's the only available option for external traffic
(e.g. most clusters deployed with kubeadm or on-premises)
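A sketch of a `NodePort` Service manifest (assuming Pods labeled `app=blue`; the `nodePort` field is optional):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: blue
spec:
  type: NodePort
  selector:
    app: blue
  ports:
  - port: 80        # port of the ClusterIP (still allocated)
    nodePort: 30080 # omit to let Kubernetes pick one in the 30000-32767 range
```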
---
class: extra-details
## `ExternalName`
- Services of type `ExternalName` are quite different
- No load balancer (internal or external) is created
- Only a DNS entry gets added to the DNS managed by Kubernetes
- That DNS entry will just be a `CNAME` to a provided record
Example:
```bash
kubectl create service externalname k8s --external-name kubernetes.io
```
*Creates a CNAME `k8s` pointing to `kubernetes.io`*
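The equivalent manifest would be (note: no selector and no ports needed):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s
spec:
  type: ExternalName
  externalName: kubernetes.io
```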
---
class: extra-details
## External IPs
- We can add an External IP to a service, e.g.:
```bash
kubectl expose deploy my-little-deploy --port=80 --external-ip=1.2.3.4
```
- `1.2.3.4` should be the address of one of our nodes
(it could also be a virtual address, service address, or VIP, shared by multiple nodes)
- Connections to `1.2.3.4:80` will be sent to our service
- External IPs will also show up on services of type `LoadBalancer`
(they will be added automatically by the process provisioning the load balancer)
---
class: extra-details
## Headless services
- Sometimes, we want to access our scaled services directly:
- if we want to save a tiny little bit of latency (typically less than 1ms)
- if we need to connect over arbitrary ports (instead of a few fixed ones)
- if we need to communicate over a protocol other than TCP or UDP
- if we want to decide how to balance the requests client-side
- ...
- In that case, we can use a "headless service"
---
class: extra-details
## Creating a headless service
- A headless service is obtained by setting the `clusterIP` field to `None`
(Either with `--cluster-ip=None`, or by providing a custom YAML)
- As a result, the service doesn't have a virtual IP address
- Since there is no virtual IP address, there is no load balancer either
- CoreDNS will return the pods' IP addresses as multiple `A` records
- This gives us an easy way to discover all the replicas for a deployment
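A sketch of a headless Service manifest (assuming Pods labeled `app=blue`):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: blue-headless
spec:
  clusterIP: None   # this is what makes the Service headless
  selector:
    app: blue
  ports:
  - port: 80
```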
---
class: extra-details
## Services and endpoints
- A service has a number of "endpoints"
- Each endpoint is a host + port where the service is available
- The endpoints are maintained and updated automatically by Kubernetes
.lab[
- Check the endpoints that Kubernetes has associated with our `blue` service:
```bash
kubectl describe service blue
```
]
In the output, there will be a line starting with `Endpoints:`.
That line will list a bunch of addresses in `host:port` format.
---
class: extra-details
## Viewing endpoint details
- When we have many endpoints, our display commands truncate the list
```bash
kubectl get endpoints
```
- If we want to see the full list, we can use one of the following commands:
```bash
kubectl describe endpoints blue
kubectl get endpoints blue -o yaml
```
- These commands will show us a list of IP addresses
- These IP addresses should match the addresses of the corresponding pods:
```bash
kubectl get pods -l app=blue -o wide
```
---
class: extra-details
## `endpoints` not `endpoint`
- `endpoints` is the only resource that cannot be singular
```bash
$ kubectl get endpoint
error: the server doesn't have a resource type "endpoint"
```
- This is because the type itself is plural (unlike every other resource)
- There is no `endpoint` object: `type Endpoints struct`
- The type doesn't represent a single endpoint, but a list of endpoints
---
class: extra-details
## `Ingress`
- Ingresses are another type (kind) of resource
- They are specifically for HTTP services
(not TCP or UDP)
- They can also handle TLS certificates, URL rewriting ...
- They require an *Ingress Controller* to function
---
class: pic
![](images/kubernetes-services/61-ING.png)
---
class: pic
![](images/kubernetes-services/62-ING-path.png)
---
class: pic
![](images/kubernetes-services/63-ING-policy.png)
---
class: pic
![](images/kubernetes-services/64-ING-nolocal.png)
???
:EN:- Service types: ClusterIP, NodePort, LoadBalancer
:FR:- Différents types de services : ClusterIP, NodePort, LoadBalancer

View File

@@ -42,8 +42,9 @@ content:
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/service-types.md
- k8s/kubenet.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md

View File

@@ -39,7 +39,7 @@ content:
- k8s/concepts-k8s.md
- shared/declarative.md
- k8s/declarative.md
- k8s/kubenet.md
#- k8s/kubenet.md
- k8s/kubectlget.md
- k8s/setup-overview.md
#- k8s/setup-devel.md
@@ -51,6 +51,7 @@ content:
- k8s/kubectl-logs.md
- k8s/deploymentslideshow.md
- k8s/kubectlexpose.md
#- k8s/service-types.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md

View File

@@ -46,8 +46,9 @@ content:
- k8s/declarative.md
- k8s/deploymentslideshow.md
-
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/service-types.md
- k8s/kubenet.md
- k8s/shippingimages.md
- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md

View File

@@ -45,8 +45,9 @@ content:
- shared/declarative.md
- k8s/declarative.md
- k8s/deploymentslideshow.md
- k8s/kubenet.md
- k8s/kubectlexpose.md
- k8s/service-types.md
- k8s/kubenet.md
- k8s/shippingimages.md
#- k8s/buildshiprun-selfhosted.md
- k8s/buildshiprun-dockerhub.md