mirror of https://github.com/krkn-chaos/krkn.git (synced 2026-03-13 07:00:22 +00:00)
Compare commits
28 Commits
- 9cd086f59c
- 1057917731
- 5484828b67
- d18b6332e5
- 89a0e166f1
- 624f50acd1
- e02c6d1287
- 04425a8d8a
- f3933f0e62
- 56ff0a8c72
- 9378cd74cd
- 4d3491da0f
- d6ce66160b
- ef1a55438b
- d8f54b83a2
- 4870c86515
- 6ae17cf678
- ce9f8aa050
- 05148317c1
- 5f836f294b
- cfa1bb09a0
- 5ddfff5a85
- 7d18487228
- 08de42c91a
- dc7d5bb01b
- ea3444d375
- 7b660a0878
- 5fe0655f22
.github/workflows/docker-image.yml (vendored, 32 changes)
@@ -1,8 +1,7 @@
name: Docker Image CI
on:
  push:
    branches:
      - main
    tags: ['v[0-9].[0-9]+.[0-9]+']
  pull_request:

jobs:
@@ -12,30 +11,43 @@ jobs:
      - name: Check out code
        uses: actions/checkout@v3
      - name: Build the Docker images
        if: startsWith(github.ref, 'refs/tags')
        run: |
          docker build --no-cache -t quay.io/krkn-chaos/krkn containers/
          docker build --no-cache -t quay.io/krkn-chaos/krkn containers/ --build-arg TAG=${GITHUB_REF#refs/tags/}
          docker tag quay.io/krkn-chaos/krkn quay.io/redhat-chaos/krkn
          docker tag quay.io/krkn-chaos/krkn quay.io/krkn-chaos/krkn:${GITHUB_REF#refs/tags/}
          docker tag quay.io/krkn-chaos/krkn quay.io/redhat-chaos/krkn:${GITHUB_REF#refs/tags/}

      - name: Test Build the Docker images
        if: ${{ github.event_name == 'pull_request' }}
        run: |
          docker build --no-cache -t quay.io/krkn-chaos/krkn containers/ --build-arg PR_NUMBER=${{ github.event.pull_request.number }}
      - name: Login in quay
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        if: startsWith(github.ref, 'refs/tags')
        run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
        env:
          QUAY_USER: ${{ secrets.QUAY_USERNAME }}
          QUAY_TOKEN: ${{ secrets.QUAY_PASSWORD }}
      - name: Push the KrknChaos Docker images
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: docker push quay.io/krkn-chaos/krkn
        if: startsWith(github.ref, 'refs/tags')
        run: |
          docker push quay.io/krkn-chaos/krkn
          docker push quay.io/krkn-chaos/krkn:${GITHUB_REF#refs/tags/}
      - name: Login in to redhat-chaos quay
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        if: startsWith(github.ref, 'refs/tags/v')
        run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
        env:
          QUAY_USER: ${{ secrets.QUAY_USER_1 }}
          QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}
      - name: Push the RedHat Chaos Docker images
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: docker push quay.io/redhat-chaos/krkn
        if: startsWith(github.ref, 'refs/tags')
        run: |
          docker push quay.io/redhat-chaos/krkn
          docker push quay.io/redhat-chaos/krkn:${GITHUB_REF#refs/tags/}
      - name: Rebuild krkn-hub
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        if: startsWith(github.ref, 'refs/tags')
        uses: redhat-chaos/actions/krkn-hub@main
        with:
          QUAY_USER: ${{ secrets.QUAY_USERNAME }}
          QUAY_TOKEN: ${{ secrets.QUAY_PASSWORD }}
          AUTOPUSH: ${{ secrets.AUTOPUSH }}
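The tag derivation in the build and push steps relies on shell parameter expansion: `${GITHUB_REF#refs/tags/}` strips the `refs/tags/` prefix from the ref GitHub Actions sets, leaving the bare tag name to use as the image tag. A minimal sketch of that expansion, using a made-up ref value:

```bash
#!/usr/bin/env bash
# GITHUB_REF is provided by GitHub Actions; this value is a made-up example.
GITHUB_REF="refs/tags/v1.6.0"

# "#refs/tags/" deletes the shortest matching prefix, leaving the bare tag.
echo "${GITHUB_REF#refs/tags/}"   # prints: v1.6.0

# On a branch push the prefix does not match, so the value passes through unchanged.
GITHUB_REF="refs/heads/main"
echo "${GITHUB_REF#refs/tags/}"   # prints: refs/heads/main
```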
README.md (13 changes)
@@ -41,18 +41,6 @@ After installation, refer back to the below sections for supported scenarios and
#### Running Kraken with minimal configuration tweaks
For cases where you want to run Kraken with minimal configuration changes, refer to [krkn-hub](https://github.com/krkn-chaos/krkn-hub). One use case is CI integration where you do not want to carry around different configuration files for the scenarios.

### Setting up infrastructure dependencies
Kraken indexes the metrics specified in the profile into Elasticsearch in addition to leveraging Cerberus for understanding the health of the Kubernetes cluster under test. More information on the features is documented below. The infrastructure pieces can be easily installed and uninstalled by running:

```
$ cd kraken
$ podman-compose up or $ docker-compose up # Spins up the containers specified in the docker-compose.yml file present in the run directory.
$ podman-compose down or $ docker-compose down # Delete the containers installed.
```
This will manage the Cerberus and Elasticsearch containers on the host on which you are running Kraken.

**NOTE**: Make sure you have enough resources (memory and disk) on the machine on top of which the containers are running as Elasticsearch is resource intensive. Cerberus monitors the system components by default, the [config](config/cerberus.yaml) can be tweaked to add applications namespaces, routes and other components to monitor as well. The command will keep running until killed since detached mode is not supported as of now.


### Config
Instructions on how to setup the config and the options supported can be found at [Config](docs/config.md).
@@ -76,6 +64,7 @@ Scenario type | Kubernetes
[Network_Chaos](docs/network_chaos.md) | :heavy_check_mark: |
[ManagedCluster Scenarios](docs/managedcluster_scenarios.md) | :heavy_check_mark: |
[Service Hijacking Scenarios](docs/service_hijacking_scenarios.md) | :heavy_check_mark: |
[SYN Flood Scenarios](docs/syn_flood_scenarios.md) | :heavy_check_mark: |


### Kraken scenario pass/fail criteria and report

@@ -88,3 +88,42 @@
- expr: ALERTS{severity="critical", alertstate="firing"} > 0
  description: Critical prometheus alert. {{$labels.alertname}}
  severity: warning

# etcd CPU and usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-etcd', container='etcd'}[1m])) * 100 / sum(machine_cpu_cores) > 5
  description: Etcd CPU usage increased significantly by {{$value}}%
  severity: critical

# etcd memory usage increase
- expr: sum(deriv(container_memory_usage_bytes{image!='', namespace='openshift-etcd', container='etcd'}[5m])) * 100 / sum(node_memory_MemTotal_bytes) > 5
  description: Etcd memory usage increased significantly by {{$value}}%
  severity: critical

# Openshift API server CPU and memory usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-apiserver', container='openshift-apiserver'}[1m])) * 100 / sum(machine_cpu_cores) > 5
  description: openshift apiserver cpu usage increased significantly by {{$value}}%
  severity: critical

- expr: (sum(deriv(container_memory_usage_bytes{namespace='openshift-apiserver', container='openshift-apiserver'}[5m]))) * 100 / sum(node_memory_MemTotal_bytes) > 5
  description: openshift apiserver memory usage increased significantly by {{$value}}%
  severity: critical

# Openshift kube API server CPU and memory usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-kube-apiserver', container='kube-apiserver'}[1m])) * 100 / sum(machine_cpu_cores) > 5
  description: openshift apiserver cpu usage increased significantly by {{$value}}%
  severity: critical

- expr: (sum(deriv(container_memory_usage_bytes{namespace='openshift-kube-apiserver', container='kube-apiserver'}[5m]))) * 100 / sum(node_memory_MemTotal_bytes) > 5
  description: openshift apiserver memory usage increased significantly by {{$value}}%
  severity: critical

# Master node CPU usage increase
- expr: (sum((sum(deriv(pod:container_cpu_usage:sum{container="",pod!=""}[5m])) BY (namespace, pod) * on(pod, namespace) group_left(node) (node_namespace_pod:kube_pod_info:) ) * on(node) group_left(role) (max by (node) (kube_node_role{role="master"})))) * 100 / sum(machine_cpu_cores) > 5
  description: master nodes cpu usage increased significantly by {{$value}}%
  severity: critical

# Master nodes memory usage increase
- expr: (sum((sum(deriv(container_memory_usage_bytes{container="",pod!=""}[5m])) BY (namespace, pod) * on(pod, namespace) group_left(node) (node_namespace_pod:kube_pod_info:) ) * on(node) group_left(role) (max by (node) (kube_node_role{role="master"})))) * 100 / sum(node_memory_MemTotal_bytes) > 5
  description: master nodes memory usage increased significantly by {{$value}}%
  severity: critical

@@ -99,3 +99,41 @@
- expr: ALERTS{severity="critical", alertstate="firing"} > 0
  description: Critical prometheus alert. {{$labels.alertname}}
  severity: warning

# etcd CPU and usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-etcd', container='etcd'}[1m])) * 100 / sum(machine_cpu_cores) > 5
  description: Etcd CPU usage increased significantly by {{$value}}%
  severity: critical

# etcd memory usage increase
- expr: sum(deriv(container_memory_usage_bytes{image!='', namespace='openshift-etcd', container='etcd'}[5m])) * 100 / sum(node_memory_MemTotal_bytes) > 5
  description: Etcd memory usage increased significantly by {{$value}}%
  severity: critical

# Openshift API server CPU and memory usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-apiserver', container='openshift-apiserver'}[1m])) * 100 / sum(machine_cpu_cores) > 5
  description: openshift apiserver cpu usage increased significantly by {{$value}}%
  severity: critical

- expr: (sum(deriv(container_memory_usage_bytes{namespace='openshift-apiserver', container='openshift-apiserver'}[5m]))) * 100 / sum(node_memory_MemTotal_bytes) > 5
  description: openshift apiserver memory usage increased significantly by {{$value}}%
  severity: critical

# Openshift kube API server CPU and memory usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-kube-apiserver', container='kube-apiserver'}[1m])) * 100 / sum(machine_cpu_cores) > 5
  description: openshift apiserver cpu usage increased significantly by {{$value}}%
  severity: critical

- expr: (sum(deriv(container_memory_usage_bytes{namespace='openshift-kube-apiserver', container='kube-apiserver'}[5m]))) * 100 / sum(node_memory_MemTotal_bytes) > 5
  description: openshift apiserver memory usage increased significantly by {{$value}}%
  severity: critical

# Master node CPU usage increase
- expr: (sum((sum(deriv(pod:container_cpu_usage:sum{container="",pod!=""}[5m])) BY (namespace, pod) * on(pod, namespace) group_left(node) (node_namespace_pod:kube_pod_info:) ) * on(node) group_left(role) (max by (node) (kube_node_role{role="master"})))) * 100 / sum(machine_cpu_cores) > 5
  description: master nodes cpu usage increased significantly by {{$value}}%
  severity: critical

# Master nodes memory usage increase
- expr: (sum((sum(deriv(container_memory_usage_bytes{container="",pod!=""}[5m])) BY (namespace, pod) * on(pod, namespace) group_left(node) (node_namespace_pod:kube_pod_info:) ) * on(node) group_left(role) (max by (node) (kube_node_role{role="master"})))) * 100 / sum(node_memory_MemTotal_bytes) > 5
  description: master nodes memory usage increased significantly by {{$value}}%
  severity: critical

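Each rule above pairs a PromQL expression with a description and a severity; an expression that returns a non-empty result means the alert condition holds. As an illustrative sketch (not part of this diff), any of the expressions can be checked by hand through the Prometheus HTTP API; the endpoint below is an assumption for a locally port-forwarded Prometheus:

```bash
#!/usr/bin/env bash
# Assumed endpoint; adjust to wherever your Prometheus is reachable, e.g. after
# `kubectl -n openshift-monitoring port-forward svc/prometheus-k8s 9090`.
PROM_URL="http://localhost:9090"

# Evaluate the etcd CPU rule from above; a non-empty "result" array
# means the alert condition is currently true.
curl -sG "${PROM_URL}/api/v1/query" \
  --data-urlencode "query=sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-etcd', container='etcd'}[1m])) * 100 / sum(machine_cpu_cores) > 5" \
  | jq '.data.result'
```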
@@ -44,6 +44,8 @@ kraken:
        - scenarios/openshift/network_chaos.yaml
    - service_hijacking:
        - scenarios/kube/service_hijacking.yaml
    - syn_flood:
        - scenarios/kube/syn_flood.yaml

cerberus:
    cerberus_enabled: False # Enable it when cerberus is previously installed

@@ -1,49 +1,54 @@
# azure-client
FROM mcr.microsoft.com/azure-cli:latest as azure-cli

# oc build
FROM golang:1.22.2 AS oc-build
RUN apt-get update && apt-get install -y libkrb5-dev
FROM golang:1.22.4 AS oc-build
RUN apt-get update && apt-get install -y --no-install-recommends libkrb5-dev
WORKDIR /tmp
RUN git clone --branch release-4.18 https://github.com/openshift/oc.git
WORKDIR /tmp/oc
RUN go mod edit -go 1.22.3 &&\
    go get github.com/moby/buildkit@v0.12.5 &&\
    go get github.com/containerd/containerd@v1.7.11&&\
    go get github.com/docker/docker@v25.0.5&&\
    go mod tidy && go mod vendor
RUN make GO_REQUIRED_MIN_VERSION:= oc

FROM registry.access.redhat.com/ubi9/ubi:latest
FROM fedora:40
ARG PR_NUMBER
ARG TAG
RUN groupadd -g 1001 krkn && useradd -m -u 1001 -g krkn krkn
RUN dnf update -y

# krkn version that will be built
ENV KRKN_VERSION v1.6.0
ENV KUBECONFIG /home/krkn/.kube/config

ENV KUBECONFIG /root/.kube/config

# update yum and install dependencies
RUN yum update -y glibc glibc-common glibc-minimal-langpack runc
RUN rpm -e --allmatches --nodeps --noscripts --notriggers python3-requests
RUN yum install -y git python39 python3-pip jq gettext wget

# get yq
RUN wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && chmod +x /usr/bin/yq

# get kubectl
# install kubectl
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" &&\
    cp kubectl /usr/local/bin/kubectl && chmod +x /usr/local/bin/kubectl &&\
    cp kubectl /usr/bin/kubectl && chmod +x /usr/bin/kubectl

# copy azure client binary from azure-cli image
COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
RUN dnf update && dnf install -y --setopt=install_weak_deps=False \
    git python39 jq yq gettext wget which &&\
    dnf clean all

# copy oc client binary from oc-build image
COPY --from=oc-build /tmp/oc/oc /usr/bin/oc
COPY --from=oc-build /tmp/oc/oc /usr/local/bin/oc

# krkn build
RUN python3.9 -m pip install -U pip
RUN git clone https://github.com/krkn-chaos/krkn.git --branch $KRKN_VERSION /root/kraken && \
    mkdir -p /root/.kube
WORKDIR /root/kraken
RUN pip3.9 install -r requirements.txt
RUN pip3.9 install virtualenv
RUN git clone https://github.com/krkn-chaos/krkn.git /home/krkn/kraken && \
    mkdir -p /home/krkn/.kube

WORKDIR /root/kraken
WORKDIR /home/krkn/kraken

# default behaviour will be to build main
# if it is a PR trigger the PR itself will be checked out
RUN if [ -n "$PR_NUMBER" ]; then git fetch origin pull/${PR_NUMBER}/head:pr-${PR_NUMBER} && git checkout pr-${PR_NUMBER};fi
# if it is a TAG trigger checkout the tag
RUN if [ -n "$TAG" ]; then git checkout "$TAG";fi

RUN python3.9 -m ensurepip
RUN pip3.9 install -r requirements.txt
RUN pip3.9 install jsonschema

RUN chown -R krkn:krkn /home/krkn && chmod 755 /home/krkn
USER krkn
ENTRYPOINT ["python3.9", "run_kraken.py"]
CMD ["--config=config/config.yaml"]
CMD ["--config=config/config.yaml"]

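The `PR_NUMBER` and `TAG` build args added here select which revision is checked out inside the image: a PR build fetches and checks out the PR head, a tag build checks out the named tag, and with neither set the clone stays on main. A usage sketch mirroring the CI workflow earlier in this comparison (the tag matches the `KRKN_VERSION` above; the PR number is a made-up example):

```bash
# Tagged release build: the named tag is checked out inside the image.
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/ --build-arg TAG=v1.6.0

# Pull request validation build: the PR head is fetched and checked out
# (1234 is a made-up PR number).
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/ --build-arg PR_NUMBER=1234

# Default build with no args: builds main.
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/
```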
@@ -1,29 +0,0 @@
# Dockerfile for kraken

FROM ppc64le/centos:8

FROM mcr.microsoft.com/azure-cli:latest as azure-cli

LABEL org.opencontainers.image.authors="Red Hat OpenShift Chaos Engineering"

ENV KUBECONFIG /root/.kube/config

# Copy azure client binary from azure-cli image
COPY --from=azure-cli /usr/local/bin/az /usr/bin/az

# Install dependencies
RUN yum install -y git python39 python3-pip jq gettext wget && \
    python3.9 -m pip install -U pip && \
    git clone https://github.com/redhat-chaos/krkn.git --branch v1.5.14 /root/kraken && \
    mkdir -p /root/.kube && cd /root/kraken && \
    pip3.9 install -r requirements.txt && \
    pip3.9 install virtualenv && \
    wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && chmod +x /usr/bin/yq

# Get Kubernetes and OpenShift clients from stable releases
WORKDIR /tmp
RUN wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz && tar -xvf openshift-client-linux.tar.gz && cp oc /usr/local/bin/oc && cp oc /usr/bin/oc && cp kubectl /usr/local/bin/kubectl && cp kubectl /usr/bin/kubectl

WORKDIR /root/kraken

ENTRYPOINT python3.9 run_kraken.py --config=config/config.yaml
@@ -12,35 +12,3 @@ Refer [instructions](https://github.com/redhat-chaos/krkn/blob/main/docs/install
### Run Custom Kraken Image

Refer to [instructions](https://github.com/redhat-chaos/krkn/blob/main/containers/build_own_image-README.md) for information on how to run a custom containerized version of kraken using podman.


### Kraken as a KubeApp ( Unsupported and not recommended )

#### GENERAL NOTES:

- It is not generally recommended to run Kraken internal to the cluster as the pod which is running Kraken might get disrupted, the suggested use case to run kraken from inside k8s/OpenShift is to target **another** cluster (eg. to bypass network restrictions or to leverage cluster's computational resources)

- your kubeconfig might contain several cluster contexts and credentials so be sure, before creating the ConfigMap, to keep **only** the credentials related to the destination cluster. Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) for more details
- to add privileges to the service account you must be logged in the cluster with an highly privileged account (ideally kubeadmin)


To run containerized Kraken as a Kubernetes/OpenShift Deployment, follow these steps:

1. Configure the [config.yaml](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) file according to your requirements.

   **NOTE**: both the scenarios ConfigMaps are needed regardless you're running kraken in Kubernetes or OpenShift

2. Create a namespace under which you want to run the kraken pod using `kubectl create ns <namespace>`.
3. Switch to `<namespace>` namespace:
   - In Kubernetes, use `kubectl config set-context --current --namespace=<namespace>`
   - In OpenShift, use `oc project <namespace>`

4. Create a ConfigMap named kube-config using `kubectl create configmap kube-config --from-file=<path_to_kubeconfig>` *(eg. ~/.kube/config)*
5. Create a ConfigMap named kraken-config using `kubectl create configmap kraken-config --from-file=<path_to_kraken>/config`
6. Create a ConfigMap named scenarios-config using `kubectl create configmap scenarios-config --from-file=<path_to_kraken>/scenarios`
7. Create a ConfigMap named scenarios-openshift-config using `kubectl create configmap scenarios-openshift-config --from-file=<path_to_kraken>/scenarios/openshift`
8. Create a ConfigMap named scenarios-kube-config using `kubectl create configmap scenarios-kube-config --from-file=<path_to_kraken>/scenarios/kube`
9. Create a service account to run the kraken pod `kubectl create serviceaccount useroot`.
10. In Openshift, add privileges to service account and execute `oc adm policy add-scc-to-user privileged -z useroot`.
11. Create a Job using `kubectl apply -f <path_to_kraken>/containers/kraken.yml` and monitor the status using `oc get jobs` and `oc get pods`.

@@ -1,49 +0,0 @@
---
apiVersion: batch/v1
kind: Job
metadata:
  name: kraken
spec:
  parallelism: 1
  completions: 1
  template:
    metadata:
      labels:
        tool: Kraken
    spec:
      serviceAccountName: useroot
      containers:
        - name: kraken
          securityContext:
            privileged: true
          image: quay.io/redhat-chaos/krkn
          command: ["/bin/sh", "-c"]
          args: ["python3.9 run_kraken.py -c config/config.yaml"]
          volumeMounts:
            - mountPath: "/root/.kube"
              name: config
            - mountPath: "/root/kraken/config"
              name: kraken-config
            - mountPath: "/root/kraken/scenarios"
              name: scenarios-config
            - mountPath: "/root/kraken/scenarios/openshift"
              name: scenarios-openshift-config
            - mountPath: "/root/kraken/scenarios/kube"
              name: scenarios-kube-config
      restartPolicy: Never
      volumes:
        - name: config
          configMap:
            name: kube-config
        - name: kraken-config
          configMap:
            name: kraken-config
        - name: scenarios-config
          configMap:
            name: scenarios-config
        - name: scenarios-openshift-config
          configMap:
            name: scenarios-openshift-config
        - name: scenarios-kube-config
          configMap:
            name: scenarios-kube-config
@@ -1,31 +0,0 @@
version: "3"
services:
  elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    network_mode: host
    environment:
      discovery.type: single-node
  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    network_mode: host
    environment:
      ELASTICSEARCH_HOSTS: "http://0.0.0.0:9200"
  cerberus:
    image: quay.io/openshift-scale/cerberus:latest
    privileged: true
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    network_mode: host
    volumes:
      - ./config/cerberus.yaml:/root/cerberus/config/config.yaml:Z # Modify the config in case of the need to monitor additional components
      - ${HOME}/.kube/config:/root/.kube/config:Z
@@ -27,14 +27,12 @@ After creating the service account you will need to enable the account using the

## Azure

**NOTE**: For Azure node killing scenarios, make sure [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) is installed.

You will also need to create a service principal and give it the correct access, see [here](https://docs.openshift.com/container-platform/4.5/installing/installing_azure/installing-azure-account.html) for creating the service principal and setting the proper permissions.
**NOTE**: You will need to create a service principal and give it the correct access, see [here](https://docs.openshift.com/container-platform/4.5/installing/installing_azure/installing-azure-account.html) for creating the service principal and setting the proper permissions.

To properly run the service principal requires “Azure Active Directory Graph/Application.ReadWrite.OwnedBy” api permission granted and “User Access Administrator”.

Before running you will need to set the following:
1. Login using ```az login```
1. ```export AZURE_SUBSCRIPTION_ID=<subscription_id>```

2. ```export AZURE_TENANT_ID=<tenant_id>```

@@ -43,12 +43,3 @@ $ python3.9 run_kraken.py --config <config_file_location>
[Krkn-hub](https://github.com/krkn-chaos/krkn-hub) is a wrapper that allows running Krkn chaos scenarios via podman or docker runtime with scenario parameters/configuration defined as environment variables.

Refer [instructions](https://github.com/krkn-chaos/krkn-hub#supported-chaos-scenarios) to get started.


### Run Kraken as a Kubernetes deployment ( unsupported option - standalone or containerized deployers are recommended )
Refer [Instructions](https://github.com/krkn-chaos/krkn/blob/main/containers/README.md) on how to deploy and run Kraken as a Kubernetes/OpenShift deployment.


Refer to the [chaos-kraken chart manpage](https://artifacthub.io/packages/helm/startx/chaos-kraken)
and especially the [kraken configuration values](https://artifacthub.io/packages/helm/startx/chaos-kraken#chaos-kraken-values-dictionary)
for details on how to configure this chart.

@@ -16,17 +16,20 @@ The following node chaos scenarios are supported:
**NOTE**: If the node does not recover from the node_crash_scenario injection, reboot the node to get it back to Ready state.

**NOTE**: node_start_scenario, node_stop_scenario, node_stop_start_scenario, node_termination_scenario
, node_reboot_scenario and stop_start_kubelet_scenario are supported only on AWS, Azure, OpenStack, BareMetal, GCP
, VMware and Alibaba as of now.

**NOTE**: Node scenarios are supported only when running the standalone version of Kraken until https://github.com/redhat-chaos/krkn/issues/106 gets fixed.
, node_reboot_scenario and stop_start_kubelet_scenario are supported on AWS, Azure, OpenStack, BareMetal, GCP
, VMware and Alibaba.


#### AWS

How to set up AWS cli to run node scenarios is defined [here](cloud_setup.md#aws).
Cloud setup instructions can be found [here](cloud_setup.md#aws). Sample scenario config can be found [here](https://github.com/krkn-chaos/krkn/blob/main/scenarios/openshift/aws_node_scenarios.yml).


#### Baremetal

Sample scenario config can be found [here](https://github.com/krkn-chaos/krkn/blob/main/scenarios/openshift/baremetal_node_scenarios.yml).

**NOTE**: Baremetal requires setting the IPMI user and password to power on, off, and reboot nodes, using the config options `bm_user` and `bm_password`. It can either be set in the root of the entry in the scenarios config, or it can be set per machine.

If no per-machine addresses are specified, kraken attempts to use the BMC value in the BareMetalHost object. To list them, you can do 'oc get bmh -o wide --all-namespaces'. If the BMC values are blank, you must specify them per-machine using the config option 'bmc_addr' as specified below.
@@ -38,6 +41,8 @@ See the example node scenario or the example below.

**NOTE**: Baremetal machines are fragile. Some node actions can occasionally corrupt the filesystem if it does not shut down properly, and sometimes the kubelet does not start properly.


#### Docker

The Docker provider can be used to run node scenarios against kind clusters.
@@ -46,8 +51,11 @@ The Docker provider can be used to run node scenarios against kind clusters.

kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.


#### GCP
How to set up GCP cli to run node scenarios is defined [here](cloud_setup.md#gcp).
Cloud setup instructions can be found [here](cloud_setup.md#gcp). Sample scenario config can be found [here](https://github.com/krkn-chaos/krkn/blob/main/scenarios/openshift/gcp_node_scenarios.yml).

#### Openstack

@@ -60,9 +68,11 @@ The supported node level chaos scenarios on an OPENSTACK cloud are `node_stop_st
To execute the scenario, ensure the value for `ssh_private_key` in the node scenarios config file is set with the correct private key file path for ssh connection to the helper node. Ensure passwordless ssh is configured on the host running Kraken and the helper node to avoid connection errors.


#### Azure

How to set up Azure cli to run node scenarios is defined [here](cloud_setup.md#azure).
Cloud setup instructions can be found [here](cloud_setup.md#azure). Sample scenario config can be found [here](https://github.com/krkn-chaos/krkn/blob/main/scenarios/openshift/azure_node_scenarios.yml).


#### Alibaba
@@ -73,43 +83,28 @@ How to set up Alibaba cli to run node scenarios is defined [here](cloud_setup.md
. Releasing a node is 2 steps, stopping the node and then releasing it.


#### VMware
How to set up VMware vSphere to run node scenarios is defined [here](cloud_setup.md#vmware)

This cloud type uses a different configuration style, see actions below and [example config file](../scenarios/openshift/vmware_node_scenarios.yml)

*vmware-node-terminate, vmware-node-reboot, vmware-node-stop, vmware-node-start*
- vmware-node-terminate
- vmware-node-reboot
- vmware-node-stop
- vmware-node-start


#### IBMCloud
How to set up IBMCloud to run node scenarios is defined [here](cloud_setup.md#ibmcloud)

This cloud type uses a different configuration style, see actions below and [example config file](../scenarios/openshift/ibmcloud_node_scenarios.yml)

*ibmcloud-node-terminate, ibmcloud-node-reboot, ibmcloud-node-stop, ibmcloud-node-start
*

#### IBMCloud and Vmware example

```
- id: ibmcloud-node-stop
  config:
    name: "<node_name>"
    label_selector: "node-role.kubernetes.io/worker" # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection
    runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time)
    instance_count: 1 # Number of nodes to perform action/select that match the label selector
    timeout: 30 # Duration to wait for completion of node scenario injection
    skip_openshift_checks: False # Set to True if you don't want to wait for the status of the nodes to change on OpenShift before passing the scenario
- id: ibmcloud-node-start
  config:
    name: "<node_name>" #Same name as before
    label_selector: "node-role.kubernetes.io/worker" # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection
    runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time)
    instance_count: 1 # Number of nodes to perform action/select that match the label selector
    timeout: 30 # Duration to wait for completion of node scenario injection
    skip_openshift_checks: False # Set to True if you don't want to wait for the status of the nodes to change on OpenShift before passing the scenario
```
- ibmcloud-node-terminate
- ibmcloud-node-reboot
- ibmcloud-node-stop
- ibmcloud-node-start


@@ -118,60 +113,3 @@ This cloud type uses a different configuration style, see actions below and [exa
**NOTE**: The `node_crash_scenario` and `stop_kubelet_scenario` scenario is supported independent of the cloud platform.

Use 'generic' or do not add the 'cloud_type' key to your scenario if your cluster is not set up using one of the current supported cloud types.

Node scenarios can be injected by placing the node scenarios config files under node_scenarios option in the kraken config. Refer to [node_scenarios_example](https://github.com/redhat-chaos/krkn/blob/main/scenarios/node_scenarios_example.yml) config file.

```
node_scenarios:
  - actions: # Node chaos scenarios to be injected.
      - node_stop_start_scenario
      - stop_start_kubelet_scenario
      - node_crash_scenario
    node_name: # Node on which scenario has to be injected.
    label_selector: node-role.kubernetes.io/worker # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection.
    instance_count: 1 # Number of nodes to perform action/select that match the label selector.
    runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time).
    timeout: 120 # Duration to wait for completion of node scenario injection.
    cloud_type: aws # Cloud type on which Kubernetes/OpenShift runs.
  - actions:
      - node_reboot_scenario
    node_name:
    label_selector: node-role.kubernetes.io/infra
    instance_count: 1
    timeout: 120
    cloud_type: azure
  - actions:
      - node_crash_scenario
    node_name:
    label_selector: node-role.kubernetes.io/infra
    instance_count: 1
    timeout: 120
  - actions:
      - stop_start_helper_node_scenario # Node chaos scenario for helper node.
    instance_count: 1
    timeout: 120
    helper_node_ip: # ip address of the helper node.
    service: # Check status of the services on the helper node.
      - haproxy
      - dhcpd
      - named
    ssh_private_key: /root/.ssh/id_rsa # ssh key to access the helper node.
    cloud_type: openstack
  - actions:
      - node_stop_start_scenario
    node_name:
    label_selector: node-role.kubernetes.io/worker
    instance_count: 1
    timeout: 120
    cloud_type: bm
    bmc_user: defaultuser # For baremetal (bm) cloud type. The default IPMI username. Optional if specified for all machines.
    bmc_password: defaultpass # For baremetal (bm) cloud type. The default IPMI password. Optional if specified for all machines.
    bmc_info: # This section is here to specify baremetal per-machine info, so it is optional if there is no per-machine info.
      node-1: # The node name for the baremetal machine
        bmc_addr: mgmt-machine1.example.com # Optional. For baremetal nodes with the IPMI BMC address missing from 'oc get bmh'.
      node-2:
        bmc_addr: mgmt-machine2.example.com
        bmc_user: user # The baremetal IPMI user. Overrides the default IPMI user specified above. Optional if the default is set.
        bmc_password: pass # The baremetal IPMI password. Overrides the default IPMI user specified above. Optional if the default is set.
```

docs/syn_flood_scenarios.md (new file, 33 lines)
@@ -0,0 +1,33 @@
### SYN Flood Scenarios

This scenario generates a substantial amount of TCP traffic directed at one or more Kubernetes services within
the cluster to test the server's resiliency under extreme traffic conditions.
It can also target hosts outside the cluster by specifying a reachable IP address or hostname.
This scenario leverages the distributed nature of Kubernetes clusters to instantiate multiple instances
of the same pod against a single host, significantly increasing the effectiveness of the attack.
The configuration also allows for the specification of multiple node selectors, enabling Kubernetes to schedule
the attacker pods on a user-defined subset of nodes to make the test more realistic.

```yaml
packet-size: 120 # hping3 packet size
window-size: 64 # hping3 TCP window size
duration: 10 # chaos scenario duration
namespace: default # namespace where the target service(s) are deployed
target-service: target-svc # target service name (if set target-service-label must be empty)
target-port: 80 # target service TCP port
target-service-label : "" # target service label, can be used to target multiple target at the same time
                          # if they have the same label set (if set target-service must be empty)
number-of-pods: 2 # number of attacker pod instantiated per each target
image: quay.io/krkn-chaos/krkn-syn-flood # syn flood attacker container image
attacker-nodes: # this will set the node affinity to schedule the attacker node. Per each node label selector
                # can be specified multiple values in this way the kube scheduler will schedule the attacker pods
                # in the best way possible based on the provided labels. Multiple labels can be specified
  kubernetes.io/hostname:
    - host_1
    - host_2
  kubernetes.io/os:
    - linux
```

The attacker container source code is available [here](https://github.com/krkn-chaos/krkn-syn-flood).

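The config diff earlier in this comparison registers this scenario under `syn_flood` with the file `scenarios/kube/syn_flood.yaml`. A minimal run sketch, assuming a krkn checkout where the YAML above is saved at that path and the `- syn_flood:` entry is present in `config/config.yaml`:

```bash
# Run krkn with the config that lists scenarios/kube/syn_flood.yaml
# under the syn_flood scenario type (default repo layout assumed).
python3.9 run_kraken.py --config config/config.yaml
```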
@@ -1,892 +0,0 @@
|
||||
import logging
|
||||
import re
|
||||
import sys
|
||||
import time
|
||||
|
||||
from kubernetes import client, config, utils, watch
|
||||
from kubernetes.client.rest import ApiException
|
||||
from kubernetes.dynamic.client import DynamicClient
|
||||
from kubernetes.stream import stream
|
||||
|
||||
from ..kubernetes.resources import (PVC, ChaosEngine, ChaosResult, Container,
|
||||
LitmusChaosObject, Pod, Volume,
|
||||
VolumeMount)
|
||||
|
||||
kraken_node_name = ""
|
||||
|
||||
|
||||
# Load kubeconfig and initialize kubernetes python client
|
||||
def initialize_clients(kubeconfig_path):
|
||||
global cli
|
||||
global batch_cli
|
||||
global watch_resource
|
||||
global api_client
|
||||
global dyn_client
|
||||
global custom_object_client
|
||||
try:
|
||||
if kubeconfig_path:
|
||||
config.load_kube_config(kubeconfig_path)
|
||||
else:
|
||||
config.load_incluster_config()
|
||||
api_client = client.ApiClient()
|
||||
cli = client.CoreV1Api(api_client)
|
||||
batch_cli = client.BatchV1Api(api_client)
|
||||
custom_object_client = client.CustomObjectsApi(api_client)
|
||||
dyn_client = DynamicClient(api_client)
|
||||
watch_resource = watch.Watch()
|
||||
except ApiException as e:
|
||||
logging.error("Failed to initialize kubernetes client: %s\n" % e)
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def get_host() -> str:
|
||||
"""Returns the Kubernetes server URL"""
|
||||
return client.configuration.Configuration.get_default_copy().host
|
||||
|
||||
|
||||
def get_clusterversion_string() -> str:
|
||||
"""
|
||||
Returns clusterversion status text on OpenShift, empty string
|
||||
on other distributions
|
||||
"""
|
||||
try:
|
||||
cvs = custom_object_client.list_cluster_custom_object(
|
||||
"config.openshift.io",
|
||||
"v1",
|
||||
"clusterversions",
|
||||
)
|
||||
for cv in cvs["items"]:
|
||||
for condition in cv["status"]["conditions"]:
|
||||
if condition["type"] == "Progressing":
|
||||
return condition["message"]
|
||||
return ""
|
||||
except client.exceptions.ApiException as e:
|
||||
if e.status == 404:
|
||||
return ""
|
||||
else:
|
||||
raise
|
||||
|
||||
|
||||
# List all namespaces
|
||||
def list_namespaces(label_selector=None):
|
||||
namespaces = []
|
||||
try:
|
||||
if label_selector:
|
||||
ret = cli.list_namespace(
|
||||
pretty=True,
|
||||
label_selector=label_selector
|
||||
)
|
||||
else:
|
||||
ret = cli.list_namespace(pretty=True)
|
||||
except ApiException as e:
|
||||
logging.error(
|
||||
"Exception when calling CoreV1Api->list_namespaced_pod: %s\n" % e
|
||||
)
|
||||
raise e
|
||||
for namespace in ret.items:
|
||||
namespaces.append(namespace.metadata.name)
|
||||
return namespaces
|
||||
|
||||
|
||||
def get_namespace_status(namespace_name):
|
||||
"""Get status of a given namespace"""
|
||||
ret = ""
|
||||
try:
|
||||
ret = cli.read_namespace_status(namespace_name)
|
||||
except ApiException as e:
|
||||
logging.error(
|
||||
"Exception when calling CoreV1Api->read_namespace_status: %s\n" % e
|
||||
)
|
||||
return ret.status.phase
|
||||
|
||||
|
||||
def delete_namespace(namespace):
|
||||
"""Deletes a given namespace using kubernetes python client"""
|
||||
try:
|
||||
api_response = cli.delete_namespace(namespace)
|
||||
logging.debug(
|
||||
"Namespace deleted. status='%s'" % str(api_response.status)
|
||||
)
|
||||
return api_response
|
||||
except Exception as e:
|
||||
logging.error(
|
||||
"Exception when calling \
|
||||
CoreV1Api->delete_namespace: %s\n"
|
||||
% e
|
||||
)
|
||||
|
||||
|
||||
def check_namespaces(namespaces, label_selectors=None):
|
||||
"""Check if all the watch_namespaces are valid"""
|
||||
try:
|
||||
valid_namespaces = list_namespaces(label_selectors)
|
||||
regex_namespaces = set(namespaces) - set(valid_namespaces)
|
||||
final_namespaces = set(namespaces) - set(regex_namespaces)
|
||||
valid_regex = set()
|
||||
if regex_namespaces:
|
||||
for namespace in valid_namespaces:
|
||||
for regex_namespace in regex_namespaces:
|
||||
if re.search(regex_namespace, namespace):
|
||||
final_namespaces.add(namespace)
|
||||
valid_regex.add(regex_namespace)
|
||||
break
|
||||
invalid_namespaces = regex_namespaces - valid_regex
|
||||
if invalid_namespaces:
|
||||
raise Exception(
|
||||
"There exists no namespaces matching: %s" %
|
||||
(invalid_namespaces)
|
||||
)
|
||||
return list(final_namespaces)
|
||||
except Exception as e:
|
||||
logging.info("%s" % (e))
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
# List nodes in the cluster
|
||||
def list_nodes(label_selector=None):
|
||||
nodes = []
|
||||
try:
|
||||
if label_selector:
|
||||
ret = cli.list_node(pretty=True, label_selector=label_selector)
|
||||
else:
|
||||
ret = cli.list_node(pretty=True)
|
||||
except ApiException as e:
|
||||
logging.error("Exception when calling CoreV1Api->list_node: %s\n" % e)
|
||||
raise e
|
||||
for node in ret.items:
|
||||
nodes.append(node.metadata.name)
|
||||
return nodes
|
||||
|
||||
|
||||
# List nodes in the cluster that can be killed
|
||||
def list_killable_nodes(label_selector=None):
|
||||
nodes = []
|
||||
try:
|
||||
if label_selector:
|
||||
ret = cli.list_node(pretty=True, label_selector=label_selector)
|
||||
else:
|
||||
ret = cli.list_node(pretty=True)
|
||||
except ApiException as e:
|
||||
logging.error("Exception when calling CoreV1Api->list_node: %s\n" % e)
|
||||
raise e
|
||||
for node in ret.items:
|
||||
if kraken_node_name != node.metadata.name:
|
||||
for cond in node.status.conditions:
|
||||
if str(cond.type) == "Ready" and str(cond.status) == "True":
|
||||
nodes.append(node.metadata.name)
|
||||
return nodes
|
||||
|
||||
|
||||
# List managedclusters attached to the hub that can be killed
|
||||
def list_killable_managedclusters(label_selector=None):
|
||||
managedclusters = []
|
||||
try:
|
||||
ret = custom_object_client.list_cluster_custom_object(
|
||||
group="cluster.open-cluster-management.io",
|
||||
version="v1",
|
||||
plural="managedclusters",
|
||||
label_selector=label_selector
|
||||
)
|
||||
except ApiException as e:
|
||||
logging.error("Exception when calling CustomObjectsApi->list_cluster_custom_object: %s\n" % e)
|
||||
raise e
|
||||
for managedcluster in ret['items']:
|
||||
conditions = managedcluster['status']['conditions']
|
||||
available = list(filter(lambda condition: condition['reason'] == 'ManagedClusterAvailable', conditions))
|
||||
if available and available[0]['status'] == 'True':
|
||||
managedclusters.append(managedcluster['metadata']['name'])
|
||||
return managedclusters
|
||||
|
||||
# List pods in the given namespace
|
||||
def list_pods(namespace, label_selector=None):
|
||||
pods = []
|
||||
try:
|
||||
if label_selector:
|
||||
ret = cli.list_namespaced_pod(
|
||||
namespace,
|
||||
pretty=True,
|
||||
label_selector=label_selector
|
||||
)
|
||||
else:
|
||||
ret = cli.list_namespaced_pod(namespace, pretty=True)
|
||||
except ApiException as e:
|
||||
logging.error(
|
||||
"Exception when calling \
|
||||
CoreV1Api->list_namespaced_pod: %s\n"
|
||||
% e
|
||||
)
|
||||
raise e
|
||||
for pod in ret.items:
|
||||
pods.append(pod.metadata.name)
|
||||
return pods
|
||||
|
||||
|
||||
def get_all_pods(label_selector=None):
|
||||
pods = []
|
||||
if label_selector:
|
||||
ret = cli.list_pod_for_all_namespaces(
|
||||
pretty=True,
|
||||
label_selector=label_selector
|
||||
)
|
||||
else:
|
||||
ret = cli.list_pod_for_all_namespaces(pretty=True)
|
||||
for pod in ret.items:
|
||||
pods.append([pod.metadata.name, pod.metadata.namespace])
|
||||
return pods
|
||||
|
||||
|
||||
# Execute command in pod
|
||||
def exec_cmd_in_pod(
|
||||
command,
|
||||
pod_name,
|
||||
namespace,
|
||||
container=None,
|
||||
base_command="bash"
|
||||
):
|
||||
|
||||
exec_command = [base_command, "-c", command]
|
||||
try:
|
||||
if container:
|
||||
ret = stream(
|
||||
cli.connect_get_namespaced_pod_exec,
|
||||
pod_name,
|
||||
namespace,
|
||||
container=container,
|
||||
command=exec_command,
|
||||
stderr=True,
|
||||
stdin=False,
|
||||
stdout=True,
|
||||
tty=False,
|
||||
)
|
||||
else:
|
||||
ret = stream(
|
||||
cli.connect_get_namespaced_pod_exec,
|
||||
pod_name,
|
||||
namespace,
|
||||
command=exec_command,
|
||||
stderr=True,
|
||||
stdin=False,
|
||||
stdout=True,
|
||||
tty=False,
|
||||
)
|
||||
except Exception:
|
||||
return False
|
||||
return ret
|
||||
|
||||
|
||||
def delete_pod(name, namespace):
|
||||
try:
|
||||
cli.delete_namespaced_pod(name=name, namespace=namespace)
|
||||
while cli.read_namespaced_pod(name=name, namespace=namespace):
|
||||
time.sleep(1)
|
||||
except ApiException as e:
|
||||
if e.status == 404:
|
||||
logging.info("Pod already deleted")
|
||||
else:
|
||||
logging.error("Failed to delete pod %s" % e)
|
||||
raise e
|
||||
|
||||
|
||||
def create_pod(body, namespace, timeout=120):
|
||||
try:
|
||||
pod_stat = None
|
||||
pod_stat = cli.create_namespaced_pod(body=body, namespace=namespace)
|
||||
end_time = time.time() + timeout
|
||||
while True:
|
||||
pod_stat = cli.read_namespaced_pod(
|
||||
name=body["metadata"]["name"],
|
||||
namespace=namespace
|
||||
)
|
||||
if pod_stat.status.phase == "Running":
|
||||
break
|
||||
if time.time() > end_time:
|
||||
raise Exception("Starting pod failed")
|
||||
time.sleep(1)
|
||||
except Exception as e:
|
||||
logging.error("Pod creation failed %s" % e)
|
||||
if pod_stat:
|
||||
logging.error(pod_stat.status.container_statuses)
|
||||
delete_pod(body["metadata"]["name"], namespace)
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def read_pod(name, namespace="default"):
|
||||
return cli.read_namespaced_pod(name=name, namespace=namespace)
|
||||
|
||||
|
||||
def get_pod_log(name, namespace="default"):
|
||||
return cli.read_namespaced_pod_log(
|
||||
name=name,
|
||||
namespace=namespace,
|
||||
_return_http_data_only=True,
|
||||
_preload_content=False
|
||||
)
|
||||
|
||||
|
||||
def get_containers_in_pod(pod_name, namespace):
|
||||
pod_info = cli.read_namespaced_pod(pod_name, namespace)
|
||||
container_names = []
|
||||
|
||||
for cont in pod_info.spec.containers:
|
||||
container_names.append(cont.name)
|
||||
return container_names
|
||||
|
||||
|
||||
def delete_job(name, namespace="default"):
|
||||
try:
|
||||
api_response = batch_cli.delete_namespaced_job(
|
||||
name=name,
|
||||
namespace=namespace,
|
||||
body=client.V1DeleteOptions(
|
||||
propagation_policy="Foreground",
|
||||
grace_period_seconds=0
|
||||
),
|
||||
)
|
||||
logging.debug("Job deleted. status='%s'" % str(api_response.status))
|
||||
return api_response
|
||||
except ApiException as api:
|
||||
logging.warn(
|
||||
"Exception when calling \
|
||||
BatchV1Api->create_namespaced_job: %s"
|
||||
% api
|
||||
)
|
||||
logging.warn("Job already deleted\n")
|
||||
except Exception as e:
|
||||
logging.error(
|
||||
"Exception when calling \
|
||||
BatchV1Api->delete_namespaced_job: %s\n"
|
||||
% e
|
||||
)
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
def create_job(body, namespace="default"):
|
||||
try:
|
||||
api_response = batch_cli.create_namespaced_job(
|
||||
body=body,
|
||||
namespace=namespace
|
||||
)
|
||||
return api_response
|
||||
except ApiException as api:
|
||||
logging.warn(
|
||||
"Exception when calling \
|
||||
BatchV1Api->create_job: %s"
|
||||
% api
|
||||
)
|
||||
if api.status == 409:
|
||||
logging.warn("Job already present")
|
||||
except Exception as e:
|
||||
logging.error(
|
||||
"Exception when calling \
|
||||
BatchV1Api->create_namespaced_job: %s"
|
||||
% e
|
||||
)
|
||||
raise
|
||||
|
||||
|
||||
def create_manifestwork(body, namespace):
|
||||
try:
|
||||
api_response = custom_object_client.create_namespaced_custom_object(
|
||||
group="work.open-cluster-management.io",
|
||||
version="v1",
|
||||
plural="manifestworks",
|
||||
body=body,
|
||||
namespace=namespace
|
||||
)
|
||||
return api_response
|
||||
except ApiException as e:
|
||||
print("Exception when calling CustomObjectsApi->create_namespaced_custom_object: %s\n" % e)
|
||||
|
||||
|
||||
def delete_manifestwork(namespace):
|
||||
try:
|
||||
api_response = custom_object_client.delete_namespaced_custom_object(
|
||||
group="work.open-cluster-management.io",
|
||||
version="v1",
|
||||
plural="manifestworks",
|
||||
name="managedcluster-scenarios-template",
|
||||
namespace=namespace
|
||||
)
|
||||
return api_response
|
||||
except ApiException as e:
|
||||
print("Exception when calling CustomObjectsApi->delete_namespaced_custom_object: %s\n" % e)
|
||||
|
||||
def get_job_status(name, namespace="default"):
|
||||
try:
|
||||
return batch_cli.read_namespaced_job_status(
|
||||
name=name,
|
||||
namespace=namespace
|
||||
)
|
||||
except Exception as e:
|
||||
logging.error(
|
||||
"Exception when calling \
|
||||
BatchV1Api->read_namespaced_job_status: %s"
|
||||
% e
|
||||
)
|
||||
raise
|
||||
|
||||
|
||||
# Monitor the status of the cluster nodes and set the status to true or false
|
||||
def monitor_nodes():
|
||||
nodes = list_nodes()
|
||||
notready_nodes = []
|
||||
node_kerneldeadlock_status = "False"
|
||||
for node in nodes:
|
||||
try:
|
||||
node_info = cli.read_node_status(node, pretty=True)
|
||||
except ApiException as e:
|
||||
logging.error(
|
||||
"Exception when calling \
|
||||
CoreV1Api->read_node_status: %s\n"
|
||||
% e
|
||||
)
|
||||
raise e
|
||||
for condition in node_info.status.conditions:
|
||||
if condition.type == "KernelDeadlock":
|
||||
node_kerneldeadlock_status = condition.status
|
||||
elif condition.type == "Ready":
|
||||
node_ready_status = condition.status
|
||||
else:
|
||||
continue
|
||||
if node_kerneldeadlock_status != "False" or node_ready_status != "True": # noqa # noqa
|
||||
notready_nodes.append(node)
|
||||
if len(notready_nodes) != 0:
|
||||
status = False
|
||||
else:
|
||||
status = True
|
||||
return status, notready_nodes
|
||||
|
||||
|
||||
# Monitor the status of the pods in the specified namespace
|
||||
# and set the status to true or false
|
||||
def monitor_namespace(namespace):
|
||||
pods = list_pods(namespace)
|
||||
notready_pods = []
|
||||
for pod in pods:
|
||||
try:
|
||||
pod_info = cli.read_namespaced_pod_status(
|
||||
pod,
|
||||
namespace,
|
||||
pretty=True
|
||||
)
|
||||
except ApiException as e:
|
||||
logging.error(
|
||||
"Exception when calling \
|
||||
CoreV1Api->read_namespaced_pod_status: %s\n"
|
||||
% e
|
||||
)
|
||||
raise e
|
||||
pod_status = pod_info.status.phase
|
||||
if (
|
||||
pod_status != "Running" and
|
||||
pod_status != "Completed" and
|
||||
pod_status != "Succeeded"
|
||||
):
|
||||
notready_pods.append(pod)
|
||||
if len(notready_pods) != 0:
|
||||
status = False
|
||||
else:
|
||||
status = True
|
||||
return status, notready_pods
|
||||
|
||||
|
||||
# Monitor component namespace
|
||||
def monitor_component(iteration, component_namespace):
|
||||
watch_component_status, failed_component_pods = \
|
||||
monitor_namespace(component_namespace)
|
||||
logging.info(
|
||||
"Iteration %s: %s: %s" % (
|
||||
iteration,
|
||||
component_namespace,
|
||||
watch_component_status
|
||||
)
|
||||
)
|
||||
return watch_component_status, failed_component_pods
|
||||
|
||||
|
||||
def apply_yaml(path, namespace='default'):
|
||||
"""
|
||||
Apply yaml config to create Kubernetes resources
|
||||
|
||||
Args:
|
||||
path (string)
|
||||
- Path to the YAML file
|
||||
namespace (string)
|
||||
- Namespace to create the resource
|
||||
|
||||
Returns:
|
||||
The object created
|
||||
"""
|
||||
|
||||
return utils.create_from_yaml(
|
||||
api_client,
|
||||
yaml_file=path,
|
||||
namespace=namespace
|
||||
)
|
||||
|
||||
|
||||
def get_pod_info(name: str, namespace: str = 'default') -> Pod:
|
||||
"""
|
||||
Function to retrieve information about a specific pod
|
||||
in a given namespace. The kubectl command is given by:
|
||||
kubectl get pods <name> -n <namespace>
|
||||
|
||||
Args:
|
||||
name (string)
|
||||
- Name of the pod
|
||||
|
||||
namespace (string)
|
||||
- Namespace to look for the pod
|
||||
|
||||
Returns:
|
||||
- Data class object of type Pod with the output of the above
|
||||
kubectl command in the given format if the pod exists
|
||||
- Returns None if the pod doesn't exist
|
||||
"""
|
||||
pod_exists = check_if_pod_exists(name=name, namespace=namespace)
|
||||
if pod_exists:
|
||||
response = cli.read_namespaced_pod(
|
||||
name=name,
|
||||
namespace=namespace,
|
||||
pretty='true'
|
||||
)
|
||||
container_list = []
|
||||
|
||||
# Create a list of containers present in the pod
|
||||
for container in response.spec.containers:
|
||||
volume_mount_list = []
|
||||
for volume_mount in container.volume_mounts:
|
||||
volume_mount_list.append(
|
||||
VolumeMount(
|
||||
name=volume_mount.name,
|
||||
mountPath=volume_mount.mount_path
|
||||
)
|
||||
)
|
||||
container_list.append(
|
||||
Container(
|
||||
name=container.name,
|
||||
image=container.image,
|
||||
volumeMounts=volume_mount_list
|
||||
)
|
||||
)
|
||||
|
||||
for i, container in enumerate(response.status.container_statuses):
|
||||
container_list[i].ready = container.ready
|
||||
|
||||
# Create a list of volumes associated with the pod
|
||||
volume_list = []
|
||||
for volume in response.spec.volumes:
|
||||
volume_name = volume.name
|
||||
pvc_name = (
|
||||
volume.persistent_volume_claim.claim_name
|
||||
if volume.persistent_volume_claim is not None
|
||||
else None
|
||||
)
|
||||
volume_list.append(Volume(name=volume_name, pvcName=pvc_name))
|
||||
|
||||
# Create the Pod data class object
|
||||
pod_info = Pod(
|
||||
name=response.metadata.name,
|
||||
podIP=response.status.pod_ip,
|
||||
namespace=response.metadata.namespace,
|
||||
containers=container_list,
|
||||
nodeName=response.spec.node_name,
|
||||
volumes=volume_list
|
||||
)
|
||||
return pod_info
|
||||
else:
|
||||
logging.error(
|
||||
"Pod '%s' doesn't exist in namespace '%s'" % (
|
||||
str(name),
|
||||
str(namespace)
|
||||
)
|
||||
)
|
||||
return None
|
||||
|
||||
|
||||
def get_litmus_chaos_object(
    kind: str,
    name: str,
    namespace: str
) -> LitmusChaosObject:
    """
    Function that returns an object of a custom resource type of
    the litmus project. Currently, only ChaosEngine and ChaosResult
    objects are supported.

    Args:
        kind (string)
            - The custom resource type

        name (string)
            - Name of the custom object

        namespace (string)
            - Namespace where the custom object is present

    Returns:
        Data class object of a subclass of LitmusChaosObject
    """

    group = 'litmuschaos.io'
    version = 'v1alpha1'

    if kind.lower() == 'chaosengine':
        plural = 'chaosengines'
        response = custom_object_client.get_namespaced_custom_object(
            group=group,
            plural=plural,
            version=version,
            namespace=namespace,
            name=name
        )
        try:
            engine_status = response['status']['engineStatus']
            exp_status = response['status']['experiments'][0]['status']
        except Exception:
            engine_status = 'Not Initialized'
            exp_status = 'Not Initialized'
        custom_object = ChaosEngine(
            kind='ChaosEngine',
            group=group,
            namespace=namespace,
            name=name,
            plural=plural,
            version=version,
            engineStatus=engine_status,
            expStatus=exp_status
        )
    elif kind.lower() == 'chaosresult':
        plural = 'chaosresults'
        response = custom_object_client.get_namespaced_custom_object(
            group=group,
            plural=plural,
            version=version,
            namespace=namespace,
            name=name
        )
        try:
            verdict = response['status']['experimentStatus']['verdict']
            fail_step = response['status']['experimentStatus']['failStep']
        except Exception:
            verdict = 'N/A'
            fail_step = 'N/A'
        custom_object = ChaosResult(
            kind='ChaosResult',
            group=group,
            namespace=namespace,
            name=name,
            plural=plural,
            version=version,
            verdict=verdict,
            failStep=fail_step
        )
    else:
        logging.error("Invalid litmus chaos custom resource name")
        custom_object = None
    return custom_object


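# Usage sketch (hypothetical resource names; the ChaosEngine object must
# already exist in the cluster):
#
#   engine = get_litmus_chaos_object(
#       kind='chaosengine', name='engine-cartsdb', namespace='litmus'
#   )
#   print(engine.engineStatus, engine.expStatus)

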
def check_if_namespace_exists(name: str) -> bool:
    """
    Function that checks if a namespace exists by parsing through
    the list of projects.
    Args:
        name (string)
            - Namespace name

    Returns:
        Boolean value indicating whether the namespace exists or not
    """

    v1_projects = dyn_client.resources.get(
        api_version='project.openshift.io/v1',
        kind='Project'
    )
    project_list = v1_projects.get()
    return name in str(project_list)


def check_if_pod_exists(name: str, namespace: str) -> bool:
    """
    Function that checks if a pod exists in the given namespace
    Args:
        name (string)
            - Pod name

        namespace (string)
            - Namespace name

    Returns:
        Boolean value indicating whether the pod exists or not
    """

    namespace_exists = check_if_namespace_exists(namespace)
    if namespace_exists:
        pod_list = list_pods(namespace=namespace)
        if name in pod_list:
            return True
    else:
        logging.error("Namespace '%s' doesn't exist" % str(namespace))
    return False


def check_if_pvc_exists(name: str, namespace: str) -> bool:
    """
    Function that checks if a Persistent Volume Claim exists in the
    given namespace.
    Args:
        name (string)
            - PVC name

        namespace (string)
            - Namespace name

    Returns:
        Boolean value indicating whether the Persistent Volume Claim
        exists or not.
    """
    namespace_exists = check_if_namespace_exists(namespace)
    if namespace_exists:
        response = cli.list_namespaced_persistent_volume_claim(
            namespace=namespace
        )
        pvc_list = [pvc.metadata.name for pvc in response.items]
        if name in pvc_list:
            return True
    else:
        logging.error("Namespace '%s' doesn't exist" % str(namespace))
    return False


def get_pvc_info(name: str, namespace: str) -> PVC:
    """
    Function to retrieve information about a Persistent Volume Claim in a
    given namespace

    Args:
        name (string)
            - Name of the persistent volume claim

        namespace (string)
            - Namespace where the persistent volume claim is present

    Returns:
        - A PVC data class containing the name, capacity, volume name,
          namespace and associated pod names of the PVC if the PVC exists
        - Returns None if the PVC doesn't exist
    """

    pvc_exists = check_if_pvc_exists(name=name, namespace=namespace)
    if pvc_exists:
        pvc_info_response = cli.read_namespaced_persistent_volume_claim(
            name=name,
            namespace=namespace,
            pretty=True
        )
        pod_list_response = cli.list_namespaced_pod(namespace=namespace)

        capacity = pvc_info_response.status.capacity['storage']
        volume_name = pvc_info_response.spec.volume_name

        # Loop through all pods in the namespace to find associated PVCs
        pvc_pod_list = []
        for pod in pod_list_response.items:
            for volume in pod.spec.volumes:
                if (
                    volume.persistent_volume_claim is not None
                    and volume.persistent_volume_claim.claim_name == name
                ):
                    pvc_pod_list.append(pod.metadata.name)

        pvc_info = PVC(
            name=name,
            capacity=capacity,
            volumeName=volume_name,
            podNames=pvc_pod_list,
            namespace=namespace
        )
        return pvc_info
    else:
        logging.error(
            "PVC '%s' doesn't exist in namespace '%s'" % (
                str(name),
                str(namespace)
            )
        )
        return None


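# Usage sketch (hypothetical PVC name):
#
#   pvc = get_pvc_info(name="data-pvc", namespace="default")
#   if pvc is not None:
#       print(pvc.capacity, pvc.podNames)

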
# Find the node kraken is deployed on
# Set global kraken node to not delete
def find_kraken_node():
    pods = get_all_pods()
    kraken_pod_name = None
    for pod in pods:
        if "kraken-deployment" in pod[0]:
            kraken_pod_name = pod[0]
            kraken_project = pod[1]
            break
    # have to switch to proper project

    if kraken_pod_name:
        # get kraken-deployment pod, find node name
        try:
            node_name = get_pod_info(kraken_pod_name, kraken_project).nodeName
            global kraken_node_name
            kraken_node_name = node_name
        except Exception as e:
            logging.info("%s" % (e))
            sys.exit(1)


# Watch for a specific node status
def watch_node_status(node, status, timeout, resource_version):
    count = timeout
    for event in watch_resource.stream(
        cli.list_node,
        field_selector=f"metadata.name={node}",
        timeout_seconds=timeout,
        resource_version=f"{resource_version}"
    ):
        conditions = [
            status
            for status in event["object"].status.conditions
            if status.type == "Ready"
        ]
        if conditions[0].status == status:
            watch_resource.stop()
            break
        else:
            count -= 1
            logging.info(
                "Status of node " + node + ": " + str(conditions[0].status)
            )
            if not count:
                watch_resource.stop()


# Watch for a specific managedcluster status
# TODO: Implement this with a watcher instead of polling
def watch_managedcluster_status(managedcluster, status, timeout):
    elapsed_time = 0
    while True:
        conditions = custom_object_client.get_cluster_custom_object_status(
            "cluster.open-cluster-management.io", "v1", "managedclusters", managedcluster
        )['status']['conditions']
        available = list(filter(lambda condition: condition['reason'] == 'ManagedClusterAvailable', conditions))
        if status == "True":
            if available and available[0]['status'] == "True":
                logging.info("Status of managedcluster " + managedcluster + ": Available")
                return True
        else:
            if not available:
                logging.info("Status of managedcluster " + managedcluster + ": Unavailable")
                return True
        time.sleep(2)
        elapsed_time += 2
        if elapsed_time >= timeout:
            logging.info("Timeout waiting for managedcluster " + managedcluster + " to become: " + status)
            return False


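# Usage sketch (hypothetical cluster name): poll for up to 5 minutes until
# the managed cluster reports Available.
#
#   became_available = watch_managedcluster_status("managed-cluster-1", "True", 300)

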
# Get the resource version for the specified node
def get_node_resource_version(node):
    return cli.read_node(name=node).metadata.resource_version

@@ -1,74 +0,0 @@
from dataclasses import dataclass
from typing import List


@dataclass(frozen=True, order=False)
class Volume:
    """Data class to hold information regarding volumes in a pod"""
    name: str
    pvcName: str


@dataclass(order=False)
class VolumeMount:
    """Data class to hold information regarding volume mounts"""
    name: str
    mountPath: str


@dataclass(frozen=True, order=False)
class PVC:
    """Data class to hold information regarding persistent volume claims"""
    name: str
    capacity: str
    volumeName: str
    podNames: List[str]
    namespace: str


@dataclass(order=False)
class Container:
    """Data class to hold information regarding containers in a pod"""
    image: str
    name: str
    volumeMounts: List[VolumeMount]
    ready: bool = False


@dataclass(frozen=True, order=False)
class Pod:
    """Data class to hold information regarding a pod"""
    name: str
    podIP: str
    namespace: str
    containers: List[Container]
    nodeName: str
    volumes: List[Volume]


@dataclass(frozen=True, order=False)
class LitmusChaosObject:
    """Data class to hold information regarding a custom object of litmus project"""
    kind: str
    group: str
    namespace: str
    name: str
    plural: str
    version: str


@dataclass(frozen=True, order=False)
class ChaosEngine(LitmusChaosObject):
    """Data class to hold information regarding a ChaosEngine object"""
    engineStatus: str
    expStatus: str


@dataclass(frozen=True, order=False)
class ChaosResult(LitmusChaosObject):
    """Data class to hold information regarding a ChaosResult object"""
    verdict: str
    failStep: str


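# Construction sketch showing how these data classes nest (all values are
# hypothetical):
#
#   mount = VolumeMount(name="data", mountPath="/var/lib/data")
#   container = Container(name="app", image="nginx:1.25", volumeMounts=[mount])
#   pod = Pod(name="app-0", podIP="10.128.0.5", namespace="default",
#             containers=[container], nodeName="worker-0",
#             volumes=[Volume(name="data", pvcName="data-pvc")])

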
@@ -1,5 +1,6 @@
import sys
import logging
import time
import kraken.invoke.command as runcommand
import kraken.node_actions.common_node_functions as nodeaction
from krkn_lib.k8s import KrknKubernetes
@@ -18,9 +19,11 @@ class abstract_node_scenarios:
        pass

    # Node scenario to stop and then start the node
    def node_stop_start_scenario(self, instance_kill_count, node, timeout):
    def node_stop_start_scenario(self, instance_kill_count, node, timeout, duration):
        logging.info("Starting node_stop_start_scenario injection")
        self.node_stop_scenario(instance_kill_count, node, timeout)
        logging.info("Waiting for %s seconds before starting the node" % (duration))
        time.sleep(duration)
        self.node_start_scenario(instance_kill_count, node, timeout)
        logging.info("node_stop_start_scenario has been successfully injected!")


@@ -1,6 +1,6 @@

import time
import yaml
import os
import kraken.invoke.command as runcommand
import logging
import kraken.node_actions.common_node_functions as nodeaction
@@ -17,9 +17,9 @@ class Azure:
        # Acquire a credential object using CLI-based authentication.
        credentials = DefaultAzureCredential()
        logging.info("credential " + str(credentials))
        az_account = runcommand.invoke("az account list -o yaml")
        az_account_yaml = yaml.safe_load(az_account, Loader=yaml.FullLoader)
        subscription_id = az_account_yaml[0]["id"]
        # az_account = runcommand.invoke("az account list -o yaml")
        # az_account_yaml = yaml.safe_load(az_account, Loader=yaml.FullLoader)
        subscription_id = os.getenv("AZURE_SUBSCRIPTION_ID")
        self.compute_client = ComputeManagementClient(credentials, subscription_id)

    # Get the instance ID of the node

@@ -1,6 +1,8 @@
import os
import sys
import time
import logging
import json
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from googleapiclient import discovery
@@ -10,11 +12,19 @@ from krkn_lib.k8s import KrknKubernetes

class GCP:
    def __init__(self):
        try:
            gapp_creds = os.getenv("GOOGLE_APPLICATION_CREDENTIALS")
            with open(gapp_creds, "r") as f:
                f_str = f.read()
                self.project = json.loads(f_str)['project_id']
            #self.project = runcommand.invoke("gcloud config get-value project").split("/n")[0].strip()
            logging.info("project " + str(self.project) + "!")
            credentials = GoogleCredentials.get_application_default()
            self.client = discovery.build("compute", "v1", credentials=credentials, cache_discovery=False)

            self.project = runcommand.invoke("gcloud config get-value project").split("/n")[0].strip()
            logging.info("project " + str(self.project) + "!")
            credentials = GoogleCredentials.get_application_default()
            self.client = discovery.build("compute", "v1", credentials=credentials, cache_discovery=False)
        except Exception as e:
            logging.error("Error on setting up GCP connection: " + str(e))
            sys.exit(1)

    # Get the instance ID of the node
    def get_instance_id(self, node):

@@ -100,6 +100,8 @@ def inject_node_scenario(action, node_scenario, node_scenario_object, kubecli: K
    )
    node_name = get_yaml_item_value(node_scenario, "node_name", "")
    label_selector = get_yaml_item_value(node_scenario, "label_selector", "")
    if action == "node_stop_start_scenario":
        duration = get_yaml_item_value(node_scenario, "duration", 120)
    timeout = get_yaml_item_value(node_scenario, "timeout", 120)
    service = get_yaml_item_value(node_scenario, "service", "")
    ssh_private_key = get_yaml_item_value(
@@ -121,7 +123,7 @@ def inject_node_scenario(action, node_scenario, node_scenario_object, kubecli: K
    elif action == "node_stop_scenario":
        node_scenario_object.node_stop_scenario(run_kill_count, single_node, timeout)
    elif action == "node_stop_start_scenario":
        node_scenario_object.node_stop_start_scenario(run_kill_count, single_node, timeout)
        node_scenario_object.node_stop_start_scenario(run_kill_count, single_node, timeout, duration)
    elif action == "node_termination_scenario":
        node_scenario_object.node_termination_scenario(run_kill_count, single_node, timeout)
    elif action == "node_reboot_scenario":

@@ -53,7 +53,7 @@ class Plugins:
    def unserialize_scenario(self, file: str) -> Any:
        return serialization.load_from_file(abspath(file))

    def run(self, file: str, kubeconfig_path: str, kraken_config: str):
    def run(self, file: str, kubeconfig_path: str, kraken_config: str, run_uuid: str):
        """
        Run executes a series of steps
        """
@@ -102,7 +102,8 @@ class Plugins:
            unserialized_input.kubeconfig_path = kubeconfig_path
        if "kraken_config" in step.schema.input.properties:
            unserialized_input.kraken_config = kraken_config
        output_id, output_data = step.schema(unserialized_input)
        output_id, output_data = step.schema(params=unserialized_input, run_id=run_uuid)

        logging.info(step.render_output(output_id, output_data) + "\n")
        if output_id in step.error_output_ids:
            raise Exception(
@@ -253,7 +254,8 @@ def run(scenarios: List[str],
        failed_post_scenarios: List[str],
        wait_duration: int,
        telemetry: KrknTelemetryKubernetes,
        kubecli: KrknKubernetes
        kubecli: KrknKubernetes,
        run_uuid: str
        ) -> (List[str], list[ScenarioTelemetry]):

    scenario_telemetries: list[ScenarioTelemetry] = []
@@ -268,7 +270,7 @@ def run(scenarios: List[str],

    try:
        start_monitoring(pool, kill_scenarios)
        PLUGINS.run(scenario, kubeconfig_path, kraken_config)
        PLUGINS.run(scenario, kubeconfig_path, kraken_config, run_uuid)
        result = pool.join()
        scenario_telemetry.affected_pods = result
        if result.error:

@@ -14,11 +14,14 @@ from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import log_exception

def multiprocess_nodes(cloud_object_function, nodes):
def multiprocess_nodes(cloud_object_function, nodes, processes=0):
    try:
        # pool object with number of element

        pool = ThreadPool(processes=len(nodes))
        if processes == 0:
            pool = ThreadPool(processes=len(nodes))
        else:
            pool = ThreadPool(processes=processes)
        logging.info("nodes type " + str(type(nodes[0])))
        if type(nodes[0]) is tuple:
            node_id = []
@@ -45,10 +48,12 @@ def cluster_shut_down(shut_down_config, kubecli: KrknKubernetes):
    shut_down_duration = shut_down_config["shut_down_duration"]
    cloud_type = shut_down_config["cloud_type"]
    timeout = shut_down_config["timeout"]
    processes = 0
    if cloud_type.lower() == "aws":
        cloud_object = AWS()
    elif cloud_type.lower() == "gcp":
        cloud_object = GCP()
        processes = 1
    elif cloud_type.lower() == "openstack":
        cloud_object = OPENSTACKCLOUD()
    elif cloud_type.lower() in ["azure", "az"]:
@@ -71,7 +76,7 @@ def cluster_shut_down(shut_down_config, kubecli: KrknKubernetes):
    for _ in range(runs):
        logging.info("Starting cluster_shut_down scenario injection")
        stopping_nodes = set(node_id)
        multiprocess_nodes(cloud_object.stop_instances, node_id)
        multiprocess_nodes(cloud_object.stop_instances, node_id, processes)
        stopped_nodes = stopping_nodes.copy()
        while len(stopping_nodes) > 0:
            for node in stopping_nodes:
@@ -101,7 +106,7 @@ def cluster_shut_down(shut_down_config, kubecli: KrknKubernetes):
        time.sleep(shut_down_duration)
        logging.info("Restarting the nodes")
        restarted_nodes = set(node_id)
        multiprocess_nodes(cloud_object.start_instances, node_id)
        multiprocess_nodes(cloud_object.start_instances, node_id, processes)
        logging.info("Wait for each node to be running again")
        not_running_nodes = restarted_nodes.copy()
        while len(not_running_nodes) > 0:

kraken/syn_flood/__init__.py (new file)
@@ -0,0 +1 @@
from .syn_flood import *

kraken/syn_flood/syn_flood.py (new file)
@@ -0,0 +1,132 @@
import logging
import os.path
import time
from typing import List

import krkn_lib.utils
import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes


def run(scenarios_list: list[str], krkn_kubernetes: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
    scenario_telemetries: list[ScenarioTelemetry] = []
    failed_post_scenarios = []
    for scenario in scenarios_list:
        scenario_telemetry = ScenarioTelemetry()
        scenario_telemetry.scenario = scenario
        scenario_telemetry.start_timestamp = time.time()
        telemetry.set_parameters_base64(scenario_telemetry, scenario)

        try:
            pod_names = []
            config = parse_config(scenario)
            if config["target-service-label"]:
                target_services = krkn_kubernetes.select_service_by_label(config["namespace"], config["target-service-label"])
            else:
                target_services = [config["target-service"]]

            for target in target_services:
                if not krkn_kubernetes.service_exists(target, config["namespace"]):
                    raise Exception(f"{target} service not found")
                for i in range(config["number-of-pods"]):
                    pod_name = "syn-flood-" + krkn_lib.utils.get_random_string(10)
                    krkn_kubernetes.deploy_syn_flood(pod_name,
                                                     config["namespace"],
                                                     config["image"],
                                                     target,
                                                     config["target-port"],
                                                     config["packet-size"],
                                                     config["window-size"],
                                                     config["duration"],
                                                     config["attacker-nodes"]
                                                     )
                    pod_names.append(pod_name)

            logging.info("waiting for all the attackers to finish:")
            did_finish = False
            finished_pods = []
            while not did_finish:
                for pod_name in pod_names:
                    if not krkn_kubernetes.is_pod_running(pod_name, config["namespace"]):
                        finished_pods.append(pod_name)
                if set(pod_names) == set(finished_pods):
                    did_finish = True
                time.sleep(1)

        except Exception as e:
            logging.error(f"Failed to run syn flood scenario {scenario}: {e}")
            failed_post_scenarios.append(scenario)
            scenario_telemetry.exit_status = 1
        else:
            scenario_telemetry.exit_status = 0
        scenario_telemetry.end_timestamp = time.time()
        scenario_telemetries.append(scenario_telemetry)
    return failed_post_scenarios, scenario_telemetries

def parse_config(scenario_file: str) -> dict[str, any]:
    if not os.path.exists(scenario_file):
        raise Exception(f"failed to load scenario file {scenario_file}")

    try:
        with open(scenario_file) as stream:
            config = yaml.safe_load(stream)
    except Exception:
        raise Exception(f"{scenario_file} is not a valid yaml file")

    missing = []
    if not check_key_value(config, "packet-size"):
        missing.append("packet-size")
    if not check_key_value(config, "window-size"):
        missing.append("window-size")
    if not check_key_value(config, "duration"):
        missing.append("duration")
    if not check_key_value(config, "namespace"):
        missing.append("namespace")
    if not check_key_value(config, "number-of-pods"):
        missing.append("number-of-pods")
    if not check_key_value(config, "target-port"):
        missing.append("target-port")
    if not check_key_value(config, "image"):
        missing.append("image")
    if "target-service" not in config.keys():
        missing.append("target-service")
    if "target-service-label" not in config.keys():
        missing.append("target-service-label")

    if len(missing) > 0:
        raise Exception(f"{','.join(missing)} parameter(s) are missing")

    if not config["target-service"] and not config["target-service-label"]:
        raise Exception("you have to set either a target service or a label")
    if config["target-service"] and config["target-service-label"]:
        raise Exception("you cannot select both target-service and target-service-label")

    if 'attacker-nodes' in config and not is_node_affinity_correct(config['attacker-nodes']):
        raise Exception("attacker-nodes format is not correct")
    return config


def check_key_value(dictionary, key):
    if key in dictionary:
        value = dictionary[key]
        if value is not None and value != '':
            return True
    return False


def is_node_affinity_correct(obj) -> bool:
    if not isinstance(obj, dict):
        return False
    for key in obj.keys():
        if not isinstance(key, str):
            return False
        if not isinstance(obj[key], list):
            return False
    return True

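# Quick sanity checks for the validators above (inputs are hypothetical):
#
#   assert check_key_value({"duration": 10}, "duration")
#   assert not check_key_value({"duration": ""}, "duration")
#   assert is_node_affinity_correct({"node-role.kubernetes.io/worker": [""]})
#   assert not is_node_affinity_correct(["not", "a", "dict"])
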
kraken/utils/TeeLogHandler.py (new file)
@@ -0,0 +1,12 @@
import logging

class TeeLogHandler(logging.Handler):
    logs: list[str] = []
    name = "TeeLogHandler"

    def get_output(self) -> str:
        return "\n".join(self.logs)

    def emit(self, record):
        self.logs.append(self.formatter.format(record))
    def __del__(self):
        pass
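# Usage sketch: tee log records into memory so they can later be embedded in
# the junit XML (the formatter string mirrors the one used in run_kraken.py):
#
#   tee = TeeLogHandler()
#   tee.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
#   logging.getLogger().addHandler(tee)
#   logging.info("hello")
#   print(tee.get_output())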
kraken/utils/__init__.py (new file)
@@ -0,0 +1 @@
from .TeeLogHandler import TeeLogHandler

@@ -1,9 +1,9 @@
aliyun-python-sdk-core==2.13.36
aliyun-python-sdk-ecs==4.24.25
arcaflow==0.17.1
arcaflow-plugin-sdk==0.10.0
arcaflow-plugin-sdk==0.14.0
arcaflow==0.17.2
boto3==1.28.61
azure-identity==1.15.0
azure-identity==1.16.1
azure-keyvault==4.2.0
azure-mgmt-compute==30.5.0
itsdangerous==2.0.1
@@ -15,27 +15,26 @@ google-api-python-client==2.116.0
ibm_cloud_sdk_core==3.18.0
ibm_vpc==0.20.0
jinja2==3.1.4
krkn-lib==2.1.3
krkn-lib==2.1.9
lxml==5.1.0
kubernetes==26.1.0
kubernetes==28.1.0
oauth2client==4.1.3
pandas==2.2.0
openshift-client==1.0.21
paramiko==3.4.0
podman-compose==1.0.6
pyVmomi==8.0.2.0.1
pyfiglet==1.0.2
pytest==8.0.0
python-ipmi==0.5.4
python-openstackclient==6.5.0
requests==2.32.0
requests==2.32.2
service_identity==24.1.0
PyYAML==6.0
setuptools==65.5.1
PyYAML==6.0.1
setuptools==70.0.0
werkzeug==3.0.3
wheel==0.42.0
zope.interface==5.4.0

git+https://github.com/krkn-chaos/arcaflow-plugin-kill-pod.git
git+https://github.com/krkn-chaos/arcaflow-plugin-kill-pod.git@v0.1.0
git+https://github.com/vmware/vsphere-automation-sdk-python.git@v8.0.0.0
cryptography>=42.0.4 # not directly required, pinned by Snyk to avoid a vulnerability

run_kraken.py
@@ -27,7 +27,7 @@ import kraken.arcaflow_plugin as arcaflow_plugin
import kraken.prometheus as prometheus_plugin
import kraken.service_hijacking.service_hijacking as service_hijacking_plugin
import server as server
from kraken import plugins
from kraken import plugins, syn_flood
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.ocp import KrknOpenshift
from krkn_lib.telemetry.elastic import KrknElastic
@@ -35,13 +35,14 @@ from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.models.telemetry import ChaosRunTelemetry
from krkn_lib.utils import SafeLogger
from krkn_lib.utils.functions import get_yaml_item_value
from krkn_lib.utils.functions import get_yaml_item_value, get_junit_test_case

from kraken.utils import TeeLogHandler

report_file = ""

# Main function
def main(cfg):
def main(cfg) -> int:
    # Start kraken
    print(pyfiglet.figlet_format("kraken"))
    logging.info("Starting kraken")
@@ -108,7 +109,8 @@ def main(cfg):
        logging.error(
            "Cannot read the kubeconfig file at %s, please check" % kubeconfig_path
        )
        sys.exit(1)
        #sys.exit(1)
        return 1
    logging.info("Initializing client to talk to the Kubernetes cluster")

    # Generate uuid for the run
@@ -142,10 +144,12 @@ def main(cfg):
    # Set up kraken url to track signal
    if not 0 <= int(port) <= 65535:
        logging.error("%s isn't a valid port number, please check" % (port))
        sys.exit(1)
        #sys.exit(1)
        return 1
    if not signal_address:
        logging.error("Please set the signal address in the config")
        sys.exit(1)
        #sys.exit(1)
        return 1
    address = (signal_address, port)

    # If publish_running_status is False this should keep us going
@@ -176,7 +180,7 @@ def main(cfg):
    except Exception:
        logging.error("invalid distribution selected, running openshift scenarios against kubernetes cluster."
                      "Please set 'kubernetes' in config.yaml krkn.platform and try again")
        sys.exit(1)
        return 1
    if cv != "":
        logging.info(cv)
    else:
@@ -251,7 +255,7 @@ def main(cfg):
                    "plugin_scenarios with the "
                    "kill-pods configuration instead."
                )
                sys.exit(1)
                return 1
            elif scenario_type == "arcaflow_scenarios":
                failed_post_scenarios, scenario_telemetries = arcaflow_plugin.run(
                    scenarios_list, kubeconfig_path, telemetry_k8s
@@ -266,7 +270,8 @@ def main(cfg):
                    failed_post_scenarios,
                    wait_duration,
                    telemetry_k8s,
                    kubecli
                    kubecli,
                    run_uuid
                )
                chaos_telemetry.scenarios.extend(scenario_telemetries)
            # krkn_lib
@@ -353,6 +358,10 @@ def main(cfg):
                logging.info("Running Service Hijacking Chaos")
                failed_post_scenarios, scenario_telemetries = service_hijacking_plugin.run(scenarios_list, wait_duration, kubecli, telemetry_k8s)
                chaos_telemetry.scenarios.extend(scenario_telemetries)
            elif scenario_type == "syn_flood":
                logging.info("Running Syn Flood Chaos")
                failed_post_scenarios, scenario_telemetries = syn_flood.run(scenarios_list, kubecli, telemetry_k8s)
                chaos_telemetry.scenarios.extend(scenario_telemetries)

        # Check for critical alerts when enabled
        post_critical_alerts = 0
@@ -448,17 +457,20 @@ def main(cfg):
            )
        else:
            logging.error("Alert profile is not defined")
            sys.exit(1)
            #sys.exit(1)
            return 1

        if post_critical_alerts > 0:
            logging.error("Critical alerts are firing, please check; exiting")
            sys.exit(2)
            #sys.exit(2)
            return 2

        if failed_post_scenarios:
            logging.error(
                "Post scenarios are still failing at the end of all iterations"
            )
            sys.exit(2)
            #sys.exit(2)
            return 2

        logging.info(
            "Successfully finished running Kraken. UUID for the run: "
@@ -466,7 +478,11 @@ def main(cfg):
        )
    else:
        logging.error("Cannot find a config at %s, please check" % (cfg))
        sys.exit(1)
        #sys.exit(1)
        return 2

    return 0


if __name__ == "__main__":
@@ -486,19 +502,102 @@ if __name__ == "__main__":
        help="output report location",
        default="kraken.report",
    )

    parser.add_option(
        "--junit-testcase",
        dest="junit_testcase",
        help="junit test case description",
        default=None,
    )

    parser.add_option(
        "--junit-testcase-path",
        dest="junit_testcase_path",
        help="junit test case path",
        default=None,
    )

    parser.add_option(
        "--junit-testcase-version",
        dest="junit_testcase_version",
        help="junit test case version",
        default=None,
    )

    (options, args) = parser.parse_args()
    report_file = options.output
    tee_handler = TeeLogHandler()
    handlers = [logging.FileHandler(report_file, mode="w"), logging.StreamHandler(), tee_handler]

    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s [%(levelname)s] %(message)s",
        handlers=[
            logging.FileHandler(report_file, mode="w"),
            logging.StreamHandler(),
        ],
        handlers=handlers,
    )
    option_error = False

    # used to check if there is any missing or wrong parameter that prevents
    # the creation of the junit file
    junit_error = False
    junit_normalized_path = None
    retval = 0
    junit_start_time = time.time()
    # checks if both mandatory options for junit are set
    if options.junit_testcase_path and not options.junit_testcase:
        logging.error("please set junit test case description with --junit-testcase [description] option")
        option_error = True
        junit_error = True

    if options.junit_testcase and not options.junit_testcase_path:
        logging.error("please set junit test case path with --junit-testcase-path [path] option")
        option_error = True
        junit_error = True

    # normalized path
    if options.junit_testcase:
        junit_normalized_path = os.path.normpath(options.junit_testcase_path)

        if not os.path.exists(junit_normalized_path):
            logging.error(f"{junit_normalized_path} does not exist, please select a valid path")
            option_error = True
            junit_error = True

        if not os.path.isdir(junit_normalized_path):
            logging.error(f"{junit_normalized_path} is a file, please select a valid folder path")
            option_error = True
            junit_error = True

        if not os.access(junit_normalized_path, os.W_OK):
            logging.error(f"{junit_normalized_path} is not writable, please select a valid path")
            option_error = True
            junit_error = True

    if options.cfg is None:
        logging.error("Please check if you have passed the config")
        sys.exit(1)
        option_error = True

    if option_error:
        retval = 1
    else:
        main(options.cfg)
        retval = main(options.cfg)

    junit_endtime = time.time()

    # checks the minimum required parameters to write the junit file
    if junit_normalized_path and not junit_error:
        junit_testcase_xml = get_junit_test_case(
            success=True if retval == 0 else False,
            time=int(junit_endtime - junit_start_time),
            test_suite_name="krkn-test-suite",
            test_case_description=options.junit_testcase,
            test_stdout=tee_handler.get_output(),
            test_version=options.junit_testcase_version
        )
        junit_testcase_file_path = f"{junit_normalized_path}/junit_krkn_{int(time.time())}.xml"
        logging.info(f"writing junit XML testcase in {junit_testcase_file_path}")
        with open(junit_testcase_file_path, "w") as stream:
            stream.write(junit_testcase_xml)

    sys.exit(retval)

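# A hypothetical invocation exercising the new junit options (the --config
# flag name is assumed from the existing CLI; paths and values are examples):
#
#   python run_kraken.py --config config/config.yaml --output kraken.report \
#       --junit-testcase "krkn nightly run" --junit-testcase-path /tmp/junit \
#       --junit-testcase-version 1.0.0
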
scenarios/kube/syn_flood.yaml (new file)
@@ -0,0 +1,16 @@
packet-size: 120 # hping3 packet size
window-size: 64 # hping3 TCP window size
duration: 10 # chaos scenario duration
namespace: default # namespace where the target service(s) are deployed
target-service: elasticsearch # target service name (if set, target-service-label must be empty)
target-port: 9200 # target service TCP port
target-service-label: "" # target service label, can be used to target multiple services at the same time
                         # if they have the same label set (if set, target-service must be empty)
number-of-pods: 2 # number of attacker pods instantiated per each target
image: quay.io/krkn-chaos/krkn-syn-flood:v1.0.0 # syn flood attacker container image
attacker-nodes: # sets the node affinity used to schedule the attacker pods; per each node label selector
  node-role.kubernetes.io/worker: # multiple values can be specified; this way the kube scheduler will schedule the attacker pods
    - "" # in the best way possible based on the provided labels; multiple labels can be specified
# set an empty value (attacker-nodes: {}) to let kubernetes schedule the pods

@@ -1,14 +1,13 @@
node_scenarios:
  - actions: # node chaos scenarios to be injected
      - node_stop_start_scenario
      - stop_start_kubelet_scenario
      - node_crash_scenario
    node_name: # node on which scenario has to be injected; can set multiple names separated by comma
    label_selector: node-role.kubernetes.io/worker # when node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection
    instance_count: 1 # Number of nodes to perform action/select that match the label selector
    runs: 1 # number of times to inject each scenario under actions (will perform on same node each time)
    timeout: 120 # duration to wait for completion of node scenario injection
    cloud_type: aws # cloud type on which Kubernetes/OpenShift runs
    timeout: 360 # duration to wait for completion of node scenario injection
    duration: 120 # duration to stop the node before running the start action
    cloud_type: aws # cloud type on which Kubernetes/OpenShift runs
  - actions:
      - node_reboot_scenario
    node_name:
scenarios/openshift/azure_node_scenarios.yml (new file)
@@ -0,0 +1,16 @@
node_scenarios:
  - actions:
      - node_reboot_scenario
    node_name:
    label_selector: node-role.kubernetes.io/infra
    instance_count: 1
    timeout: 120
    cloud_type: azure
  - actions:
      - node_stop_start_scenario
    node_name:
    label_selector: node-role.kubernetes.io/infra
    instance_count: 1
    timeout: 360
    duration: 120
    cloud_type: azure
scenarios/openshift/baremetal_node_scenarios.yml (new file)
@@ -0,0 +1,19 @@
node_scenarios:
  - actions: # Node chaos scenarios to be injected.
      - node_stop_start_scenario
    node_name: # Node on which scenario has to be injected.
    label_selector: node-role.kubernetes.io/worker # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection.
    instance_count: 1 # Number of nodes to perform action/select that match the label selector.
    runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time).
    timeout: 360 # Duration to wait for completion of node scenario injection.
    duration: 120 # Duration to stop the node before running the start action.
    cloud_type: bm # Cloud type on which Kubernetes/OpenShift runs.
    bmc_user: defaultuser # For baremetal (bm) cloud type. The default IPMI username. Optional if specified for all machines.
    bmc_password: defaultpass # For baremetal (bm) cloud type. The default IPMI password. Optional if specified for all machines.
    bmc_info: # This section is here to specify baremetal per-machine info, so it is optional if there is no per-machine info.
      node-1: # The node name for the baremetal machine
        bmc_addr: mgmt-machine1.example.com # Optional. For baremetal nodes with the IPMI BMC address missing from 'oc get bmh'.
      node-2:
        bmc_addr: mgmt-machine2.example.com
        bmc_user: user # The baremetal IPMI user. Overrides the default IPMI user specified above. Optional if the default is set.
        bmc_password: pass # The baremetal IPMI password. Overrides the default IPMI password specified above. Optional if the default is set.
scenarios/openshift/gcp_node_scenarios.yml (new file)
@@ -0,0 +1,16 @@
node_scenarios:
  - actions:
      - node_reboot_scenario
    node_name:
    label_selector: node-role.kubernetes.io/worker
    instance_count: 1
    timeout: 120
    cloud_type: gcp
  - actions:
      - node_stop_start_scenario
    node_name:
    label_selector: node-role.kubernetes.io/worker
    instance_count: 1
    timeout: 360
    duration: 120
    cloud_type: gcp
@@ -5,5 +5,6 @@
    label_selector: "node-role.kubernetes.io/worker" # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection
    runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time)
    instance_count: 1 # Number of nodes to perform action/select that match the label selector
    timeout: 30 # Duration to wait for completion of node scenario injection
    skip_openshift_checks: False # Set to True if you don't want to wait for the status of the nodes to change on OpenShift before passing the scenario
    timeout: 360 # Duration to wait for completion of node scenario injection
    duration: 120 # Duration to stop the node before running the start action
    skip_openshift_checks: False # Set to True if you don't want to wait for the status of the nodes to change on OpenShift before passing the scenario

@@ -39,7 +39,7 @@ class NetworkScenariosTest(unittest.TestCase):

    def test_network_chaos(self):
        output_id, output_data = ingress_shaping.network_chaos(
            ingress_shaping.NetworkScenarioConfig(
            params=ingress_shaping.NetworkScenarioConfig(
                label_selector="node-role.kubernetes.io/control-plane",
                instance_count=1,
                network_params={
@@ -47,7 +47,8 @@ class NetworkScenariosTest(unittest.TestCase):
                    "loss": "0.02",
                    "bandwidth": "100mbit"
                }
            )
            ),
            run_id="network-shaping-test"
        )
        if output_id == "error":
            logging.error(output_data.error)

@@ -10,7 +10,7 @@ class RunPythonPluginTest(unittest.TestCase):
        tmp_file = tempfile.NamedTemporaryFile()
        tmp_file.write(bytes("print('Hello world!')", 'utf-8'))
        tmp_file.flush()
        output_id, output_data = run_python_file(RunPythonFileInput(tmp_file.name))
        output_id, output_data = run_python_file(params=RunPythonFileInput(tmp_file.name), run_id="test-python-plugin-success")
        self.assertEqual("success", output_id)
        self.assertEqual("Hello world!\n", output_data.stdout)

@@ -18,7 +18,7 @@ class RunPythonPluginTest(unittest.TestCase):
        tmp_file = tempfile.NamedTemporaryFile()
        tmp_file.write(bytes("import sys\nprint('Hello world!')\nsys.exit(42)\n", 'utf-8'))
        tmp_file.flush()
        output_id, output_data = run_python_file(RunPythonFileInput(tmp_file.name))
        output_id, output_data = run_python_file(params=RunPythonFileInput(tmp_file.name), run_id="test-python-plugin-error")
        self.assertEqual("error", output_id)
        self.assertEqual(42, output_data.exit_code)
        self.assertEqual("Hello world!\n", output_data.stdout)