Compare commits

...

30 Commits

Author SHA1 Message Date
Tullio Sebastiani
32142cc159 CVEs fix (#698)
* golang CVEs fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* arcaflow update

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-09-20 08:33:41 -04:00
Paige Patton
34bfc0d3d9 Adding aws bare metal (#695)
* adding aws bare metal

rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

* no reservations found

rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

---------

Co-authored-by: Auto User <auto@users.noreply.github.com>
2024-09-18 13:55:58 -04:00
Tullio Sebastiani
736c90e937 Namespaced cluster events and logs integration (#690)
* namespaced events integration

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* namespaced logs implementation

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

namespaced logs plugin scenario

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

namespaced logs integration

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* logs collection fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* krkn-lib 3.1.0 update

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-09-12 11:54:57 +02:00
Naga Ravi Chaitanya Elluri
5e7938ba4a Update default configuration pointer for the node scenarios (#693)
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-09-09 22:10:25 -04:00
Paige Patton
b525f83261 restart kubelet (#688)
rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Auto User <auto@users.noreply.github.com>
2024-09-09 21:57:53 -04:00
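For reference, the new restart_kubelet_scenario plugs into the node-scenario config schema quoted later on this page; a minimal sketch, assuming the same fields as the other kubelet scenarios (values are illustrative):

```yaml
node_scenarios:
  - actions:                 # node chaos scenarios to inject
      - restart_kubelet_scenario
    label_selector: node-role.kubernetes.io/worker  # used when node_name is not specified
    instance_count: 1        # number of matching nodes to act on
    runs: 1                  # times to inject each action (same node each time)
    timeout: 120             # seconds to wait for the injection to complete
```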
Paige Patton
26460a0dce Adding elastic set to none (#691)
* adding elastic set to none

rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Auto User <auto@users.noreply.github.com>

* too many ls

rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

---------

Signed-off-by: Auto User <auto@users.noreply.github.com>
Co-authored-by: Auto User <auto@users.noreply.github.com>
2024-09-05 16:05:19 -04:00
dependabot[bot]
7968c2a776 Bump actions/download-artifact from 3 to 4.1.7 in /.github/workflows
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 3 to 4.1.7.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v3...v4.1.7)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-09-03 23:03:39 -04:00
Tullio Sebastiani
6186555c15 Elastic search krkn-lib integration (#658)
* Elastic search krkn-lib integration

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

removed default urls

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* Fix alerts bug on prometheus

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* fixed prometheus object initialization bug

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* updated requirements to krkn-lib 2.1.8

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* disabled alerts and metrics by default

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* reverted requirement to elastic branch on krkn-lib

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* numpy downgrade

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* maximum retries added to the hijacking funtest

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* added elastic settings to funtest config

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* krkn-lib 3.0.0 update

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-08-28 10:46:42 -04:00
Tullio Sebastiani
9cd086f59c Adds the startup option to produce prow junit XML output for sippy integration (#684)
* removed legacy kubernetes module

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* added sippy junit XML file production options

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* krkn-lib update

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

krkn-lib update

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-08-13 12:40:34 +02:00
Naga Ravi Chaitanya Elluri
1057917731 Add duration parameter for node scenarios
This option is enabled only for the node_stop_start scenario, where
the user may want to stop the node for a certain duration to understand
the impact before starting it back up. This commit also bumps the
scenario timeout from 120 seconds to 360 seconds to make sure
there's enough time for the node to reach the Ready state on the
Kubernetes side after it is started on the infra side.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-08-12 13:40:18 -04:00
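A hedged sketch of what this might look like in a node-scenario config, based only on the commit description above (the duration key name and placement are assumptions):

```yaml
node_scenarios:
  - actions:
      - node_stop_start_scenario
    label_selector: node-role.kubernetes.io/worker
    instance_count: 1
    runs: 1
    timeout: 360     # bumped from 120 so the node has time to reach Ready after start
    duration: 120    # assumed key: seconds to keep the node stopped before starting it
    cloud_type: aws
```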
Naga Ravi Chaitanya Elluri
5484828b67 Deprecate running krkn as kubernetes app
This commit removes the instructions on running krkn as a Kubernetes
deployment, as that option is not supported or maintained and is not recommended.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-08-09 13:44:43 -04:00
Naga Ravi Chaitanya Elluri
d18b6332e5 Improve node-scenario docs
This commit adds sample configuration files for each of the supported
platforms.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-08-07 13:52:15 -04:00
Paige Patton
89a0e166f1 no multiprocess for gcp shutdown (#682)
rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Auto User <auto@users.noreply.github.com>
2024-08-03 18:43:52 -04:00
Naga Ravi Chaitanya Elluri
624f50acd1 Output rate of increase for the SLO queries
This commit:
- Switches the rate queries' severity to critical, as a 5% threshold is
  high for low scale/density clusters and needs to be flagged.
- Adds the rate queries to the OpenShift alerts file.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-08-01 12:29:35 -04:00
Tullio Sebastiani
e02c6d1287 SYN flood scenario (#668)
* scenario config file

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* syn flood plugin

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* run_krkn.py updated

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* requirements.txt + documentation + config.yaml

* set node selector defaults to worker

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-29 15:31:37 -04:00
jtydlack
04425a8d8a Add alerts to alert.yaml
Signed-off-by: jtydlack <139967002+jtydlack@users.noreply.github.com>
2024-07-25 10:51:15 -04:00
Naga Ravi Chaitanya Elluri
f3933f0e62 fix: requirements.txt to reduce vulnerabilities (#673)
The following vulnerabilities are fixed by pinning transitive dependencies:
- https://snyk.io/vuln/SNYK-PYTHON-SETUPTOOLS-7448482

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
2024-07-22 10:12:14 -04:00
Naga Ravi Chaitanya Elluri
56ff0a8c72 Deprecate setting release version in the container source file
This commit also deprecates building the container image for ppc64le as it
is not actively maintained. We will add support if users request it
in the future.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-07-18 12:56:08 -04:00
Tullio Sebastiani
9378cd74cd krkn-lib update v2.1.6 to fix pod monitoring time calculations (#674)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-16 18:04:24 +02:00
Paige Patton
4d3491da0f adding action token passing (#671)
rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-07-15 12:50:20 -04:00
Naga Ravi Chaitanya Elluri
d6ce66160b Remove podman-compose dependency
We are not using it in the krkn code base, and removing it fixes one
of the license issues reported by FOSSA. This commit also removes
setting up dependencies using docker/podman compose as it is not actively
maintained.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-07-10 17:25:33 -04:00
Paige Rubendall
ef1a55438b taking out the need for az cli to be installed
rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-07-05 15:18:06 -04:00
Tullio Sebastiani
d8f54b83a2 fixed image push issue
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-05 10:32:01 -04:00
Tullio Sebastiani
4870c86515 moves the krkn-hub build from push on main to tag (#660)
* moves the krkn-hub build from push on main to tag + final image enhancement

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fixed syntax

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

typo

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

typo

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* quotes

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-05 16:09:34 +02:00
Naga Ravi Chaitanya Elluri
6ae17cf678 Update dockerfile to install azure-cli using dnf
Avoids architecture issues such as "bash: /usr/bin/az: cannot execute: required file not found"

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-07-03 18:35:45 -04:00
Tullio Sebastiani
ce9f8aa050 Dockerfile update v1.6.2 (#659)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-03 16:34:37 +02:00
Paige Patton
05148317c1 taking out one gcloud call (#657)
rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-07-03 16:14:19 +02:00
Tullio Sebastiani
5f836f294b Kill pod arca plugin update adaptation (#656)
* new kill-pod interface adaptation

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* unit test fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* requirements update

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* fixed duplicate requirement

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* added conditional dockerfile build

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

removed useless print

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-03 15:50:43 +02:00
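krkn plugin scenarios use an id/config layout (the same style as the IBMCloud node example quoted further down this page); a purely hypothetical sketch of a kill-pod entry under the adapted interface, where the field names are illustrative assumptions, not taken from this diff:

```yaml
- id: kill-pods                     # hypothetical scenario id
  config:
    namespace_pattern: ^default$    # assumed: regex selecting the target namespace(s)
    label_selector: app=nginx       # assumed: label filter for candidate pods
    kill: 1                         # assumed: number of matching pods to kill
```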
snyk-bot
cfa1bb09a0 fix: requirements.txt to reduce vulnerabilities
The following vulnerabilities are fixed by pinning transitive dependencies:
- https://snyk.io/vuln/SNYK-PYTHON-REQUESTS-6928867
2024-06-24 10:23:37 -04:00
Naga Ravi Chaitanya Elluri
5ddfff5a85 Make krkn dir executable
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-06-20 14:32:20 -04:00
52 changed files with 1253 additions and 1452 deletions

View File

@@ -1,8 +1,7 @@
name: Docker Image CI
on:
push:
branches:
- main
tags: ['v[0-9].[0-9]+.[0-9]+']
pull_request:
jobs:
@@ -12,30 +11,43 @@ jobs:
- name: Check out code
uses: actions/checkout@v3
- name: Build the Docker images
if: startsWith(github.ref, 'refs/tags')
run: |
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/ --build-arg TAG=${GITHUB_REF#refs/tags/}
docker tag quay.io/krkn-chaos/krkn quay.io/redhat-chaos/krkn
docker tag quay.io/krkn-chaos/krkn quay.io/krkn-chaos/krkn:${GITHUB_REF#refs/tags/}
docker tag quay.io/krkn-chaos/krkn quay.io/redhat-chaos/krkn:${GITHUB_REF#refs/tags/}
- name: Test Build the Docker images
if: ${{ github.event_name == 'pull_request' }}
run: |
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/ --build-arg PR_NUMBER=${{ github.event.pull_request.number }}
- name: Login in quay
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
if: startsWith(github.ref, 'refs/tags')
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USERNAME }}
QUAY_TOKEN: ${{ secrets.QUAY_PASSWORD }}
- name: Push the KrknChaos Docker images
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker push quay.io/krkn-chaos/krkn
if: startsWith(github.ref, 'refs/tags')
run: |
docker push quay.io/krkn-chaos/krkn
docker push quay.io/krkn-chaos/krkn:${GITHUB_REF#refs/tags/}
- name: Login in to redhat-chaos quay
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
if: startsWith(github.ref, 'refs/tags/v')
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}
- name: Push the RedHat Chaos Docker images
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker push quay.io/redhat-chaos/krkn
if: startsWith(github.ref, 'refs/tags')
run: |
docker push quay.io/redhat-chaos/krkn
docker push quay.io/redhat-chaos/krkn:${GITHUB_REF#refs/tags/}
- name: Rebuild krkn-hub
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
if: startsWith(github.ref, 'refs/tags')
uses: redhat-chaos/actions/krkn-hub@main
with:
QUAY_USER: ${{ secrets.QUAY_USERNAME }}
QUAY_TOKEN: ${{ secrets.QUAY_PASSWORD }}
AUTOPUSH: ${{ secrets.AUTOPUSH }}

View File

@@ -169,7 +169,7 @@ jobs:
path: krkn-lib-docs
ssh-key: ${{ secrets.KRKN_LIB_DOCS_PRIV_KEY }}
- name: Download json coverage
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4.1.7
with:
name: coverage.json
- name: Set up Python

View File

@@ -50,3 +50,15 @@ telemetry:
oc_cli_path: /usr/bin/oc # optional, if not specified will be search in $PATH
events_backup: True # enables/disables cluster events collection
telemetry_group: "funtests"
elastic:
enable_elastic: True
collect_metrics: False
collect_alerts: False
verify_certs: False
elastic_url: "https://192.168.39.196" # To track results in elasticsearch, give url to server here; will post telemetry details when url and index not blank
elastic_port: 32766
username: "elastic"
password: "test"
metrics_index: "krkn-metrics"
alerts_index: "krkn-alerts"
telemetry_index: "krkn-telemetry"

View File

@@ -42,7 +42,14 @@ function functional_test_service_hijacking {
python3 -m coverage run -a run_kraken.py -c CI/config/service_hijacking.yaml > /dev/null 2>&1 &
PID=$!
#Waiting the hijacking to have effect
while [ `curl -X GET -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL/list/index.php` == 404 ]; do echo "waiting scenario to kick in."; sleep 1; done;
COUNTER=0
while [ `curl -X GET -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL/list/index.php` == 404 ]
do
echo "waiting scenario to kick in."
sleep 1
COUNTER=$((COUNTER+1))
[ $COUNTER -eq "100" ] && echo "maximum number of retry reached, test failed" && exit 1
done
#Checking Step 1 GET on /list/index.php
OUT_GET="`curl -X GET -s $SERVICE_URL/list/index.php`"

View File

@@ -41,18 +41,6 @@ After installation, refer back to the below sections for supported scenarios and
#### Running Kraken with minimal configuration tweaks
For cases where you want to run Kraken with minimal configuration changes, refer to [krkn-hub](https://github.com/krkn-chaos/krkn-hub). One use case is CI integration where you do not want to carry around different configuration files for the scenarios.
### Setting up infrastructure dependencies
Kraken indexes the metrics specified in the profile into Elasticsearch in addition to leveraging Cerberus for understanding the health of the Kubernetes cluster under test. More information on the features is documented below. The infrastructure pieces can be easily installed and uninstalled by running:
```
$ cd kraken
$ podman-compose up or $ docker-compose up # Spins up the containers specified in the docker-compose.yml file present in the run directory.
$ podman-compose down or $ docker-compose down # Delete the containers installed.
```
This will manage the Cerberus and Elasticsearch containers on the host on which you are running Kraken.
**NOTE**: Make sure you have enough resources (memory and disk) on the machine on top of which the containers are running as Elasticsearch is resource intensive. Cerberus monitors the system components by default, the [config](config/cerberus.yaml) can be tweaked to add applications namespaces, routes and other components to monitor as well. The command will keep running until killed since detached mode is not supported as of now.
### Config
Instructions on how to setup the config and the options supported can be found at [Config](docs/config.md).
@@ -76,6 +64,7 @@ Scenario type | Kubernetes
[Network_Chaos](docs/network_chaos.md) | :heavy_check_mark: |
[ManagedCluster Scenarios](docs/managedcluster_scenarios.md) | :heavy_check_mark: |
[Service Hijacking Scenarios](docs/service_hijacking_scenarios.md) | :heavy_check_mark: |
[SYN Flood Scenarios](docs/syn_flood_scenarios.md) | :heavy_check_mark: |
### Kraken scenario pass/fail criteria and report

View File

@@ -88,3 +88,42 @@
- expr: ALERTS{severity="critical", alertstate="firing"} > 0
description: Critical prometheus alert. {{$labels.alertname}}
severity: warning
# etcd CPU and usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-etcd', container='etcd'}[1m])) * 100 / sum(machine_cpu_cores) > 5
description: Etcd CPU usage increased significantly by {{$value}}%
severity: critical
# etcd memory usage increase
- expr: sum(deriv(container_memory_usage_bytes{image!='', namespace='openshift-etcd', container='etcd'}[5m])) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: Etcd memory usage increased significantly by {{$value}}%
severity: critical
# Openshift API server CPU and memory usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-apiserver', container='openshift-apiserver'}[1m])) * 100 / sum(machine_cpu_cores) > 5
description: openshift apiserver cpu usage increased significantly by {{$value}}%
severity: critical
- expr: (sum(deriv(container_memory_usage_bytes{namespace='openshift-apiserver', container='openshift-apiserver'}[5m]))) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: openshift apiserver memory usage increased significantly by {{$value}}%
severity: critical
# Openshift kube API server CPU and memory usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-kube-apiserver', container='kube-apiserver'}[1m])) * 100 / sum(machine_cpu_cores) > 5
description: openshift apiserver cpu usage increased significantly by {{$value}}%
severity: critical
- expr: (sum(deriv(container_memory_usage_bytes{namespace='openshift-kube-apiserver', container='kube-apiserver'}[5m]))) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: openshift apiserver memory usage increased significantly by {{$value}}%
severity: critical
# Master node CPU usage increase
- expr: (sum((sum(deriv(pod:container_cpu_usage:sum{container="",pod!=""}[5m])) BY (namespace, pod) * on(pod, namespace) group_left(node) (node_namespace_pod:kube_pod_info:) ) * on(node) group_left(role) (max by (node) (kube_node_role{role="master"})))) * 100 / sum(machine_cpu_cores) > 5
description: master nodes cpu usage increased significantly by {{$value}}%
severity: critical
# Master nodes memory usage increase
- expr: (sum((sum(deriv(container_memory_usage_bytes{container="",pod!=""}[5m])) BY (namespace, pod) * on(pod, namespace) group_left(node) (node_namespace_pod:kube_pod_info:) ) * on(node) group_left(role) (max by (node) (kube_node_role{role="master"})))) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: master nodes memory usage increased significantly by {{$value}}%
severity: critical

View File

@@ -99,3 +99,41 @@
- expr: ALERTS{severity="critical", alertstate="firing"} > 0
description: Critical prometheus alert. {{$labels.alertname}}
severity: warning
# etcd CPU and usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-etcd', container='etcd'}[1m])) * 100 / sum(machine_cpu_cores) > 5
description: Etcd CPU usage increased significantly by {{$value}}%
severity: critical
# etcd memory usage increase
- expr: sum(deriv(container_memory_usage_bytes{image!='', namespace='openshift-etcd', container='etcd'}[5m])) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: Etcd memory usage increased significantly by {{$value}}%
severity: critical
# Openshift API server CPU and memory usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-apiserver', container='openshift-apiserver'}[1m])) * 100 / sum(machine_cpu_cores) > 5
description: openshift apiserver cpu usage increased significantly by {{$value}}%
severity: critical
- expr: (sum(deriv(container_memory_usage_bytes{namespace='openshift-apiserver', container='openshift-apiserver'}[5m]))) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: openshift apiserver memory usage increased significantly by {{$value}}%
severity: critical
# Openshift kube API server CPU and memory usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-kube-apiserver', container='kube-apiserver'}[1m])) * 100 / sum(machine_cpu_cores) > 5
description: openshift apiserver cpu usage increased significantly by {{$value}}%
severity: critical
- expr: (sum(deriv(container_memory_usage_bytes{namespace='openshift-kube-apiserver', container='kube-apiserver'}[5m]))) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: openshift apiserver memory usage increased significantly by {{$value}}%
severity: critical
# Master node CPU usage increase
- expr: (sum((sum(deriv(pod:container_cpu_usage:sum{container="",pod!=""}[5m])) BY (namespace, pod) * on(pod, namespace) group_left(node) (node_namespace_pod:kube_pod_info:) ) * on(node) group_left(role) (max by (node) (kube_node_role{role="master"})))) * 100 / sum(machine_cpu_cores) > 5
description: master nodes cpu usage increased significantly by {{$value}}%
severity: critical
# Master nodes memory usage increase
- expr: (sum((sum(deriv(container_memory_usage_bytes{container="",pod!=""}[5m])) BY (namespace, pod) * on(pod, namespace) group_left(node) (node_namespace_pod:kube_pod_info:) ) * on(node) group_left(role) (max by (node) (kube_node_role{role="master"})))) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: master nodes memory usage increased significantly by {{$value}}%
severity: critical

View File

@@ -23,7 +23,7 @@ kraken:
- scenarios/openshift/network_chaos_ingress.yml
- scenarios/openshift/prom_kill.yml
- node_scenarios: # List of chaos node scenarios to load
- scenarios/openshift/node_scenarios_example.yml
- scenarios/openshift/aws_node_scenarios.yml
- plugin_scenarios:
- scenarios/openshift/openshift-apiserver.yml
- scenarios/openshift/openshift-kube-apiserver.yml
@@ -44,6 +44,8 @@ kraken:
- scenarios/openshift/network_chaos.yaml
- service_hijacking:
- scenarios/kube/service_hijacking.yaml
- syn_flood:
- scenarios/kube/syn_flood.yaml
cerberus:
cerberus_enabled: False # Enable it when cerberus is previously installed
@@ -53,12 +55,27 @@ cerberus:
performance_monitoring:
deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_url: '' # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set
enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
enable_metrics: False
alert_profile: config/alerts.yaml # Path or URL to alert profile with the prometheus queries
metrics_profile: config/metrics.yaml
check_critical_alerts: False # When enabled will check prometheus for critical alerts firing post chaos
elastic:
enable_elastic: False
collect_metrics: False
collect_alerts: False
verify_certs: False
elastic_url: "" # To track results in elasticsearch, give url to server here; will post telemetry details when url and index not blank
elastic_port: 32766
username: "elastic"
password: "test"
metrics_index: "krkn-metrics"
alerts_index: "krkn-alerts"
telemetry_index: "krkn-telemetry"
tunings:
wait_duration: 60 # Duration to wait between each chaos scenario
iterations: 1 # Number of times to execute the scenarios
@@ -92,9 +109,7 @@ telemetry:
- "(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d+Z).+" # 2023-09-15T11:20:36.123425532Z log
oc_cli_path: /usr/bin/oc # optional, if not specified will be search in $PATH
events_backup: True # enables/disables cluster events collection
elastic:
elastic_url: "" # To track results in elasticsearch, give url to server here; will post telemetry details when url and index not blank
elastic_index: "" # Elastic search index pattern to post results to

View File

@@ -1,26 +1,23 @@
# azure-client
FROM mcr.microsoft.com/azure-cli:latest as azure-cli
# oc build
FROM golang:1.22.4 AS oc-build
RUN apt-get update && apt-get install -y libkrb5-dev
FROM golang:1.22.5 AS oc-build
RUN apt-get update && apt-get install -y --no-install-recommends libkrb5-dev
WORKDIR /tmp
RUN git clone --branch release-4.18 https://github.com/openshift/oc.git
WORKDIR /tmp/oc
RUN go mod edit -go 1.22.3 &&\
RUN go mod edit -go 1.22.5 &&\
go get github.com/moby/buildkit@v0.12.5 &&\
go get github.com/containerd/containerd@v1.7.11&&\
go get github.com/docker/docker@v25.0.5&&\
go get github.com/docker/docker@v25.0.6&&\
go get github.com/opencontainers/runc@v1.1.14&&\
go mod tidy && go mod vendor
RUN make GO_REQUIRED_MIN_VERSION:= oc
FROM fedora:40
ARG PR_NUMBER
ARG TAG
RUN groupadd -g 1001 krkn && useradd -m -u 1001 -g krkn krkn
RUN dnf update -y
# krkn version that will be built
ENV KRKN_VERSION v1.6.1
ENV KUBECONFIG /home/krkn/.kube/config
# install kubectl
@@ -29,22 +26,30 @@ RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/s
cp kubectl /usr/bin/kubectl && chmod +x /usr/bin/kubectl
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
RUN dnf update && dnf install -y git python39 jq yq gettext wget which
# copy azure client binary from azure-cli image
COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
RUN dnf update && dnf install -y --setopt=install_weak_deps=False \
git python39 jq yq gettext wget which &&\
dnf clean all
# copy oc client binary from oc-build image
COPY --from=oc-build /tmp/oc/oc /usr/bin/oc
# krkn build
RUN git clone https://github.com/krkn-chaos/krkn.git --branch $KRKN_VERSION /home/krkn/kraken && \
RUN git clone https://github.com/krkn-chaos/krkn.git /home/krkn/kraken && \
mkdir -p /home/krkn/.kube
WORKDIR /home/krkn/kraken
# default behaviour will be to build main
# if it is a PR trigger the PR itself will be checked out
RUN if [ -n "$PR_NUMBER" ]; then git fetch origin pull/${PR_NUMBER}/head:pr-${PR_NUMBER} && git checkout pr-${PR_NUMBER};fi
# if it is a TAG trigger checkout the tag
RUN if [ -n "$TAG" ]; then git checkout "$TAG";fi
RUN python3.9 -m ensurepip
RUN pip3.9 install -r requirements.txt
RUN pip3.9 install jsonschema
RUN chown -R krkn:krkn /home/krkn
RUN chown -R krkn:krkn /home/krkn && chmod 755 /home/krkn
USER krkn
ENTRYPOINT ["python3.9", "run_kraken.py"]
CMD ["--config=config/config.yaml"]
CMD ["--config=config/config.yaml"]

View File

@@ -1,29 +0,0 @@
# Dockerfile for kraken
FROM ppc64le/centos:8
FROM mcr.microsoft.com/azure-cli:latest as azure-cli
LABEL org.opencontainers.image.authors="Red Hat OpenShift Chaos Engineering"
ENV KUBECONFIG /root/.kube/config
# Copy azure client binary from azure-cli image
COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
# Install dependencies
RUN yum install -y git python39 python3-pip jq gettext wget && \
python3.9 -m pip install -U pip && \
git clone https://github.com/redhat-chaos/krkn.git --branch v1.5.14 /root/kraken && \
mkdir -p /root/.kube && cd /root/kraken && \
pip3.9 install -r requirements.txt && \
pip3.9 install virtualenv && \
wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && chmod +x /usr/bin/yq
# Get Kubernetes and OpenShift clients from stable releases
WORKDIR /tmp
RUN wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz && tar -xvf openshift-client-linux.tar.gz && cp oc /usr/local/bin/oc && cp oc /usr/bin/oc && cp kubectl /usr/local/bin/kubectl && cp kubectl /usr/bin/kubectl
WORKDIR /root/kraken
ENTRYPOINT python3.9 run_kraken.py --config=config/config.yaml

View File

@@ -12,35 +12,3 @@ Refer [instructions](https://github.com/redhat-chaos/krkn/blob/main/docs/install
### Run Custom Kraken Image
Refer to [instructions](https://github.com/redhat-chaos/krkn/blob/main/containers/build_own_image-README.md) for information on how to run a custom containerized version of kraken using podman.
### Kraken as a KubeApp ( Unsupported and not recommended )
#### GENERAL NOTES:
- It is not generally recommended to run Kraken internal to the cluster as the pod which is running Kraken might get disrupted, the suggested use case to run kraken from inside k8s/OpenShift is to target **another** cluster (eg. to bypass network restrictions or to leverage cluster's computational resources)
- your kubeconfig might contain several cluster contexts and credentials so be sure, before creating the ConfigMap, to keep **only** the credentials related to the destination cluster. Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) for more details
- to add privileges to the service account you must be logged in the cluster with an highly privileged account (ideally kubeadmin)
To run containerized Kraken as a Kubernetes/OpenShift Deployment, follow these steps:
1. Configure the [config.yaml](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) file according to your requirements.
**NOTE**: both the scenarios ConfigMaps are needed regardless you're running kraken in Kubernetes or OpenShift
2. Create a namespace under which you want to run the kraken pod using `kubectl create ns <namespace>`.
3. Switch to `<namespace>` namespace:
- In Kubernetes, use `kubectl config set-context --current --namespace=<namespace>`
- In OpenShift, use `oc project <namespace>`
4. Create a ConfigMap named kube-config using `kubectl create configmap kube-config --from-file=<path_to_kubeconfig>` *(eg. ~/.kube/config)*
5. Create a ConfigMap named kraken-config using `kubectl create configmap kraken-config --from-file=<path_to_kraken>/config`
6. Create a ConfigMap named scenarios-config using `kubectl create configmap scenarios-config --from-file=<path_to_kraken>/scenarios`
7. Create a ConfigMap named scenarios-openshift-config using `kubectl create configmap scenarios-openshift-config --from-file=<path_to_kraken>/scenarios/openshift`
8. Create a ConfigMap named scenarios-kube-config using `kubectl create configmap scenarios-kube-config --from-file=<path_to_kraken>/scenarios/kube`
9. Create a service account to run the kraken pod `kubectl create serviceaccount useroot`.
10. In Openshift, add privileges to service account and execute `oc adm policy add-scc-to-user privileged -z useroot`.
11. Create a Job using `kubectl apply -f <path_to_kraken>/containers/kraken.yml` and monitor the status using `oc get jobs` and `oc get pods`.

View File

@@ -1,49 +0,0 @@
---
apiVersion: batch/v1
kind: Job
metadata:
name: kraken
spec:
parallelism: 1
completions: 1
template:
metadata:
labels:
tool: Kraken
spec:
serviceAccountName: useroot
containers:
- name: kraken
securityContext:
privileged: true
image: quay.io/redhat-chaos/krkn
command: ["/bin/sh", "-c"]
args: ["python3.9 run_kraken.py -c config/config.yaml"]
volumeMounts:
- mountPath: "/root/.kube"
name: config
- mountPath: "/root/kraken/config"
name: kraken-config
- mountPath: "/root/kraken/scenarios"
name: scenarios-config
- mountPath: "/root/kraken/scenarios/openshift"
name: scenarios-openshift-config
- mountPath: "/root/kraken/scenarios/kube"
name: scenarios-kube-config
restartPolicy: Never
volumes:
- name: config
configMap:
name: kube-config
- name: kraken-config
configMap:
name: kraken-config
- name: scenarios-config
configMap:
name: scenarios-config
- name: scenarios-openshift-config
configMap:
name: scenarios-openshift-config
- name: scenarios-kube-config
configMap:
name: scenarios-kube-config

View File

@@ -1,31 +0,0 @@
version: "3"
services:
elastic:
image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
deploy:
replicas: 1
restart_policy:
condition: on-failure
network_mode: host
environment:
discovery.type: single-node
kibana:
image: docker.elastic.co/kibana/kibana:7.13.2
deploy:
replicas: 1
restart_policy:
condition: on-failure
network_mode: host
environment:
ELASTICSEARCH_HOSTS: "http://0.0.0.0:9200"
cerberus:
image: quay.io/openshift-scale/cerberus:latest
privileged: true
deploy:
replicas: 1
restart_policy:
condition: on-failure
network_mode: host
volumes:
- ./config/cerberus.yaml:/root/cerberus/config/config.yaml:Z # Modify the config in case of the need to monitor additional components
- ${HOME}/.kube/config:/root/.kube/config:Z

View File

@@ -27,14 +27,12 @@ After creating the service account you will need to enable the account using the
## Azure
**NOTE**: For Azure node killing scenarios, make sure [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) is installed.
You will also need to create a service principal and give it the correct access, see [here](https://docs.openshift.com/container-platform/4.5/installing/installing_azure/installing-azure-account.html) for creating the service principal and setting the proper permissions.
**NOTE**: You will need to create a service principal and give it the correct access, see [here](https://docs.openshift.com/container-platform/4.5/installing/installing_azure/installing-azure-account.html) for creating the service principal and setting the proper permissions.
To properly run the service principal requires “Azure Active Directory Graph/Application.ReadWrite.OwnedBy” api permission granted and “User Access Administrator”.
Before running you will need to set the following:
1. Login using ```az login```
1. ```export AZURE_SUBSCRIPTION_ID=<subscription_id>```
2. ```export AZURE_TENANT_ID=<tenant_id>```

View File

@@ -43,12 +43,3 @@ $ python3.9 run_kraken.py --config <config_file_location>
[Krkn-hub](https://github.com/krkn-chaos/krkn-hub) is a wrapper that allows running Krkn chaos scenarios via podman or docker runtime with scenario parameters/configuration defined as environment variables.
Refer [instructions](https://github.com/krkn-chaos/krkn-hub#supported-chaos-scenarios) to get started.
### Run Kraken as a Kubernetes deployment ( unsupported option - standalone or containerized deployers are recommended )
Refer [Instructions](https://github.com/krkn-chaos/krkn/blob/main/containers/README.md) on how to deploy and run Kraken as a Kubernetes/OpenShift deployment.
Refer to the [chaos-kraken chart manpage](https://artifacthub.io/packages/helm/startx/chaos-kraken)
and especially the [kraken configuration values](https://artifacthub.io/packages/helm/startx/chaos-kraken#chaos-kraken-values-dictionary)
for details on how to configure this chart.

View File

@@ -9,24 +9,28 @@ The following node chaos scenarios are supported:
5. **node_reboot_scenario**: Scenario to reboot the node instance.
6. **stop_kubelet_scenario**: Scenario to stop the kubelet of the node instance.
7. **stop_start_kubelet_scenario**: Scenario to stop and start the kubelet of the node instance.
8. **node_crash_scenario**: Scenario to crash the node instance.
9. **stop_start_helper_node_scenario**: Scenario to stop and start the helper node and check service status.
8. **restart_kubelet_scenario**: Scenario to restart the kubelet of the node instance.
9. **node_crash_scenario**: Scenario to crash the node instance.
10. **stop_start_helper_node_scenario**: Scenario to stop and start the helper node and check service status.
**NOTE**: If the node does not recover from the node_crash_scenario injection, reboot the node to get it back to Ready state.
**NOTE**: node_start_scenario, node_stop_scenario, node_stop_start_scenario, node_termination_scenario
, node_reboot_scenario and stop_start_kubelet_scenario are supported only on AWS, Azure, OpenStack, BareMetal, GCP
, VMware and Alibaba as of now.
**NOTE**: Node scenarios are supported only when running the standalone version of Kraken until https://github.com/redhat-chaos/krkn/issues/106 gets fixed.
, node_reboot_scenario and stop_start_kubelet_scenario are supported on AWS, Azure, OpenStack, BareMetal, GCP
, VMware and Alibaba.
#### AWS
How to set up AWS cli to run node scenarios is defined [here](cloud_setup.md#aws).
Cloud setup instructions can be found [here](cloud_setup.md#aws). Sample scenario config can be found [here](https://github.com/krkn-chaos/krkn/blob/main/scenarios/openshift/aws_node_scenarios.yml).
#### Baremetal
Sample scenario config can be found [here](https://github.com/krkn-chaos/krkn/blob/main/scenarios/openshift/baremetal_node_scenarios.yml).
**NOTE**: Baremetal requires setting the IPMI user and password to power on, off, and reboot nodes, using the config options `bm_user` and `bm_password`. It can either be set in the root of the entry in the scenarios config, or it can be set per machine.
If no per-machine addresses are specified, kraken attempts to use the BMC value in the BareMetalHost object. To list them, you can do 'oc get bmh -o wide --all-namespaces'. If the BMC values are blank, you must specify them per-machine using the config option 'bmc_addr' as specified below.
@@ -38,6 +42,8 @@ See the example node scenario or the example below.
**NOTE**: Baremetal machines are fragile. Some node actions can occasionally corrupt the filesystem if it does not shut down properly, and sometimes the kubelet does not start properly.
#### Docker
The Docker provider can be used to run node scenarios against kind clusters.
@@ -46,8 +52,11 @@ The Docker provider can be used to run node scenarios against kind clusters.
kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
#### GCP
How to set up GCP cli to run node scenarios is defined [here](cloud_setup.md#gcp).
Cloud setup instructions can be found [here](cloud_setup.md#gcp). Sample scenario config can be found [here](https://github.com/krkn-chaos/krkn/blob/main/scenarios/openshift/gcp_node_scenarios.yml).
#### Openstack
@@ -60,9 +69,11 @@ The supported node level chaos scenarios on an OPENSTACK cloud are `node_stop_st
To execute the scenario, ensure the value for `ssh_private_key` in the node scenarios config file is set with the correct private key file path for ssh connection to the helper node. Ensure passwordless ssh is configured on the host running Kraken and the helper node to avoid connection errors.
#### Azure
How to set up Azure cli to run node scenarios is defined [here](cloud_setup.md#azure).
Cloud setup instructions can be found [here](cloud_setup.md#azure). Sample scenario config can be found [here](https://github.com/krkn-chaos/krkn/blob/main/scenarios/openshift/azure_node_scenarios.yml).
#### Alibaba
@@ -73,43 +84,28 @@ How to set up Alibaba cli to run node scenarios is defined [here](cloud_setup.md
. Releasing a node is 2 steps, stopping the node and then releasing it.
#### VMware
How to set up VMware vSphere to run node scenarios is defined [here](cloud_setup.md#vmware)
This cloud type uses a different configuration style, see actions below and [example config file](../scenarios/openshift/vmware_node_scenarios.yml)
*vmware-node-terminate, vmware-node-reboot, vmware-node-stop, vmware-node-start*
- vmware-node-terminate
- vmware-node-reboot
- vmware-node-stop
- vmware-node-start
#### IBMCloud
How to set up IBMCloud to run node scenarios is defined [here](cloud_setup.md#ibmcloud)
This cloud type uses a different configuration style, see actions below and [example config file](../scenarios/openshift/ibmcloud_node_scenarios.yml)
*ibmcloud-node-terminate, ibmcloud-node-reboot, ibmcloud-node-stop, ibmcloud-node-start
*
#### IBMCloud and Vmware example
```
- id: ibmcloud-node-stop
config:
name: "<node_name>"
label_selector: "node-role.kubernetes.io/worker" # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection
runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time)
instance_count: 1 # Number of nodes to perform action/select that match the label selector
timeout: 30 # Duration to wait for completion of node scenario injection
skip_openshift_checks: False # Set to True if you don't want to wait for the status of the nodes to change on OpenShift before passing the scenario
- id: ibmcloud-node-start
config:
name: "<node_name>" #Same name as before
label_selector: "node-role.kubernetes.io/worker" # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection
runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time)
instance_count: 1 # Number of nodes to perform action/select that match the label selector
timeout: 30 # Duration to wait for completion of node scenario injection
skip_openshift_checks: False # Set to True if you don't want to wait for the status of the nodes to change on OpenShift before passing the scenario
```
- ibmcloud-node-terminate
- ibmcloud-node-reboot
- ibmcloud-node-stop
- ibmcloud-node-start
@@ -118,60 +114,3 @@ This cloud type uses a different configuration style, see actions below and [exa
**NOTE**: The `node_crash_scenario` and `stop_kubelet_scenario` scenario is supported independent of the cloud platform.
Use 'generic' or do not add the 'cloud_type' key to your scenario if your cluster is not set up using one of the current supported cloud types.
Node scenarios can be injected by placing the node scenarios config files under node_scenarios option in the kraken config. Refer to [node_scenarios_example](https://github.com/redhat-chaos/krkn/blob/main/scenarios/node_scenarios_example.yml) config file.
```
node_scenarios:
- actions: # Node chaos scenarios to be injected.
- node_stop_start_scenario
- stop_start_kubelet_scenario
- node_crash_scenario
node_name: # Node on which scenario has to be injected.
label_selector: node-role.kubernetes.io/worker # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection.
instance_count: 1 # Number of nodes to perform action/select that match the label selector.
runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time).
timeout: 120 # Duration to wait for completion of node scenario injection.
cloud_type: aws # Cloud type on which Kubernetes/OpenShift runs.
- actions:
- node_reboot_scenario
node_name:
label_selector: node-role.kubernetes.io/infra
instance_count: 1
timeout: 120
cloud_type: azure
- actions:
- node_crash_scenario
node_name:
label_selector: node-role.kubernetes.io/infra
instance_count: 1
timeout: 120
- actions:
- stop_start_helper_node_scenario # Node chaos scenario for helper node.
instance_count: 1
timeout: 120
helper_node_ip: # ip address of the helper node.
service: # Check status of the services on the helper node.
- haproxy
- dhcpd
- named
ssh_private_key: /root/.ssh/id_rsa # ssh key to access the helper node.
cloud_type: openstack
- actions:
- node_stop_start_scenario
node_name:
label_selector: node-role.kubernetes.io/worker
instance_count: 1
timeout: 120
cloud_type: bm
bmc_user: defaultuser # For baremetal (bm) cloud type. The default IPMI username. Optional if specified for all machines.
bmc_password: defaultpass # For baremetal (bm) cloud type. The default IPMI password. Optional if specified for all machines.
bmc_info: # This section is here to specify baremetal per-machine info, so it is optional if there is no per-machine info.
node-1: # The node name for the baremetal machine
bmc_addr: mgmt-machine1.example.com # Optional. For baremetal nodes with the IPMI BMC address missing from 'oc get bmh'.
node-2:
bmc_addr: mgmt-machine2.example.com
bmc_user: user # The baremetal IPMI user. Overrides the default IPMI user specified above. Optional if the default is set.
bmc_password: pass # The baremetal IPMI password. Overrides the default IPMI user specified above. Optional if the default is set.
```

View File

@@ -0,0 +1,33 @@
### SYN Flood Scenarios
This scenario generates a substantial amount of TCP traffic directed at one or more Kubernetes services within
the cluster to test the server's resiliency under extreme traffic conditions.
It can also target hosts outside the cluster by specifying a reachable IP address or hostname.
This scenario leverages the distributed nature of Kubernetes clusters to instantiate multiple instances
of the same pod against a single host, significantly increasing the effectiveness of the attack.
The configuration also allows for the specification of multiple node selectors, enabling Kubernetes to schedule
the attacker pods on a user-defined subset of nodes to make the test more realistic.
```yaml
packet-size: 120 # hping3 packet size
window-size: 64 # hping 3 TCP window size
duration: 10 # chaos scenario duration
namespace: default # namespace where the target service(s) are deployed
target-service: target-svc # target service name (if set target-service-label must be empty)
target-port: 80 # target service TCP port
target-service-label : "" # target service label, can be used to target multiple target at the same time
# if they have the same label set (if set target-service must be empty)
number-of-pods: 2 # number of attacker pod instantiated per each target
image: quay.io/krkn-chaos/krkn-syn-flood # syn flood attacker container image
attacker-nodes: # this will set the node affinity to schedule the attacker node. Per each node label selector
# can be specified multiple values in this way the kube scheduler will schedule the attacker pods
# in the best way possible based on the provided labels. Multiple labels can be specified
kubernetes.io/hostname:
- host_1
- host_2
kubernetes.io/os:
- linux
```
The attacker container source code is available [here](https://github.com/krkn-chaos/krkn-syn-flood).

View File

@@ -1,18 +1,24 @@
import yaml
import logging
import time
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
import kraken.cerberus.setup as cerberus
from jinja2 import Template
import kraken.invoke.command as runcommand
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
from kraken import utils
# Reads the scenario config, applies and deletes a network policy to
# block the traffic for the specified duration
def run(scenarios_list, config, wait_duration,kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
def run(scenarios_list,
config,
wait_duration,
telemetry: KrknTelemetryOpenshift,
telemetry_request_id: str) -> (list[str], list[ScenarioTelemetry]):
failed_post_scenarios = ""
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
@@ -20,7 +26,7 @@ def run(scenarios_list, config, wait_duration,kubecli: KrknKubernetes, telemetry
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = app_outage_config
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, app_outage_config)
parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, app_outage_config)
if len(app_outage_config) > 1:
try:
with open(app_outage_config, "r") as f:
@@ -57,7 +63,7 @@ spec:
# Block the traffic by creating network policy
logging.info("Creating the network policy")
kubecli.create_net_policy(yaml_spec, namespace)
telemetry.kubecli.create_net_policy(yaml_spec, namespace)
# wait for the specified duration
logging.info("Waiting for the specified duration in the config: %s" % (duration))
@@ -65,7 +71,7 @@ spec:
# unblock the traffic by deleting the network policy
logging.info("Deleting the network policy")
kubecli.delete_net_policy("kraken-deny", namespace)
telemetry.kubecli.delete_net_policy("kraken-deny", namespace)
logging.info("End of scenario. Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
@@ -79,6 +85,16 @@ spec:
else:
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -5,23 +5,47 @@ import yaml
import logging
from pathlib import Path
from typing import List
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from .context_auth import ContextAuth
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from .. import utils
def run(scenarios_list: List[str], kubeconfig_path: str, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
def run(scenarios_list: List[str],
telemetry: KrknTelemetryOpenshift,
telemetry_request_id: str
) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
failed_post_scenarios = []
for scenario in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry,scenario)
start_time = time.time()
scenario_telemetry.start_timestamp = start_time
parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, scenario)
engine_args = build_args(scenario)
status_code = run_workflow(engine_args, kubeconfig_path)
scenario_telemetry.end_timestamp = time.time()
status_code = run_workflow(engine_args, telemetry.kubecli.get_kubeconfig_path())
end_time = time.time()
scenario_telemetry.end_timestamp = end_time
scenario_telemetry.exit_status = status_code
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(start_time),
int(end_time))
# this is the design proposal for the namespaced logs collection
# check the krkn-lib latest commit to follow also the changes made here
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(start_time),
int(end_time))
scenario_telemetries.append(scenario_telemetry)
if status_code != 0:
failed_post_scenarios.append(scenario)

View File

@@ -1,892 +0,0 @@
import logging
import re
import sys
import time
from kubernetes import client, config, utils, watch
from kubernetes.client.rest import ApiException
from kubernetes.dynamic.client import DynamicClient
from kubernetes.stream import stream
from ..kubernetes.resources import (PVC, ChaosEngine, ChaosResult, Container,
LitmusChaosObject, Pod, Volume,
VolumeMount)
kraken_node_name = ""
# Load kubeconfig and initialize kubernetes python client
def initialize_clients(kubeconfig_path):
global cli
global batch_cli
global watch_resource
global api_client
global dyn_client
global custom_object_client
try:
if kubeconfig_path:
config.load_kube_config(kubeconfig_path)
else:
config.load_incluster_config()
api_client = client.ApiClient()
cli = client.CoreV1Api(api_client)
batch_cli = client.BatchV1Api(api_client)
custom_object_client = client.CustomObjectsApi(api_client)
dyn_client = DynamicClient(api_client)
watch_resource = watch.Watch()
except ApiException as e:
logging.error("Failed to initialize kubernetes client: %s\n" % e)
sys.exit(1)
def get_host() -> str:
"""Returns the Kubernetes server URL"""
return client.configuration.Configuration.get_default_copy().host
def get_clusterversion_string() -> str:
"""
Returns clusterversion status text on OpenShift, empty string
on other distributions
"""
try:
cvs = custom_object_client.list_cluster_custom_object(
"config.openshift.io",
"v1",
"clusterversions",
)
for cv in cvs["items"]:
for condition in cv["status"]["conditions"]:
if condition["type"] == "Progressing":
return condition["message"]
return ""
except client.exceptions.ApiException as e:
if e.status == 404:
return ""
else:
raise
# List all namespaces
def list_namespaces(label_selector=None):
namespaces = []
try:
if label_selector:
ret = cli.list_namespace(
pretty=True,
label_selector=label_selector
)
else:
ret = cli.list_namespace(pretty=True)
except ApiException as e:
logging.error(
"Exception when calling CoreV1Api->list_namespaced_pod: %s\n" % e
)
raise e
for namespace in ret.items:
namespaces.append(namespace.metadata.name)
return namespaces
def get_namespace_status(namespace_name):
"""Get status of a given namespace"""
ret = ""
try:
ret = cli.read_namespace_status(namespace_name)
except ApiException as e:
logging.error(
"Exception when calling CoreV1Api->read_namespace_status: %s\n" % e
)
return ret.status.phase
def delete_namespace(namespace):
"""Deletes a given namespace using kubernetes python client"""
try:
api_response = cli.delete_namespace(namespace)
logging.debug(
"Namespace deleted. status='%s'" % str(api_response.status)
)
return api_response
except Exception as e:
logging.error(
"Exception when calling \
CoreV1Api->delete_namespace: %s\n"
% e
)
def check_namespaces(namespaces, label_selectors=None):
"""Check if all the watch_namespaces are valid"""
try:
valid_namespaces = list_namespaces(label_selectors)
regex_namespaces = set(namespaces) - set(valid_namespaces)
final_namespaces = set(namespaces) - set(regex_namespaces)
valid_regex = set()
if regex_namespaces:
for namespace in valid_namespaces:
for regex_namespace in regex_namespaces:
if re.search(regex_namespace, namespace):
final_namespaces.add(namespace)
valid_regex.add(regex_namespace)
break
invalid_namespaces = regex_namespaces - valid_regex
if invalid_namespaces:
raise Exception(
"There exists no namespaces matching: %s" %
(invalid_namespaces)
)
return list(final_namespaces)
except Exception as e:
logging.info("%s" % (e))
sys.exit(1)
# List nodes in the cluster
def list_nodes(label_selector=None):
nodes = []
try:
if label_selector:
ret = cli.list_node(pretty=True, label_selector=label_selector)
else:
ret = cli.list_node(pretty=True)
except ApiException as e:
logging.error("Exception when calling CoreV1Api->list_node: %s\n" % e)
raise e
for node in ret.items:
nodes.append(node.metadata.name)
return nodes
# List nodes in the cluster that can be killed
def list_killable_nodes(label_selector=None):
nodes = []
try:
if label_selector:
ret = cli.list_node(pretty=True, label_selector=label_selector)
else:
ret = cli.list_node(pretty=True)
except ApiException as e:
logging.error("Exception when calling CoreV1Api->list_node: %s\n" % e)
raise e
for node in ret.items:
if kraken_node_name != node.metadata.name:
for cond in node.status.conditions:
if str(cond.type) == "Ready" and str(cond.status) == "True":
nodes.append(node.metadata.name)
return nodes
# List managedclusters attached to the hub that can be killed
def list_killable_managedclusters(label_selector=None):
managedclusters = []
try:
ret = custom_object_client.list_cluster_custom_object(
group="cluster.open-cluster-management.io",
version="v1",
plural="managedclusters",
label_selector=label_selector
)
except ApiException as e:
logging.error("Exception when calling CustomObjectsApi->list_cluster_custom_object: %s\n" % e)
raise e
for managedcluster in ret['items']:
conditions = managedcluster['status']['conditions']
available = list(filter(lambda condition: condition['reason'] == 'ManagedClusterAvailable', conditions))
if available and available[0]['status'] == 'True':
managedclusters.append(managedcluster['metadata']['name'])
return managedclusters
# List pods in the given namespace
def list_pods(namespace, label_selector=None):
pods = []
try:
if label_selector:
ret = cli.list_namespaced_pod(
namespace,
pretty=True,
label_selector=label_selector
)
else:
ret = cli.list_namespaced_pod(namespace, pretty=True)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->list_namespaced_pod: %s\n"
% e
)
raise e
for pod in ret.items:
pods.append(pod.metadata.name)
return pods
def get_all_pods(label_selector=None):
pods = []
if label_selector:
ret = cli.list_pod_for_all_namespaces(
pretty=True,
label_selector=label_selector
)
else:
ret = cli.list_pod_for_all_namespaces(pretty=True)
for pod in ret.items:
pods.append([pod.metadata.name, pod.metadata.namespace])
return pods
# Execute command in pod
def exec_cmd_in_pod(
command,
pod_name,
namespace,
container=None,
base_command="bash"
):
exec_command = [base_command, "-c", command]
try:
if container:
ret = stream(
cli.connect_get_namespaced_pod_exec,
pod_name,
namespace,
container=container,
command=exec_command,
stderr=True,
stdin=False,
stdout=True,
tty=False,
)
else:
ret = stream(
cli.connect_get_namespaced_pod_exec,
pod_name,
namespace,
command=exec_command,
stderr=True,
stdin=False,
stdout=True,
tty=False,
)
except Exception:
return False
return ret
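The helper wraps the exec subresource stream and returns the command output, or False on any failure. A hedged sketch (pod and namespace names are placeholders):

def _example_exec_cmd_in_pod():
    # Runs `bash -c "df -h"` in the pod's default container; pass
    # container="<name>" to target a specific one.
    output = exec_cmd_in_pod("df -h", "my-pod", "default")
    if output is False:
        logging.error("exec failed or pod was not reachable")
    return output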
def delete_pod(name, namespace):
try:
cli.delete_namespaced_pod(name=name, namespace=namespace)
while cli.read_namespaced_pod(name=name, namespace=namespace):
time.sleep(1)
except ApiException as e:
if e.status == 404:
logging.info("Pod already deleted")
else:
logging.error("Failed to delete pod %s" % e)
raise e
def create_pod(body, namespace, timeout=120):
try:
pod_stat = None
pod_stat = cli.create_namespaced_pod(body=body, namespace=namespace)
end_time = time.time() + timeout
while True:
pod_stat = cli.read_namespaced_pod(
name=body["metadata"]["name"],
namespace=namespace
)
if pod_stat.status.phase == "Running":
break
if time.time() > end_time:
raise Exception("Starting pod failed")
time.sleep(1)
except Exception as e:
logging.error("Pod creation failed %s" % e)
if pod_stat:
logging.error(pod_stat.status.container_statuses)
delete_pod(body["metadata"]["name"], namespace)
sys.exit(1)
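create_pod() takes a plain dict manifest and polls once a second until the pod reports Running or the timeout trips. A minimal sketch with an assumed busybox manifest:

def _example_create_pod():
    body = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "example-busybox"},
        "spec": {"containers": [
            {"name": "busybox", "image": "busybox",
             "command": ["sleep", "3600"]},
        ]},
    }
    # On failure the pod is deleted and the process exits with status 1.
    create_pod(body, "default", timeout=120)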
def read_pod(name, namespace="default"):
return cli.read_namespaced_pod(name=name, namespace=namespace)
def get_pod_log(name, namespace="default"):
return cli.read_namespaced_pod_log(
name=name,
namespace=namespace,
_return_http_data_only=True,
_preload_content=False
)
def get_containers_in_pod(pod_name, namespace):
pod_info = cli.read_namespaced_pod(pod_name, namespace)
container_names = []
for cont in pod_info.spec.containers:
container_names.append(cont.name)
return container_names
def delete_job(name, namespace="default"):
try:
api_response = batch_cli.delete_namespaced_job(
name=name,
namespace=namespace,
body=client.V1DeleteOptions(
propagation_policy="Foreground",
grace_period_seconds=0
),
)
logging.debug("Job deleted. status='%s'" % str(api_response.status))
return api_response
except ApiException as api:
logging.warning(
"Exception when calling \
BatchV1Api->delete_namespaced_job: %s"
% api
)
logging.warning("Job already deleted\n")
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->delete_namespaced_job: %s\n"
% e
)
sys.exit(1)
def create_job(body, namespace="default"):
try:
api_response = batch_cli.create_namespaced_job(
body=body,
namespace=namespace
)
return api_response
except ApiException as api:
logging.warning(
"Exception when calling \
BatchV1Api->create_namespaced_job: %s"
% api
)
if api.status == 409:
logging.warn("Job already present")
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->create_namespaced_job: %s"
% e
)
raise
def create_manifestwork(body, namespace):
try:
api_response = custom_object_client.create_namespaced_custom_object(
group="work.open-cluster-management.io",
version="v1",
plural="manifestworks",
body=body,
namespace=namespace
)
return api_response
except ApiException as e:
print("Exception when calling CustomObjectsApi->create_namespaced_custom_object: %s\n" % e)
def delete_manifestwork(namespace):
try:
api_response = custom_object_client.delete_namespaced_custom_object(
group="work.open-cluster-management.io",
version="v1",
plural="manifestworks",
name="managedcluster-scenarios-template",
namespace=namespace
)
return api_response
except ApiException as e:
print("Exception when calling CustomObjectsApi->delete_namespaced_custom_object: %s\n" % e)
def get_job_status(name, namespace="default"):
try:
return batch_cli.read_namespaced_job_status(
name=name,
namespace=namespace
)
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->read_namespaced_job_status: %s"
% e
)
raise
# Monitor the status of the cluster nodes and set the status to true or false
def monitor_nodes():
nodes = list_nodes()
notready_nodes = []
node_kerneldeadlock_status = "False"
for node in nodes:
try:
node_info = cli.read_node_status(node, pretty=True)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->read_node_status: %s\n"
% e
)
raise e
for condition in node_info.status.conditions:
if condition.type == "KernelDeadlock":
node_kerneldeadlock_status = condition.status
elif condition.type == "Ready":
node_ready_status = condition.status
else:
continue
if node_kerneldeadlock_status != "False" or node_ready_status != "True":  # noqa
notready_nodes.append(node)
if len(notready_nodes) != 0:
status = False
else:
status = True
return status, notready_nodes
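The return value is a health flag plus the offending nodes, so callers can log or bail in one step; an illustrative call:

def _example_monitor_nodes():
    healthy, bad_nodes = monitor_nodes()
    if not healthy:
        # A node lands here when Ready != "True" or
        # KernelDeadlock != "False".
        logging.warning("NotReady or deadlocked nodes: %s" % bad_nodes)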
# Monitor the status of the pods in the specified namespace
# and set the status to true or false
def monitor_namespace(namespace):
pods = list_pods(namespace)
notready_pods = []
for pod in pods:
try:
pod_info = cli.read_namespaced_pod_status(
pod,
namespace,
pretty=True
)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->read_namespaced_pod_status: %s\n"
% e
)
raise e
pod_status = pod_info.status.phase
if (
pod_status != "Running" and
pod_status != "Completed" and
pod_status != "Succeeded"
):
notready_pods.append(pod)
if len(notready_pods) != 0:
status = False
else:
status = True
return status, notready_pods
# Monitor component namespace
def monitor_component(iteration, component_namespace):
watch_component_status, failed_component_pods = \
monitor_namespace(component_namespace)
logging.info(
"Iteration %s: %s: %s" % (
iteration,
component_namespace,
watch_component_status
)
)
return watch_component_status, failed_component_pods
def apply_yaml(path, namespace='default'):
"""
Apply yaml config to create Kubernetes resources
Args:
path (string)
- Path to the YAML file
namespace (string)
- Namespace to create the resource
Returns:
The object created
"""
return utils.create_from_yaml(
api_client,
yaml_file=path,
namespace=namespace
)
def get_pod_info(name: str, namespace: str = 'default') -> Pod:
"""
Function to retrieve information about a specific pod
in a given namespace. The kubectl command is given by:
kubectl get pods <name> -n <namespace>
Args:
name (string)
- Name of the pod
namespace (string)
- Namespace to look for the pod
Returns:
- Data class object of type Pod with the output of the above
kubectl command in the given format if the pod exists
- Returns None if the pod doesn't exist
"""
pod_exists = check_if_pod_exists(name=name, namespace=namespace)
if pod_exists:
response = cli.read_namespaced_pod(
name=name,
namespace=namespace,
pretty='true'
)
container_list = []
# Create a list of containers present in the pod
for container in response.spec.containers:
volume_mount_list = []
for volume_mount in container.volume_mounts:
volume_mount_list.append(
VolumeMount(
name=volume_mount.name,
mountPath=volume_mount.mount_path
)
)
container_list.append(
Container(
name=container.name,
image=container.image,
volumeMounts=volume_mount_list
)
)
for i, container in enumerate(response.status.container_statuses):
container_list[i].ready = container.ready
# Create a list of volumes associated with the pod
volume_list = []
for volume in response.spec.volumes:
volume_name = volume.name
pvc_name = (
volume.persistent_volume_claim.claim_name
if volume.persistent_volume_claim is not None
else None
)
volume_list.append(Volume(name=volume_name, pvcName=pvc_name))
# Create the Pod data class object
pod_info = Pod(
name=response.metadata.name,
podIP=response.status.pod_ip,
namespace=response.metadata.namespace,
containers=container_list,
nodeName=response.spec.node_name,
volumes=volume_list
)
return pod_info
else:
logging.error(
"Pod '%s' doesn't exist in namespace '%s'" % (
str(name),
str(namespace)
)
)
return None
def get_litmus_chaos_object(
kind: str,
name: str,
namespace: str
) -> LitmusChaosObject:
"""
Function that returns an object of a custom resource type of
the litmus project. Currently, only ChaosEngine and ChaosResult
objects are supported.
Args:
kind (string)
- The custom resource type
name (string)
- Name of the custom object
namespace (string)
- Namespace where the custom object is present
Returns:
Data class object of a subclass of LitmusChaosObject
"""
group = 'litmuschaos.io'
version = 'v1alpha1'
if kind.lower() == 'chaosengine':
plural = 'chaosengines'
response = custom_object_client.get_namespaced_custom_object(
group=group,
plural=plural,
version=version,
namespace=namespace,
name=name
)
try:
engine_status = response['status']['engineStatus']
exp_status = response['status']['experiments'][0]['status']
except Exception:
engine_status = 'Not Initialized'
exp_status = 'Not Initialized'
custom_object = ChaosEngine(
kind='ChaosEngine',
group=group,
namespace=namespace,
name=name,
plural=plural,
version=version,
engineStatus=engine_status,
expStatus=exp_status
)
elif kind.lower() == 'chaosresult':
plural = 'chaosresults'
response = custom_object_client.get_namespaced_custom_object(
group=group,
plural=plural,
version=version,
namespace=namespace,
name=name
)
try:
verdict = response['status']['experimentStatus']['verdict']
fail_step = response['status']['experimentStatus']['failStep']
except Exception:
verdict = 'N/A'
fail_step = 'N/A'
custom_object = ChaosResult(
kind='ChaosResult',
group=group,
namespace=namespace,
name=name,
plural=plural,
version=version,
verdict=verdict,
failStep=fail_step
)
else:
logging.error("Invalid litmus chaos custom resource name")
custom_object = None
return custom_object
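Only the two Litmus kinds are recognized; any other kind logs an error and yields None. A sketch (the engine name and namespace are assumptions):

def _example_get_litmus_chaos_object():
    engine = get_litmus_chaos_object("chaosengine", "engine-nginx", "litmus")
    if engine is not None:
        # Both fields fall back to "Not Initialized" until Litmus
        # populates .status on the resource.
        logging.info("engine=%s experiment=%s"
                     % (engine.engineStatus, engine.expStatus))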
def check_if_namespace_exists(name: str) -> bool:
"""
Function that checks if a namespace exists by parsing through
the list of projects.
Args:
name (string)
- Namespace name
Returns:
Boolean value indicating whether the namespace exists or not
"""
v1_projects = dyn_client.resources.get(
api_version='project.openshift.io/v1',
kind='Project'
)
project_list = v1_projects.get()
return name in str(project_list)
def check_if_pod_exists(name: str, namespace: str) -> bool:
"""
Function that checks if a pod exists in the given namespace
Args:
name (string)
- Pod name
namespace (string)
- Namespace name
Returns:
Boolean value indicating whether the pod exists or not
"""
namespace_exists = check_if_namespace_exists(namespace)
if namespace_exists:
pod_list = list_pods(namespace=namespace)
if name in pod_list:
return True
else:
logging.error("Namespace '%s' doesn't exist" % str(namespace))
return False
def check_if_pvc_exists(name: str, namespace: str) -> bool:
"""
Function that checks if a persistent volume claim exists
in the given namespace.
Args:
name (string)
- PVC name
namespace (string)
- Namespace name
Returns:
Boolean value indicating whether the Persistent Volume Claim
exists or not.
"""
namespace_exists = check_if_namespace_exists(namespace)
if namespace_exists:
response = cli.list_namespaced_persistent_volume_claim(
namespace=namespace
)
pvc_list = [pvc.metadata.name for pvc in response.items]
if name in pvc_list:
return True
else:
logging.error("Namespace '%s' doesn't exist" % str(namespace))
return False
def get_pvc_info(name: str, namespace: str) -> PVC:
"""
Function to retrieve information about a Persistent Volume Claim in a
given namespace
Args:
name (string)
- Name of the persistent volume claim
namespace (string)
- Namespace where the persistent volume claim is present
Returns:
- A PVC data class containing the name, capacity, volume name,
namespace and associated pod names of the PVC if the PVC exists
- Returns None if the PVC doesn't exist
"""
pvc_exists = check_if_pvc_exists(name=name, namespace=namespace)
if pvc_exists:
pvc_info_response = cli.read_namespaced_persistent_volume_claim(
name=name,
namespace=namespace,
pretty=True
)
pod_list_response = cli.list_namespaced_pod(namespace=namespace)
capacity = pvc_info_response.status.capacity['storage']
volume_name = pvc_info_response.spec.volume_name
# Loop through all pods in the namespace to find associated PVCs
pvc_pod_list = []
for pod in pod_list_response.items:
for volume in pod.spec.volumes:
if (
volume.persistent_volume_claim is not None
and volume.persistent_volume_claim.claim_name == name
):
pvc_pod_list.append(pod.metadata.name)
pvc_info = PVC(
name=name,
capacity=capacity,
volumeName=volume_name,
podNames=pvc_pod_list,
namespace=namespace
)
return pvc_info
else:
logging.error(
"PVC '%s' doesn't exist in namespace '%s'" % (
str(name),
str(namespace)
)
)
return None
# Find the node kraken is deployed on
# Set global kraken node to not delete
def find_kraken_node():
pods = get_all_pods()
kraken_pod_name = None
for pod in pods:
if "kraken-deployment" in pod[0]:
kraken_pod_name = pod[0]
kraken_project = pod[1]
break
# have to switch to proper project
if kraken_pod_name:
# get kraken-deployment pod, find node name
try:
node_name = get_pod_info(kraken_pod_name, kraken_project).nodeName
global kraken_node_name
kraken_node_name = node_name
except Exception as e:
logging.info("%s" % (e))
sys.exit(1)
# Watch for a specific node status
def watch_node_status(node, status, timeout, resource_version):
count = timeout
for event in watch_resource.stream(
cli.list_node,
field_selector=f"metadata.name={node}",
timeout_seconds=timeout,
resource_version=f"{resource_version}"
):
conditions = [
status
for status in event["object"].status.conditions
if status.type == "Ready"
]
if conditions[0].status == status:
watch_resource.stop()
break
else:
count -= 1
logging.info(
"Status of node " + node + ": " + str(conditions[0].status)
)
if not count:
watch_resource.stop()
# Watch for a specific managedcluster status
# TODO: Implement this with a watcher instead of polling
def watch_managedcluster_status(managedcluster, status, timeout):
elapsed_time = 0
while True:
conditions = custom_object_client.get_cluster_custom_object_status(
"cluster.open-cluster-management.io", "v1", "managedclusters", managedcluster
)['status']['conditions']
available = list(filter(lambda condition: condition['reason'] == 'ManagedClusterAvailable', conditions))
if status == "True":
if available and available[0]['status'] == "True":
logging.info("Status of managedcluster " + managedcluster + ": Available")
return True
else:
if not available:
logging.info("Status of managedcluster " + managedcluster + ": Unavailable")
return True
time.sleep(2)
elapsed_time += 2
if elapsed_time >= timeout:
logging.info("Timeout waiting for managedcluster " + managedcluster + " to become: " + status)
return False
# Get the resource version for the specified node
def get_node_resource_version(node):
return cli.read_node(name=node).metadata.resource_version
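The two helpers pair up: the node's current resourceVersion anchors the watch so stale events are skipped. A hedged sketch waiting for a node to drop out of Ready during a disruption:

def _example_watch_node_not_ready(node):
    # Blocks for up to 300s until the Ready condition reports "False",
    # e.g. while a stop or reboot scenario is in flight.
    rv = get_node_resource_version(node)
    watch_node_status(node, "False", 300, rv)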


@@ -1,74 +0,0 @@
from dataclasses import dataclass
from typing import List
@dataclass(frozen=True, order=False)
class Volume:
"""Data class to hold information regarding volumes in a pod"""
name: str
pvcName: str
@dataclass(order=False)
class VolumeMount:
"""Data class to hold information regarding volume mounts"""
name: str
mountPath: str
@dataclass(frozen=True, order=False)
class PVC:
"""Data class to hold information regarding persistent volume claims"""
name: str
capacity: str
volumeName: str
podNames: List[str]
namespace: str
@dataclass(order=False)
class Container:
"""Data class to hold information regarding containers in a pod"""
image: str
name: str
volumeMounts: List[VolumeMount]
ready: bool = False
@dataclass(frozen=True, order=False)
class Pod:
"""Data class to hold information regarding a pod"""
name: str
podIP: str
namespace: str
containers: List[Container]
nodeName: str
volumes: List[Volume]
@dataclass(frozen=True, order=False)
class LitmusChaosObject:
"""Data class to hold information regarding a custom object of litmus project"""
kind: str
group: str
namespace: str
name: str
plural: str
version: str
@dataclass(frozen=True, order=False)
class ChaosEngine(LitmusChaosObject):
"""Data class to hold information regarding a ChaosEngine object"""
engineStatus: str
expStatus: str
@dataclass(frozen=True, order=False)
class ChaosResult(LitmusChaosObject):
"""Data class to hold information regarding a ChaosResult object"""
verdict: str
failStep: str
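For reference, the data classes removed here compose as follows (all values illustrative):

def _example_pod_dataclass():
    mount = VolumeMount(name="data", mountPath="/var/lib/data")
    container = Container(name="app", image="busybox",
                          volumeMounts=[mount], ready=True)
    volume = Volume(name="data", pvcName="data-pvc")
    return Pod(name="app-0", podIP="10.128.0.12", namespace="default",
               containers=[container], nodeName="worker-0",
               volumes=[volume])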


@@ -3,19 +3,26 @@ import logging
import time
import os
import random
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
import kraken.cerberus.setup as cerberus
import kraken.node_actions.common_node_functions as common_node_functions
from jinja2 import Environment, FileSystemLoader
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
from kraken import utils
# krkn_lib
# Reads the scenario config and introduces traffic variations in Node's host network interface.
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
failed_post_scenarios = ""
def run(scenarios_list,
config,
wait_duration,
telemetry: KrknTelemetryOpenshift,
telemetry_request_id: str) -> (list[str], list[ScenarioTelemetry]):
logging.info("Runing the Network Chaos tests")
failed_post_scenarios = ""
scenario_telemetries: list[ScenarioTelemetry] = []
@@ -24,7 +31,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = net_config
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, net_config)
parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, net_config)
try:
with open(net_config, "r") as file:
param_lst = ["latency", "loss", "bandwidth"]
@@ -56,11 +63,11 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
node_name_list = [test_node]
nodelst = []
for single_node_name in node_name_list:
nodelst.extend(common_node_functions.get_node(single_node_name, test_node_label, test_instance_count, kubecli))
nodelst.extend(common_node_functions.get_node(single_node_name, test_node_label, test_instance_count, telemetry.kubecli))
file_loader = FileSystemLoader(os.path.abspath(os.path.dirname(__file__)))
env = Environment(loader=file_loader, autoescape=True)
pod_template = env.get_template("pod.j2")
test_interface = verify_interface(test_interface, nodelst, pod_template, kubecli)
test_interface = verify_interface(test_interface, nodelst, pod_template, telemetry.kubecli)
joblst = []
egress_lst = [i for i in param_lst if i in test_egress]
chaos_config = {
@@ -86,13 +93,13 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
job_template.render(jobname=i + str(hash(node))[:5], nodename=node, cmd=exec_cmd)
)
joblst.append(job_body["metadata"]["name"])
api_response = kubecli.create_job(job_body)
api_response = telemetry.kubecli.create_job(job_body)
if api_response is None:
raise Exception("Error creating job")
if test_execution == "serial":
logging.info("Waiting for serial job to finish")
start_time = int(time.time())
wait_for_job(joblst[:], kubecli, test_duration + 300)
wait_for_job(joblst[:], telemetry.kubecli, test_duration + 300)
logging.info("Waiting for wait_duration %s" % wait_duration)
time.sleep(wait_duration)
end_time = int(time.time())
@@ -102,7 +109,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
if test_execution == "parallel":
logging.info("Waiting for parallel job to finish")
start_time = int(time.time())
wait_for_job(joblst[:], kubecli, test_duration + 300)
wait_for_job(joblst[:], telemetry.kubecli, test_duration + 300)
logging.info("Waiting for wait_duration %s" % wait_duration)
time.sleep(wait_duration)
end_time = int(time.time())
@@ -112,13 +119,24 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
raise RuntimeError()
finally:
logging.info("Deleting jobs")
delete_job(joblst[:], kubecli)
delete_job(joblst[:], telemetry.kubecli)
except (RuntimeError, Exception):
scenario_telemetry.exit_status = 1
failed_scenarios.append(net_config)
log_exception(net_config)
else:
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
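The same epilogue recurs in every runner touched by this compare; condensed into one place (the objects are the krkn-lib instances used above):

def _example_telemetry_epilogue(telemetry, scenario_telemetry,
                                parsed_scenario_config,
                                telemetry_request_id, scenario_telemetries):
    # Ship OCP logs and cluster events for the scenario's time window,
    # then record the telemetry entry.
    start = int(scenario_telemetry.start_timestamp)
    end = int(scenario_telemetry.end_timestamp)
    utils.collect_and_put_ocp_logs(telemetry, parsed_scenario_config,
                                   telemetry_request_id, start, end)
    utils.populate_cluster_events(scenario_telemetry, parsed_scenario_config,
                                  telemetry.kubecli, start, end)
    scenario_telemetries.append(scenario_telemetry)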


@@ -1,5 +1,6 @@
import sys
import logging
import time
import kraken.invoke.command as runcommand
import kraken.node_actions.common_node_functions as nodeaction
from krkn_lib.k8s import KrknKubernetes
@@ -18,9 +19,11 @@ class abstract_node_scenarios:
pass
# Node scenario to stop and then start the node
def node_stop_start_scenario(self, instance_kill_count, node, timeout):
def node_stop_start_scenario(self, instance_kill_count, node, timeout, duration):
logging.info("Starting node_stop_start_scenario injection")
self.node_stop_scenario(instance_kill_count, node, timeout)
logging.info("Waiting for %s seconds before starting the node" % (duration))
time.sleep(duration)
self.node_start_scenario(instance_kill_count, node, timeout)
logging.info("node_stop_start_scenario has been successfully injected!")
@@ -62,6 +65,26 @@ class abstract_node_scenarios:
self.node_reboot_scenario(instance_kill_count, node, timeout)
logging.info("stop_start_kubelet_scenario has been successfully injected!")
# Node scenario to restart the kubelet
def restart_kubelet_scenario(self, instance_kill_count, node, timeout):
for _ in range(instance_kill_count):
try:
logging.info("Starting restart_kubelet_scenario injection")
logging.info("Restarting the kubelet of the node %s" % (node))
runcommand.run("oc debug node/" + node + " -- chroot /host systemctl restart kubelet &")
nodeaction.wait_for_not_ready_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("The kubelet of the node %s has been restarted" % (node))
logging.info("restart_kubelet_scenario has been successfuly injected!")
except Exception as e:
logging.error(
"Failed to restart the kubelet of the node. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("restart_kubelet_scenario injection failed!")
sys.exit(1)
# Node scenario to crash the node
def node_crash_scenario(self, instance_kill_count, node, timeout):
for _ in range(instance_kill_count):


@@ -13,7 +13,11 @@ class AWS:
# Get the instance ID of the node
def get_instance_id(self, node):
return self.boto_client.describe_instances(Filters=[{"Name": "private-dns-name", "Values": [node]}])[
instance = self.boto_client.describe_instances(Filters=[{"Name": "private-dns-name", "Values": [node]}])
if len(instance['Reservations']) == 0:
node = node[3:].replace('-','.')
instance = self.boto_client.describe_instances(Filters=[{"Name": "private-ip-address", "Values": [node]}])
return instance[
"Reservations"
][0]["Instances"][0]["InstanceId"]


@@ -1,6 +1,6 @@
import time
import yaml
import os
import kraken.invoke.command as runcommand
import logging
import kraken.node_actions.common_node_functions as nodeaction
@@ -17,9 +17,9 @@ class Azure:
# Acquire a credential object using CLI-based authentication.
credentials = DefaultAzureCredential()
logging.info("credential " + str(credentials))
az_account = runcommand.invoke("az account list -o yaml")
az_account_yaml = yaml.safe_load(az_account, Loader=yaml.FullLoader)
subscription_id = az_account_yaml[0]["id"]
# az_account = runcommand.invoke("az account list -o yaml")
# az_account_yaml = yaml.safe_load(az_account, Loader=yaml.FullLoader)
subscription_id = os.getenv("AZURE_SUBSCRIPTION_ID")
self.compute_client = ComputeManagementClient(credentials, subscription_id)
# Get the instance ID of the node
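With the `az account list` shell-out gone, the subscription must come from the environment before Krkn starts. A hedged sketch of the lookup with a guard (the guard is illustrative, not part of the diff):

import os

def _example_azure_subscription_id():
    subscription_id = os.getenv("AZURE_SUBSCRIPTION_ID")
    if not subscription_id:
        raise RuntimeError("export AZURE_SUBSCRIPTION_ID before running "
                           "Azure node scenarios")
    return subscription_id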


@@ -1,6 +1,8 @@
import os
import sys
import time
import logging
import json
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from googleapiclient import discovery
@@ -10,11 +12,19 @@ from krkn_lib.k8s import KrknKubernetes
class GCP:
def __init__(self):
try:
gapp_creds = os.getenv("GOOGLE_APPLICATION_CREDENTIALS")
with open(gapp_creds, "r") as f:
f_str = f.read()
self.project = json.loads(f_str)['project_id']
#self.project = runcommand.invoke("gcloud config get-value project").split("/n")[0].strip()
logging.info("project " + str(self.project) + "!")
credentials = GoogleCredentials.get_application_default()
self.client = discovery.build("compute", "v1", credentials=credentials, cache_discovery=False)
self.project = runcommand.invoke("gcloud config get-value project").split("/n")[0].strip()
logging.info("project " + str(self.project) + "!")
credentials = GoogleCredentials.get_application_default()
self.client = discovery.build("compute", "v1", credentials=credentials, cache_discovery=False)
except Exception as e:
logging.error("Error on setting up GCP connection: " + str(e))
sys.exit(1)
# Get the instance ID of the node
def get_instance_id(self, node):
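The project ID is now read from the service-account key file referenced by GOOGLE_APPLICATION_CREDENTIALS instead of shelling out to gcloud; the same lookup as a standalone sketch:

import json
import os

def _example_gcp_project_id():
    # Service-account key files carry the owning project in "project_id".
    with open(os.getenv("GOOGLE_APPLICATION_CREDENTIALS"), "r") as f:
        return json.load(f)["project_id"]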


@@ -2,6 +2,10 @@ import yaml
import logging
import sys
import time
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from kraken import utils
from kraken.node_actions.aws_node_scenarios import aws_node_scenarios
from kraken.node_actions.general_cloud_node_scenarios import general_node_scenarios
from kraken.node_actions.az_node_scenarios import azure_node_scenarios
@@ -55,23 +59,27 @@ def get_node_scenario_object(node_scenario, kubecli: KrknKubernetes):
# Run defined scenarios
# krkn_lib
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
def run(scenarios_list,
config,
wait_duration,
telemetry: KrknTelemetryOpenshift,
telemetry_request_id: str) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
for node_scenario_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = node_scenario_config
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, node_scenario_config)
parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, node_scenario_config)
with open(node_scenario_config, "r") as f:
node_scenario_config = yaml.full_load(f)
for node_scenario in node_scenario_config["node_scenarios"]:
node_scenario_object = get_node_scenario_object(node_scenario, kubecli)
node_scenario_object = get_node_scenario_object(node_scenario, telemetry.kubecli)
if node_scenario["actions"]:
for action in node_scenario["actions"]:
start_time = int(time.time())
try:
inject_node_scenario(action, node_scenario, node_scenario_object, kubecli)
inject_node_scenario(action, node_scenario, node_scenario_object, telemetry.kubecli)
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
end_time = int(time.time())
@@ -85,6 +93,16 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
@@ -100,6 +118,8 @@ def inject_node_scenario(action, node_scenario, node_scenario_object, kubecli: K
)
node_name = get_yaml_item_value(node_scenario, "node_name", "")
label_selector = get_yaml_item_value(node_scenario, "label_selector", "")
if action == "node_stop_start_scenario":
duration = get_yaml_item_value(node_scenario, "duration", 120)
timeout = get_yaml_item_value(node_scenario, "timeout", 120)
service = get_yaml_item_value(node_scenario, "service", "")
ssh_private_key = get_yaml_item_value(
@@ -121,13 +141,15 @@ def inject_node_scenario(action, node_scenario, node_scenario_object, kubecli: K
elif action == "node_stop_scenario":
node_scenario_object.node_stop_scenario(run_kill_count, single_node, timeout)
elif action == "node_stop_start_scenario":
node_scenario_object.node_stop_start_scenario(run_kill_count, single_node, timeout)
node_scenario_object.node_stop_start_scenario(run_kill_count, single_node, timeout, duration)
elif action == "node_termination_scenario":
node_scenario_object.node_termination_scenario(run_kill_count, single_node, timeout)
elif action == "node_reboot_scenario":
node_scenario_object.node_reboot_scenario(run_kill_count, single_node, timeout)
elif action == "stop_start_kubelet_scenario":
node_scenario_object.stop_start_kubelet_scenario(run_kill_count, single_node, timeout)
elif action == "restart_kubelet_scenario":
node_scenario_object.restart_kubelet_scenario(run_kill_count, single_node, timeout)
elif action == "stop_kubelet_scenario":
node_scenario_object.stop_kubelet_scenario(run_kill_count, single_node, timeout)
elif action == "node_crash_scenario":


@@ -9,9 +9,11 @@ from arcaflow_plugin_sdk import schema, serialization, jsonschema
from arcaflow_plugin_kill_pod import kill_pods, wait_for_pods
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.k8s.pods_monitor_pool import PodsMonitorPool
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
import kraken.plugins.node_scenarios.vmware_plugin as vmware_plugin
import kraken.plugins.node_scenarios.ibmcloud_plugin as ibmcloud_plugin
from kraken import utils
from kraken.plugins.run_python_plugin import run_python_file
from kraken.plugins.network.ingress_shaping import network_chaos
from kraken.plugins.pod_network_outage.pod_network_outage_plugin import pod_outage
@@ -53,7 +55,7 @@ class Plugins:
def unserialize_scenario(self, file: str) -> Any:
return serialization.load_from_file(abspath(file))
def run(self, file: str, kubeconfig_path: str, kraken_config: str):
def run(self, file: str, kubeconfig_path: str, kraken_config: str, run_uuid:str):
"""
Run executes a series of steps
"""
@@ -102,7 +104,8 @@ class Plugins:
unserialized_input.kubeconfig_path = kubeconfig_path
if "kraken_config" in step.schema.input.properties:
unserialized_input.kraken_config = kraken_config
output_id, output_data = step.schema(unserialized_input)
output_id, output_data = step.schema(params=unserialized_input, run_id=run_uuid)
logging.info(step.render_output(output_id, output_data) + "\n")
if output_id in step.error_output_ids:
raise Exception(
@@ -248,12 +251,12 @@ PLUGINS = Plugins(
def run(scenarios: List[str],
kubeconfig_path: str,
kraken_config: str,
failed_post_scenarios: List[str],
wait_duration: int,
telemetry: KrknTelemetryKubernetes,
kubecli: KrknKubernetes
telemetry: KrknTelemetryOpenshift,
run_uuid: str,
telemetry_request_id: str,
) -> (List[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
@@ -261,14 +264,14 @@ def run(scenarios: List[str],
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario)
parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, scenario)
logging.info('scenario ' + str(scenario))
pool = PodsMonitorPool(kubecli)
pool = PodsMonitorPool(telemetry.kubecli)
kill_scenarios = [kill_scenario for kill_scenario in PLUGINS.unserialize_scenario(scenario) if kill_scenario["id"] == "kill-pods"]
try:
start_monitoring(pool, kill_scenarios)
PLUGINS.run(scenario, kubeconfig_path, kraken_config)
PLUGINS.run(scenario, telemetry.kubecli.get_kubeconfig_path(), kraken_config, run_uuid)
result = pool.join()
scenario_telemetry.affected_pods = result
if result.error:
@@ -284,8 +287,19 @@ def run(scenarios: List[str],
scenario_telemetry.exit_status = 0
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
scenario_telemetries.append(scenario_telemetry)
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_post_scenarios, scenario_telemetries


@@ -7,15 +7,17 @@ import sys
import random
import arcaflow_plugin_kill_pod
from krkn_lib.k8s.pods_monitor_pool import PodsMonitorPool
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
import kraken.cerberus.setup as cerberus
import kraken.post_actions.actions as post_actions
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from arcaflow_plugin_sdk import serialization
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
from kraken import utils
# Run pod based scenarios
def run(kubeconfig_path, scenarios_list, config, failed_post_scenarios, wait_duration):
@@ -73,25 +75,26 @@ def run(kubeconfig_path, scenarios_list, config, failed_post_scenarios, wait_dur
# krkn_lib
def container_run(kubeconfig_path,
def container_run(
scenarios_list,
config,
failed_post_scenarios,
wait_duration,
kubecli: KrknKubernetes,
telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
telemetry: KrknTelemetryOpenshift,
telemetry_request_id: str
) -> (list[str], list[ScenarioTelemetry]):
failed_scenarios = []
scenario_telemetries: list[ScenarioTelemetry] = []
pool = PodsMonitorPool(kubecli)
pool = PodsMonitorPool(telemetry.kubecli)
for container_scenario_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = container_scenario_config[0]
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, container_scenario_config[0])
parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, container_scenario_config[0])
if len(container_scenario_config) > 1:
pre_action_output = post_actions.run(kubeconfig_path, container_scenario_config[1])
pre_action_output = post_actions.run(telemetry.kubecli.get_kubeconfig_path(), container_scenario_config[1])
else:
pre_action_output = ""
with open(container_scenario_config[0], "r") as f:
@@ -101,7 +104,7 @@ def container_run(kubeconfig_path,
# capture start time
start_time = int(time.time())
try:
killed_containers = container_killing_in_pod(cont_scenario, kubecli)
killed_containers = container_killing_in_pod(cont_scenario, telemetry.kubecli)
logging.info(f"killed containers: {str(killed_containers)}")
result = pool.join()
if result.error:
@@ -125,6 +128,16 @@ def container_run(kubeconfig_path,
else:
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries


@@ -1,16 +1,30 @@
from __future__ import annotations
import datetime
import os.path
from typing import Optional
from typing import Optional, List, Dict, Any
import urllib3
import logging
import sys
import yaml
from krkn_lib.elastic.krkn_elastic import KrknElastic
from krkn_lib.models.elastic.models import ElasticAlert
from krkn_lib.models.krkn import ChaosRunAlertSummary, ChaosRunAlert
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
def alerts(prom_cli: KrknPrometheus, start_time, end_time, alert_profile):
def alerts(prom_cli: KrknPrometheus,
elastic: KrknElastic,
run_uuid,
start_time,
end_time,
alert_profile,
elastic_collect_alerts,
elastic_alerts_index
):
if alert_profile is None or os.path.exists(alert_profile) is False:
logging.error(f"{alert_profile} alert profile does not exist")
@@ -20,7 +34,7 @@ def alerts(prom_cli: KrknPrometheus, start_time, end_time, alert_profile):
profile_yaml = yaml.safe_load(profile)
if not isinstance(profile_yaml, list):
logging.error(f"{alert_profile} wrong file format, alert profile must be "
f"a valid yaml file containing a list of items with 3 properties: "
f"a valid yaml file containing a list of items with at least 3 properties: "
f"expr, description, severity" )
sys.exit(1)
@@ -28,9 +42,20 @@ def alerts(prom_cli: KrknPrometheus, start_time, end_time, alert_profile):
if not {"expr", "description", "severity"}.issubset(alert.keys()):
logging.error(f"wrong alert {alert}, skipping")
prom_cli.process_alert(alert,
processed_alert = prom_cli.process_alert(alert,
datetime.datetime.fromtimestamp(start_time),
datetime.datetime.fromtimestamp(end_time))
if processed_alert[0] and processed_alert[1] and elastic and elastic_collect_alerts:
elastic_alert = ElasticAlert(run_uuid=run_uuid,
severity=alert["severity"],
alert=processed_alert[1],
created_at=datetime.datetime.fromtimestamp(processed_alert[0])
)
result = elastic.push_alert(elastic_alert, elastic_alerts_index)
if result == -1:
logging.error("failed to save alert on ElasticSearch")
pass
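The validation loop above expects each alert-profile entry to carry at least expr, description and severity; one entry as Python data (the PromQL is an illustrative example, not shipped with Krkn):

def _example_alert_profile_entry():
    return {
        "expr": 'up{job="apiserver"} == 0',
        "description": "API server down during the chaos run",
        "severity": "critical",
    }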
def critical_alerts(prom_cli: KrknPrometheus,
@@ -86,3 +111,57 @@ def critical_alerts(prom_cli: KrknPrometheus,
if not firing_alerts:
logging.info("No critical alerts are firing!!")
def metrics(prom_cli: KrknPrometheus,
elastic: KrknElastic,
run_uuid,
start_time,
end_time,
metrics_profile,
elastic_collect_metrics,
elastic_metrics_index
) -> list[dict[str, list[(int, float)] | str]]:
metrics_list: list[dict[str, list[(int, float)] | str]] = []
if metrics_profile is None or os.path.exists(metrics_profile) is False:
logging.error(f"{metrics_profile} alert profile does not exist")
sys.exit(1)
with open(metrics_profile) as profile:
profile_yaml = yaml.safe_load(profile)
if not profile_yaml["metrics"] or not isinstance(profile_yaml["metrics"], list):
logging.error(f"{metrics_profile} wrong file format, alert profile must be "
f"a valid yaml file containing a list of items with 3 properties: "
f"expr, description, severity" )
sys.exit(1)
for metric_query in profile_yaml["metrics"]:
if not {"query", "metricName", "instant"}.issubset(metric_query.keys()):
logging.error(f"wrong metric {metric_query}, skipping")
metrics_result = prom_cli.process_prom_query_in_range(
metric_query["query"],
start_time=datetime.datetime.fromtimestamp(start_time),
end_time=datetime.datetime.fromtimestamp(end_time)
)
metric = {"name": metric_query["metricName"], "values":[]}
for returned_metric in metrics_result:
if "values" in returned_metric:
for value in returned_metric["values"]:
try:
metric["values"].append((value[0], float(value[1])))
except ValueError:
pass
metrics_list.append(metric)
if elastic_collect_metrics and elastic:
result = elastic.upload_metrics_to_elasticsearch(run_uuid=run_uuid, index=elastic_metrics_index, raw_data=metrics_list)
if result == -1:
logging.error("failed to save metrics on ElasticSearch")
return metrics_list
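Each metrics-profile entry pairs a PromQL query with the name under which its samples are stored; metrics() flattens the range result into {"name": ..., "values": [(timestamp, float), ...]}. One entry as a sketch (the query is illustrative):

def _example_metrics_profile_entry():
    return {
        "query": "sum(rate(container_cpu_usage_seconds_total[2m]))",
        "metricName": "cluster_cpu_usage",
        "instant": False,
    }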


@@ -3,15 +3,21 @@ import random
import re
import time
import yaml
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from .. import utils
from ..cerberus import setup as cerberus
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
# krkn_lib
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
def run(scenarios_list,
config,
wait_duration,
telemetry: KrknTelemetryOpenshift,
telemetry_request_id: str) -> (list[str], list[ScenarioTelemetry]):
"""
Reads the scenario config and creates a temp file to fill up the PVC
"""
@@ -22,7 +28,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = app_config
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, app_config)
parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, app_config)
try:
if len(app_config) > 1:
with open(app_config, "r") as f:
@@ -85,7 +91,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
"pod_name '%s' will be overridden with one of "
"the pods mounted in the PVC" % (str(pod_name))
)
pvc = kubecli.get_pvc_info(pvc_name, namespace)
pvc = telemetry.kubecli.get_pvc_info(pvc_name, namespace)
try:
# random generator not used for
# security/cryptographic purposes.
@@ -100,7 +106,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
raise RuntimeError()
# Get volume name
pod = kubecli.get_pod_info(name=pod_name, namespace=namespace)
pod = telemetry.kubecli.get_pod_info(name=pod_name, namespace=namespace)
if pod is None:
logging.error(
@@ -117,7 +123,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
if volume.pvcName is not None:
volume_name = volume.name
pvc_name = volume.pvcName
pvc = kubecli.get_pvc_info(pvc_name, namespace)
pvc = telemetry.kubecli.get_pvc_info(pvc_name, namespace)
break
if 'pvc' not in locals():
logging.error(
@@ -144,7 +150,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
# Get PVC capacity and used bytes
command = "df %s -B 1024 | sed 1d" % (str(mount_path))
command_output = (
kubecli.exec_cmd_in_pod(
telemetry.kubecli.exec_cmd_in_pod(
command,
pod_name,
namespace,
@@ -206,7 +212,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
logging.debug(
"Create temp file in the PVC command:\n %s" % command
)
kubecli.exec_cmd_in_pod(
telemetry.kubecli.exec_cmd_in_pod(
command,
pod_name,
namespace,
@@ -216,7 +222,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
# Check if file is created
command = "ls -lh %s" % (str(mount_path))
logging.debug("Check file is created command:\n %s" % command)
response = kubecli.exec_cmd_in_pod(
response = telemetry.kubecli.exec_cmd_in_pod(
command, pod_name, namespace, container_name
)
logging.info("\n" + str(response))
@@ -238,7 +244,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
container_name,
mount_path,
file_size_kb,
kubecli
telemetry.kubecli
)
# sys.exit(1)
raise RuntimeError()
@@ -275,14 +281,14 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
logging.debug(
"Create temp file in the PVC command:\n %s" % command
)
kubecli.exec_cmd_in_pod(
telemetry.kubecli.exec_cmd_in_pod(
command, pod_name, namespace, container_name
)
# Check if file is created
command = "ls -lh %s" % (str(mount_path))
logging.debug("Check file is created command:\n %s" % command)
response = kubecli.exec_cmd_in_pod(
response = telemetry.kubecli.exec_cmd_in_pod(
command, pod_name, namespace, container_name
)
logging.info("\n" + str(response))
@@ -303,7 +309,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
container_name,
mount_path,
file_size_kb,
kubecli
telemetry.kubecli
)
logging.info("End of scenario. Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
@@ -321,6 +327,18 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
log_exception(app_config)
else:
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries


@@ -1,14 +1,18 @@
import time
import random
import logging
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
import kraken.cerberus.setup as cerberus
import kraken.post_actions.actions as post_actions
import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
from kraken import utils
def delete_objects(kubecli, namespace):
@@ -156,9 +160,8 @@ def run(
config,
wait_duration,
failed_post_scenarios,
kubeconfig_path,
kubecli: KrknKubernetes,
telemetry: KrknTelemetryKubernetes
telemetry: KrknTelemetryOpenshift,
telemetry_request_id: str
) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
@@ -166,10 +169,10 @@ def run(
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario_config[0]
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario_config[0])
parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, scenario_config[0])
try:
if len(scenario_config) > 1:
pre_action_output = post_actions.run(kubeconfig_path, scenario_config[1])
pre_action_output = post_actions.run(telemetry.kubecli.get_kubeconfig_path(), scenario_config[1])
else:
pre_action_output = ""
with open(scenario_config[0], "r") as f:
@@ -206,7 +209,7 @@ def run(
start_time = int(time.time())
for i in range(run_count):
killed_namespaces = {}
namespaces = kubecli.check_namespaces([scenario_namespace], scenario_label)
namespaces = telemetry.kubecli.check_namespaces([scenario_namespace], scenario_label)
for j in range(delete_count):
if len(namespaces) == 0:
logging.error(
@@ -220,7 +223,7 @@ def run(
logging.info('Delete objects in selected namespace: ' + selected_namespace )
try:
# delete all pods in namespace
objects = delete_objects(kubecli,selected_namespace)
objects = delete_objects(telemetry.kubecli,selected_namespace)
killed_namespaces[selected_namespace] = objects
logging.info("Deleted all objects in namespace %s was successful" % str(selected_namespace))
except Exception as e:
@@ -236,7 +239,7 @@ def run(
if len(scenario_config) > 1:
try:
failed_post_scenarios = post_actions.check_recovery(
kubeconfig_path, scenario_config, failed_post_scenarios, pre_action_output
telemetry.kubecli.get_kubeconfig_path(), scenario_config, failed_post_scenarios, pre_action_output
)
except Exception as e:
logging.error("Failed to run post action checks: %s" % e)
@@ -244,7 +247,7 @@ def run(
# sys.exit(1)
raise RuntimeError()
else:
failed_post_scenarios = check_all_running_deployment(killed_namespaces, wait_time, kubecli)
failed_post_scenarios = check_all_running_deployment(killed_namespaces, wait_time, telemetry.kubecli)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
@@ -255,6 +258,16 @@ def run(
else:
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries


@@ -1,20 +1,26 @@
import logging
import time
import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import log_exception
from kraken import utils
def run(scenarios_list: list[str],wait_duration: int, krkn_lib: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries= list[ScenarioTelemetry]()
def run(scenarios_list: list[str],
wait_duration: int,
telemetry: KrknTelemetryOpenshift,
telemetry_request_id: str) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries = list[ScenarioTelemetry]()
failed_post_scenarios = []
for scenario in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario)
parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, scenario)
with open(scenario) as stream:
scenario_config = yaml.safe_load(stream)
@@ -26,9 +32,9 @@ def run(scenarios_list: list[str],wait_duration: int, krkn_lib: KrknKubernetes,
chaos_duration = scenario_config["chaos_duration"]
logging.info(f"checking service {service_name} in namespace: {service_namespace}")
if not krkn_lib.service_exists(service_name, service_namespace):
if not telemetry.kubecli.service_exists(service_name, service_namespace):
logging.error(f"service: {service_name} not found in namespace: {service_namespace}, failed to run scenario.")
fail(scenario_telemetry, scenario_telemetries)
fail_scenario_telemetry(scenario_telemetry)
failed_post_scenarios.append(scenario)
break
try:
@@ -37,18 +43,18 @@ def run(scenarios_list: list[str],wait_duration: int, krkn_lib: KrknKubernetes,
# both named ports and port numbers can be used
if isinstance(target_port, int):
logging.info(f"webservice will listen on port {target_port}")
webservice = krkn_lib.deploy_service_hijacking(service_namespace, plan, image, port_number=target_port)
webservice = telemetry.kubecli.deploy_service_hijacking(service_namespace, plan, image, port_number=target_port)
else:
logging.info(f"traffic will be redirected to named port: {target_port}")
webservice = krkn_lib.deploy_service_hijacking(service_namespace, plan, image, port_name=target_port)
webservice = telemetry.kubecli.deploy_service_hijacking(service_namespace, plan, image, port_name=target_port)
logging.info(f"successfully deployed pod: {webservice.pod_name} "
f"in namespace:{service_namespace} with selector {webservice.selector}!"
)
logging.info(f"patching service: {service_name} to hijack traffic towards: {webservice.pod_name}")
original_service = krkn_lib.replace_service_selector([webservice.selector], service_name, service_namespace)
original_service = telemetry.kubecli.replace_service_selector([webservice.selector], service_name, service_namespace)
if original_service is None:
logging.error(f"failed to patch service: {service_name}, namespace: {service_namespace} with selector {webservice.selector}")
fail(scenario_telemetry, scenario_telemetries)
fail_scenario_telemetry(scenario_telemetry)
failed_post_scenarios.append(scenario)
break
@@ -58,33 +64,40 @@ def run(scenarios_list: list[str],wait_duration: int, krkn_lib: KrknKubernetes,
time.sleep(chaos_duration)
selectors = ["=".join([key, original_service["spec"]["selector"][key]]) for key in original_service["spec"]["selector"].keys()]
logging.info(f"restoring the service selectors {selectors}")
original_service = krkn_lib.replace_service_selector(selectors, service_name, service_namespace)
original_service = telemetry.kubecli.replace_service_selector(selectors, service_name, service_namespace)
if original_service is None:
logging.error(f"failed to restore original service: {service_name}, namespace: {service_namespace} with selectors: {selectors}")
fail(scenario_telemetry, scenario_telemetries)
fail_scenario_telemetry(scenario_telemetry)
failed_post_scenarios.append(scenario)
break
logging.info("selectors successfully restored")
logging.info("undeploying service-hijacking resources...")
krkn_lib.undeploy_service_hijacking(webservice)
telemetry.kubecli.undeploy_service_hijacking(webservice)
logging.info("End of scenario. Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
logging.info("success")
except Exception as e:
logging.error(f"scenario {scenario} failed with exception: {e}")
fail(scenario_telemetry, scenario_telemetries)
failed_post_scenarios.append(scenario)
fail_scenario_telemetry(scenario_telemetry)
log_exception(scenario)
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_post_scenarios, scenario_telemetries
def fail(scenario_telemetry: ScenarioTelemetry, scenario_telemetries: list[ScenarioTelemetry]):
def fail_scenario_telemetry(scenario_telemetry: ScenarioTelemetry):
scenario_telemetry.exit_status = 1
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
scenario_telemetry.end_timestamp = time.time()


@@ -3,6 +3,10 @@ import yaml
import logging
import time
from multiprocessing.pool import ThreadPool
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from .. import utils
from ..cerberus import setup as cerberus
from ..post_actions import actions as post_actions
from ..node_actions.aws_node_scenarios import AWS
@@ -10,15 +14,17 @@ from ..node_actions.openstack_node_scenarios import OPENSTACKCLOUD
from ..node_actions.az_node_scenarios import Azure
from ..node_actions.gcp_node_scenarios import GCP
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import log_exception
def multiprocess_nodes(cloud_object_function, nodes):
def multiprocess_nodes(cloud_object_function, nodes, processes=0):
try:
# pool object with number of element
pool = ThreadPool(processes=len(nodes))
if processes == 0:
pool = ThreadPool(processes=len(nodes))
else:
pool = ThreadPool(processes=processes)
logging.info("nodes type " + str(type(nodes[0])))
if type(nodes[0]) is tuple:
node_id = []
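multiprocess_nodes() keeps the old one-worker-per-node default when processes is 0 and otherwise caps the ThreadPool; the GCP branch below passes 1, which serializes the cloud calls (presumably because its discovery client does not tolerate concurrent use). A sketch:

def _example_stop_nodes_serially(cloud_object, node_ids):
    # processes=1 forces stop_instances to run one node at a time,
    # as the GCP path in cluster_shut_down() does.
    multiprocess_nodes(cloud_object.stop_instances, node_ids, processes=1)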
@@ -45,10 +51,12 @@ def cluster_shut_down(shut_down_config, kubecli: KrknKubernetes):
shut_down_duration = shut_down_config["shut_down_duration"]
cloud_type = shut_down_config["cloud_type"]
timeout = shut_down_config["timeout"]
processes = 0
if cloud_type.lower() == "aws":
cloud_object = AWS()
elif cloud_type.lower() == "gcp":
cloud_object = GCP()
processes = 1
elif cloud_type.lower() == "openstack":
cloud_object = OPENSTACKCLOUD()
elif cloud_type.lower() in ["azure", "az"]:
@@ -71,7 +79,7 @@ def cluster_shut_down(shut_down_config, kubecli: KrknKubernetes):
for _ in range(runs):
logging.info("Starting cluster_shut_down scenario injection")
stopping_nodes = set(node_id)
multiprocess_nodes(cloud_object.stop_instances, node_id)
multiprocess_nodes(cloud_object.stop_instances, node_id, processes)
stopped_nodes = stopping_nodes.copy()
while len(stopping_nodes) > 0:
for node in stopping_nodes:
@@ -101,7 +109,7 @@ def cluster_shut_down(shut_down_config, kubecli: KrknKubernetes):
time.sleep(shut_down_duration)
logging.info("Restarting the nodes")
restarted_nodes = set(node_id)
multiprocess_nodes(cloud_object.start_instances, node_id)
multiprocess_nodes(cloud_object.start_instances, node_id, processes)
logging.info("Wait for each node to be running again")
not_running_nodes = restarted_nodes.copy()
while len(not_running_nodes) > 0:
@@ -129,7 +137,11 @@ def cluster_shut_down(shut_down_config, kubecli: KrknKubernetes):
# krkn_lib
-def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
+def run(scenarios_list,
+        config,
+        wait_duration,
+        telemetry: KrknTelemetryOpenshift,
+        telemetry_request_id: str) -> (list[str], list[ScenarioTelemetry]):
failed_post_scenarios = []
failed_scenarios = []
scenario_telemetries: list[ScenarioTelemetry] = []
@@ -148,7 +160,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = config_path
scenario_telemetry.start_timestamp = time.time()
-       telemetry.set_parameters_base64(scenario_telemetry, config_path)
+       parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, config_path)
with open(config_path, "r") as f:
shut_down_config_yaml = yaml.full_load(f)
@@ -156,7 +168,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
shut_down_config_yaml["cluster_shut_down_scenario"]
start_time = int(time.time())
try:
-           cluster_shut_down(shut_down_config_scenario, kubecli)
+           cluster_shut_down(shut_down_config_scenario, telemetry.kubecli)
logging.info(
"Waiting for the specified duration: %s" % (wait_duration)
)
@@ -180,6 +192,16 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
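
Every runner touched in this changeset now ends a scenario the same way: stamp the end time, upload the namespaced OCP logs for exactly that window, attach the cluster events, and append the telemetry record. A condensed sketch of that shared wrap-up, assuming the kraken.utils helpers introduced later in this diff are importable:

import time
from kraken import utils  # collect_and_put_ocp_logs / populate_cluster_events

def finalize_scenario(scenario_telemetry, parsed_config, telemetry, request_id, telemetries):
    # stamp the window end, then collect artifacts for exactly that window
    scenario_telemetry.end_timestamp = time.time()
    start = int(scenario_telemetry.start_timestamp)
    end = int(scenario_telemetry.end_timestamp)
    utils.collect_and_put_ocp_logs(telemetry, parsed_config, request_id, start, end)
    utils.populate_cluster_events(scenario_telemetry, parsed_config, telemetry.kubecli, start, end)
    telemetries.append(scenario_telemetry)
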

View File

@@ -0,0 +1 @@
from .syn_flood import *

View File

@@ -0,0 +1,148 @@
import logging
import os.path
import time
from typing import List
import krkn_lib.utils
import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from kraken import utils
def run(scenarios_list: list[str],
telemetry: KrknTelemetryOpenshift,
telemetry_request_id: str
) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
failed_post_scenarios = []
for scenario in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.start_timestamp = time.time()
parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, scenario)
try:
pod_names = []
config = parse_config(scenario)
if config["target-service-label"]:
target_services = telemetry.kubecli.select_service_by_label(config["namespace"], config["target-service-label"])
else:
target_services = [config["target-service"]]
for target in target_services:
if not telemetry.kubecli.service_exists(target, config["namespace"]):
raise Exception(f"{target} service not found")
for i in range(config["number-of-pods"]):
pod_name = "syn-flood-" + krkn_lib.utils.get_random_string(10)
telemetry.kubecli.deploy_syn_flood(pod_name,
config["namespace"],
config["image"],
target,
config["target-port"],
config["packet-size"],
config["window-size"],
config["duration"],
config["attacker-nodes"]
)
pod_names.append(pod_name)
logging.info("waiting all the attackers to finish:")
did_finish = False
finished_pods = []
while not did_finish:
for pod_name in pod_names:
if not telemetry.kubecli.is_pod_running(pod_name, config["namespace"]):
finished_pods.append(pod_name)
if set(pod_names) == set(finished_pods):
did_finish = True
time.sleep(1)
except Exception as e:
logging.error(f"Failed to run syn flood scenario {scenario}: {e}")
failed_post_scenarios.append(scenario)
scenario_telemetry.exit_status = 1
else:
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_post_scenarios, scenario_telemetries
def parse_config(scenario_file: str) -> dict[str,any]:
if not os.path.exists(scenario_file):
raise Exception(f"failed to load scenario file {scenario_file}")
try:
with open(scenario_file) as stream:
config = yaml.safe_load(stream)
except Exception:
raise Exception(f"{scenario_file} is not a valid yaml file")
missing = []
if not check_key_value(config ,"packet-size"):
missing.append("packet-size")
if not check_key_value(config,"window-size"):
missing.append("window-size")
if not check_key_value(config, "duration"):
missing.append("duration")
if not check_key_value(config, "namespace"):
missing.append("namespace")
if not check_key_value(config, "number-of-pods"):
missing.append("number-of-pods")
if not check_key_value(config, "target-port"):
missing.append("target-port")
if not check_key_value(config, "image"):
missing.append("image")
if "target-service" not in config.keys():
missing.append("target-service")
if "target-service-label" not in config.keys():
missing.append("target-service-label")
if len(missing) > 0:
raise Exception(f"{(',').join(missing)} parameter(s) are missing")
if not config["target-service"] and not config["target-service-label"]:
raise Exception("you have either to set a target service or a label")
if config["target-service"] and config["target-service-label"]:
raise Exception("you cannot select both target-service and target-service-label")
if 'attacker-nodes' and not is_node_affinity_correct(config['attacker-nodes']):
raise Exception("attacker-nodes format is not correct")
return config
def check_key_value(dictionary, key):
if key in dictionary:
value = dictionary[key]
if value is not None and value != '':
return True
return False
def is_node_affinity_correct(obj) -> bool:
if not isinstance(obj, dict):
return False
for key in obj.keys():
if not isinstance(key, str):
return False
if not isinstance(obj[key], list):
return False
return True
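
parse_config is strict: every tuning key must be present and non-empty, and exactly one of target-service/target-service-label may be set. A hypothetical round-trip that exercises the validation, assuming this module is importable as kraken.syn_flood.syn_flood:

import tempfile
import yaml
from kraken.syn_flood.syn_flood import parse_config

sample = {
    "packet-size": 120, "window-size": 64, "duration": 10,
    "namespace": "default", "number-of-pods": 2, "target-port": 9200,
    "image": "quay.io/krkn-chaos/krkn-syn-flood:v1.0.0",
    "target-service": "elasticsearch", "target-service-label": "",
    "attacker-nodes": {"node-role.kubernetes.io/worker": [""]},
}
with tempfile.NamedTemporaryFile("w", suffix=".yaml", delete=False) as f:
    yaml.safe_dump(sample, f)
config = parse_config(f.name)  # raises on missing keys or if both targets are set
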

View File

@@ -6,12 +6,12 @@ import re
import yaml
import random
from krkn_lib import utils
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from kubernetes.client import ApiException
from .. import utils
from ..cerberus import setup as cerberus
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception, get_random_string
@@ -348,21 +348,25 @@ def check_date_time(object_type, names, kubecli:KrknKubernetes):
# krkn_lib
-def run(scenarios_list, config, wait_duration, kubecli:KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
+def run(scenarios_list,
+        config,
+        wait_duration,
+        telemetry: KrknTelemetryOpenshift,
+        telemetry_request_id: str) -> (list[str], list[ScenarioTelemetry]):
failed_scenarios = []
scenario_telemetries: list[ScenarioTelemetry] = []
for time_scenario_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = time_scenario_config
scenario_telemetry.start_timestamp = time.time()
-       telemetry.set_parameters_base64(scenario_telemetry, time_scenario_config)
+       parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, time_scenario_config)
try:
with open(time_scenario_config, "r") as f:
scenario_config = yaml.full_load(f)
for time_scenario in scenario_config["time_scenarios"]:
start_time = int(time.time())
-               object_type, object_names = skew_time(time_scenario, kubecli)
-               not_reset = check_date_time(object_type, object_names, kubecli)
+               object_type, object_names = skew_time(time_scenario, telemetry.kubecli)
+               not_reset = check_date_time(object_type, object_names, telemetry.kubecli)
if len(not_reset) > 0:
logging.info("Object times were not reset")
logging.info(
@@ -383,6 +387,16 @@ def run(scenarios_list, config, wait_duration, kubecli:KrknKubernetes, telemetry
else:
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -0,0 +1,12 @@
import logging
class TeeLogHandler(logging.Handler):
    # keeps every formatted record in memory so the full run output
    # can be embedded in the junit test case at the end of the run
    logs: list[str] = []
    name = "TeeLogHandler"
    def get_output(self) -> str:
        return "\n".join(self.logs)
    def emit(self, record):
        self.logs.append(self.formatter.format(record))
    def __del__(self):
        pass
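
TeeLogHandler mirrors whatever the configured handlers emit into memory, so the junit test case written at the end of run_kraken.py can embed the complete run output. A minimal usage sketch:

import logging
from kraken.utils import TeeLogHandler

tee = TeeLogHandler()
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[logging.StreamHandler(), tee],  # basicConfig attaches the formatter
)
logging.info("chaos scenario started")
print(tee.get_output())  # the same formatted line, recalled from memory
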

kraken/utils/__init__.py (new file)
View File

@@ -0,0 +1,2 @@
from .TeeLogHandler import TeeLogHandler
from .functions import *

kraken/utils/functions.py (new file)
View File

@@ -0,0 +1,60 @@
import krkn_lib.utils
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from tzlocal.unix import get_localzone
def populate_cluster_events(scenario_telemetry: ScenarioTelemetry,
scenario_config: dict,
kubecli: KrknKubernetes,
start_timestamp: int,
end_timestamp: int
):
events = []
namespaces = __retrieve_namespaces(scenario_config, kubecli)
if len(namespaces) == 0:
events.extend(kubecli.collect_and_parse_cluster_events(start_timestamp, end_timestamp, str(get_localzone())))
else:
for namespace in namespaces:
events.extend(kubecli.collect_and_parse_cluster_events(start_timestamp, end_timestamp, str(get_localzone()),
namespace=namespace))
scenario_telemetry.set_cluster_events(events)
def collect_and_put_ocp_logs(telemetry_ocp: KrknTelemetryOpenshift,
scenario_config: dict,
request_id: str,
start_timestamp: int,
end_timestamp: int,
):
if (
telemetry_ocp.krkn_telemetry_config and
telemetry_ocp.krkn_telemetry_config["enabled"] and
telemetry_ocp.krkn_telemetry_config["logs_backup"] and
not telemetry_ocp.kubecli.is_kubernetes()
):
namespaces = __retrieve_namespaces(scenario_config, telemetry_ocp.kubecli)
if len(namespaces) > 0:
for namespace in namespaces:
telemetry_ocp.put_ocp_logs(request_id,
telemetry_ocp.krkn_telemetry_config,
start_timestamp,
end_timestamp,
namespace)
else:
telemetry_ocp.put_ocp_logs(request_id,
telemetry_ocp.krkn_telemetry_config,
start_timestamp,
end_timestamp)
def __retrieve_namespaces(scenario_config: dict, kubecli: KrknKubernetes) -> set[str]:
namespaces = list()
namespaces.extend(krkn_lib.utils.deep_get_attribute("namespace", scenario_config))
namespace_patterns = krkn_lib.utils.deep_get_attribute("namespace_pattern", scenario_config)
for pattern in namespace_patterns:
namespaces.extend(kubecli.list_namespaces_by_regex(pattern))
return set(namespaces)
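
__retrieve_namespaces is what makes the log and event scoping automatic: it pulls every namespace value out of the parsed scenario config at any nesting depth and expands namespace_pattern regexes against the live cluster. A sketch of the deep_get_attribute half with a hypothetical config (expected output shown as a comment):

import krkn_lib.utils

scenario_config = {
    "scenarios": [
        {"namespace": "payments"},
        {"pod_selector": {"namespace": "checkout"}},
        {"namespace_pattern": "^openshift-.*"},  # resolved against the cluster, not here
    ]
}
# gathers every value stored under a "namespace" key, however deeply nested
print(krkn_lib.utils.deep_get_attribute("namespace", scenario_config))
# expected: ['payments', 'checkout']
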

View File

@@ -1,13 +1,20 @@
import yaml
import logging
import time
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from .. import utils
from ..node_actions.aws_node_scenarios import AWS
from ..cerberus import setup as cerberus
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import log_exception
-def run(scenarios_list, config, wait_duration, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]) :
+def run(scenarios_list,
+        config,
+        wait_duration,
+        telemetry: KrknTelemetryOpenshift,
+        telemetry_request_id: str) -> (list[str], list[ScenarioTelemetry]) :
"""
filters the subnet of interest and applies the network acl
to create zone outage
@@ -20,7 +27,7 @@ def run(scenarios_list, config, wait_duration, telemetry: KrknTelemetryKubernete
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = zone_outage_config
scenario_telemetry.start_timestamp = time.time()
-       telemetry.set_parameters_base64(scenario_telemetry, zone_outage_config)
+       parsed_scenario_config = telemetry.set_parameters_base64(scenario_telemetry, zone_outage_config)
try:
if len(zone_outage_config) > 1:
with open(zone_outage_config, "r") as f:
@@ -116,6 +123,16 @@ def run(scenarios_list, config, wait_duration, telemetry: KrknTelemetryKubernete
else:
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
utils.collect_and_put_ocp_logs(telemetry,
parsed_scenario_config,
telemetry_request_id,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
utils.populate_cluster_events(scenario_telemetry,
parsed_scenario_config,
telemetry.kubecli,
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp))
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -1,7 +1,7 @@
aliyun-python-sdk-core==2.13.36
aliyun-python-sdk-ecs==4.24.25
-arcaflow==0.17.2
-arcaflow-plugin-sdk==0.10.0
+arcaflow-plugin-sdk==0.14.0
+arcaflow==0.19.1
boto3==1.28.61
azure-identity==1.16.1
azure-keyvault==4.2.0
@@ -15,27 +15,27 @@ google-api-python-client==2.116.0
ibm_cloud_sdk_core==3.18.0
ibm_vpc==0.20.0
jinja2==3.1.4
-krkn-lib==2.1.3
+krkn-lib==3.1.0
lxml==5.1.0
-kubernetes==26.1.0
+kubernetes==28.1.0
numpy==1.26.4
oauth2client==4.1.3
pandas==2.2.0
openshift-client==1.0.21
paramiko==3.4.0
podman-compose==1.0.6
pyVmomi==8.0.2.0.1
pyfiglet==1.0.2
pytest==8.0.0
python-ipmi==0.5.4
python-openstackclient==6.5.0
-requests==2.32.0
+requests==2.32.2
service_identity==24.1.0
-PyYAML==6.0
-setuptools==65.5.1
+PyYAML==6.0.1
+setuptools==70.0.0
werkzeug==3.0.3
wheel==0.42.0
zope.interface==5.4.0
-git+https://github.com/krkn-chaos/arcaflow-plugin-kill-pod.git
+git+https://github.com/krkn-chaos/arcaflow-plugin-kill-pod.git@v0.1.0
git+https://github.com/vmware/vsphere-automation-sdk-python.git@v8.0.0.0
cryptography>=42.0.4 # not directly required, pinned by Snyk to avoid a vulnerability

View File

@@ -10,8 +10,12 @@ import pyfiglet
import uuid
import time
from krkn_lib.elastic.krkn_elastic import KrknElastic
from krkn_lib.models.elastic import ElasticChaosRunTelemetry
from krkn_lib.models.krkn import ChaosRunOutput, ChaosRunAlertSummary
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus
from tzlocal.unix import get_localzone
import kraken.time_actions.common_time_functions as time_actions
import kraken.performance_dashboards.setup as performance_dashboards
import kraken.pod_scenarios.setup as pod_scenarios
@@ -27,21 +31,21 @@ import kraken.arcaflow_plugin as arcaflow_plugin
import kraken.prometheus as prometheus_plugin
import kraken.service_hijacking.service_hijacking as service_hijacking_plugin
import server as server
-from kraken import plugins
+from kraken import plugins, syn_flood
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.ocp import KrknOpenshift
-from krkn_lib.telemetry.elastic import KrknElastic
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.models.telemetry import ChaosRunTelemetry
from krkn_lib.utils import SafeLogger
-from krkn_lib.utils.functions import get_yaml_item_value
+from krkn_lib.utils.functions import get_yaml_item_value, get_junit_test_case
+from kraken.utils import TeeLogHandler
report_file = ""
# Main function
-def main(cfg):
+def main(cfg) -> int:
# Start kraken
print(pyfiglet.figlet_format("kraken"))
logging.info("Starting kraken")
@@ -93,14 +97,61 @@ def main(cfg):
enable_alerts = get_yaml_item_value(
config["performance_monitoring"], "enable_alerts", False
)
enable_metrics = get_yaml_item_value(
config["performance_monitoring"], "enable_metrics", False
)
# elastic search
enable_elastic = get_yaml_item_value(
config["elastic"], "enable_elastic", False
)
elastic_collect_metrics = get_yaml_item_value(
config["elastic"], "collect_metrics", False
)
    elastic_collect_alerts = get_yaml_item_value(
        config["elastic"], "collect_alerts", False
    )
elastic_url = get_yaml_item_value(
config["elastic"], "elastic_url", ""
)
elastic_verify_certs = get_yaml_item_value(
config["elastic"], "verify_certs", False
)
elastic_port = get_yaml_item_value(
config["elastic"], "elastic_port", 32766
)
elastic_username = get_yaml_item_value(
config["elastic"], "username", ""
)
elastic_password = get_yaml_item_value(
config["elastic"], "password", ""
)
elastic_metrics_index = get_yaml_item_value(
config["elastic"], "metrics_index", "krkn-metrics"
)
elastic_alerts_index = get_yaml_item_value(
config["elastic"], "alerts_index", "krkn-alerts"
)
elastic_telemetry_index = get_yaml_item_value(
config["elastic"], "telemetry_index", "krkn-telemetry"
)
alert_profile = config["performance_monitoring"].get("alert_profile")
metrics_profile = config["performance_monitoring"].get("metrics_profile")
check_critical_alerts = get_yaml_item_value(
config["performance_monitoring"], "check_critical_alerts", False
)
telemetry_api_url = config["telemetry"].get("api_url")
elastic_config = get_yaml_item_value(config,"elastic",{})
elastic_url = get_yaml_item_value(elastic_config,"elastic_url","")
elastic_index = get_yaml_item_value(elastic_config,"elastic_index","")
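
All of the new elastic settings are read through get_yaml_item_value, which falls back to the supplied default when a key is absent or empty, so configs written before this change still load. A small illustration of that defaulting behaviour (values are hypothetical):

from krkn_lib.utils.functions import get_yaml_item_value

elastic_cfg = {"enable_elastic": True, "elastic_url": "https://search.example.com"}
print(get_yaml_item_value(elastic_cfg, "elastic_port", 32766))  # 32766: key absent, default used
print(get_yaml_item_value(elastic_cfg, "elastic_url", ""))      # configured value wins
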
# Initialize clients
if (not os.path.isfile(kubeconfig_path) and
@@ -108,7 +159,8 @@ def main(cfg):
logging.error(
"Cannot read the kubeconfig file at %s, please check" % kubeconfig_path
)
sys.exit(1)
#sys.exit(1)
return 1
logging.info("Initializing client to talk to the Kubernetes cluster")
# Generate uuid for the run
@@ -142,10 +194,12 @@ def main(cfg):
# Set up kraken url to track signal
if not 0 <= int(port) <= 65535:
logging.error("%s isn't a valid port number, please check" % (port))
sys.exit(1)
#sys.exit(1)
return 1
if not signal_address:
logging.error("Please set the signal address in the config")
sys.exit(1)
#sys.exit(1)
return 1
address = (signal_address, port)
# If publish_running_status is False this should keep us going
@@ -163,7 +217,7 @@ def main(cfg):
cv = ""
if distribution == "openshift":
cv = ocpcli.get_clusterversion_string()
-       if prometheus_url is None:
+       if not prometheus_url:
try:
connection_data = ocpcli.get_prometheus_api_connection_data()
if connection_data:
@@ -176,18 +230,27 @@ def main(cfg):
except Exception:
logging.error("invalid distribution selected, running openshift scenarios against kubernetes cluster."
"Please set 'kubernetes' in config.yaml krkn.platform and try again")
sys.exit(1)
return 1
if cv != "":
logging.info(cv)
else:
logging.info("Cluster version CRD not detected, skipping")
# KrknTelemetry init
-   telemetry_k8s = KrknTelemetryKubernetes(safe_logger, kubecli)
-   telemetry_ocp = KrknTelemetryOpenshift(safe_logger, ocpcli)
-   telemetry_elastic = KrknElastic(safe_logger,elastic_url)
+   telemetry_k8s = KrknTelemetryKubernetes(safe_logger, kubecli, config["telemetry"])
+   telemetry_ocp = KrknTelemetryOpenshift(safe_logger, ocpcli, config["telemetry"])
if enable_elastic:
elastic_search = KrknElastic(safe_logger,
elastic_url,
elastic_port,
elastic_verify_certs,
elastic_username,
elastic_password
)
else:
elastic_search = None
summary = ChaosRunAlertSummary()
-   if enable_alerts or check_critical_alerts:
+   if enable_metrics or enable_alerts or check_critical_alerts:
prometheus = KrknPrometheus(prometheus_url, prometheus_bearer_token)
logging.info("Server URL: %s" % kubecli.get_host())
@@ -251,35 +314,36 @@ def main(cfg):
"plugin_scenarios with the "
"kill-pods configuration instead."
)
sys.exit(1)
return 1
elif scenario_type == "arcaflow_scenarios":
failed_post_scenarios, scenario_telemetries = arcaflow_plugin.run(
-               scenarios_list, kubeconfig_path, telemetry_k8s
+               scenarios_list,
+               telemetry_ocp,
+               telemetry_request_id
)
chaos_telemetry.scenarios.extend(scenario_telemetries)
elif scenario_type == "plugin_scenarios":
failed_post_scenarios, scenario_telemetries = plugins.run(
scenarios_list,
kubeconfig_path,
kraken_config,
failed_post_scenarios,
wait_duration,
-               telemetry_k8s,
-               kubecli
+               telemetry_ocp,
+               run_uuid,
+               telemetry_request_id
)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# krkn_lib
elif scenario_type == "container_scenarios":
logging.info("Running container scenarios")
failed_post_scenarios, scenario_telemetries = pod_scenarios.container_run(
kubeconfig_path,
scenarios_list,
config,
failed_post_scenarios,
wait_duration,
-               kubecli,
-               telemetry_k8s
+               telemetry_ocp,
+               telemetry_request_id
)
chaos_telemetry.scenarios.extend(scenario_telemetries)
@@ -287,14 +351,21 @@ def main(cfg):
# krkn_lib
elif scenario_type == "node_scenarios":
logging.info("Running node scenarios")
-       failed_post_scenarios, scenario_telemetries = nodeaction.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
+       failed_post_scenarios, scenario_telemetries = nodeaction.run(scenarios_list,
+                                                                    config,
+                                                                    wait_duration,
+                                                                    telemetry_ocp,
+                                                                    telemetry_request_id)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Inject managedcluster chaos scenarios specified in the config
# krkn_lib
elif scenario_type == "managedcluster_scenarios":
logging.info("Running managedcluster scenarios")
managedcluster_scenarios.run(
-           scenarios_list, config, wait_duration, kubecli
+           scenarios_list,
+           config,
+           wait_duration,
+           kubecli
)
# Inject time skew chaos scenarios specified
@@ -302,12 +373,22 @@ def main(cfg):
# krkn_lib
elif scenario_type == "time_scenarios":
logging.info("Running time skew scenarios")
-       failed_post_scenarios, scenario_telemetries = time_actions.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
+       failed_post_scenarios, scenario_telemetries = time_actions.run(scenarios_list,
+                                                                      config,
+                                                                      wait_duration,
+                                                                      telemetry_ocp,
+                                                                      telemetry_request_id
+                                                                      )
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Inject cluster shutdown scenarios
# krkn_lib
elif scenario_type == "cluster_shut_down_scenarios":
-       failed_post_scenarios, scenario_telemetries = shut_down.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
+       failed_post_scenarios, scenario_telemetries = shut_down.run(scenarios_list,
+                                                                   config,
+                                                                   wait_duration,
+                                                                   telemetry_ocp,
+                                                                   telemetry_request_id
+                                                                   )
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Inject namespace chaos scenarios
@@ -319,39 +400,69 @@ def main(cfg):
config,
wait_duration,
failed_post_scenarios,
-           kubeconfig_path,
-           kubecli,
-           telemetry_k8s
+           telemetry_ocp,
+           telemetry_request_id
)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Inject zone failures
elif scenario_type == "zone_outages":
logging.info("Inject zone outages")
-       failed_post_scenarios, scenario_telemetries = zone_outages.run(scenarios_list, config, wait_duration, telemetry_k8s)
+       failed_post_scenarios, scenario_telemetries = zone_outages.run(scenarios_list,
+                                                                      config,
+                                                                      wait_duration,
+                                                                      telemetry_ocp,
+                                                                      telemetry_request_id
+                                                                      )
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Application outages
elif scenario_type == "application_outages":
logging.info("Injecting application outage")
failed_post_scenarios, scenario_telemetries = application_outage.run(
-           scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
+           scenarios_list,
+           config,
+           wait_duration,
+           telemetry_ocp,
+           telemetry_request_id
+       )
chaos_telemetry.scenarios.extend(scenario_telemetries)
# PVC scenarios
# krkn_lib
elif scenario_type == "pvc_scenarios":
logging.info("Running PVC scenario")
-       failed_post_scenarios, scenario_telemetries = pvc_scenario.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
+       failed_post_scenarios, scenario_telemetries = pvc_scenario.run(scenarios_list,
+                                                                      config,
+                                                                      wait_duration,
+                                                                      telemetry_ocp,
+                                                                      telemetry_request_id
+                                                                      )
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Network scenarios
# krkn_lib
elif scenario_type == "network_chaos":
logging.info("Running Network Chaos")
-       failed_post_scenarios, scenario_telemetries = network_chaos.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
+       failed_post_scenarios, scenario_telemetries = network_chaos.run(scenarios_list,
+                                                                       config,
+                                                                       wait_duration,
+                                                                       telemetry_ocp,
+                                                                       telemetry_request_id
+                                                                       )
elif scenario_type == "service_hijacking":
logging.info("Running Service Hijacking Chaos")
-       failed_post_scenarios, scenario_telemetries = service_hijacking_plugin.run(scenarios_list, wait_duration, kubecli, telemetry_k8s)
+       failed_post_scenarios, scenario_telemetries = service_hijacking_plugin.run(scenarios_list,
+                                                                                  wait_duration,
+                                                                                  telemetry_ocp,
+                                                                                  telemetry_request_id
+                                                                                  )
chaos_telemetry.scenarios.extend(scenario_telemetries)
elif scenario_type == "syn_flood":
logging.info("Running Syn Flood Chaos")
failed_post_scenarios, scenario_telemetries = syn_flood.run(scenarios_list,
telemetry_ocp,
telemetry_request_id
)
chaos_telemetry.scenarios.extend(scenario_telemetries)
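
The dispatch above now hands every runner the same two extras: the KrknTelemetryOpenshift wrapper (which carries kubecli internally) and the per-run telemetry request id. Sketched, the calling convention most scenario modules converge on is:

from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

def run(scenarios_list: list[str],
        config: dict,
        wait_duration: int,
        telemetry: KrknTelemetryOpenshift,
        telemetry_request_id: str) -> (list[str], list[ScenarioTelemetry]):
    kubecli = telemetry.kubecli  # replaces the old standalone kubecli argument
    ...
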
# Check for critical alerts when enabled
@@ -388,10 +499,16 @@ def main(cfg):
else:
telemetry_k8s.collect_cluster_metadata(chaos_telemetry)
-       decoded_chaos_run_telemetry = ChaosRunTelemetry(json.loads(chaos_telemetry.to_json()))
+       telemetry_json = chaos_telemetry.to_json()
+       decoded_chaos_run_telemetry = ChaosRunTelemetry(json.loads(telemetry_json))
chaos_output.telemetry = decoded_chaos_run_telemetry
logging.info(f"Chaos data:\n{chaos_output.to_json()}")
-       telemetry_elastic.upload_data_to_elasticsearch(decoded_chaos_run_telemetry.to_json(), elastic_index)
+       if enable_elastic:
+           elastic_telemetry = ElasticChaosRunTelemetry(chaos_run_telemetry=decoded_chaos_run_telemetry)
+           result = elastic_search.push_telemetry(elastic_telemetry, elastic_telemetry_index)
+           if result == -1:
+               safe_logger.error(f"failed to save telemetry on elastic search: {chaos_output.to_json()}")
if config["telemetry"]["enabled"]:
logging.info(f'telemetry data will be stored on s3 bucket folder: {telemetry_api_url}/files/'
f'{(config["telemetry"]["telemetry_group"] if config["telemetry"]["telemetry_group"] else "default")}/'
@@ -399,7 +516,6 @@ def main(cfg):
logging.info(f"telemetry upload log: {safe_logger.log_file_name}")
try:
telemetry_k8s.send_telemetry(config["telemetry"], telemetry_request_id, chaos_telemetry)
-               telemetry_k8s.put_cluster_events(telemetry_request_id, config["telemetry"], start_time, end_time)
telemetry_k8s.put_critical_alerts(telemetry_request_id, config["telemetry"], summary)
# prometheus data collection is available only on Openshift
if config["telemetry"]["prometheus_backup"]:
@@ -428,8 +544,7 @@ def main(cfg):
if prometheus_archive_files:
safe_logger.info("starting prometheus archive upload:")
telemetry_k8s.put_prometheus_data(config["telemetry"], prometheus_archive_files, telemetry_request_id)
if config["telemetry"]["logs_backup"] and distribution == "openshift":
telemetry_ocp.put_ocp_logs(telemetry_request_id, config["telemetry"], start_time, end_time)
except Exception as e:
logging.error(f"failed to send telemetry data: {str(e)}")
else:
@@ -442,23 +557,40 @@ def main(cfg):
if alert_profile:
prometheus_plugin.alerts(
prometheus,
elastic_search,
run_uuid,
start_time,
end_time,
alert_profile,
                elastic_collect_alerts,
elastic_alerts_index
)
else:
logging.error("Alert profile is not defined")
sys.exit(1)
return 1
#sys.exit(1)
if enable_metrics:
prometheus_plugin.metrics(prometheus,
elastic_search,
start_time,
run_uuid,
end_time,
metrics_profile,
elastic_collect_metrics,
elastic_metrics_index)
if post_critical_alerts > 0:
logging.error("Critical alerts are firing, please check; exiting")
sys.exit(2)
#sys.exit(2)
return 2
if failed_post_scenarios:
logging.error(
"Post scenarios are still failing at the end of all iterations"
)
sys.exit(2)
#sys.exit(2)
return 2
logging.info(
"Successfully finished running Kraken. UUID for the run: "
@@ -466,7 +598,11 @@ def main(cfg):
)
else:
logging.error("Cannot find a config at %s, please check" % (cfg))
sys.exit(1)
#sys.exit(1)
return 2
return 0
if __name__ == "__main__":
@@ -486,19 +622,102 @@ if __name__ == "__main__":
help="output report location",
default="kraken.report",
)
parser.add_option(
"--junit-testcase",
dest="junit_testcase",
help="junit test case description",
default=None,
)
parser.add_option(
"--junit-testcase-path",
dest="junit_testcase_path",
help="junit test case path",
default=None,
)
parser.add_option(
"--junit-testcase-version",
dest="junit_testcase_version",
help="junit test case version",
default=None,
)
(options, args) = parser.parse_args()
report_file = options.output
+   tee_handler = TeeLogHandler()
+   handlers = [logging.FileHandler(report_file, mode="w"), logging.StreamHandler(), tee_handler]
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s [%(levelname)s] %(message)s",
-       handlers=[
-           logging.FileHandler(report_file, mode="w"),
-           logging.StreamHandler(),
-       ],
+       handlers=handlers,
)
option_error = False
# used to check if there is any missing or wrong parameter that prevents
# the creation of the junit file
junit_error = False
junit_normalized_path = None
retval = 0
junit_start_time = time.time()
# checks if both mandatory options for junit are set
if options.junit_testcase_path and not options.junit_testcase:
logging.error("please set junit test case description with --junit-testcase [description] option")
option_error = True
junit_error = True
if options.junit_testcase and not options.junit_testcase_path:
logging.error("please set junit test case path with --junit-testcase-path [path] option")
option_error = True
junit_error = True
# normalized path
if options.junit_testcase:
junit_normalized_path = os.path.normpath(options.junit_testcase_path)
if not os.path.exists(junit_normalized_path):
logging.error(f"{junit_normalized_path} do not exists, please select a valid path")
option_error = True
junit_error = True
if not os.path.isdir(junit_normalized_path):
logging.error(f"{junit_normalized_path} is a file, please select a valid folder path")
option_error = True
junit_error = True
if not os.access(junit_normalized_path, os.W_OK):
logging.error(f"{junit_normalized_path} is not writable, please select a valid path")
option_error = True
junit_error = True
if options.cfg is None:
logging.error("Please check if you have passed the config")
-       sys.exit(1)
+       option_error = True
if option_error:
retval = 1
else:
-       main(options.cfg)
+       retval = main(options.cfg)
junit_endtime = time.time()
# checks the minimum required parameters to write the junit file
if junit_normalized_path and not junit_error:
junit_testcase_xml = get_junit_test_case(
success=True if retval == 0 else False,
time=int(junit_endtime - junit_start_time),
test_suite_name="krkn-test-suite",
test_case_description=options.junit_testcase,
test_stdout=tee_handler.get_output(),
test_version=options.junit_testcase_version
)
junit_testcase_file_path = f"{junit_normalized_path}/junit_krkn_{int(time.time())}.xml"
logging.info(f"writing junit XML testcase in {junit_testcase_file_path}")
with open(junit_testcase_file_path, "w") as stream:
stream.write(junit_testcase_xml)
sys.exit(retval)

View File

@@ -0,0 +1,16 @@
packet-size: 120 # hping3 packet size
window-size: 64 # hping3 TCP window size
duration: 10 # chaos scenario duration
namespace: default # namespace where the target service(s) are deployed
target-service: elasticsearch # target service name (if set, target-service-label must be empty)
target-port: 9200 # target service TCP port
target-service-label: "" # target service label; can be used to hit multiple targets at the same time
# if they share the same label (if set, target-service must be empty)
number-of-pods: 2 # number of attacker pods instantiated per target
image: quay.io/krkn-chaos/krkn-syn-flood:v1.0.0 # syn flood attacker container image
attacker-nodes: # sets the node affinity used to schedule the attacker pods; one entry per node label selector
  node-role.kubernetes.io/worker: # multiple values can be specified per label so the kube scheduler
  - "" # will place the attacker pods as evenly as possible across the matching nodes; multiple labels can also be set
# leave the value empty (`attacker-nodes: {}`) to let kubernetes schedule the pods freely

View File

@@ -1,14 +1,13 @@
node_scenarios:
- actions: # node chaos scenarios to be injected
- node_stop_start_scenario
-     - stop_start_kubelet_scenario
-     - node_crash_scenario
node_name: # node on which scenario has to be injected; can set multiple names separated by comma
label_selector: node-role.kubernetes.io/worker # when node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection
instance_count: 1 # Number of nodes to perform action/select that match the label selector
runs: 1 # number of times to inject each scenario under actions (will perform on same node each time)
-   timeout: 120 # duration to wait for completion of node scenario injection
-   cloud_type: aws # cloud type on which Kubernetes/OpenShift runs
+   timeout: 360 # duration to wait for completion of node scenario injection
+   duration: 120 # duration to stop the node before running the start action
+   cloud_type: aws # cloud type on which Kubernetes/OpenShift runs
- actions:
- node_reboot_scenario
node_name:

View File

@@ -0,0 +1,16 @@
node_scenarios:
- actions:
- node_reboot_scenario
node_name:
label_selector: node-role.kubernetes.io/infra
instance_count: 1
timeout: 120
cloud_type: azure
- actions:
- node_stop_start_scenario
node_name:
label_selector: node-role.kubernetes.io/infra
instance_count: 1
timeout: 360
duration: 120
cloud_type: azure

View File

@@ -0,0 +1,19 @@
node_scenarios:
- actions: # Node chaos scenarios to be injected.
- node_stop_start_scenario
node_name: # Node on which scenario has to be injected.
label_selector: node-role.kubernetes.io/worker # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection.
instance_count: 1 # Number of nodes to perform action/select that match the label selector.
runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time).
timeout: 360 # Duration to wait for completion of node scenario injection.
duration: 120 # Duration to stop the node before running the start action
cloud_type: bm # Cloud type on which Kubernetes/OpenShift runs.
bmc_user: defaultuser # For baremetal (bm) cloud type. The default IPMI username. Optional if specified for all machines.
bmc_password: defaultpass # For baremetal (bm) cloud type. The default IPMI password. Optional if specified for all machines.
bmc_info: # This section is here to specify baremetal per-machine info, so it is optional if there is no per-machine info.
node-1: # The node name for the baremetal machine
bmc_addr: mgmt-machine1.example.com # Optional. For baremetal nodes with the IPMI BMC address missing from 'oc get bmh'.
node-2:
bmc_addr: mgmt-machine2.example.com
bmc_user: user # The baremetal IPMI user. Overrides the default IPMI user specified above. Optional if the default is set.
bmc_password: pass # The baremetal IPMI password. Overrides the default IPMI password specified above. Optional if the default is set.
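
The BMC fields resolve in two layers: scenario-wide defaults (bmc_user/bmc_password) and optional per-machine overrides under bmc_info. A hypothetical helper showing the intended lookup order (not krkn's actual implementation):

def resolve_bmc(node_scenario: dict, node: str) -> tuple[str, str, str | None]:
    # per-node entries under bmc_info override the scenario-wide defaults
    info = (node_scenario.get("bmc_info") or {}).get(node, {})
    user = info.get("bmc_user", node_scenario.get("bmc_user"))
    password = info.get("bmc_password", node_scenario.get("bmc_password"))
    return user, password, info.get("bmc_addr")  # bmc_addr only needed if missing from 'oc get bmh'
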

View File

@@ -0,0 +1,16 @@
node_scenarios:
- actions:
- node_reboot_scenario
node_name:
label_selector: node-role.kubernetes.io/worker
instance_count: 1
timeout: 120
cloud_type: gcp
- actions:
- node_stop_start_scenario
node_name:
label_selector: node-role.kubernetes.io/worker
instance_count: 1
timeout: 360
duration: 120
cloud_type: gcp

View File

@@ -5,5 +5,6 @@
label_selector: "node-role.kubernetes.io/worker" # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection
runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time)
instance_count: 1 # Number of nodes to perform action/select that match the label selector
-   timeout: 30 # Duration to wait for completion of node scenario injection
-   skip_openshift_checks: False # Set to True if you don't want to wait for the status of the nodes to change on OpenShift before passing the scenario
+   timeout: 360 # Duration to wait for completion of node scenario injection
+   duration: 120 # Duration to stop the node before running the start action
+   skip_openshift_checks: False # Set to True if you don't want to wait for the status of the nodes to change on OpenShift before passing the scenario

View File

@@ -39,7 +39,7 @@ class NetworkScenariosTest(unittest.TestCase):
def test_network_chaos(self):
output_id, output_data = ingress_shaping.network_chaos(
-           ingress_shaping.NetworkScenarioConfig(
+           params=ingress_shaping.NetworkScenarioConfig(
label_selector="node-role.kubernetes.io/control-plane",
instance_count=1,
network_params={
@@ -47,7 +47,8 @@ class NetworkScenariosTest(unittest.TestCase):
"loss": "0.02",
"bandwidth": "100mbit"
}
-           )
+           ),
+           run_id="network-shaping-test"
)
if output_id == "error":
logging.error(output_data.error)

View File

@@ -10,7 +10,7 @@ class RunPythonPluginTest(unittest.TestCase):
tmp_file = tempfile.NamedTemporaryFile()
tmp_file.write(bytes("print('Hello world!')", 'utf-8'))
tmp_file.flush()
-       output_id, output_data = run_python_file(RunPythonFileInput(tmp_file.name))
+       output_id, output_data = run_python_file(params=RunPythonFileInput(tmp_file.name), run_id="test-python-plugin-success")
self.assertEqual("success", output_id)
self.assertEqual("Hello world!\n", output_data.stdout)
@@ -18,7 +18,7 @@ class RunPythonPluginTest(unittest.TestCase):
tmp_file = tempfile.NamedTemporaryFile()
tmp_file.write(bytes("import sys\nprint('Hello world!')\nsys.exit(42)\n", 'utf-8'))
tmp_file.flush()
-       output_id, output_data = run_python_file(RunPythonFileInput(tmp_file.name))
+       output_id, output_data = run_python_file(params=RunPythonFileInput(tmp_file.name), run_id="test-python-plugin-error")
self.assertEqual("error", output_id)
self.assertEqual(42, output_data.exit_code)
self.assertEqual("Hello world!\n", output_data.stdout)