Compare commits

...

95 Commits

Author SHA1 Message Date
Tullio Sebastiani
e02c6d1287 SYN flood scenario (#668)
* scenario config file

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* syn flood plugin

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* run_krkn.py updated

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* requirements.txt + documentation + config.yaml

* set node selector defaults to worker

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-29 15:31:37 -04:00
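
For context, the scenario config added here is a small YAML file. The sketch below only illustrates the shape such a config takes; the key names are hypothetical, not the schema shipped in #668.

```yaml
# Hypothetical SYN flood scenario sketch; key names are illustrative only.
syn-flood:
  duration: 120            # seconds the SYN flood is sustained
  target-service: nginx    # Service whose endpoint is flooded
  target-port: 80          # port the packets are aimed at
  namespace: default       # namespace of the target Service
  node-selector: "node-role.kubernetes.io/worker="  # defaults to worker nodes, per the last commit bullet
```
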
jtydlack
04425a8d8a Add alerts to alert.yaml
Signed-off-by: jtydlack <139967002+jtydlack@users.noreply.github.com>
2024-07-25 10:51:15 -04:00
Naga Ravi Chaitanya Elluri
f3933f0e62 fix: requirements.txt to reduce vulnerabilities (#673)
The following vulnerabilities are fixed by pinning transitive dependencies:
- https://snyk.io/vuln/SNYK-PYTHON-SETUPTOOLS-7448482

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
2024-07-22 10:12:14 -04:00
Naga Ravi Chaitanya Elluri
56ff0a8c72 Deprecate setting release version in the container source file
This commit also deprecates building the container image for ppc64le as it
is not actively maintained. We will add support if users request it
in the future.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-07-18 12:56:08 -04:00
Tullio Sebastiani
9378cd74cd krkn-lib update v2.1.6 to fix pod monitoring time calculations (#674)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-16 18:04:24 +02:00
Paige Patton
4d3491da0f adding action token passing (#671)
rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-07-15 12:50:20 -04:00
Naga Ravi Chaitanya Elluri
d6ce66160b Remove podman-compose dependency
We are not using it in the krkn code base and removing it fixes one
of the license issues reported by FOSSA. This commit also removes
setting up dependencies using docker/podman compose as it is not actively
maintained.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-07-10 17:25:33 -04:00
Paige Rubendall
ef1a55438b taking out need for az cli to be installed
rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-07-05 15:18:06 -04:00
Tullio Sebastiani
d8f54b83a2 fixed image push issue
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-05 10:32:01 -04:00
Tullio Sebastiani
4870c86515 moves the krkn-hub build from push on main to tag (#660)
* moves the krkn-hub build from push on main to tag + final image enhancement

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fixed syntax

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

typo

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

typo

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* quotes

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-05 16:09:34 +02:00
Naga Ravi Chaitanya Elluri
6ae17cf678 Update dockerfile to install azure-cli using dnf
Avoids architecture issues such as "bash: /usr/bin/az: cannot execute: required file not found"

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-07-03 18:35:45 -04:00
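
As a rough sketch, the dnf-based install follows Microsoft's documented RPM repo flow; the exact lines in the krkn Dockerfile may differ.

```sh
# Sketch: install azure-cli from the Microsoft RPM repo so the correct
# architecture is resolved at build time instead of copying a prebuilt binary.
rpm --import https://packages.microsoft.com/keys/microsoft.asc
dnf install -y https://packages.microsoft.com/config/rhel/9.0/packages-microsoft-prod.rpm
dnf install -y azure-cli
```
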
Tullio Sebastiani
ce9f8aa050 Dockerfile update v1.6.2 (#659)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-03 16:34:37 +02:00
Paige Patton
05148317c1 taking out one gcloud call (#657)
rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-07-03 16:14:19 +02:00
Tullio Sebastiani
5f836f294b Kill pod arca plugin update adaptation (#656)
* new kill-pod interface adaptation

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* unit test fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* requirements update

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* fixed duplicate requirement

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* added conditional dockerfile build

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

removed useless print

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-07-03 15:50:43 +02:00
snyk-bot
cfa1bb09a0 fix: requirements.txt to reduce vulnerabilities
The following vulnerabilities are fixed by pinning transitive dependencies:
- https://snyk.io/vuln/SNYK-PYTHON-REQUESTS-6928867
2024-06-24 10:23:37 -04:00
Naga Ravi Chaitanya Elluri
5ddfff5a85 Make krkn dir executable
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-06-20 14:32:20 -04:00
Tullio Sebastiani
7d18487228 Dockerfile update
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-06-12 14:36:38 -04:00
Naga Ravi Chaitanya Elluri
08de42c91a Bump arcaflow version to 0.17.2 (#648)
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-06-12 20:29:32 +02:00
dependabot[bot]
dc7d5bb01b Bump azure-identity from 1.15.0 to 1.16.1
Bumps [azure-identity](https://github.com/Azure/azure-sdk-for-python) from 1.15.0 to 1.16.1.
- [Release notes](https://github.com/Azure/azure-sdk-for-python/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-python/blob/main/doc/esrp_release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-python/compare/azure-identity_1.15.0...azure-identity_1.16.1)

---
updated-dependencies:
- dependency-name: azure-identity
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-12 09:17:14 -04:00
Tullio Sebastiani
ea3444d375 added dependencies removed from the hub
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

jsonschema

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-06-11 12:07:28 -04:00
Tullio Sebastiani
7b660a0878 Fixes system and oc vulnerabilities detected by trivy (#644)
* fixes system and oc vulnerabilities detected by trivy

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* updated base image to run as krkn user instead of root

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-06-10 14:26:03 -04:00
Tullio Sebastiani
5fe0655f22 libnghttp2 version update
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-06-06 08:21:08 -04:00
Tullio Sebastiani
5df343c183 dockerfile update
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-06-04 14:36:11 -04:00
Tullio Sebastiani
f364e9f283 Arcaflow upgrade to engine v0.17.1 (#639)
* krkn plugin refactoring to match new engine context path management

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* cpu-hog new syntax

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* memory-hog new syntax

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

removed s from duration

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* io-hog new syntax

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

cpu-hog input

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* path management refactoring agreed with arca team

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

refactoring

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-06-04 14:13:33 -04:00
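
For orientation, the hog scenarios read an input.yaml whose entries live under input_list (the funtests later in this diff patch .input_list[0].node_selector). A hedged sketch of the new syntax; keys other than input_list and node_selector are illustrative:

```yaml
# Hypothetical cpu-hog input sketch under the new engine syntax.
input_list:
  - node_selector: {}         # empty selector lets krkn pick a node
    duration: 30              # plain seconds, no "s" suffix (per "removed s from duration")
    cpu_count: 1              # stress-ng CPU workers (illustrative key)
    cpu_load_percentage: 80   # target load per worker (illustrative key)
```
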
Tullio Sebastiani
86a7427606 Dockerfile refactoring to build oc together with krkn
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

added oc in /usr/local/bin as well

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fixed dumb docker build copy

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-06-04 10:41:11 -04:00
Mudit Verma
31266fbc3e support for node limits 2024-05-31 11:22:30 -04:00
Tullio Sebastiani
57de3769e7 ubi 9 base image + quay.io vulnerability fixes
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-05-31 10:58:52 -04:00
Paige Rubendall
42fc8eea40 adding wait in pvc scenarios and service hijack
rh-pre-commit.version: 2.2.0
rh-pre-commit.check-secrets: ENABLED

Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-05-29 16:34:33 -04:00
dependabot[bot]
22d56e2cdc ---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-22 17:12:46 -04:00
Matt Leader
a259b68221 Updates for Arcaflow Plugin Stress-NG 0.6.0 (#625)
* change for cpu hog

Signed-off-by: Matthew F Leader <mleader@redhat.com>

* change for io hog

Signed-off-by: Matthew F Leader <mleader@redhat.com>

* change for memory hog

Signed-off-by: Matthew F Leader <mleader@redhat.com>

---------

Signed-off-by: Matthew F Leader <mleader@redhat.com>
2024-05-20 12:35:51 -04:00
Tullio Sebastiani
052f83e7d9 added reference to webservice source code in the documentation (#630) 2024-05-14 17:58:06 +02:00
Tullio Sebastiani
fb3bbe4e26 replaced log syntax to allow objects to be printed
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-05-14 11:13:44 -04:00
Naga Ravi Chaitanya Elluri
96ba9be4b8 Add instructions to copy the python package file to docker dir (#616)
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-05-13 12:36:37 -04:00
Naga Ravi Chaitanya Elluri
58d5d1d8dc Have a config in the chaos_recommender dir (#615)
This will make it easy for the users to find, configure and run it.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-05-13 12:33:41 -04:00
Tullio Sebastiani
3fe22a0d8f fixing badge commit failure when coverage doesn't change
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-05-13 12:30:59 -04:00
Tullio Sebastiani
21b89a32a7 fixing missing import for log_exception
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-05-13 11:58:13 -04:00
Tullio Sebastiani
dbe3ea9718 Dockerfiles update
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-05-13 10:56:58 -04:00
Tullio Sebastiani
a142f6e7a4 Service hijacking scenario (#617)
* WIP: service hijacking scenario

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* wip

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* error handling

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

adapted run_kraken.py

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* restored config.yaml

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* added funtest

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

test fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fixed test

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix test

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fixed funtest

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

funtest fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

minor nit

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

added explicit curl method

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

push

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

restored all funtests

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

added mime type test

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fixed pipeline

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

commented unit

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

utf-8

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

test restored

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix test pipeline

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* documentation

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* krkn-lib 2.1.3

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* added other funtests to main merge to collect coverage

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-05-13 10:04:06 +02:00
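
The scenario points an existing Service at a plugin-managed webserver that replays user-defined HTTP responses for the chaos duration. A hypothetical sketch of such a scenario file (key names are illustrative, not the shipped schema; nginx-service matches the CI template added in this series):

```yaml
# Hypothetical service-hijacking sketch; keys are illustrative.
service_name: nginx-service     # Service whose traffic is hijacked
service_namespace: default
chaos_duration: 60              # seconds before the original endpoints are restored
plan:
  - resource: /status
    steps:
      GET:                      # explicit HTTP method, per "added explicit curl method"
        - status: 500
          mime_type: application/json   # per "added mime type test"
          payload: '{"status": "broken"}'
```
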
Tullio Sebastiani
2610a7af67 added coverage badge and build badge to krkn
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

nit

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

permission

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

if main

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-05-10 09:57:10 -04:00
dependabot[bot]
f827f65132 Bump werkzeug from 2.3.8 to 3.0.3 in /utils/chaos_ai/docker (#619)
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.3.8 to 3.0.3.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/2.3.8...3.0.3)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-05-06 16:09:10 -04:00
dependabot[bot]
aa6cbbc11a Bump werkzeug from 3.0.1 to 3.0.3
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 3.0.1 to 3.0.3.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/3.0.1...3.0.3)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-06 16:04:27 -04:00
dependabot[bot]
e17354e54d Bump jinja2 from 3.1.3 to 3.1.4
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.3...3.1.4)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-05-06 15:44:52 -04:00
Tullio Sebastiani
2dfa5cb0cd fixes missing data in telemetry.json
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-05-06 14:16:09 -04:00
dependabot[bot]
0799008cd5 Bump flask from 2.1.0 to 2.2.5 in /utils/chaos_ai/docker (#611)
Bumps [flask](https://github.com/pallets/flask) from 2.1.0 to 2.2.5.
- [Release notes](https://github.com/pallets/flask/releases)
- [Changelog](https://github.com/pallets/flask/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/flask/compare/2.1.0...2.2.5)

---
updated-dependencies:
- dependency-name: flask
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-04-25 09:11:50 -04:00
Tullio Sebastiani
2327531e46 Dockerfiles update (#614)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-04-24 11:40:58 -04:00
dependabot[bot]
2c14c48a63 Bump werkzeug from 2.2.2 to 2.3.8 in /utils/chaos_ai/docker (#610)
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.2.2 to 2.3.8.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/2.2.2...2.3.8)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-23 15:26:51 +02:00
Tullio Sebastiani
ab98e416a6 Integration of the new pod recovery monitoring strategy implemented in krkn-lib (#609)
* pod monitoring integration in plugin scenario

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* pod monitoring integration in container scenario

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* removed wait-for-pod step from plugin scenario config files

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* introduced global pod recovery time

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

nit

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* introduced krkn_pod_recovery_time in plugin scenario and removed all the references to wait-for-pods

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* functional test fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* main branch functional test fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* increased recovery times

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-04-23 10:49:01 +02:00
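
After this change a pod scenario carries its recovery budget inline instead of a separate wait-for-pod step. A minimal sketch, assuming a kill-pods plugin config shape (only krkn_pod_recovery_time is taken from the commit; the other keys are illustrative):

```yaml
# Sketch: krkn_pod_recovery_time replaces the old wait-for-pods step.
- id: kill-pods
  config:
    namespace_pattern: ^nginx$
    label_selector: app=nginx
    krkn_pod_recovery_time: 120   # seconds to wait for the pods to recover
```
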
Sandeep Hans
19ad2d1a3d initial version of Chaos AI (#606)
* init push

Signed-off-by: Sandeep Hans <shans001@in.ibm.com>

* remove litmus + updated readme

Signed-off-by: Sandeep Hans <shans001@in.ibm.com>

* remove redundant files

Signed-off-by: Sandeep Hans <shans001@in.ibm.com>

* removed generated file+unused reference

---------

Signed-off-by: Sandeep Hans <shans001@in.ibm.com>
Co-authored-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-04-16 10:41:31 -04:00
jtydlcak
804d7cbf58 Accept list of namespaces in chaos recommender
Signed-off-by: jtydlack <139967002+jtydlack@users.noreply.github.com>
2024-04-09 23:32:17 -04:00
Paige Rubendall
54af2fc6ff adding v1.5.12 tag
Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-03-29 18:45:52 -04:00
Paige Rubendall
b79e526cfd adding app outage not creating file (#605)
Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-03-29 14:35:14 -04:00
Naga Ravi Chaitanya Elluri
a5efd7d06c Bump release version to v1.5.11
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-03-22 15:24:04 -04:00
yogananth
a1b81bd382 Fix: Resolve ingress network chaos plugin issue
Added network_chaos to the plugin step, based the job wait time on the test duration, and set the default wait_time to 30s

Signed-off-by: yogananth subramanian <ysubrama@redhat.com>
2024-03-22 14:48:17 -04:00
Naga Ravi Chaitanya Elluri
782440c8c4 Copy oc and kubectl clients to additional paths
This will make sure oc and kubectl clients are accessible for users
with both /usr/bin and /usr/local/bin paths set on the host.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-03-21 11:29:50 -04:00
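
In Dockerfile terms this is simply copying each client twice; a minimal sketch, assuming the binaries were already fetched into a build stage:

```dockerfile
# Sketch: make oc/kubectl resolvable whichever of the two paths is on $PATH.
COPY --from=builder /tmp/oc /usr/bin/oc
COPY --from=builder /tmp/oc /usr/local/bin/oc
COPY --from=builder /tmp/kubectl /usr/bin/kubectl
COPY --from=builder /tmp/kubectl /usr/local/bin/kubectl
```
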
Naga Ravi Chaitanya Elluri
7e2755cbb7 Remove container status badge
Quay is no longer exposing it correctly: https://quay.io/repository/krkn-chaos/krkn/status

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-03-19 15:33:25 -04:00
Naga Ravi Chaitanya Elluri
2babb53d6e Bump cryptography version
This is needed to fix the security vulnerability: https://nvd.nist.gov/vuln/detail/CVE-2024-26130.
Note: Reported by FOSSA.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-03-19 14:44:47 -04:00
Tullio Sebastiani
85f76e9193 do not consider exit code 2 as an error in funtests
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-03-17 23:07:46 -04:00
Liangquan Li
8bf21392f1 fix doc's nit
Signed-off-by: Liangquan Li <liangli@redhat.com>
2024-03-13 15:21:57 -04:00
Tullio Sebastiani
606fb60811 changed exit codes on post chaos alerts and post_scenario failure (#592)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-03-07 16:31:55 +01:00
Tullio Sebastiani
fac7c3c6fb lowered arcaflow log level to error (#591)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-03-07 15:32:53 +01:00
Paige Rubendall
8dd9b30030 updating tag (#589)
Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-03-06 13:11:44 -05:00
Naga Ravi Chaitanya Elluri
2d99f17aaf fix: requirements.txt to reduce vulnerabilities (#587)
The following vulnerabilities are fixed by pinning transitive dependencies:
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-3172287
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-3314966
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-3315324
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-3315328
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-3315331
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-3315452
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-3315972
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-3315975
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-3316038
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-3316211
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-5663682
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-5777683
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-5813745
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-5813746
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-5813750
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-5914629
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-6036192
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-6050294
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-6092044
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-6126975
- https://snyk.io/vuln/SNYK-PYTHON-CRYPTOGRAPHY-6210214
- https://snyk.io/vuln/SNYK-PYTHON-SETUPTOOLS-3180412
- https://snyk.io/vuln/SNYK-PYTHON-WHEEL-3180413

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
2024-03-06 12:54:30 -05:00
Tullio Sebastiani
50742a793c updated krkn-lib to 2.1.0 (#588)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-03-06 11:30:01 -05:00
Naga Ravi Chaitanya Elluri
ba6a844544 Add /usr/local/bin to the path for krkn images
This is needed to ensure oc and kubectl binaries under /usr/local/bin
are accessible.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-03-04 16:03:40 -05:00
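
A minimal sketch of what that amounts to in the image definition (the actual Dockerfile line may differ):

```dockerfile
# Sketch: ensure /usr/local/bin is searched for oc and kubectl inside the image.
ENV PATH="/usr/local/bin:${PATH}"
```
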
Tullio Sebastiani
7e7a917dba dockerfiles update (#585)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-03-04 15:59:53 +01:00
Tullio Sebastiani
b9c0bb39c7 checking post run alerts properties presence (#584)
added metric check

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-03-01 18:30:54 +01:00
Tullio Sebastiani
706a886151 checking alert properties presence (#583)
typo fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-03-01 17:58:21 +01:00
Tullio Sebastiani
a1cf9e2c00 fixed typo on funtests (#582)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-03-01 17:09:19 +01:00
Tullio Sebastiani
0f5dfcb823 fixed the telemetry funtest according to the new telemetry API
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-03-01 09:48:56 -05:00
Tullio Sebastiani
1e1015e6e7 added new WS configuration to funtests
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-02-29 11:35:00 -05:00
Tullio Sebastiani
c71ce31779 integrated new telemetry library for WS 2.0
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

updated krkn-lib version

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-02-28 22:58:54 -05:00
Tullio Sebastiani
1298f220a6 Critical alerts collection and upload (#577)
* added prometheus client method for critical alerts

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* adapted run_kraken to the new plugin method for critical_alerts collection + telemetry upload

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* requirements.txt pointing temporarily to git

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* fixed severity level

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* added functional tests

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* exit on post chaos critical alerts

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

log moved

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* removed noisy log

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fixed log

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* updated requirements.txt to krkn-lib 1.4.13

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* krkn lib

* added check on variable that makes kraken return 1 when post-chaos critical alerts are > 0

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-02-28 09:48:29 -05:00
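
The run-level contract described above is simple: collect critical alerts after the chaos run and flip the exit code if any fired. A hedged Python sketch (function and names are illustrative, not the actual run_kraken.py code):

```python
import logging
import sys
from typing import List


def gate_on_critical_alerts(post_critical_alerts: List[str]) -> None:
    """Exit with code 1 if any critical alerts fired after the chaos run."""
    if len(post_critical_alerts) > 0:
        logging.error("critical alerts firing post chaos: %s", post_critical_alerts)
        sys.exit(1)
```
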
jtydlcak
24059fb731 Add json output file option for recommender (#511)
Terminal output changed to use a JSON structure.

The JSON output file names are in the format
recommender_namespace_YYYY-MM-DD_HH-MM-SS.

The path to the JSON file can be specified. The default path is
kraken/utils/chaos_recommender/recommender_output.

Signed-off-by: jtydlcak <139967002+jtydlack@users.noreply.github.com>
2024-02-27 11:09:00 -05:00
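
The stated filename convention maps directly onto strftime; a minimal sketch (function name and the .json extension are assumptions):

```python
import os
from datetime import datetime


def recommender_output_path(
    namespace: str,
    base_dir: str = "kraken/utils/chaos_recommender/recommender_output",
) -> str:
    # recommender_<namespace>_YYYY-MM-DD_HH-MM-SS, per the commit message
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return os.path.join(base_dir, f"recommender_{namespace}_{stamp}.json")
```
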
Naga Ravi Chaitanya Elluri
ab951adb78 Expose thresholds config options (#574)
This commit allows users to edit the thresholds in the chaos-recommender
config to be able to identify outliers based on their use case.

Fixes https://github.com/krkn-chaos/krkn/issues/509
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-02-26 09:43:34 -05:00
Paige Rubendall
a9a7fb7e51 updating release version in dockerfiles (#578)
Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-02-21 10:17:02 -05:00
Naga Ravi Chaitanya Elluri
5a8d5b0fe1 Allow critical alerts check when enable_alerts is disabled
This covers the use case where a user wants to just check for critical alerts
post chaos without having to enable the alerts evaluation feature, which
evaluates prom queries specified in an alerts file.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-02-19 23:15:47 -05:00
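
In config terms the two switches become independent; a hypothetical sketch of the relevant block (key names are a guess at the config shape, not confirmed by this log):

```yaml
performance_monitoring:
  enable_alerts: False          # skip evaluating the prom queries in the alerts file
  check_critical_alerts: True   # still fail the run if critical alerts fire post chaos
```
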
Paige Rubendall
c440dc4b51 Taking out start and end time for critical alerts (#572)
* taking out start and end time

Signed-off-by: Paige Rubendall <prubenda@redhat.com>

* adding only break when alert fires

Signed-off-by: Paige Rubendall <prubenda@redhat.com>

* fail at end if alert had fired

Signed-off-by: Paige Rubendall <prubenda@redhat.com>

* adding new krkn-lib function with no range

Signed-off-by: Paige Rubendall <prubenda@redhat.com>

* updating requirements to new krkn-lib

Signed-off-by: Paige Rubendall <prubenda@redhat.com>

---------

Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-02-19 09:28:13 -05:00
Paige Rubendall
b174c51ee0 adding check if connection was properly set
Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-02-15 17:28:20 -05:00
Paige Rubendall
fec0434ce1 adding upload to elastic search
Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-02-13 12:01:40 -05:00
Tullio Sebastiani
1067d5ec8d changed telemetry endpoint for funtests (#571)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-02-13 17:06:20 +01:00
Tullio Sebastiani
85ea1ef7e1 Dockerfiles update (#570)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-02-09 17:20:06 +01:00
Tullio Sebastiani
2e38b8b033 Kubernetes prometheus telemetry + functional tests (#566)
added comment on the node selector input.yaml

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-02-09 16:38:12 +01:00
Tullio Sebastiani
c7ea366756 frozen package versions (#569)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-02-09 16:10:25 +01:00
Paige Rubendall
67d4ee9fa2 updating comment to match query (#568)
Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-02-08 22:09:37 -05:00
Paige Rubendall
fa59834bae updating release version (#565)
Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-01-25 11:12:00 -05:00
Paige Rubendall
f154bcb692 adding krkn report location
Signed-off-by: Paige Rubendall <prubenda@redhat.com>
2024-01-25 10:45:01 -05:00
Naga Ravi Chaitanya Elluri
60ece4b1b8 Use 0.38.0 wheel version to fix security vulnerability
Reported by https://snyk.io/

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-01-25 09:51:19 -05:00
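
The fix amounts to a one-line constraint in requirements.txt; whether it is an exact pin or a floor may differ from this sketch:

```text
wheel==0.38.0  # version the commit moves to, per the message above
```
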
Naga Ravi Chaitanya Elluri
d660542a40 Add CNCF trademark guidelines and update community members (#560)
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-01-24 14:13:53 -05:00
Naga Ravi Chaitanya Elluri
2e651798fa Update redhat-chaos references with krkn-chaos
The tools are now hosted under https://github.com/krkn-chaos

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-01-24 13:40:39 -05:00
Tullio Sebastiani
f801dfce54 functional tests pointing to real scenario config files
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

typo

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

app_outage fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

typo

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

typo

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-18 12:54:39 -05:00
Tullio Sebastiani
8b95458444 Dockerfile v1.5.5 (#558)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
Co-authored-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-01-17 17:06:51 +01:00
Naga Ravi Chaitanya Elluri
ce1ae78f1f Update new references in the docs
This commit also updates the support matrix docs for the time scenarios.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-01-17 10:47:49 -05:00
Tullio Sebastiani
967753489b arcaflow hog scenarios + app outage functional tests
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-17 10:40:33 -05:00
Tullio Sebastiani
aa16cb1bf2 fixed io-hog scenario (#555)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-17 16:05:35 +01:00
Tullio Sebastiani
ac47e215d8 Functional Tests porting to kubernetes (#553)
* Functional Tests porting to kubernetes

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-17 09:48:43 +01:00
145 changed files with 3441 additions and 1196 deletions


@@ -1,51 +0,0 @@
name: Build Krkn
on:
pull_request:
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Create multi-node KinD cluster
uses: redhat-chaos/actions/kind@main
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
architecture: 'x64'
- name: Install environment
run: |
sudo apt-get install build-essential python3-dev
pip install --upgrade pip
pip install -r requirements.txt
- name: Run unit tests
run: python -m coverage run -a -m unittest discover -s tests -v
- name: Run CI
run: |
./CI/run.sh
cat ./CI/results.markdown >> $GITHUB_STEP_SUMMARY
echo >> $GITHUB_STEP_SUMMARY
- name: Upload CI logs
uses: actions/upload-artifact@v3
with:
name: ci-logs
path: CI/out
if-no-files-found: error
- name: Collect coverage report
run: |
python -m coverage html
- name: Publish coverage report to job summary
run: |
pip install html2text
html2text --ignore-images --ignore-links -b 0 htmlcov/index.html >> $GITHUB_STEP_SUMMARY
- name: Upload coverage data
uses: actions/upload-artifact@v3
with:
name: coverage
path: htmlcov
if-no-files-found: error
- name: Check CI results
run: grep Fail CI/results.markdown && false || true


@@ -1,8 +1,7 @@
name: Docker Image CI
on:
push:
branches:
- main
tags: ['v[0-9].[0-9]+.[0-9]+']
pull_request:
jobs:
@@ -12,30 +11,43 @@ jobs:
- name: Check out code
uses: actions/checkout@v3
- name: Build the Docker images
if: startsWith(github.ref, 'refs/tags')
run: |
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/ --build-arg TAG=${GITHUB_REF#refs/tags/}
docker tag quay.io/krkn-chaos/krkn quay.io/redhat-chaos/krkn
docker tag quay.io/krkn-chaos/krkn quay.io/krkn-chaos/krkn:${GITHUB_REF#refs/tags/}
docker tag quay.io/krkn-chaos/krkn quay.io/redhat-chaos/krkn:${GITHUB_REF#refs/tags/}
- name: Test Build the Docker images
if: ${{ github.event_name == 'pull_request' }}
run: |
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/ --build-arg PR_NUMBER=${{ github.event.pull_request.number }}
- name: Login in quay
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
if: startsWith(github.ref, 'refs/tags')
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USERNAME }}
QUAY_TOKEN: ${{ secrets.QUAY_PASSWORD }}
- name: Push the KrknChaos Docker images
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker push quay.io/krkn-chaos/krkn
if: startsWith(github.ref, 'refs/tags')
run: |
docker push quay.io/krkn-chaos/krkn
docker push quay.io/krkn-chaos/krkn:${GITHUB_REF#refs/tags/}
- name: Login in to redhat-chaos quay
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
if: startsWith(github.ref, 'refs/tags/v')
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}
- name: Push the RedHat Chaos Docker images
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker push quay.io/redhat-chaos/krkn
if: startsWith(github.ref, 'refs/tags')
run: |
docker push quay.io/redhat-chaos/krkn
docker push quay.io/redhat-chaos/krkn:${GITHUB_REF#refs/tags/}
- name: Rebuild krkn-hub
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
if: startsWith(github.ref, 'refs/tags')
uses: redhat-chaos/actions/krkn-hub@main
with:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}
QUAY_USER: ${{ secrets.QUAY_USERNAME }}
QUAY_TOKEN: ${{ secrets.QUAY_PASSWORD }}
AUTOPUSH: ${{ secrets.AUTOPUSH }}


@@ -1,129 +0,0 @@
on: issue_comment
jobs:
check_user:
# This job only runs for pull request comments
name: Check User Authorization
env:
USERS: ${{vars.USERS}}
if: contains(github.event.comment.body, '/funtest') && contains(github.event.comment.html_url, '/pull/')
runs-on: ubuntu-latest
steps:
- name: Check User
run: |
for name in `echo $USERS`
do
name="${name//$'\r'/}"
name="${name//$'\n'/}"
if [ $name == "${{github.event.sender.login}}" ]
then
echo "user ${{github.event.sender.login}} authorized, action started..."
exit 0
fi
done
echo "user ${{github.event.sender.login}} is not allowed to run functional tests Action"
exit 1
pr_commented:
# This job only runs for pull request comments containing /functional
name: Functional Tests
if: contains(github.event.comment.body, '/funtest') && contains(github.event.comment.html_url, '/pull/')
runs-on: ubuntu-latest
needs:
- check_user
steps:
- name: Check out Kraken
uses: actions/checkout@v3
- name: Checkout Pull Request
run: gh pr checkout ${{ github.event.issue.number }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Install OC CLI
uses: redhat-actions/oc-installer@v1
with:
oc_version: latest
- name: Install python 3.9
uses: actions/setup-python@v4
with:
python-version: '3.9'
- name: Setup kraken dependencies
run: pip install -r requirements.txt
- name: Create Workdir & export the path
run: |
mkdir workdir
echo "WORKDIR_PATH=`pwd`/workdir" >> $GITHUB_ENV
- name: Generate run id
run: |
echo "RUN_ID=`date +%s`" > $GITHUB_ENV
echo "Run Id: ${RUN_ID}"
- name: Write Pull Secret
env:
PULLSECRET_BASE64: ${{ secrets.PS_64 }}
run: |
echo "$PULLSECRET_BASE64" | base64 --decode > pullsecret.txt
- name: Write Boot Private Key
env:
BOOT_KEY: ${{ secrets.CRC_KEY_FILE }}
run: |
echo -n "$BOOT_KEY" > key.txt
- name: Teardown CRC (Post Action)
uses: webiny/action-post-run@3.0.0
id: post-run-command
with:
run: podman run --rm -v "${{ github.workspace }}:/workspace:z" -e AWS_ACCESS_KEY_ID="${{ secrets.AWS_ACCESS_KEY_ID }}" -e AWS_SECRET_ACCESS_KEY="${{ secrets.AWS_SECRET_ACCESS_KEY }}" -e AWS_DEFAULT_REGION=us-west-2 quay.io/crcont/crc-cloud:v0.0.2 destroy --project-name "chaos-funtest-${{ env.RUN_ID }}" --backed-url "s3://krkn-crc-state/${{ env.RUN_ID }}" --provider "aws"
- name: Create cluster
run: |
podman run --name crc-cloud-create --rm \
-v ${PWD}:/workspace:z \
-e AWS_ACCESS_KEY_ID="${{ secrets.AWS_ACCESS_KEY_ID }}" \
-e AWS_SECRET_ACCESS_KEY="${{ secrets.AWS_SECRET_ACCESS_KEY }}" \
-e AWS_DEFAULT_REGION="us-west-2" \
quay.io/crcont/crc-cloud:v0.0.2 \
create aws \
--project-name "chaos-funtest-${RUN_ID}" \
--backed-url "s3://krkn-crc-state/${RUN_ID}" \
--output "/workspace" \
--aws-ami-id "ami-00f5eaf98cf42ef9f" \
--pullsecret-filepath /workspace/pullsecret.txt \
--key-filepath /workspace/key.txt
- name: Setup kubeconfig
continue-on-error: true
run: |
ssh -o StrictHostKeyChecking=no -i id_rsa core@$(cat host) "cat /opt/kubeconfig" > kubeconfig
sed -i "s/https:\/\/api.crc.testing:6443/https:\/\/`cat host`.nip.io:6443/g" kubeconfig
echo "KUBECONFIG=${PWD}/kubeconfig" > $GITHUB_ENV
- name: Example deployment, GitHub Action env init
env:
NAMESPACE: test-namespace
DEPLOYMENT_NAME: test-nginx
run: ./CI/CRC/init_github_action.sh
- name: Setup test suite
run: |
yq -i '.kraken.port="8081"' CI/config/common_test_config.yaml
yq -i '.kraken.signal_address="0.0.0.0"' CI/config/common_test_config.yaml
yq -i '.kraken.kubeconfig_path="'${KUBECONFIG}'"' CI/config/common_test_config.yaml
echo "test_app_outages_gh" > ./CI/tests/my_tests
echo "test_container" >> ./CI/tests/my_tests
echo "test_namespace" >> ./CI/tests/my_tests
echo "test_net_chaos" >> ./CI/tests/my_tests
echo "test_time" >> ./CI/tests/my_tests
- name: Print affected config files
run: |
echo -e "## CI/config/common_test_config.yaml\n\n"
cat CI/config/common_test_config.yaml
- name: Running test suite
run: |
./CI/run.sh
- name: Print test output
run: cat CI/out/*
- name: Create coverage report
run: |
echo "# Test results" > $GITHUB_STEP_SUMMARY
cat CI/results.markdown >> $GITHUB_STEP_SUMMARY
echo "# Test coverage" >> $GITHUB_STEP_SUMMARY
python -m coverage report --format=markdown >> $GITHUB_STEP_SUMMARY

.github/workflows/tests.yml (new file)

@@ -0,0 +1,198 @@
name: Functional & Unit Tests
on:
pull_request:
push:
branches:
- main
jobs:
tests:
# Common steps
name: Functional & Unit Tests
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Create multi-node KinD cluster
uses: redhat-chaos/actions/kind@main
- name: Install Helm & add repos
run: |
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add stable https://charts.helm.sh/stable
helm repo update
- name: Deploy prometheus & Port Forwarding
run: |
kubectl create namespace prometheus-k8s
helm install \
--wait --timeout 360s \
kind-prometheus \
prometheus-community/kube-prometheus-stack \
--namespace prometheus-k8s \
--set prometheus.service.nodePort=30000 \
--set prometheus.service.type=NodePort \
--set grafana.service.nodePort=31000 \
--set grafana.service.type=NodePort \
--set alertmanager.service.nodePort=32000 \
--set alertmanager.service.type=NodePort \
--set prometheus-node-exporter.service.nodePort=32001 \
--set prometheus-node-exporter.service.type=NodePort
SELECTOR=`kubectl -n prometheus-k8s get service kind-prometheus-kube-prome-prometheus -o wide --no-headers=true | awk '{ print $7 }'`
POD_NAME=`kubectl -n prometheus-k8s get pods --selector="$SELECTOR" --no-headers=true | awk '{ print $1 }'`
kubectl -n prometheus-k8s port-forward $POD_NAME 9090:9090 &
sleep 5
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
architecture: 'x64'
- name: Install environment
run: |
sudo apt-get install build-essential python3-dev
pip install --upgrade pip
pip install -r requirements.txt
- name: Deploy test workloads
run: |
kubectl apply -f CI/templates/outage_pod.yaml
kubectl wait --for=condition=ready pod -l scenario=outage --timeout=300s
kubectl apply -f CI/templates/container_scenario_pod.yaml
kubectl wait --for=condition=ready pod -l scenario=container --timeout=300s
kubectl create namespace namespace-scenario
kubectl apply -f CI/templates/time_pod.yaml
kubectl wait --for=condition=ready pod -l scenario=time-skew --timeout=300s
kubectl apply -f CI/templates/service_hijacking.yaml
kubectl wait --for=condition=ready pod -l "app.kubernetes.io/name=proxy" --timeout=300s
- name: Get Kind nodes
run: |
kubectl get nodes --show-labels=true
# Pull request only steps
- name: Run unit tests
if: github.event_name == 'pull_request'
run: python -m coverage run -a -m unittest discover -s tests -v
- name: Setup Pull Request Functional Tests
if: |
github.event_name == 'pull_request'
run: |
yq -i '.kraken.port="8081"' CI/config/common_test_config.yaml
yq -i '.kraken.signal_address="0.0.0.0"' CI/config/common_test_config.yaml
yq -i '.kraken.performance_monitoring="localhost:9090"' CI/config/common_test_config.yaml
echo "test_service_hijacking" > ./CI/tests/functional_tests
echo "test_app_outages" >> ./CI/tests/functional_tests
echo "test_container" >> ./CI/tests/functional_tests
echo "test_namespace" >> ./CI/tests/functional_tests
echo "test_net_chaos" >> ./CI/tests/functional_tests
echo "test_time" >> ./CI/tests/functional_tests
echo "test_arca_cpu_hog" >> ./CI/tests/functional_tests
echo "test_arca_memory_hog" >> ./CI/tests/functional_tests
echo "test_arca_io_hog" >> ./CI/tests/functional_tests
# Push on main only steps + all other functional to collect coverage
# for the badge
- name: Configure AWS Credentials
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region : ${{ secrets.AWS_REGION }}
- name: Setup Post Merge Request Functional Tests
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: |
yq -i '.kraken.port="8081"' CI/config/common_test_config.yaml
yq -i '.kraken.signal_address="0.0.0.0"' CI/config/common_test_config.yaml
yq -i '.kraken.performance_monitoring="localhost:9090"' CI/config/common_test_config.yaml
yq -i '.telemetry.username="${{secrets.TELEMETRY_USERNAME}}"' CI/config/common_test_config.yaml
yq -i '.telemetry.password="${{secrets.TELEMETRY_PASSWORD}}"' CI/config/common_test_config.yaml
echo "test_telemetry" > ./CI/tests/functional_tests
echo "test_service_hijacking" >> ./CI/tests/functional_tests
echo "test_app_outages" >> ./CI/tests/functional_tests
echo "test_container" >> ./CI/tests/functional_tests
echo "test_namespace" >> ./CI/tests/functional_tests
echo "test_net_chaos" >> ./CI/tests/functional_tests
echo "test_time" >> ./CI/tests/functional_tests
echo "test_arca_cpu_hog" >> ./CI/tests/functional_tests
echo "test_arca_memory_hog" >> ./CI/tests/functional_tests
echo "test_arca_io_hog" >> ./CI/tests/functional_tests
# Final common steps
- name: Run Functional tests
env:
AWS_BUCKET: ${{ secrets.AWS_BUCKET }}
run: |
./CI/run.sh
cat ./CI/results.markdown >> $GITHUB_STEP_SUMMARY
echo >> $GITHUB_STEP_SUMMARY
- name: Upload CI logs
uses: actions/upload-artifact@v3
with:
name: ci-logs
path: CI/out
if-no-files-found: error
- name: Collect coverage report
run: |
python -m coverage html
python -m coverage json
- name: Publish coverage report to job summary
run: |
pip install html2text
html2text --ignore-images --ignore-links -b 0 htmlcov/index.html >> $GITHUB_STEP_SUMMARY
- name: Upload coverage data
uses: actions/upload-artifact@v3
with:
name: coverage
path: htmlcov
if-no-files-found: error
- name: Upload json coverage
uses: actions/upload-artifact@v3
with:
name: coverage.json
path: coverage.json
if-no-files-found: error
- name: Check CI results
run: grep Fail CI/results.markdown && false || true
badge:
permissions:
contents: write
name: Generate Coverage Badge
runs-on: ubuntu-latest
needs:
- tests
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
steps:
- name: Check out doc repo
uses: actions/checkout@master
with:
repository: krkn-chaos/krkn-lib-docs
path: krkn-lib-docs
ssh-key: ${{ secrets.KRKN_LIB_DOCS_PRIV_KEY }}
- name: Download json coverage
uses: actions/download-artifact@v3
with:
name: coverage.json
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.9
- name: Copy badge on GitHub Page Repo
env:
COLOR: yellow
run: |
# generate coverage badge on previously calculated total coverage
# and copy in the docs page
export TOTAL=$(python -c "import json;print(json.load(open('coverage.json'))['totals']['percent_covered_display'])")
[[ $TOTAL > 40 ]] && COLOR=green
echo "TOTAL: $TOTAL"
echo "COLOR: $COLOR"
curl "https://img.shields.io/badge/coverage-$TOTAL%25-$COLOR" > ./krkn-lib-docs/coverage_badge_krkn.svg
- name: Push updated Coverage Badge
run: |
cd krkn-lib-docs
git add .
git config user.name "krkn-chaos"
git config user.email "<>"
git commit -m "[KRKN] Coverage Badge ${GITHUB_REF##*/}" || echo "no changes to commit"
git push

.gitignore

@@ -16,6 +16,7 @@ __pycache__/*
*.out
kube-burner*
kube_burner*
recommender_*.json
# Project files
.ropeproject
@@ -61,7 +62,7 @@ inspect.local.*
!CI/config/common_test_config.yaml
CI/out/*
CI/ci_results
CI/scenarios/*node.yaml
CI/legacy/*node.yaml
CI/results.markdown
#env


@@ -1,44 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
name: $NAMESPACE
---
apiVersion: v1
kind: Service
metadata:
name: $DEPLOYMENT_NAME-service
namespace: $NAMESPACE
spec:
selector:
app: $DEPLOYMENT_NAME
ports:
- name: http
port: 80
targetPort: 8080
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: $NAMESPACE
name: $DEPLOYMENT_NAME-deployment
spec:
replicas: 3
selector:
matchLabels:
app: $DEPLOYMENT_NAME
template:
metadata:
labels:
app: $DEPLOYMENT_NAME
spec:
containers:
- name: $DEPLOYMENT_NAME
image: nginxinc/nginx-unprivileged:stable-alpine
ports:
- name: http
containerPort: 8080


@@ -1,58 +0,0 @@
#!/bin/bash
SCRIPT_PATH=./CI/CRC
DEPLOYMENT_PATH=$SCRIPT_PATH/deployment.yaml
[[ ! -f $DEPLOYMENT_PATH ]] && echo "[ERROR] please run $0 from GitHub action root directory" && exit 1
[[ -z $DEPLOYMENT_NAME ]] && echo "[ERROR] please set \$DEPLOYMENT_NAME environment variable" && exit 1
[[ -z $NAMESPACE ]] && echo "[ERROR] please set \$NAMESPACE environment variable" && exit 1
OPENSSL=`which openssl 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: openssl missing, please install it and try again" && exit 1
OC=`which oc 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: oc missing, please install it and try again" && exit 1
SED=`which sed 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: sed missing, please install it and try again" && exit 1
JQ=`which jq 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: jq missing, please install it and try again" && exit 1
ENVSUBST=`which envsubst 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: envsubst missing, please install it and try again" && exit 1
API_PORT="6443"
API_ADDRESS="https://api.`cat host`.nip.io:${API_PORT}"
FQN=$DEPLOYMENT_NAME.apps.$API_ADDRESS
echo "[INF] deploying example deployment: $DEPLOYMENT_NAME in namespace: $NAMESPACE"
$ENVSUBST < $DEPLOYMENT_PATH | $OC apply -f - > /dev/null 2>&1
echo "[INF] creating SSL self-signed certificates for route https://$FQN"
$OPENSSL genrsa -out servercakey.pem > /dev/null 2>&1
$OPENSSL req -new -x509 -key servercakey.pem -out serverca.crt -subj "/CN=$FQN/O=Red Hat Inc./C=US" > /dev/null 2>&1
$OPENSSL genrsa -out server.key > /dev/null 2>&1
$OPENSSL req -new -key server.key -out server_reqout.txt -subj "/CN=$FQN/O=Red Hat Inc./C=US" > /dev/null 2>&1
$OPENSSL x509 -req -in server_reqout.txt -days 3650 -sha256 -CAcreateserial -CA serverca.crt -CAkey servercakey.pem -out server.crt > /dev/null 2>&1
echo "[INF] creating deployment: $DEPLOYMENT_NAME public route: https://$FQN"
$OC create route --namespace $NAMESPACE edge --service=$DEPLOYMENT_NAME-service --cert=server.crt --key=server.key --ca-cert=serverca.crt --hostname="$FQN" > /dev/null 2>&1
echo "[INF] setting github action environment variables"
NODE_NAME="`$OC get nodes -o json | $JQ -r '.items[0].metadata.name'`"
COVERAGE_FILE="`pwd`/coverage.md"
echo "DEPLOYMENT_NAME=$DEPLOYMENT_NAME" >> $GITHUB_ENV
echo "DEPLOYMENT_FQN=$FQN" >> $GITHUB_ENV
echo "API_ADDRESS=$API_ADDRESS" >> $GITHUB_ENV
echo "API_PORT=$API_PORT" >> $GITHUB_ENV
echo "NODE_NAME=$NODE_NAME" >> $GITHUB_ENV
echo "NAMESPACE=$NAMESPACE" >> $GITHUB_ENV
echo "COVERAGE_FILE=$COVERAGE_FILE" >> $GITHUB_ENV
echo "[INF] deployment fully qualified name will be available in \${{ env.DEPLOYMENT_NAME }} with value $DEPLOYMENT_NAME"
echo "[INF] deployment name will be available in \${{ env.DEPLOYMENT_FQN }} with value $FQN"
echo "[INF] OCP API address will be available in \${{ env.API_ADDRESS }} with value $API_ADDRESS"
echo "[INF] OCP API port will be available in \${{ env.API_PORT }} with value $API_PORT"
echo "[INF] OCP node name will be available in \${{ env.NODE_NAME }} with value $NODE_NAME"
echo "[INF] coverage file will ve available in \${{ env.COVERAGE_FILE }} with value $COVERAGE_FILE"


@@ -1,7 +1,7 @@
## CI Tests
### First steps
Edit [my_tests](tests/my_tests) with tests you want to run
Edit [functional_tests](tests/functional_tests) with tests you want to run
### How to run
```./CI/run.sh```
@@ -11,7 +11,7 @@ This will run kraken using python, make sure python3 is set up and configured pr
### Adding a test case
1. Add in simple scenario yaml file to execute under [../CI/scenarios/](scenarios)
1. Add in simple scenario yaml file to execute under [../CI/scenarios/](legacy)
2. Copy [test_application_outages.sh](tests/test_app_outages.sh) for example on how to get started
@@ -27,7 +27,7 @@ This will run kraken using python, make sure python3 is set up and configured pr
e. 15: Make sure name of config in line 14 matches what you pass on this line
4. Add test name to [my_tests](../CI/tests/my_tests) file
4. Add test name to [functional_tests](../CI/tests/functional_tests) file
a. This will be the name of the file without ".sh"


@@ -1,5 +1,5 @@
kraken:
distribution: openshift # Distribution can be kubernetes or openshift.
distribution: kubernetes # Distribution can be kubernetes or openshift.
kubeconfig_path: ~/.kube/config # Path to kubeconfig.
exit_on_failure: False # Exit when a post action scenario fails.
litmus_version: v1.13.6 # Litmus version to install.
@@ -29,9 +29,12 @@ tunings:
daemon_mode: False # Iterations are set to infinity which means that the kraken will cause chaos forever.
telemetry:
enabled: False # enable/disables the telemetry collection feature
api_url: https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production #telemetry service endpoint
username: username # telemetry service username
password: password # telemetry service password
api_url: https://yvnn4rfoi7.execute-api.us-west-2.amazonaws.com/test #telemetry service endpoint
username: $TELEMETRY_USERNAME # telemetry service username
password: $TELEMETRY_PASSWORD # telemetry service password
prometheus_namespace: 'prometheus-k8s' # prometheus namespace
prometheus_pod_name: 'prometheus-kind-prometheus-kube-prome-prometheus-0' # prometheus pod_name
prometheus_container_name: 'prometheus'
prometheus_backup: True # enables/disables prometheus data collection
full_prometheus_backup: False # if is set to False only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
@@ -39,3 +42,11 @@ telemetry:
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 10000 # the size of the prometheus data archive size in KB. The lower the size of archive is
logs_backup: True
logs_filter_patterns:
- "(\\w{3}\\s\\d{1,2}\\s\\d{2}:\\d{2}:\\d{2}\\.\\d+).+" # Sep 9 11:20:36.123425532
- "kinit (\\d+/\\d+/\\d+\\s\\d{2}:\\d{2}:\\d{2})\\s+" # kinit 2023/09/15 11:20:36 log
- "(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d+Z).+" # 2023-09-15T11:20:36.123425532Z log
oc_cli_path: /usr/bin/oc # optional, if not specified will be search in $PATH
events_backup: True # enables/disables cluster events collection
telemetry_group: "funtests"


@@ -1,15 +1,14 @@
#!/bin/bash
set -x
MAX_RETRIES=60
OC=`which oc 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: oc missing, please install it and try again" && exit 1
KUBECTL=`which kubectl 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: kubectl missing, please install it and try again" && exit 1
wait_cluster_become_ready() {
COUNT=1
until `$OC get namespace > /dev/null 2>&1`
until `$KUBECTL get namespace > /dev/null 2>&1`
do
echo "[INF] waiting OpenShift to become ready, after $COUNT check"
echo "[INF] waiting Kubernetes to become ready, after $COUNT check"
sleep 3
[[ $COUNT == $MAX_RETRIES ]] && echo "[ERR] max retries exceeded, failing" && exit 1
((COUNT++))
@@ -18,9 +17,9 @@ wait_cluster_become_ready() {
ci_tests_loc="CI/tests/my_tests"
ci_tests_loc="CI/tests/functional_tests"
echo "running test suit consisting of ${ci_tests}"
echo -e "********* Running Functional Tests Suite *********\n\n"
rm -rf CI/out
@@ -37,9 +36,32 @@ echo 'Test | Result | Duration' >> $results
echo '-----------------------|--------|---------' >> $results
# Run each test
for test_name in `cat CI/tests/my_tests`
failed_tests=()
for test_name in `cat CI/tests/functional_tests`
do
wait_cluster_become_ready
./CI/run_test.sh $test_name $results
#wait_cluster_become_ready
return_value=`./CI/run_test.sh $test_name $results`
if [[ $return_value == 1 ]]
then
echo "Failed"
failed_tests+=("$test_name")
fi
wait_cluster_become_ready
done
if (( ${#failed_tests[@]}>0 ))
then
echo -e "\n\n======================================================================"
echo -e "\n FUNCTIONAL TESTS FAILED ${failed_tests[*]} ABORTING"
echo -e "\n======================================================================\n\n"
for test in "${failed_tests[@]}"
do
echo -e "\n********** $test KRKN RUN OUTPUT **********\n"
cat "CI/out/$test.out"
echo -e "\n********************************************\n\n\n\n"
done
exit 1
fi


@@ -1,5 +1,4 @@
#!/bin/bash
set -x
readonly SECONDS_PER_HOUR=3600
readonly SECONDS_PER_MINUTE=60
function get_time_format() {
@@ -14,9 +13,7 @@ ci_test=`echo $1`
results_file=$2
echo -e "\n======================================================================"
echo -e " CI test for ${ci_test} "
echo -e "======================================================================\n"
echo -e "test: ${ci_test}" >&2
ci_results="CI/out/$ci_test.out"
# Test ci
@@ -28,13 +25,16 @@ then
# if the test passes update the results and complete
duration=$SECONDS
duration=$(get_time_format $duration)
echo "$ci_test: Successful"
echo -e "> $ci_test: Successful\n" >&2
echo "$ci_test | Pass | $duration" >> $results_file
count=$retries
# return value for run.sh
echo 0
else
duration=$SECONDS
duration=$(get_time_format $duration)
echo "$ci_test: Failed"
echo -e "> $ci_test: Failed\n" >&2
echo "$ci_test | Fail | $duration" >> $results_file
echo "Logs for "$ci_test
# return value for run.sh
echo 1
fi


@@ -1,5 +0,0 @@
application_outage: # Scenario to create an outage of an application by blocking traffic
duration: 10 # Duration in seconds after which the routes will be accessible
namespace: openshift-monitoring # Namespace to target - all application routes will go inaccessible if pod selector is empty
pod_selector: {} # Pods to target
block: [Ingress, Egress] # It can be Ingress or Egress or Ingress, Egress


@@ -1,8 +0,0 @@
scenarios:
- name: "kill machine config container"
namespace: "openshift-machine-config-operator"
label_selector: "k8s-app=machine-config-server"
container_name: "hello-openshift"
action: "kill 1"
count: 1
retry_wait: 60


@@ -1,6 +0,0 @@
network_chaos: # Scenario to create an outage by simulating random variations in the network.
duration: 10 # seconds
instance_count: 1
execution: serial
egress:
bandwidth: 100mbit


@@ -1,7 +0,0 @@
scenarios:
- action: delete
namespace: "^$openshift-network-diagnostics$"
label_selector:
runs: 1
sleep: 15
wait_time: 30


@@ -1,5 +0,0 @@
time_scenarios:
- action: skew_time
object_type: pod
label_selector: k8s-app=etcd
container_name: ""


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: container
labels:
scenario: container
spec:
hostNetwork: true
containers:
- name: fedtools
image: docker.io/fedora/tools
command:
- /bin/sh
- -c
- |
sleep infinity


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: outage
labels:
scenario: outage
spec:
hostNetwork: true
containers:
- name: fedtools
image: docker.io/fedora/tools
command:
- /bin/sh
- -c
- |
sleep infinity


@@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
app.kubernetes.io/name: proxy
spec:
containers:
- name: nginx
image: nginx:stable
ports:
- containerPort: 80
name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
selector:
app.kubernetes.io/name: proxy
type: NodePort
ports:
- name: name-of-service-port
protocol: TCP
port: 80
targetPort: http-web-svc
nodePort: 30036


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Pod
metadata:
name: time-skew
labels:
scenario: time-skew
spec:
hostNetwork: true
containers:
- name: fedtools
image: docker.io/fedora/tools
command:
- /bin/sh
- -c
- |
sleep infinity

View File

@@ -1,18 +1,26 @@
ERRORED=false
function finish {
if [ $? -eq 1 ] && [ $ERRORED != "true" ]
if [ $? != 0 ] && [ $ERRORED != "true" ]
then
error
fi
}
function error {
echo "Error caught."
ERRORED=true
exit_code=$?
if [ $exit_code == 1 ]
then
echo "Error caught."
ERRORED=true
elif [ $exit_code == 2 ]
then
echo "Run with exit code 2 detected, it is expected, wrapping the exit code with 0 to avoid pipeline failure"
exit 0
fi
}
function get_node {
worker_node=$(oc get nodes --no-headers | grep worker | head -n 1)
worker_node=$(kubectl get nodes --no-headers | grep worker | head -n 1)
export WORKER_NODE=$worker_node
}

View File

@@ -0,0 +1 @@

View File

@@ -1 +0,0 @@
test_net_chaos

View File

@@ -7,9 +7,11 @@ trap finish EXIT
function functional_test_app_outage {
yq -i '.application_outage.duration=10' scenarios/openshift/app_outage.yaml
yq -i '.application_outage.pod_selector={"scenario":"outage"}' scenarios/openshift/app_outage.yaml
yq -i '.application_outage.namespace="default"' scenarios/openshift/app_outage.yaml
export scenario_type="application_outages"
export scenario_file="CI/scenarios/app_outage.yaml"
export scenario_file="scenarios/openshift/app_outage.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/app_outage.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/app_outage.yaml

View File

@@ -1,21 +0,0 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_app_outage {
[ -z $DEPLOYMENT_NAME ] && echo "[ERR] DEPLOYMENT_NAME variable not set, failing." && exit 1
yq -i '.application_outage.pod_selector={"app":"'$DEPLOYMENT_NAME'"}' CI/scenarios/app_outage.yaml
yq -i '.application_outage.namespace="'$NAMESPACE'"' CI/scenarios/app_outage.yaml
export scenario_type="application_outages"
export scenario_file="CI/scenarios/app_outage.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/app_outage.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/app_outage.yaml
echo "App outage scenario test: Success"
}
functional_test_app_outage

View File

@@ -0,0 +1,19 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_arca_cpu_hog {
yq -i '.input_list[0].node_selector={"kubernetes.io/hostname":"kind-worker2"}' scenarios/arcaflow/cpu-hog/input.yaml
export scenario_type="arcaflow_scenarios"
export scenario_file="scenarios/arcaflow/cpu-hog/input.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/arca_cpu_hog.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/arca_cpu_hog.yaml
echo "Arcaflow CPU Hog: Success"
}
functional_test_arca_cpu_hog

View File

@@ -0,0 +1,19 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_arca_io_hog {
yq -i '.input_list[0].node_selector={"kubernetes.io/hostname":"kind-worker2"}' scenarios/arcaflow/io-hog/input.yaml
export scenario_type="arcaflow_scenarios"
export scenario_file="scenarios/arcaflow/io-hog/input.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/arca_io_hog.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/arca_io_hog.yaml
echo "Arcaflow IO Hog: Success"
}
functional_test_arca_io_hog

View File

@@ -0,0 +1,19 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_arca_memory_hog {
yq -i '.input_list[0].node_selector={"kubernetes.io/hostname":"kind-worker2"}' scenarios/arcaflow/memory-hog/input.yaml
export scenario_type="arcaflow_scenarios"
export scenario_file="scenarios/arcaflow/memory-hog/input.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/arca_memory_hog.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/arca_memory_hog.yaml
echo "Arcaflow Memory Hog: Success"
}
functional_test_arca_memory_hog

View File

@@ -8,9 +8,11 @@ trap finish EXIT
pod_file="CI/scenarios/hello_pod.yaml"
function functional_test_container_crash {
yq -i '.scenarios[0].namespace="default"' scenarios/openshift/container_etcd.yml
yq -i '.scenarios[0].label_selector="scenario=container"' scenarios/openshift/container_etcd.yml
yq -i '.scenarios[0].container_name="fedtools"' scenarios/openshift/container_etcd.yml
export scenario_type="container_scenarios"
export scenario_file="- CI/scenarios/container_scenario.yml"
export scenario_file="- scenarios/openshift/container_etcd.yml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/container_config.yaml

View File

@@ -7,12 +7,13 @@ trap finish EXIT
function functional_test_namespace_deletion {
export scenario_type="namespace_scenarios"
export scenario_file="- CI/scenarios/network_diagnostics_namespace.yaml"
export scenario_file="- scenarios/openshift/ingress_namespace.yaml"
export post_config=""
yq '.scenarios.[0].namespace="^openshift-network-diagnostics$"' -i CI/scenarios/network_diagnostics_namespace.yaml
yq '.scenarios[0].namespace="^namespace-scenario$"' -i scenarios/openshift/ingress_namespace.yaml
yq '.scenarios[0].wait_time=30' -i scenarios/openshift/ingress_namespace.yaml
yq '.scenarios[0].action="delete"' -i scenarios/openshift/ingress_namespace.yaml
envsubst < CI/config/common_test_config.yaml > CI/config/namespace_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/namespace_config.yaml
echo $?
echo "Namespace scenario test: Success"
}

View File

@@ -7,9 +7,16 @@ trap finish EXIT
function functional_test_network_chaos {
yq -i '.network_chaos.duration=10' scenarios/openshift/network_chaos.yaml
yq -i '.network_chaos.node_name="kind-worker2"' scenarios/openshift/network_chaos.yaml
yq -i '.network_chaos.egress.bandwidth="100mbit"' scenarios/openshift/network_chaos.yaml
yq -i 'del(.network_chaos.interfaces)' scenarios/openshift/network_chaos.yaml
yq -i 'del(.network_chaos.label_selector)' scenarios/openshift/network_chaos.yaml
yq -i 'del(.network_chaos.egress.latency)' scenarios/openshift/network_chaos.yaml
yq -i 'del(.network_chaos.egress.loss)' scenarios/openshift/network_chaos.yaml
export scenario_type="network_chaos"
export scenario_file="CI/scenarios/network_chaos.yaml"
export scenario_file="scenarios/openshift/network_chaos.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/network_chaos.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/network_chaos.yaml

View File

@@ -0,0 +1,107 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
# port mapping has been configured in kind-config.yml
SERVICE_URL=http://localhost:8888
PAYLOAD_GET_1="{ \
\"status\":\"internal server error\" \
}"
STATUS_CODE_GET_1=500
PAYLOAD_PATCH_1="resource patched"
STATUS_CODE_PATCH_1=201
PAYLOAD_POST_1="{ \
\"status\": \"unauthorized\" \
}"
STATUS_CODE_POST_1=401
PAYLOAD_GET_2="{ \
\"status\":\"resource created\" \
}"
STATUS_CODE_GET_2=201
PAYLOAD_PATCH_2="bad request"
STATUS_CODE_PATCH_2=400
PAYLOAD_POST_2="not found"
STATUS_CODE_POST_2=404
JSON_MIME="application/json"
TEXT_MIME="text/plain; charset=utf-8"
function functional_test_service_hijacking {
export scenario_type="service_hijacking"
export scenario_file="scenarios/kube/service_hijacking.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/service_hijacking.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/service_hijacking.yaml > /dev/null 2>&1 &
PID=$!
# Wait for the hijacking to take effect
while [ `curl -X GET -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL/list/index.php` == 404 ]; do echo "waiting for the scenario to kick in."; sleep 1; done;
#Checking Step 1 GET on /list/index.php
OUT_GET="`curl -X GET -s $SERVICE_URL/list/index.php`"
OUT_CONTENT=`curl -X GET -s -o /dev/null -I -w "%{content_type}" $SERVICE_URL/list/index.php`
OUT_STATUS_CODE=`curl -X GET -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL/list/index.php`
[ "${PAYLOAD_GET_1//[$'\t\r\n ']}" == "${OUT_GET//[$'\t\r\n ']}" ] && echo "Step 1 GET Payload OK" || (echo "Payload did not match. Test failed." && exit 1)
[ "$OUT_STATUS_CODE" == "$STATUS_CODE_GET_1" ] && echo "Step 1 GET Status Code OK" || (echo " Step 1 GET status code did not match. Test failed." && exit 1)
[ "$OUT_CONTENT" == "$JSON_MIME" ] && echo "Step 1 GET MIME OK" || (echo " Step 1 GET MIME did not match. Test failed." && exit 1)
#Checking Step 1 POST on /list/index.php
OUT_POST="`curl -s -X POST $SERVICE_URL/list/index.php`"
OUT_STATUS_CODE=`curl -X POST -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL/list/index.php`
OUT_CONTENT=`curl -X POST -s -o /dev/null -I -w "%{content_type}" $SERVICE_URL/list/index.php`
[ "${PAYLOAD_POST_1//[$'\t\r\n ']}" == "${OUT_POST//[$'\t\r\n ']}" ] && echo "Step 1 POST Payload OK" || (echo "Payload did not match. Test failed." && exit 1)
[ "$OUT_STATUS_CODE" == "$STATUS_CODE_POST_1" ] && echo "Step 1 POST Status Code OK" || (echo "Step 1 POST status code did not match. Test failed." && exit 1)
[ "$OUT_CONTENT" == "$JSON_MIME" ] && echo "Step 1 POST MIME OK" || (echo " Step 1 POST MIME did not match. Test failed." && exit 1)
#Checking Step 1 PATCH on /patch
OUT_PATCH="`curl -s -X PATCH $SERVICE_URL/patch`"
OUT_STATUS_CODE=`curl -X PATCH -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL/patch`
OUT_CONTENT=`curl -X PATCH -s -o /dev/null -I -w "%{content_type}" $SERVICE_URL/patch`
[ "${PAYLOAD_PATCH_1//[$'\t\r\n ']}" == "${OUT_PATCH//[$'\t\r\n ']}" ] && echo "Step 1 PATCH Payload OK" || (echo "Payload did not match. Test failed." && exit 1)
[ "$OUT_STATUS_CODE" == "$STATUS_CODE_PATCH_1" ] && echo "Step 1 PATCH Status Code OK" || (echo "Step 1 PATCH status code did not match. Test failed." && exit 1)
[ "$OUT_CONTENT" == "$TEXT_MIME" ] && echo "Step 1 PATCH MIME OK" || (echo " Step 1 PATCH MIME did not match. Test failed." && exit 1)
# wait for the next step
sleep 16
#Checking Step 2 GET on /list/index.php
OUT_GET="`curl -X GET -s $SERVICE_URL/list/index.php`"
OUT_CONTENT=`curl -X GET -s -o /dev/null -I -w "%{content_type}" $SERVICE_URL/list/index.php`
OUT_STATUS_CODE=`curl -X GET -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL/list/index.php`
[ "${PAYLOAD_GET_2//[$'\t\r\n ']}" == "${OUT_GET//[$'\t\r\n ']}" ] && echo "Step 2 GET Payload OK" || (echo "Step 2 GET Payload did not match. Test failed." && exit 1)
[ "$OUT_STATUS_CODE" == "$STATUS_CODE_GET_2" ] && echo "Step 2 GET Status Code OK" || (echo "Step 2 GET status code did not match. Test failed." && exit 1)
[ "$OUT_CONTENT" == "$JSON_MIME" ] && echo "Step 2 GET MIME OK" || (echo " Step 2 GET MIME did not match. Test failed." && exit 1)
#Checking Step 2 POST on /list/index.php
OUT_POST="`curl -s -X POST $SERVICE_URL/list/index.php`"
OUT_CONTENT=`curl -X POST -s -o /dev/null -I -w "%{content_type}" $SERVICE_URL/list/index.php`
OUT_STATUS_CODE=`curl -X POST -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL/list/index.php`
[ "${PAYLOAD_POST_2//[$'\t\r\n ']}" == "${OUT_POST//[$'\t\r\n ']}" ] && echo "Step 2 POST Payload OK" || (echo "Step 2 POST Payload did not match. Test failed." && exit 1)
[ "$OUT_STATUS_CODE" == "$STATUS_CODE_POST_2" ] && echo "Step 2 POST Status Code OK" || (echo "Step 2 POST status code did not match. Test failed." && exit 1)
[ "$OUT_CONTENT" == "$TEXT_MIME" ] && echo "Step 2 POST MIME OK" || (echo " Step 2 POST MIME did not match. Test failed." && exit 1)
#Checking Step 2 PATCH on /patch
OUT_PATCH="`curl -s -X PATCH $SERVICE_URL/patch`"
OUT_CONTENT=`curl -X PATCH -s -o /dev/null -I -w "%{content_type}" $SERVICE_URL/patch`
OUT_STATUS_CODE=`curl -X PATCH -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL/patch`
[ "${PAYLOAD_PATCH_2//[$'\t\r\n ']}" == "${OUT_PATCH//[$'\t\r\n ']}" ] && echo "Step 2 PATCH Payload OK" || (echo "Step 2 PATCH Payload did not match. Test failed." && exit 1)
[ "$OUT_STATUS_CODE" == "$STATUS_CODE_PATCH_2" ] && echo "Step 2 PATCH Status Code OK" || (echo "Step 2 PATCH status code did not match. Test failed." && exit 1)
[ "$OUT_CONTENT" == "$TEXT_MIME" ] && echo "Step 2 PATCH MIME OK" || (echo " Step 2 PATCH MIME did not match. Test failed." && exit 1)
wait $PID
# now check that the service has been restored correctly and that nginx responds
curl -s $SERVICE_URL | grep nginx! && echo "BODY: Service restored!" || (echo "BODY: failed to restore service" && exit 1)
OUT_STATUS_CODE=`curl -X GET -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL`
[ "$OUT_STATUS_CODE" == "200" ] && echo "STATUS_CODE: Service restored!" || (echo "STATUS_CODE: failed to restore service" && exit 1)
echo "Service Hijacking Chaos test: Success"
}
functional_test_service_hijacking

View File

@@ -0,0 +1,37 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_telemetry {
AWS_CLI=`which aws`
[ -z "$AWS_CLI" ]&& echo "AWS cli not found in path" && exit 1
[ -z "$AWS_BUCKET" ] && echo "AWS bucket not set in environment" && exit 1
export RUN_TAG="funtest-telemetry"
yq -i '.telemetry.enabled=True' CI/config/common_test_config.yaml
yq -i '.telemetry.full_prometheus_backup=True' CI/config/common_test_config.yaml
yq -i '.performance_monitoring.check_critical_alerts=True' CI/config/common_test_config.yaml
yq -i '.performance_monitoring.prometheus_url="http://localhost:9090"' CI/config/common_test_config.yaml
yq -i '.telemetry.run_tag=env(RUN_TAG)' CI/config/common_test_config.yaml
export scenario_type="arcaflow_scenarios"
export scenario_file="scenarios/arcaflow/cpu-hog/input.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/telemetry.yaml
retval=$(python3 -m coverage run -a run_kraken.py -c CI/config/telemetry.yaml)
RUN_FOLDER=`cat CI/out/test_telemetry.out | grep amazonaws.com | sed -rn "s#.*https:\/\/.*\/files/(.*)#\1#p"`
$AWS_CLI s3 ls "s3://$AWS_BUCKET/$RUN_FOLDER/" | awk '{ print $4 }' > s3_remote_files
echo "checking if telemetry files are uploaded on s3"
cat s3_remote_files | grep events-00.json || ( echo "FAILED: events-00.json not uploaded" && exit 1 )
cat s3_remote_files | grep critical-alerts-00.log || ( echo "FAILED: critical-alerts-00.log not uploaded" && exit 1 )
cat s3_remote_files | grep prometheus-00.tar || ( echo "FAILED: prometheus backup not uploaded" && exit 1 )
cat s3_remote_files | grep telemetry.json || ( echo "FAILED: telemetry.json not uploaded" && exit 1 )
echo "all files uploaded!"
echo "Telemetry Collection: Success"
}
functional_test_telemetry

View File

@@ -7,8 +7,12 @@ trap finish EXIT
function functional_test_time_scenario {
yq -i '.time_scenarios[0].label_selector="scenario=time-skew"' scenarios/openshift/time_scenarios_example.yml
yq -i '.time_scenarios[0].container_name=""' scenarios/openshift/time_scenarios_example.yml
yq -i '.time_scenarios[0].namespace="default"' scenarios/openshift/time_scenarios_example.yml
yq -i '.time_scenarios[1].label_selector="kubernetes.io/hostname=kind-worker2"' scenarios/openshift/time_scenarios_example.yml
export scenario_type="time_scenarios"
export scenario_file="CI/scenarios/time_scenarios.yml"
export scenario_file="scenarios/openshift/time_scenarios_example.yml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/time_config.yaml

View File

@@ -1,6 +1,7 @@
# Krkn aka Kraken
[![Docker Repository on Quay](https://quay.io/repository/redhat-chaos/krkn/status "Docker Repository on Quay")](https://quay.io/repository/redhat-chaos/krkn?tab=tags&tag=latest)
![Workflow-Status](https://github.com/redhat-chaos/krkn/actions/workflows/docker-image.yml/badge.svg)
![Workflow-Status](https://github.com/krkn-chaos/krkn/actions/workflows/docker-image.yml/badge.svg)
![coverage](https://krkn-chaos.github.io/krkn-lib-docs/coverage_badge_krkn.svg)
![action](https://github.com/krkn-chaos/krkn/actions/workflows/tests.yml/badge.svg)
![Krkn logo](media/logo.png)
@@ -38,19 +39,7 @@ After installation, refer back to the below sections for supported scenarios and
#### Running Kraken with minimal configuration tweaks
For cases where you want to run Kraken with minimal configuration changes, refer to [Kraken-hub](https://github.com/redhat-chaos/krkn-hub). One use case is CI integration where you do not want to carry around different configuration files for the scenarios.
### Setting up infrastructure dependencies
Kraken indexes the metrics specified in the profile into Elasticsearch in addition to leveraging Cerberus for understanding the health of the Kubernetes/OpenShift cluster under test. More information on the features is documented below. The infrastructure pieces can be easily installed and uninstalled by running:
```
$ cd kraken
$ podman-compose up or $ docker-compose up # Spins up the containers specified in the docker-compose.yml file present in the run directory.
$ podman-compose down or $ docker-compose down # Delete the containers installed.
```
This will manage the Cerberus and Elasticsearch containers on the host on which you are running Kraken.
**NOTE**: Make sure you have enough resources (memory and disk) on the machine on top of which the containers are running as Elasticsearch is resource intensive. Cerberus monitors the system components by default, the [config](config/cerberus.yaml) can be tweaked to add applications namespaces, routes and other components to monitor as well. The command will keep running until killed since detached mode is not supported as of now.
For cases where you want to run Kraken with minimal configuration changes, refer to [krkn-hub](https://github.com/krkn-chaos/krkn-hub). One use case is CI integration where you do not want to carry around different configuration files for the scenarios.
### Config
@@ -65,7 +54,7 @@ Scenario type | Kubernetes
[Pod Network Scenarios](docs/pod_network_scenarios.md) | :x: |
[Container Scenarios](docs/container_scenarios.md) | :heavy_check_mark: |
[Node Scenarios](docs/node_scenarios.md) | :heavy_check_mark: |
[Time Scenarios](docs/time_scenarios.md) | :x: |
[Time Scenarios](docs/time_scenarios.md) | :heavy_check_mark: |
[Hog Scenarios: CPU, Memory](docs/arcaflow_scenarios.md) | :heavy_check_mark: |
[Cluster Shut Down Scenarios](docs/cluster_shut_down_scenarios.md) | :heavy_check_mark: |
[Service Disruption Scenarios](docs/service_disruption_scenarios.md) | :heavy_check_mark: |
@@ -74,12 +63,14 @@ Scenario type | Kubernetes
[PVC scenario](docs/pvc_scenario.md) | :heavy_check_mark: |
[Network_Chaos](docs/network_chaos.md) | :heavy_check_mark: |
[ManagedCluster Scenarios](docs/managedcluster_scenarios.md) | :heavy_check_mark: |
[Service Hijacking Scenarios](docs/service_hijacking_scenarios.md) | :heavy_check_mark: |
[SYN Flood Scenarios](docs/syn_flood_scenarios.md) | :heavy_check_mark: |
### Kraken scenario pass/fail criteria and report
It is important to make sure to check if the targeted component recovered from the chaos injection and also if the Kubernetes/OpenShift cluster is healthy as failures in one component can have an adverse impact on other components. Kraken does this by:
It is important to check that the targeted component recovered from the chaos injection and that the Kubernetes cluster is healthy, as failures in one component can have an adverse impact on other components. Kraken does this by:
- Having built in checks for pod and node based scenarios to ensure the expected number of replicas and nodes are up. It also supports running custom scripts with the checks.
- Leveraging [Cerberus](https://github.com/redhat-chaos/cerberus) to monitor the cluster under test and consuming the aggregated go/no-go signal to determine pass/fail post chaos. It is highly recommended to turn on the Cerberus health check feature available in Kraken. Instructions on installing and setting up Cerberus can be found [here](https://github.com/openshift-scale/cerberus#installation) or can be installed from Kraken using the [instructions](https://github.com/redhat-chaos/krkn#setting-up-infrastructure-dependencies). Once Cerberus is up and running, set cerberus_enabled to True and cerberus_url to the url where Cerberus publishes go/no-go signal in the Kraken config file. Cerberus can monitor [application routes](https://github.com/redhat-chaos/cerberus/blob/main/docs/config.md#watch-routes) during the chaos and fails the run if it encounters downtime as it is a potential downtime in a customers, or users environment as well. It is especially important during the control plane chaos scenarios including the API server, Etcd, Ingress etc. It can be enabled by setting `check_applicaton_routes: True` in the [Kraken config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) provided application routes are being monitored in the [cerberus config](https://github.com/redhat-chaos/krkn/blob/main/config/cerberus.yaml).
- Leveraging [Cerberus](https://github.com/krkn-chaos/cerberus) to monitor the cluster under test and consuming the aggregated go/no-go signal to determine pass/fail post chaos. It is highly recommended to turn on the Cerberus health check feature available in Kraken. Instructions on installing and setting up Cerberus can be found [here](https://github.com/openshift-scale/cerberus#installation) or can be installed from Kraken using the [instructions](https://github.com/krkn-chaos/krkn#setting-up-infrastructure-dependencies). Once Cerberus is up and running, set cerberus_enabled to True and cerberus_url to the url where Cerberus publishes go/no-go signal in the Kraken config file. Cerberus can monitor [application routes](https://github.com/redhat-chaos/cerberus/blob/main/docs/config.md#watch-routes) during the chaos and fails the run if it encounters downtime as it is a potential downtime in a customers, or users environment as well. It is especially important during the control plane chaos scenarios including the API server, Etcd, Ingress etc. It can be enabled by setting `check_applicaton_routes: True` in the [Kraken config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) provided application routes are being monitored in the [cerberus config](https://github.com/redhat-chaos/krkn/blob/main/config/cerberus.yaml).
- Leveraging built-in alert collection feature to fail the runs in case of critical alerts.
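A minimal sketch of the corresponding Cerberus settings in the Kraken config, assuming a locally running Cerberus instance (the URL is a placeholder; `check_applicaton_routes` is spelled as in the shipped config):
```yaml
cerberus:
    cerberus_enabled: True                # consume the aggregated go/no-go signal post chaos
    cerberus_url: http://0.0.0.0:8080     # placeholder: URL where Cerberus publishes the go/no-go signal
    check_applicaton_routes: True         # fail the run on application route downtime (routes must be monitored in the cerberus config)
```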
### Signaling
@@ -103,7 +94,7 @@ Information on enabling and leveraging this feature can be found [here](docs/SLO
### OCM / ACM integration
Kraken supports injecting faults into [Open Cluster Management (OCM)](https://open-cluster-management.io/) and [Red Hat Advanced Cluster Management for Kubernetes (ACM)](https://www.redhat.com/en/technologies/management/advanced-cluster-management) managed clusters through [ManagedCluster Scenarios](docs/managedcluster_scenarios.md).
Kraken supports injecting faults into [Open Cluster Management (OCM)](https://open-cluster-management.io/) and [Red Hat Advanced Cluster Management for Kubernetes (ACM)](https://www.redhat.com/en/technologies/management/advanced-cluster-management) managed clusters through [ManagedCluster Scenarios](docs/managedcluster_scenarios.md).
### Blogs and other useful resources
@@ -129,6 +120,7 @@ Please read [this file]((CI/README.md#adding-a-test-case)) for more information
### Community
Key Members(slack_usernames/full name): paigerube14/Paige Rubendall, mffiedler/Mike Fiedler, ravielluri/Naga Ravi Chaitanya Elluri.
* [**#krkn on Kubernetes Slack**](https://kubernetes.slack.com)
* [**#forum-chaos on CoreOS Slack internal to Red Hat**](https://coreos.slack.com)
Key Members(slack_usernames/full name): paigerube14/Paige Rubendall, mffiedler/Mike Fiedler, tsebasti/Tullio Sebastiani, yogi/Yogananth Subramanian, sahil/Sahil Shah, pradeep/Pradeep Surisetty and ravielluri/Naga Ravi Chaitanya Elluri.
* [**#krkn on Kubernetes Slack**](https://kubernetes.slack.com/messages/C05SFMHRWK1)
The Linux Foundation® (TLF) has registered trademarks and uses trademarks. For a list of TLF trademarks, see [Trademark Usage](https://www.linuxfoundation.org/legal/trademark-usage).

View File

@@ -2,14 +2,14 @@
Following are a list of enhancements that we are planning to work on adding support in Krkn. Of course any help/contributions are greatly appreciated.
- [ ] [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/redhat-chaos/krkn/issues/424)
- [x] [Centralized storage for chaos experiments artifacts](https://github.com/redhat-chaos/krkn/issues/423)
- [ ] [Support for causing DNS outages](https://github.com/redhat-chaos/krkn/issues/394)
- [x] [Chaos recommender](https://github.com/redhat-chaos/krkn/tree/main/utils/chaos-recommender) to suggest scenarios having probability of impacting the service under test using profiling results
- [ ] [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/krkn-chaos/krkn/issues/424)
- [x] [Centralized storage for chaos experiments artifacts](https://github.com/krkn-chaos/krkn/issues/423)
- [ ] [Support for causing DNS outages](https://github.com/krkn-chaos/krkn/issues/394)
- [x] [Chaos recommender](https://github.com/krkn-chaos/krkn/tree/main/utils/chaos-recommender) to suggest scenarios having probability of impacting the service under test using profiling results
- [ ] Chaos AI integration to improve and automate test coverage
- [x] [Support for pod level network traffic shaping](https://github.com/redhat-chaos/krkn/issues/393)
- [ ] [Ability to visualize the metrics that are being captured by Kraken and stored in Elasticsearch](https://github.com/redhat-chaos/krkn/issues/124)
- [ ] Support for running all the scenarios of Kraken on Kubernetes distribution - see https://github.com/redhat-chaos/krkn/issues/185, https://github.com/redhat-chaos/krkn/issues/186
- [ ] Continue to improve [Chaos Testing Guide](https://redhat-chaos.github.io/krkn) in terms of adding best practices, test environment recommendations and scenarios to make sure the OpenShift platform, as well the applications running on top it, are resilient and performant under chaotic conditions.
- [ ] [Switch documentation references to Kubernetes](https://github.com/redhat-chaos/krkn/issues/495)
- [ ] [OCP and Kubernetes functionalities segregation](https://github.com/redhat-chaos/krkn/issues/497)
- [x] [Support for pod level network traffic shaping](https://github.com/krkn-chaos/krkn/issues/393)
- [ ] [Ability to visualize the metrics that are being captured by Kraken and stored in Elasticsearch](https://github.com/krkn-chaos/krkn/issues/124)
- [ ] Support for running all the scenarios of Kraken on Kubernetes distribution - see https://github.com/krkn-chaos/krkn/issues/185, https://github.com/krkn-chaos/krkn/issues/186
- [ ] Continue to improve [Chaos Testing Guide](https://krkn-chaos.github.io/krkn) in terms of adding best practices, test environment recommendations and scenarios to make sure the OpenShift platform, as well as the applications running on top of it, are resilient and performant under chaotic conditions.
- [ ] [Switch documentation references to Kubernetes](https://github.com/krkn-chaos/krkn/issues/495)
- [ ] [OCP and Kubernetes functionalities segregation](https://github.com/krkn-chaos/krkn/issues/497)

View File

@@ -8,7 +8,7 @@
description: 10 minutes avg. 99th etcd fsync latency on {{$labels.pod}} higher than 1s. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket[2m]))[10m:]) > 0.007
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket[2m]))[10m:]) > 0.03
description: 10 minutes avg. 99th etcd commit latency on {{$labels.pod}} higher than 30ms. {{$value}}s
severity: warning
@@ -88,3 +88,42 @@
- expr: ALERTS{severity="critical", alertstate="firing"} > 0
description: Critical prometheus alert. {{$labels.alertname}}
severity: warning
# etcd CPU and usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-etcd', container='etcd'}[1m])) * 100 / sum(machine_cpu_cores) > 5
description: Etcd CPU usage increased significantly
severity: warning
# etcd memory usage increase
- expr: sum(deriv(container_memory_usage_bytes{image!='', namespace='openshift-etcd', container='etcd'}[5m])) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: Etcd memory usage increased significantly
severity: warning
# Openshift API server CPU and memory usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-apiserver', container='openshift-apiserver'}[1m])) * 100 / sum(machine_cpu_cores) > 5
description: openshift apiserver cpu usage increased significantly
severity: warning
- expr: (sum(deriv(container_memory_usage_bytes{namespace='openshift-apiserver', container='openshift-apiserver'}[5m]))) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: openshift apiserver memory usage increased significantly
severity: warning
# Openshift kube API server CPU and memory usage increase
- expr: sum(rate(container_cpu_usage_seconds_total{image!='', namespace='openshift-kube-apiserver', container='kube-apiserver'}[1m])) * 100 / sum(machine_cpu_cores) > 5
description: openshift apiserver cpu usage increased significantly
severity: warning
- expr: (sum(deriv(container_memory_usage_bytes{namespace='openshift-kube-apiserver', container='kube-apiserver'}[5m]))) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: openshift apiserver memory usage increased significantly
severity: warning
# Master node CPU usage increase
- expr: (sum((sum(deriv(pod:container_cpu_usage:sum{container="",pod!=""}[5m])) BY (namespace, pod) * on(pod, namespace) group_left(node) (node_namespace_pod:kube_pod_info:) ) * on(node) group_left(role) (max by (node) (kube_node_role{role="master"})))) * 100 / sum(machine_cpu_cores) > 5
description: master nodes cpu usage increased significantly
severity: warning
# Master nodes memory usage increase
- expr: (sum((sum(deriv(container_memory_usage_bytes{container="",pod!=""}[5m])) BY (namespace, pod) * on(pod, namespace) group_left(node) (node_namespace_pod:kube_pod_info:) ) * on(node) group_left(role) (max by (node) (kube_node_role{role="master"})))) * 100 / sum(node_memory_MemTotal_bytes) > 5
description: master nodes memory usage increased significantly
severity: warning

View File

@@ -1,5 +1,5 @@
kraken:
distribution: openshift # Distribution can be kubernetes or openshift
distribution: kubernetes # Distribution can be kubernetes or openshift
kubeconfig_path: ~/.kube/config # Path to kubeconfig
exit_on_failure: False # Exit when a post action scenario fails
publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
@@ -15,7 +15,7 @@ kraken:
- application_outages:
- scenarios/openshift/app_outage.yaml
- container_scenarios: # List of chaos pod scenarios to load
- - scenarios/openshift/container_etcd.yml
- plugin_scenarios:
- scenarios/openshift/etcd.yml
- scenarios/openshift/regex_openshift_pod_kill.yml
@@ -23,7 +23,7 @@ kraken:
- scenarios/openshift/network_chaos_ingress.yml
- scenarios/openshift/prom_kill.yml
- node_scenarios: # List of chaos node scenarios to load
- scenarios/openshift/node_scenarios_example.yml
- plugin_scenarios:
- scenarios/openshift/openshift-apiserver.yml
- scenarios/openshift/openshift-kube-apiserver.yml
@@ -42,6 +42,10 @@ kraken:
- scenarios/openshift/pvc_scenario.yaml
- network_chaos:
- scenarios/openshift/network_chaos.yaml
- service_hijacking:
- scenarios/kube/service_hijacking.yaml
- syn_flood:
- scenarios/kube/syn_flood.yaml
cerberus:
cerberus_enabled: False # Enable it when cerberus is previously installed
@@ -51,7 +55,7 @@ cerberus:
performance_monitoring:
deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set
enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
@@ -65,14 +69,19 @@ telemetry:
enabled: False # enables/disables the telemetry collection feature
api_url: https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production #telemetry service endpoint
username: username # telemetry service username
password: password # telemetry service password
prometheus_backup: True # enables/disables prometheus data collection
prometheus_namespace: "" # namespace where prometheus is deployed (if distribution is kubernetes)
prometheus_container_name: "" # name of the prometheus container (if distribution is kubernetes)
prometheus_pod_name: "" # name of the prometheus pod (if distribution is kubernetes)
full_prometheus_backup: False # if is set to False only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
archive_path: /tmp # local path where the archive files will be temporarily stored
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 500000 # the size of the prometheus data archive size in KB. The lower the size of archive is
archive_size: 500000
telemetry_group: '' # if set will archive the telemetry in the S3 bucket on a folder named after the value, otherwise will use "default"
# the size of each prometheus data archive file in KB. The lower this value,
# the more archive files will be produced and uploaded (and processed by backup_threads
# simultaneously).
# For unstable/slow connections it is better to keep this value low
@@ -85,6 +94,9 @@ telemetry:
- "(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d+Z).+" # 2023-09-15T11:20:36.123425532Z log
oc_cli_path: /usr/bin/oc # optional, if not specified it will be searched for in $PATH
events_backup: True # enables/disables cluster events collection
elastic:
elastic_url: "" # To track results in elasticsearch, give url to server here; will post telemetry details when url and index not blank
elastic_index: "" # Elastic search index pattern to post results to

View File

@@ -77,3 +77,8 @@ telemetry:
- "kinit (\\d+/\\d+/\\d+\\s\\d{2}:\\d{2}:\\d{2})\\s+" # kinit 2023/09/15 11:20:36 log
- "(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d+Z).+" # 2023-09-15T11:20:36.123425532Z log
oc_cli_path: /usr/bin/oc # optional, if not specified it will be searched for in $PATH
elastic:
elastic_url: "" # To track results in elasticsearch, give url to server here; will post telemetry details when url and index not blank
elastic_index: "" # Elastic search index pattern to post results to

View File

@@ -1,5 +1,5 @@
application: openshift-etcd
namespace: openshift-etcd
namespaces: openshift-etcd
labels: app=openshift-etcd
kubeconfig: ~/.kube/config.yaml
prometheus_endpoint: <Prometheus_Endpoint>
@@ -7,6 +7,8 @@ auth_token: <Auth_Token>
scrape_duration: 10m
chaos_library: "kraken"
log_level: INFO
json_output_file: False
json_output_folder_path:
# for output purposes only, do not change if not needed
chaos_tests:
@@ -26,4 +28,8 @@ chaos_tests:
- pod_network_chaos
MEM:
- node_memory_hog
- pvc_disk_fill
threshold: .7
cpu_threshold: .5
mem_threshold: .5

View File

@@ -1,28 +1,54 @@
# Dockerfile for kraken
FROM mcr.microsoft.com/azure-cli:latest as azure-cli
FROM registry.access.redhat.com/ubi8/ubi:latest
ENV KUBECONFIG /root/.kube/config
# Copy azure client binary from azure-cli image
COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
# Install dependencies
RUN yum install -y git python39 python3-pip jq gettext wget && \
python3.9 -m pip install -U pip && \
git clone https://github.com/krkn-chaos/krkn.git --branch v1.5.4 /root/kraken && \
mkdir -p /root/.kube && cd /root/kraken && \
pip3.9 install -r requirements.txt && \
pip3.9 install virtualenv && \
wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && chmod +x /usr/bin/yq
# Get Kubernetes and OpenShift clients from stable releases
# oc build
FROM golang:1.22.4 AS oc-build
RUN apt-get update && apt-get install -y --no-install-recommends libkrb5-dev
WORKDIR /tmp
RUN wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz && tar -xvf openshift-client-linux.tar.gz && cp oc /usr/local/bin/oc && cp kubectl /usr/local/bin/kubectl
RUN git clone --branch release-4.18 https://github.com/openshift/oc.git
WORKDIR /tmp/oc
RUN go mod edit -go 1.22.3 &&\
go get github.com/moby/buildkit@v0.12.5 &&\
go get github.com/containerd/containerd@v1.7.11&&\
go get github.com/docker/docker@v25.0.5&&\
go mod tidy && go mod vendor
RUN make GO_REQUIRED_MIN_VERSION:= oc
WORKDIR /root/kraken
FROM fedora:40
ARG PR_NUMBER
ARG TAG
RUN groupadd -g 1001 krkn && useradd -m -u 1001 -g krkn krkn
RUN dnf update -y
ENV KUBECONFIG /home/krkn/.kube/config
# install kubectl
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" &&\
cp kubectl /usr/local/bin/kubectl && chmod +x /usr/local/bin/kubectl &&\
cp kubectl /usr/bin/kubectl && chmod +x /usr/bin/kubectl
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
RUN dnf update && dnf install -y --setopt=install_weak_deps=False \
git python39 jq yq gettext wget which &&\
dnf clean all
# copy oc client binary from oc-build image
COPY --from=oc-build /tmp/oc/oc /usr/bin/oc
# krkn build
RUN git clone https://github.com/krkn-chaos/krkn.git /home/krkn/kraken && \
mkdir -p /home/krkn/.kube
WORKDIR /home/krkn/kraken
# default behaviour will be to build main
# if it is a PR trigger the PR itself will be checked out
RUN if [ -n "$PR_NUMBER" ]; then git fetch origin pull/${PR_NUMBER}/head:pr-${PR_NUMBER} && git checkout pr-${PR_NUMBER};fi
# if it is a TAG trigger checkout the tag
RUN if [ -n "$TAG" ]; then git checkout "$TAG";fi
RUN python3.9 -m ensurepip
RUN pip3.9 install -r requirements.txt
RUN pip3.9 install jsonschema
RUN chown -R krkn:krkn /home/krkn && chmod 755 /home/krkn
USER krkn
ENTRYPOINT ["python3.9", "run_kraken.py"]
CMD ["--config=config/config.yaml"]

View File

@@ -1,29 +0,0 @@
# Dockerfile for kraken
FROM ppc64le/centos:8
FROM mcr.microsoft.com/azure-cli:latest as azure-cli
LABEL org.opencontainers.image.authors="Red Hat OpenShift Chaos Engineering"
ENV KUBECONFIG /root/.kube/config
# Copy azure client binary from azure-cli image
COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
# Install dependencies
RUN yum install -y git python39 python3-pip jq gettext wget && \
python3.9 -m pip install -U pip && \
git clone https://github.com/redhat-chaos/krkn.git --branch v1.5.4 /root/kraken && \
mkdir -p /root/.kube && cd /root/kraken && \
pip3.9 install -r requirements.txt && \
pip3.9 install virtualenv && \
wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && chmod +x /usr/bin/yq
# Get Kubernetes and OpenShift clients from stable releases
WORKDIR /tmp
RUN wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz && tar -xvf openshift-client-linux.tar.gz && cp oc /usr/local/bin/oc && cp kubectl /usr/local/bin/kubectl
WORKDIR /root/kraken
ENTRYPOINT python3.9 run_kraken.py --config=config/config.yaml

View File

@@ -1,31 +0,0 @@
version: "3"
services:
elastic:
image: docker.elastic.co/elasticsearch/elasticsearch:7.13.2
deploy:
replicas: 1
restart_policy:
condition: on-failure
network_mode: host
environment:
discovery.type: single-node
kibana:
image: docker.elastic.co/kibana/kibana:7.13.2
deploy:
replicas: 1
restart_policy:
condition: on-failure
network_mode: host
environment:
ELASTICSEARCH_HOSTS: "http://0.0.0.0:9200"
cerberus:
image: quay.io/openshift-scale/cerberus:latest
privileged: true
deploy:
replicas: 1
restart_policy:
condition: on-failure
network_mode: host
volumes:
- ./config/cerberus.yaml:/root/cerberus/config/config.yaml:Z # Modify the config in case of the need to monitor additional components
- ${HOME}/.kube/config:/root/.kube/config:Z

View File

@@ -27,14 +27,12 @@ After creating the service account you will need to enable the account using the
## Azure
**NOTE**: For Azure node killing scenarios, make sure [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) is installed.
You will also need to create a service principal and give it the correct access, see [here](https://docs.openshift.com/container-platform/4.5/installing/installing_azure/installing-azure-account.html) for creating the service principal and setting the proper permissions.
**NOTE**: You will need to create a service principal and give it the correct access, see [here](https://docs.openshift.com/container-platform/4.5/installing/installing_azure/installing-azure-account.html) for creating the service principal and setting the proper permissions.
To properly run the service principal requires “Azure Active Directory Graph/Application.ReadWrite.OwnedBy” api permission granted and “User Access Administrator”.
Before running you will need to set the following:
1. Login using ```az login```
1. ```export AZURE_SUBSCRIPTION_ID=<subscription_id>```
2. ```export AZURE_TENANT_ID=<tenant_id>```

View File

@@ -1,5 +1,5 @@
#### Kubernetes/OpenShift cluster shut down scenario
Scenario to shut down all the nodes including the masters and restart them after specified duration. Cluster shut down scenario can be injected by placing the shut_down config file under cluster_shut_down_scenario option in the kraken config. Refer to [cluster_shut_down_scenario](https://github.com/redhat-chaos/krkn/blob/main/scenarios/cluster_shut_down_scenario.yml) config file.
#### Kubernetes cluster shut down scenario
Scenario to shut down all the nodes including the masters and restart them after specified duration. Cluster shut down scenario can be injected by placing the shut_down config file under cluster_shut_down_scenario option in the kraken config. Refer to [cluster_shut_down_scenario](https://github.com/krkn-chaos/krkn/blob/main/scenarios/cluster_shut_down_scenario.yml) config file.
Refer to [cloud setup](cloud_setup.md) to configure your cli properly for the cloud provider of the cluster you want to shut down.

View File

@@ -4,7 +4,7 @@ This can be based on the pods namespace or labels. If you know the exact object
These scenarios are in a simple yaml format that you can manipulate to run your specific tests or use the pre-existing scenarios to see how it works.
#### Example Config
The following are the components of Kubernetes/OpenShift for which a basic chaos scenario config exists today.
The following are the components of Kubernetes for which a basic chaos scenario config exists today.
```
scenarios:
@@ -25,7 +25,7 @@ In all scenarios we do a post chaos check to wait and verify the specific compon
Here there are two options:
1. Pass a custom script in the main config scenario list that will run before the chaos and verify the output matches post chaos scenario.
See [scenarios/post_action_etcd_container.py](https://github.com/redhat-chaos/krkn/blob/main/scenarios/post_action_etcd_container.py) for an example.
See [scenarios/post_action_etcd_container.py](https://github.com/krkn-chaos/krkn/blob/main/scenarios/post_action_etcd_container.py) for an example.
```
- container_scenarios: # List of chaos pod scenarios to load.
- - scenarios/container_etcd.yml

View File

@@ -62,7 +62,7 @@ If changes go into the main repository while you're working on your code it is b
If not already configured, set the upstream url for kraken.
```
git remote add upstream https://github.com/redhat-chaos/krkn.git
git remote add upstream https://github.com/krkn-chaos/krkn.git
```
Rebase to upstream master branch.

View File

@@ -14,11 +14,7 @@ For example, for adding a pod level scenario for a new application, refer to the
namespace_pattern: ^<namespace>$
label_selector: <pod label>
kill: <number of pods to kill>
- id: wait-for-pods
config:
namespace_pattern: ^<namespace>$
label_selector: <pod label>
count: <expected number of pods that match namespace and label>
krkn_pod_recovery_time: <expected time for the pod to become ready>
```
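A filled-in sketch of the template above, assuming the standard `kill-pods` plugin id; the namespace, label, and timing values are illustrative only:
```yaml
- id: kill-pods
  config:
    namespace_pattern: ^nginx$            # illustrative namespace regex
    label_selector: app=nginx             # illustrative pod label
    kill: 1                               # number of pods to kill
    krkn_pod_recovery_time: 120           # expected time for the pod to become ready
```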
#### Node Scenario Yaml Template

View File

@@ -3,13 +3,13 @@
The following ways are supported to run Kraken:
- Standalone python program through Git.
- Containerized version using either Podman or Docker as the runtime via [Krkn-hub](https://github.com/redhat-chaos/krkn-hub)
- Containerized version using either Podman or Docker as the runtime via [Krkn-hub](https://github.com/krkn-chaos/krkn-hub)
- Kubernetes or OpenShift deployment ( unsupported )
**NOTE**: It is recommended to run Kraken external to the cluster (standalone or containerized), hitting the Kubernetes/OpenShift API, as running it internal to the cluster might be disruptive to itself and might not report back the results if the chaos leads to instability of the cluster's API server.
**NOTE**: To run Kraken on Power (ppc64le) architecture, build and run a containerized version by following the
instructions given [here](https://github.com/redhat-chaos/krkn/blob/main/containers/build_own_image-README.md).
instructions given [here](https://github.com/krkn-chaos/krkn/blob/main/containers/build_own_image-README.md).
**NOTE**: Helper functions for interactions in Krkn are part of [krkn-lib](https://github.com/redhat-chaos/krkn-lib).
Please feel free to reuse and expand them as you see fit when adding a new scenario or expanding
@@ -19,10 +19,10 @@ the capabilities of the current supported scenarios.
### Git
#### Clone the repository
Pick the latest stable release to install [here](https://github.com/redhat-chaos/krkn/releases).
Pick the latest stable release to install [here](https://github.com/krkn-chaos/krkn/releases).
```
$ git clone https://github.com/redhat-chaos/krkn.git --branch <release version>
$ cd kraken
$ git clone https://github.com/krkn-chaos/krkn.git --branch <release version>
$ cd krkn
```
#### Install the dependencies
@@ -40,13 +40,13 @@ $ python3.9 run_kraken.py --config <config_file_location>
```
### Run containerized version
[Krkn-hub](https://github.com/redhat-chaos/krkn-hub) is a wrapper that allows running Krkn chaos scenarios via podman or docker runtime with scenario parameters/configuration defined as environment variables.
[Krkn-hub](https://github.com/krkn-chaos/krkn-hub) is a wrapper that allows running Krkn chaos scenarios via podman or docker runtime with scenario parameters/configuration defined as environment variables.
Refer [instructions](https://github.com/redhat-chaos/krkn-hub#supported-chaos-scenarios) to get started.
Refer [instructions](https://github.com/krkn-chaos/krkn-hub#supported-chaos-scenarios) to get started.
### Run Kraken as a Kubernetes deployment ( unsupported option - standalone or containerized deployers are recommended )
Refer [Instructions](https://github.com/redhat-chaos/krkn/blob/main/containers/README.md) on how to deploy and run Kraken as a Kubernetes/OpenShift deployment.
Refer [Instructions](https://github.com/krkn-chaos/krkn/blob/main/containers/README.md) on how to deploy and run Kraken as a Kubernetes/OpenShift deployment.
Refer to the [chaos-kraken chart manpage](https://artifacthub.io/packages/helm/startx/chaos-kraken)

View File

@@ -17,11 +17,8 @@ You can then create the scenario file with the following contents:
config:
namespace_pattern: ^kube-system$
label_selector: k8s-app=kube-scheduler
- id: wait-for-pods
config:
namespace_pattern: ^kube-system$
label_selector: k8s-app=kube-scheduler
count: 3
krkn_pod_recovery_time: 120
```
Please adjust the schema reference to point to the [schema file](../scenarios/plugin.schema.json). This file will give you code completion and documentation for the available options in your IDE.

View File

@@ -16,7 +16,7 @@ Set to '^.*$' and label_selector to "" to randomly select any namespace in your
**sleep:** Number of seconds to wait between each iteration/count of killing namespaces. Defaults to 10 seconds if not set
Refer to [namespace_scenarios_example](https://github.com/redhat-chaos/krkn/blob/main/scenarios/regex_namespace.yaml) config file.
Refer to [namespace_scenarios_example](https://github.com/krkn-chaos/krkn/blob/main/scenarios/regex_namespace.yaml) config file.
```
scenarios:

View File

@@ -0,0 +1,80 @@
### Service Hijacking Scenarios
Service Hijacking Scenarios aim to simulate fake HTTP responses from a workload targeted by a
`Service` already deployed in the cluster.
This scenario is executed by deploying a custom-made web service and modifying the target `Service`
selector to direct traffic to this web service for a specified duration.
The web service's source code is available [here](https://github.com/krkn-chaos/krkn-service-hijacking).
It employs a time-based test plan from the scenario configuration file, which specifies the behavior of resources during the chaos scenario as follows:
```yaml
service_target_port: http-web-svc # The port of the service to be hijacked (can be named or numeric, based on the workload and service configuration).
service_name: nginx-service # The name of the service that will be hijacked.
service_namespace: default # The namespace where the target service is located.
image: quay.io/krkn-chaos/krkn-service-hijacking:v0.1.3 # Image of the krkn web service to be deployed to receive traffic.
chaos_duration: 30 # Total duration of the chaos scenario in seconds.
plan:
- resource: "/list/index.php" # Specifies the resource or path to respond to in the scenario. For paths, both the path and query parameters are captured but ignored. For resources, only query parameters are captured.
steps: # A time-based plan consisting of steps can be defined for each resource.
GET: # One or more HTTP methods can be specified for each step. Note: Non-standard methods are supported for fully custom web services (e.g., using NONEXISTENT instead of POST).
- duration: 15 # Duration in seconds for this step before moving to the next one, if defined. Otherwise, this step will continue until the chaos scenario ends.
status: 500 # HTTP status code to be returned in this step.
mime_type: "application/json" # MIME type of the response for this step.
payload: | # The response payload for this step.
{
"status":"internal server error"
}
- duration: 15
status: 201
mime_type: "application/json"
payload: |
{
"status":"resource created"
}
POST:
- duration: 15
status: 401
mime_type: "application/json"
payload: |
{
"status": "unauthorized"
}
- duration: 15
status: 404
mime_type: "text/plain"
payload: "not found"
```
The scenario will focus on the `service_name` within the `service_namespace`,
substituting the selector with a randomly generated one, which is added as a label in the mock service manifest.
This allows multiple scenarios to be executed in the same namespace, each targeting different services without
causing conflicts.
The newly deployed mock web service will expose a `service_target_port`,
which can be either a named or numeric port based on the service configuration.
This ensures that the Service correctly routes HTTP traffic to the mock web service during the chaos run.
Each step will last for `duration` seconds from the deployment of the mock web service in the cluster.
For each HTTP resource, defined as a top-level YAML property of the plan
(it could be a specific resource, e.g., /list/index.php, or a path-based resource typical in MVC frameworks),
one or more HTTP request methods can be specified. Both standard and custom request methods are supported.
During this time frame, the web service will respond with:
- `status`: The [HTTP status code](https://datatracker.ietf.org/doc/html/rfc7231#section-6) (can be standard or custom).
- `mime_type`: The [MIME type](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types) (can be standard or custom).
- `payload`: The response body to be returned to the client.
At the end of the step `duration`, the web service will proceed to the next step (if available) until
the global `chaos_duration` concludes. At this point, the original service will be restored,
and the custom web service and its resources will be undeployed.
__NOTE__: Some clients (e.g., cURL, jQuery) may optimize queries using lightweight methods (like HEAD or OPTIONS)
to probe API behavior. If these methods are not defined in the test plan, the web service may respond with
a `405` or `404` status code. If you encounter unexpected behavior, consider this use case.
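Building on the note above, a hedged sketch of covering a probe method explicitly in the plan; the HEAD step and its values are illustrative, not part of the shipped scenario:
```yaml
plan:
  - resource: "/list/index.php"
    steps:
      HEAD:                           # cover client probe requests explicitly
        - duration: 30
          status: 200
          mime_type: "application/json"
          payload: ""                 # illustrative; HEAD responses carry no body
```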

View File

@@ -0,0 +1,33 @@
### SYN Flood Scenarios
This scenario generates a substantial amount of TCP traffic directed at one or more Kubernetes services within
the cluster to test the server's resiliency under extreme traffic conditions.
It can also target hosts outside the cluster by specifying a reachable IP address or hostname.
This scenario leverages the distributed nature of Kubernetes clusters to instantiate multiple instances
of the same pod against a single host, significantly increasing the effectiveness of the attack.
The configuration also allows for the specification of multiple node selectors, enabling Kubernetes to schedule
the attacker pods on a user-defined subset of nodes to make the test more realistic.
```yaml
packet-size: 120 # hping3 packet size
window-size: 64 # hping3 TCP window size
duration: 10 # chaos scenario duration
namespace: default # namespace where the target service(s) are deployed
target-service: target-svc # target service name (if set target-service-label must be empty)
target-port: 80 # target service TCP port
target-service-label: "" # target service label, can be used to target multiple services at the same time
# if they have the same label set (if set, target-service must be empty)
number-of-pods: 2 # number of attacker pods instantiated per target
image: quay.io/krkn-chaos/krkn-syn-flood # syn flood attacker container image
attacker-nodes: # sets the node affinity used to schedule the attacker pods. For each node label selector,
# multiple values can be specified, so the kube scheduler will place the attacker pods
# in the best way possible based on the provided labels. Multiple labels can be specified
kubernetes.io/hostname:
- host_1
- host_2
kubernetes.io/os:
- linux
```
The attacker container source code is available [here](https://github.com/krkn-chaos/krkn-syn-flood).
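A hedged sketch of the label-based variant described in the comments above; the label value is illustrative:
```yaml
packet-size: 120
window-size: 64
duration: 10
namespace: default
target-service: ""               # must be empty when targeting by label
target-service-label: "app=web"  # illustrative label; hits every service carrying it
target-port: 80
number-of-pods: 2
image: quay.io/krkn-chaos/krkn-syn-flood
```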

View File

@@ -16,7 +16,7 @@ Configuration Options:
**object_name:** List of the names of pods or nodes you want to skew.
Refer to [time_scenarios_example](https://github.com/redhat-chaos/krkn/blob/main/scenarios/time_scenarios_example.yml) config file.
Refer to [time_scenarios_example](https://github.com/krkn-chaos/krkn/blob/main/scenarios/time_scenarios_example.yml) config file.
```
time_scenarios:

View File

@@ -2,6 +2,9 @@ kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30036
hostPort: 8888
- role: control-plane
- role: control-plane
- role: worker

View File

@@ -4,6 +4,7 @@ import time
import kraken.cerberus.setup as cerberus
from jinja2 import Template
import kraken.invoke.command as runcommand
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
@@ -11,14 +12,14 @@ from krkn_lib.utils.functions import get_yaml_item_value, log_exception
# Reads the scenario config, applies and deletes a network policy to
# block the traffic for the specified duration
def run(scenarios_list, config, wait_duration, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
failed_post_scenarios = ""
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
for app_outage_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = app_outage_config
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, app_outage_config)
if len(app_outage_config) > 1:
try:
@@ -49,25 +50,22 @@ spec:
podSelector:
matchLabels: {{ pod_selector }}
policyTypes: {{ traffic_type }}
"""
"""
t = Template(network_policy_template)
rendered_spec = t.render(pod_selector=pod_selector, traffic_type=traffic_type)
# Write the rendered template to a file
with open("kraken_network_policy.yaml", "w") as f:
f.write(rendered_spec)
yaml_spec = yaml.safe_load(rendered_spec)
# Block the traffic by creating network policy
logging.info("Creating the network policy")
runcommand.invoke(
"kubectl create -f %s -n %s --validate=false" % ("kraken_network_policy.yaml", namespace)
)
kubecli.create_net_policy(yaml_spec, namespace)
# wait for the specified duration
logging.info("Waiting for the specified duration in the config: %s" % (duration))
time.sleep(duration)
# unblock the traffic by deleting the network policy
logging.info("Deleting the network policy")
runcommand.invoke("kubectl delete -f %s -n %s" % ("kraken_network_policy.yaml", namespace))
kubecli.delete_net_policy("kraken-deny", namespace)
logging.info("End of scenario. Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
@@ -75,12 +73,12 @@ spec:
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
except Exception as e :
scenario_telemetry.exitStatus = 1
scenario_telemetry.exit_status = 1
failed_scenarios.append(app_outage_config)
log_exception(app_outage_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
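For reference, kubecli.create_net_policy receives the rendered form of the template above; with a pod_selector of {app: my-app} and traffic_type of [Ingress] (illustrative values), the resulting policy would look roughly like the following, the kraken-deny name being inferred from the delete call:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kraken-deny
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
    - Ingress
```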

View File

@@ -16,12 +16,12 @@ def run(scenarios_list: List[str], kubeconfig_path: str, telemetry: KrknTelemetr
for scenario in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry,scenario)
engine_args = build_args(scenario)
status_code = run_workflow(engine_args, kubeconfig_path)
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.exitStatus = status_code
scenario_telemetry.end_timestamp = time.time()
scenario_telemetry.exit_status = status_code
scenario_telemetries.append(scenario_telemetry)
if status_code != 0:
failed_post_scenarios.append(scenario)
@@ -36,9 +36,10 @@ def run_workflow(engine_args: arcaflow.EngineArgs, kubeconfig_path: str) -> int:
def build_args(input_file: str) -> arcaflow.EngineArgs:
"""sets the kubeconfig parsed by setArcaKubeConfig as an input to the arcaflow workflow"""
context = Path(input_file).parent
workflow = "{}/workflow.yaml".format(context)
config = "{}/config.yaml".format(context)
current_path = Path().resolve()
context = f"{current_path}/{Path(input_file).parent}"
workflow = f"{context}/workflow.yaml"
config = f"{context}/config.yaml"
if not os.path.exists(context):
raise Exception(
"context folder for arcaflow workflow not found: {}".format(
@@ -61,7 +62,8 @@ def build_args(input_file: str) -> arcaflow.EngineArgs:
engine_args = arcaflow.EngineArgs()
engine_args.context = context
engine_args.config = config
engine_args.input = input_file
engine_args.workflow = workflow
engine_args.input = f"{current_path}/{input_file}"
return engine_args
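To illustrate the path change above: context, workflow, and config are now resolved against the process working directory rather than left relative to wherever the scenario file sits. A minimal sketch with a hypothetical input path:
```python
from pathlib import Path

input_file = "scenarios/arcaflow/cpu-hog/input.yaml"  # hypothetical example path
current_path = Path().resolve()                       # e.g. /home/user/krkn

old_context = str(Path(input_file).parent)            # scenarios/arcaflow/cpu-hog (relative)
context = f"{current_path}/{Path(input_file).parent}" # /home/user/krkn/scenarios/arcaflow/cpu-hog
workflow = f"{context}/workflow.yaml"
config = f"{context}/config.yaml"
```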

View File

@@ -4,13 +4,10 @@ import pandas as pd
import kraken.chaos_recommender.kraken_tests as kraken_tests
import time
threshold = .7 # Adjust the threshold as needed
heatmap_cpu_threshold = .5
heatmap_mem_threshold = .5
KRAKEN_TESTS_PATH = "./kraken_chaos_tests.txt"
#Placeholder, this should be done with topology
# Placeholder, this should be done with topology
def return_critical_services():
return ["web", "cart"]
@@ -19,15 +16,18 @@ def load_telemetry_data(file_path):
data = pd.read_csv(file_path, delimiter=r"\s+")
return data
def calculate_zscores(data):
zscores = pd.DataFrame()
zscores["Namespace"] = data["namespace"]
zscores["Service"] = data["service"]
zscores["CPU"] = (data["CPU"] - data["CPU"].mean()) / data["CPU"].std()
zscores["Memory"] = (data["MEM"] - data["MEM"].mean()) / data["MEM"].std()
zscores["Network"] = (data["NETWORK"] - data["NETWORK"].mean()) / data["NETWORK"].std()
return zscores
def identify_outliers(data):
def identify_outliers(data, threshold):
outliers_cpu = data[data["CPU"] > threshold]["Service"].tolist()
outliers_memory = data[data["Memory"] > threshold]["Service"].tolist()
outliers_network = data[data["Network"] > threshold]["Service"].tolist()
@@ -47,44 +47,85 @@ def get_services_above_heatmap_threshold(dataframe, cpu_threshold, mem_threshold
return cpu_services, mem_services
def analysis(file_path, chaos_tests_config):
def analysis(file_path, namespaces, chaos_tests_config, threshold,
heatmap_cpu_threshold, heatmap_mem_threshold):
# Load the telemetry data from file
logging.info("Fetching the Telemetry data...")
data = load_telemetry_data(file_path)
# Calculate Z-scores for CPU, Memory, and Network columns
zscores = calculate_zscores(data)
# Dict for saving analysis data -- key is the namespace
analysis_data = {}
# Identify outliers
outliers_cpu, outliers_memory, outliers_network = identify_outliers(zscores)
cpu_services, mem_services = get_services_above_heatmap_threshold(data, heatmap_cpu_threshold, heatmap_mem_threshold)
# Identify outliers for each namespace
for namespace in namespaces:
# Display the identified outliers
logging.info("======================== Profiling ==================================")
logging.info(f"CPU outliers: {outliers_cpu}")
logging.info(f"Memory outliers: {outliers_memory}")
logging.info(f"Network outliers: {outliers_network}")
logging.info("===================== HeatMap Analysis ==============================")
logging.info(f"Identifying outliers for namespace {namespace}...")
namespace_zscores = zscores.loc[zscores["Namespace"] == namespace]
namespace_data = data.loc[data["namespace"] == namespace]
outliers_cpu, outliers_memory, outliers_network = identify_outliers(
namespace_zscores, threshold)
cpu_services, mem_services = get_services_above_heatmap_threshold(
namespace_data, heatmap_cpu_threshold, heatmap_mem_threshold)
analysis_data[namespace] = analysis_json(outliers_cpu, outliers_memory,
outliers_network,
cpu_services, mem_services,
chaos_tests_config)
if cpu_services:
logging.info(f"These services use significant CPU compared to "
f"their assigned limits: {cpu_services}")
else:
logging.info("There are no services that are using significant "
"CPU compared to their assigned limits "
"(infinite in case no limits are set).")
if mem_services:
logging.info(f"These services use significant MEMORY compared to "
f"their assigned limits: {mem_services}")
else:
logging.info("There are no services that are using significant "
"MEMORY compared to their assigned limits "
"(infinite in case no limits are set).")
time.sleep(2)
logging.info("Please check data in utilisation.txt for further analysis")
return analysis_data
def analysis_json(outliers_cpu, outliers_memory, outliers_network,
cpu_services, mem_services, chaos_tests_config):
profiling = {
"cpu_outliers": outliers_cpu,
"memory_outliers": outliers_memory,
"network_outliers": outliers_network
}
heatmap = {
"services_with_cpu_heatmap_above_threshold": cpu_services,
"services_with_mem_heatmap_above_threshold": mem_services
}
recommendations = {}
if cpu_services:
logging.info("Services with CPU_HEATMAP above threshold:", cpu_services)
else:
logging.info("There are no services that are using siginificant CPU compared to their assigned limits (infinite in case no limits are set).")
cpu_recommend = {"services": cpu_services,
"tests": chaos_tests_config['CPU']}
recommendations["cpu_services_recommendations"] = cpu_recommend
if mem_services:
logging.info("Services with MEM_HEATMAP above threshold:", mem_services)
else:
logging.info("There are no services that are using siginificant MEMORY compared to their assigned limits (infinite in case no limits are set).")
time.sleep(2)
logging.info("======================= Recommendations =============================")
if cpu_services:
logging.info(f"Recommended tests for {str(cpu_services)} :\n {chaos_tests_config['CPU']}")
logging.info("\n")
if mem_services:
logging.info(f"Recommended tests for {str(mem_services)} :\n {chaos_tests_config['MEM']}")
logging.info("\n")
mem_recommend = {"services": mem_services,
"tests": chaos_tests_config['MEM']}
recommendations["mem_services_recommendations"] = mem_recommend
if outliers_network:
logging.info(f"Recommended tests for str(outliers_network) :\n {chaos_tests_config['NETWORK']}")
logging.info("\n")
outliers_network_recommend = {"outliers_networks": outliers_network,
"tests": chaos_tests_config['NETWORK']}
recommendations["outliers_network_recommendations"] = (
outliers_network_recommend)
logging.info("\n")
logging.info("Please check data in utilisation.txt for further analysis")
return [profiling, heatmap, recommendations]
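The profiling step above is a plain z-score cutoff over the per-namespace telemetry. A minimal, self-contained sketch of the same logic, using the 0.7 threshold from the defaults above and invented sample values:
```python
import pandas as pd

data = pd.DataFrame({"Service": ["web", "cart", "db"],
                     "CPU": [900.0, 120.0, 100.0]})  # invented sample values

# z-score: how many standard deviations each service sits from the mean
zscores = (data["CPU"] - data["CPU"].mean()) / data["CPU"].std()
outliers = data.loc[zscores > 0.7, "Service"].tolist()
print(outliers)  # ['web']
```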

View File

@@ -1,6 +1,5 @@
import logging
import pandas
from prometheus_api_client import PrometheusConnect
import pandas as pd
import urllib3
@@ -8,6 +7,7 @@ import urllib3
saved_metrics_path = "./utilisation.txt"
def convert_data_to_dataframe(data, label):
df = pd.DataFrame()
df['service'] = [item['metric']['pod'] for item in data]
@@ -17,29 +17,60 @@ def convert_data_to_dataframe(data, label):
def convert_data(data, service):
result = {}
for entry in data:
pod_name = entry['metric']['pod']
value = entry['value'][1]
result[pod_name] = value
return result.get(service, '100000000000') # pods whose limits are not defined can take as many resources as they want, so a very high value is assigned
def save_utilization_to_file(cpu_data, cpu_limits_result, mem_data, mem_limits_result, network_data, filename):
df_cpu = convert_data_to_dataframe(cpu_data, "CPU")
merged_df = pd.DataFrame(columns=['service','CPU','CPU_LIMITS','MEM','MEM_LIMITS','NETWORK'])
services = df_cpu.service.unique()
logging.info(services)
for s in services:
new_row_df = pd.DataFrame( {"service": s, "CPU" : convert_data(cpu_data, s),
"CPU_LIMITS" : convert_data(cpu_limits_result, s),
"MEM" : convert_data(mem_data, s), "MEM_LIMITS" : convert_data(mem_limits_result, s),
"NETWORK" : convert_data(network_data, s)}, index=[0])
merged_df = pd.concat([merged_df, new_row_df], ignore_index=True)
return result.get(service) # pods whose limits are not defined can take as many resources as they want (undefined limits are handled in convert_data_limits)
def convert_data_limits(data, node_data, service, prometheus):
result = {}
for entry in data:
pod_name = entry['metric']['pod']
value = entry['value'][1]
result[pod_name] = value
return result.get(service, get_node_capacity(node_data, service, prometheus)) # pods whose limits are not defined can take as many resources as they want, so fall back to the node's capacity
def get_node_capacity(node_data, pod_name, prometheus ):
# Get the node name on which the pod is running
query = f'kube_pod_info{{pod="{pod_name}"}}'
result = prometheus.custom_query(query)
if not result:
return None
node_name = result[0]['metric']['node']
for item in node_data:
if item['metric']['node'] == node_name:
return item['value'][1]
return '1000000000'
def save_utilization_to_file(utilization, filename, prometheus):
merged_df = pd.DataFrame(columns=['namespace', 'service', 'CPU', 'CPU_LIMITS', 'MEM', 'MEM_LIMITS', 'NETWORK'])
for namespace in utilization:
# Loading utilization_data[] for namespace
# indexes -- 0 CPU, 1 CPU limits, 2 mem, 3 mem limits, 4 network
utilization_data = utilization[namespace]
df_cpu = convert_data_to_dataframe(utilization_data[0], "CPU")
services = df_cpu.service.unique()
logging.info(f"Services for namespace {namespace}: {services}")
for s in services:
new_row_df = pd.DataFrame({
"namespace": namespace, "service": s,
"CPU": convert_data(utilization_data[0], s),
"CPU_LIMITS": convert_data_limits(utilization_data[1],utilization_data[5], s, prometheus),
"MEM": convert_data(utilization_data[2], s),
"MEM_LIMITS": convert_data_limits(utilization_data[3], utilization_data[6], s, prometheus),
"NETWORK": convert_data(utilization_data[4], s)}, index=[0])
merged_df = pd.concat([merged_df, new_row_df], ignore_index=True)
# Convert columns to string
merged_df['CPU'] = merged_df['CPU'].astype(str)
@@ -49,48 +80,65 @@ def save_utilization_to_file(cpu_data, cpu_limits_result, mem_data, mem_limits_r
merged_df['NETWORK'] = merged_df['NETWORK'].astype(str)
# Extract integer part before the decimal point
merged_df['CPU'] = merged_df['CPU'].str.split('.').str[0]
merged_df['MEM'] = merged_df['MEM'].str.split('.').str[0]
merged_df['CPU_LIMITS'] = merged_df['CPU_LIMITS'].str.split('.').str[0]
merged_df['MEM_LIMITS'] = merged_df['MEM_LIMITS'].str.split('.').str[0]
merged_df['NETWORK'] = merged_df['NETWORK'].str.split('.').str[0]
#merged_df['CPU'] = merged_df['CPU'].str.split('.').str[0]
#merged_df['MEM'] = merged_df['MEM'].str.split('.').str[0]
#merged_df['CPU_LIMITS'] = merged_df['CPU_LIMITS'].str.split('.').str[0]
#merged_df['MEM_LIMITS'] = merged_df['MEM_LIMITS'].str.split('.').str[0]
#merged_df['NETWORK'] = merged_df['NETWORK'].str.split('.').str[0]
merged_df.to_csv(filename, sep='\t', index=False)
def fetch_utilization_from_prometheus(prometheus_endpoint, auth_token, namespace, scrape_duration):
def fetch_utilization_from_prometheus(prometheus_endpoint, auth_token,
namespaces, scrape_duration):
urllib3.disable_warnings()
prometheus = PrometheusConnect(url=prometheus_endpoint, headers={'Authorization':'Bearer {}'.format(auth_token)}, disable_ssl=True)
prometheus = PrometheusConnect(url=prometheus_endpoint, headers={
'Authorization':'Bearer {}'.format(auth_token)}, disable_ssl=True)
# Fetch CPU utilization
cpu_query = 'sum (rate (container_cpu_usage_seconds_total{image!="", namespace="%s"}[%s])) by (pod) *1000' % (namespace,scrape_duration)
logging.info(cpu_query)
cpu_result = prometheus.custom_query(cpu_query)
cpu_data = cpu_result
cpu_limits_query = '(sum by (pod) (kube_pod_container_resource_limits{resource="cpu", namespace="%s"}))*1000' %(namespace)
logging.info(cpu_limits_query)
cpu_limits_result = prometheus.custom_query(cpu_limits_query)
mem_query = 'sum by (pod) (avg_over_time(container_memory_usage_bytes{image!="", namespace="%s"}[%s]))' % (namespace, scrape_duration)
logging.info(mem_query)
mem_result = prometheus.custom_query(mem_query)
mem_data = mem_result
mem_limits_query = 'sum by (pod) (kube_pod_container_resource_limits{resource="memory", namespace="%s"}) ' %(namespace)
logging.info(mem_limits_query)
mem_limits_result = prometheus.custom_query(mem_limits_query)
network_query = 'sum by (pod) ((avg_over_time(container_network_transmit_bytes_total{namespace="%s"}[%s])) + \
(avg_over_time(container_network_receive_bytes_total{namespace="%s"}[%s])))' % (namespace, scrape_duration, namespace, scrape_duration)
network_result = prometheus.custom_query(network_query)
logging.info(network_query)
network_data = network_result
save_utilization_to_file(cpu_data, cpu_limits_result, mem_data, mem_limits_result, network_data, saved_metrics_path)
return saved_metrics_path
# Dicts for saving utilisation and queries -- key is namespace
utilization = {}
queries = {}
logging.info("Fetching utilization...")
for namespace in namespaces:
# Fetch CPU utilization
cpu_query = 'sum (rate (container_cpu_usage_seconds_total{image!="", namespace="%s"}[%s])) by (pod) *1000' % (namespace,scrape_duration)
cpu_result = prometheus.custom_query(cpu_query)
cpu_limits_query = '(sum by (pod) (kube_pod_container_resource_limits{resource="cpu", namespace="%s"}))*1000' %(namespace)
cpu_limits_result = prometheus.custom_query(cpu_limits_query)
node_cpu_limits_query = 'kube_node_status_capacity{resource="cpu", unit="core"}*1000'
node_cpu_limits_result = prometheus.custom_query(node_cpu_limits_query)
mem_query = 'sum by (pod) (avg_over_time(container_memory_usage_bytes{image!="", namespace="%s"}[%s]))' % (namespace, scrape_duration)
mem_result = prometheus.custom_query(mem_query)
mem_limits_query = 'sum by (pod) (kube_pod_container_resource_limits{resource="memory", namespace="%s"}) ' %(namespace)
mem_limits_result = prometheus.custom_query(mem_limits_query)
node_mem_limits_query = 'kube_node_status_capacity{resource="memory", unit="byte"}'
node_mem_limits_result = prometheus.custom_query(node_mem_limits_query)
network_query = 'sum by (pod) ((avg_over_time(container_network_transmit_bytes_total{namespace="%s"}[%s])) + \
(avg_over_time(container_network_receive_bytes_total{namespace="%s"}[%s])))' % (namespace, scrape_duration, namespace, scrape_duration)
network_result = prometheus.custom_query(network_query)
utilization[namespace] = [cpu_result, cpu_limits_result, mem_result, mem_limits_result, network_result, node_cpu_limits_result, node_mem_limits_result ]
queries[namespace] = json_queries(cpu_query, cpu_limits_query, mem_query, mem_limits_query, network_query)
save_utilization_to_file(utilization, saved_metrics_path, prometheus)
return saved_metrics_path, queries
def json_queries(cpu_query, cpu_limits_query, mem_query, mem_limits_query, network_query):
queries = {
"cpu_query": cpu_query,
"cpu_limit_query": cpu_limits_query,
"memory_query": mem_query,
"memory_limit_query": mem_limits_query,
"network_query": network_query
}
return queries
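As a concrete example of the queries assembled in the loop above, with an illustrative namespace of robot-shop and a scrape duration of 1h, the CPU utilization and CPU limits queries expand to:
```
sum (rate (container_cpu_usage_seconds_total{image!="", namespace="robot-shop"}[1h])) by (pod) *1000
(sum by (pod) (kube_pod_container_resource_limits{resource="cpu", namespace="robot-shop"}))*1000
```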

View File

@@ -23,7 +23,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
for net_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = net_config
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, net_config)
try:
with open(net_config, "r") as file:
@@ -114,11 +114,11 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
logging.info("Deleting jobs")
delete_job(joblst[:], kubecli)
except (RuntimeError, Exception):
scenario_telemetry.exitStatus = 1
scenario_telemetry.exit_status = 1
failed_scenarios.append(net_config)
log_exception(net_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.exit_status = 0
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -1,6 +1,6 @@
import time
import yaml
import os
import kraken.invoke.command as runcommand
import logging
import kraken.node_actions.common_node_functions as nodeaction
@@ -17,9 +17,9 @@ class Azure:
# Acquire a credential object using CLI-based authentication.
credentials = DefaultAzureCredential()
logging.info("credential " + str(credentials))
az_account = runcommand.invoke("az account list -o yaml")
az_account_yaml = yaml.safe_load(az_account, Loader=yaml.FullLoader)
subscription_id = az_account_yaml[0]["id"]
# az_account = runcommand.invoke("az account list -o yaml")
# az_account_yaml = yaml.safe_load(az_account, Loader=yaml.FullLoader)
subscription_id = os.getenv("AZURE_SUBSCRIPTION_ID")
self.compute_client = ComputeManagementClient(credentials, subscription_id)
# Get the instance ID of the node

View File

@@ -1,6 +1,8 @@
import os
import sys
import time
import logging
import json
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from googleapiclient import discovery
@@ -10,11 +12,19 @@ from krkn_lib.k8s import KrknKubernetes
class GCP:
def __init__(self):
try:
gapp_creds = os.getenv("GOOGLE_APPLICATION_CREDENTIALS")
with open(gapp_creds, "r") as f:
f_str = f.read()
self.project = json.loads(f_str)['project_id']
#self.project = runcommand.invoke("gcloud config get-value project").split("/n")[0].strip()
logging.info("project " + str(self.project) + "!")
credentials = GoogleCredentials.get_application_default()
self.client = discovery.build("compute", "v1", credentials=credentials, cache_discovery=False)
self.project = runcommand.invoke("gcloud config get-value project").split("/n")[0].strip()
logging.info("project " + str(self.project) + "!")
credentials = GoogleCredentials.get_application_default()
self.client = discovery.build("compute", "v1", credentials=credentials, cache_discovery=False)
except Exception as e:
logging.error("Error on setting up GCP connection: " + str(e))
sys.exit(1)
# Get the instance ID of the node
def get_instance_id(self, node):

View File

@@ -15,7 +15,7 @@ import kraken.cerberus.setup as cerberus
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
node_general = False
@@ -61,7 +61,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
for node_scenario_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = node_scenario_config
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, node_scenario_config)
with open(node_scenario_config, "r") as f:
node_scenario_config = yaml.full_load(f)
@@ -78,13 +78,13 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
cerberus.get_status(config, start_time, end_time)
logging.info("")
except (RuntimeError, Exception) as e:
scenario_telemetry.exitStatus = 1
scenario_telemetry.exit_status = 1
failed_scenarios.append(node_scenario_config)
log_exception(node_scenario_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.exit_status = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -2,11 +2,14 @@ import dataclasses
import json
import logging
from os.path import abspath
from typing import List, Dict
from typing import List, Dict, Any
import time
from arcaflow_plugin_sdk import schema, serialization, jsonschema
from arcaflow_plugin_kill_pod import kill_pods, wait_for_pods
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.k8s.pods_monitor_pool import PodsMonitorPool
import kraken.plugins.node_scenarios.vmware_plugin as vmware_plugin
import kraken.plugins.node_scenarios.ibmcloud_plugin as ibmcloud_plugin
from kraken.plugins.run_python_plugin import run_python_file
@@ -47,11 +50,14 @@ class Plugins:
)
self.steps_by_id[step.schema.id] = step
def run(self, file: str, kubeconfig_path: str, kraken_config: str):
def unserialize_scenario(self, file: str) -> Any:
return serialization.load_from_file(abspath(file))
def run(self, file: str, kubeconfig_path: str, kraken_config: str, run_uuid:str):
"""
Run executes a series of steps
"""
data = serialization.load_from_file(abspath(file))
data = self.unserialize_scenario(abspath(file))
if not isinstance(data, list):
raise Exception(
"Invalid scenario configuration file: {} expected list, found {}".format(file, type(data).__name__)
@@ -96,7 +102,8 @@ class Plugins:
unserialized_input.kubeconfig_path = kubeconfig_path
if "kraken_config" in step.schema.input.properties:
unserialized_input.kraken_config = kraken_config
output_id, output_data = step.schema(unserialized_input)
output_id, output_data = step.schema(params=unserialized_input, run_id=run_uuid)
logging.info(step.render_output(output_id, output_data) + "\n")
if output_id in step.error_output_ids:
raise Exception(
@@ -213,6 +220,12 @@ PLUGINS = Plugins(
"error"
]
),
PluginStep(
network_chaos,
[
"error"
]
),
PluginStep(
pod_outage,
[
@@ -235,25 +248,73 @@ PLUGINS = Plugins(
)
def run(scenarios: List[str], kubeconfig_path: str, kraken_config: str, failed_post_scenarios: List[str], wait_duration: int, telemetry: KrknTelemetryKubernetes) -> (List[str], list[ScenarioTelemetry]):
def run(scenarios: List[str],
kubeconfig_path: str,
kraken_config: str,
failed_post_scenarios: List[str],
wait_duration: int,
telemetry: KrknTelemetryKubernetes,
kubecli: KrknKubernetes,
run_uuid: str
) -> (List[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
for scenario in scenarios:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario)
logging.info('scenario ' + str(scenario))
pool = PodsMonitorPool(kubecli)
kill_scenarios = [kill_scenario for kill_scenario in PLUGINS.unserialize_scenario(scenario) if kill_scenario["id"] == "kill-pods"]
try:
PLUGINS.run(scenario, kubeconfig_path, kraken_config)
start_monitoring(pool, kill_scenarios)
PLUGINS.run(scenario, kubeconfig_path, kraken_config, run_uuid)
result = pool.join()
scenario_telemetry.affected_pods = result
if result.error:
raise Exception(f"unrecovered pods: {result.error}")
except Exception as e:
scenario_telemetry.exitStatus = 1
logging.error(f"scenario exception: {str(e)}")
scenario_telemetry.exit_status = 1
pool.cancel()
failed_post_scenarios.append(scenario)
log_exception(scenario)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.exit_status = 0
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
scenario_telemetries.append(scenario_telemetry)
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.end_timestamp = time.time()
return failed_post_scenarios, scenario_telemetries
def start_monitoring(pool: PodsMonitorPool, scenarios: list[Any]):
for kill_scenario in scenarios:
recovery_time = kill_scenario["config"]["krkn_pod_recovery_time"]
if ("namespace_pattern" in kill_scenario["config"] and
"label_selector" in kill_scenario["config"]):
namespace_pattern = kill_scenario["config"]["namespace_pattern"]
label_selector = kill_scenario["config"]["label_selector"]
pool.select_and_monitor_by_namespace_pattern_and_label(
namespace_pattern=namespace_pattern,
label_selector=label_selector,
max_timeout=recovery_time)
logging.info(
f"waiting {recovery_time} seconds for pod recovery, "
f"pod label selector: {label_selector} namespace pattern: {namespace_pattern}")
elif ("namespace_pattern" in kill_scenario["config"] and
"name_pattern" in kill_scenario["config"]):
namespace_pattern = kill_scenario["config"]["namespace_pattern"]
name_pattern = kill_scenario["config"]["name_pattern"]
pool.select_and_monitor_by_name_pattern_and_namespace_pattern(pod_name_pattern=name_pattern,
namespace_pattern=namespace_pattern,
max_timeout=recovery_time)
logging.info(f"waiting {recovery_time} seconds for pod recovery, "
f"pod name pattern: {name_pattern} namespace pattern: {namespace_pattern}")
else:
raise Exception(f"impossible to determine monitor parameters, check {kill_scenario} configuration")

View File

@@ -62,7 +62,7 @@ class NetworkScenarioConfig:
typing.Optional[int],
validation.min(1)
] = field(
default=300,
default=30,
metadata={
"name": "Wait Duration",
"description":
@@ -864,7 +864,7 @@ def network_chaos(cfg: NetworkScenarioConfig) -> typing.Tuple[
)
logging.info("Waiting for parallel job to finish")
start_time = int(time.time())
wait_for_job(batch_cli, job_list[:], cfg.wait_duration)
wait_for_job(batch_cli, job_list[:], cfg.test_duration+100)
end_time = int(time.time())
if publish:
cerberus.publish_kraken_status(
@@ -893,7 +893,7 @@ def network_chaos(cfg: NetworkScenarioConfig) -> typing.Tuple[
)
logging.info("Waiting for serial job to finish")
start_time = int(time.time())
wait_for_job(batch_cli, job_list[:], cfg.wait_duration)
wait_for_job(batch_cli, job_list[:], cfg.test_duration+100)
logging.info("Deleting jobs")
delete_jobs(cli, batch_cli, job_list[:])
job_list = []

View File

@@ -119,11 +119,11 @@ class vSphere:
vm = self.get_vm(instance_id)
try:
self.client.vcenter.vm.Power.stop(vm)
logging.info("Stopped VM -- '{}-({})'", instance_id, vm)
logging.info(f"Stopped VM -- '{instance_id}-({vm})'")
return True
except AlreadyInDesiredState:
logging.info(
"VM '{}'-'({})' is already Powered Off", instance_id, vm
f"VM '{instance_id}'-'({vm})' is already Powered Off"
)
return False
@@ -136,11 +136,11 @@ class vSphere:
vm = self.get_vm(instance_id)
try:
self.client.vcenter.vm.Power.start(vm)
logging.info("Started VM -- '{}-({})'", instance_id, vm)
logging.info(f"Started VM -- '{instance_id}-({vm})'")
return True
except AlreadyInDesiredState:
logging.info(
"VM '{}'-'({})' is already Powered On", instance_id, vm
f"VM '{instance_id}'-'({vm})' is already Powered On"
)
return False
@@ -318,12 +318,12 @@ class vSphere:
try:
vm = self.get_vm(instance_id)
state = self.client.vcenter.vm.Power.get(vm).state
logging.info("Check instance %s status", instance_id)
logging.info(f"Check instance {instance_id} status")
return state
except Exception as e:
logging.error(
"Failed to get node instance status %s. Encountered following "
"exception: %s.", instance_id, e
f"Failed to get node instance status {instance_id}. Encountered following "
f"exception: {str(e)}. "
)
return None
@@ -338,16 +338,14 @@ class vSphere:
while vm is not None:
vm = self.get_vm(instance_id)
logging.info(
"VM %s is still being deleted, "
"sleeping for 5 seconds",
instance_id
f"VM {instance_id} is still being deleted, "
f"sleeping for 5 seconds"
)
time.sleep(5)
time_counter += 5
if time_counter >= timeout:
logging.info(
"VM %s is still not deleted in allotted time",
instance_id
f"VM {instance_id} is still not deleted in allotted time"
)
return False
return True
@@ -371,8 +369,7 @@ class vSphere:
time_counter += 5
if time_counter >= timeout:
logging.info(
"VM %s is still not ready in allotted time",
instance_id
f"VM {instance_id} is still not ready in allotted time"
)
return False
return True
@@ -388,16 +385,14 @@ class vSphere:
while status != Power.State.POWERED_OFF:
status = self.get_vm_status(instance_id)
logging.info(
"VM %s is still not running, "
"sleeping for 5 seconds",
instance_id
f"VM {instance_id} is still not running, "
f"sleeping for 5 seconds"
)
time.sleep(5)
time_counter += 5
if time_counter >= timeout:
logging.info(
"VM %s is still not ready in allotted time",
instance_id
f"VM {instance_id} is still not ready in allotted time"
)
return False
return True
@@ -561,7 +556,7 @@ def node_start(
try:
for _ in range(cfg.runs):
logging.info("Starting node_start_scenario injection")
logging.info("Starting the node %s ", name)
logging.info(f"Starting the node {name} ")
vm_started = vsphere.start_instances(name)
if vm_started:
vsphere.wait_until_running(name, cfg.timeout)
@@ -571,7 +566,7 @@ def node_start(
)
nodes_started[int(time.time_ns())] = Node(name=name)
logging.info(
"Node with instance ID: %s is in running state", name
f"Node with instance ID: {name} is in running state"
)
logging.info(
"node_start_scenario has been successfully injected!"
@@ -579,8 +574,8 @@ def node_start(
except Exception as e:
logging.error("Failed to start node instance. Test Failed")
logging.error(
"node_start_scenario injection failed! "
"Error was: %s", str(e)
f"node_start_scenario injection failed! "
f"Error was: {str(e)}"
)
return "error", NodeScenarioErrorOutput(
format_exc(), kube_helper.Actions.START
@@ -620,7 +615,7 @@ def node_stop(
try:
for _ in range(cfg.runs):
logging.info("Starting node_stop_scenario injection")
logging.info("Stopping the node %s ", name)
logging.info(f"Stopping the node {name} ")
vm_stopped = vsphere.stop_instances(name)
if vm_stopped:
vsphere.wait_until_stopped(name, cfg.timeout)
@@ -630,7 +625,7 @@ def node_stop(
)
nodes_stopped[int(time.time_ns())] = Node(name=name)
logging.info(
"Node with instance ID: %s is in stopped state", name
f"Node with instance ID: {name} is in stopped state"
)
logging.info(
"node_stop_scenario has been successfully injected!"
@@ -638,8 +633,8 @@ def node_stop(
except Exception as e:
logging.error("Failed to stop node instance. Test Failed")
logging.error(
"node_stop_scenario injection failed! "
"Error was: %s", str(e)
f"node_stop_scenario injection failed! "
f"Error was: {str(e)}"
)
return "error", NodeScenarioErrorOutput(
format_exc(), kube_helper.Actions.STOP
@@ -679,7 +674,7 @@ def node_reboot(
try:
for _ in range(cfg.runs):
logging.info("Starting node_reboot_scenario injection")
logging.info("Rebooting the node %s ", name)
logging.info(f"Rebooting the node {name} ")
vsphere.reboot_instances(name)
if not cfg.skip_openshift_checks:
kube_helper.wait_for_unknown_status(
@@ -690,8 +685,8 @@ def node_reboot(
)
nodes_rebooted[int(time.time_ns())] = Node(name=name)
logging.info(
"Node with instance ID: %s has rebooted "
"successfully", name
f"Node with instance ID: {name} has rebooted "
"successfully"
)
logging.info(
"node_reboot_scenario has been successfully injected!"
@@ -699,8 +694,8 @@ def node_reboot(
except Exception as e:
logging.error("Failed to reboot node instance. Test Failed")
logging.error(
"node_reboot_scenario injection failed! "
"Error was: %s", str(e)
f"node_reboot_scenario injection failed! "
f"Error was: {str(e)}"
)
return "error", NodeScenarioErrorOutput(
format_exc(), kube_helper.Actions.REBOOT
@@ -739,13 +734,13 @@ def node_terminate(
vsphere.stop_instances(name)
vsphere.wait_until_stopped(name, cfg.timeout)
logging.info(
"Releasing the node with instance ID: %s ", name
f"Releasing the node with instance ID: {name} "
)
vsphere.release_instances(name)
vsphere.wait_until_released(name, cfg.timeout)
nodes_terminated[int(time.time_ns())] = Node(name=name)
logging.info(
"Node with instance ID: %s has been released", name
f"Node with instance ID: {name} has been released"
)
logging.info(
"node_terminate_scenario has been "
@@ -754,8 +749,8 @@ def node_terminate(
except Exception as e:
logging.error("Failed to terminate node instance. Test Failed")
logging.error(
"node_terminate_scenario injection failed! "
"Error was: %s", str(e)
f"node_terminate_scenario injection failed! "
f"Error was: {str(e)}"
)
return "error", NodeScenarioErrorOutput(
format_exc(), kube_helper.Actions.TERMINATE

View File

@@ -1,9 +1,13 @@
import logging
import time
from typing import Any
import yaml
import sys
import random
import arcaflow_plugin_kill_pod
from krkn_lib.k8s.pods_monitor_pool import PodsMonitorPool
import kraken.cerberus.setup as cerberus
import kraken.post_actions.actions as post_actions
from krkn_lib.k8s import KrknKubernetes
@@ -79,11 +83,12 @@ def container_run(kubeconfig_path,
failed_scenarios = []
scenario_telemetries: list[ScenarioTelemetry] = []
pool = PodsMonitorPool(kubecli)
for container_scenario_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = container_scenario_config[0]
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, container_scenario_config[0])
if len(container_scenario_config) > 1:
pre_action_output = post_actions.run(kubeconfig_path, container_scenario_config[1])
@@ -91,23 +96,17 @@ def container_run(kubeconfig_path,
pre_action_output = ""
with open(container_scenario_config[0], "r") as f:
cont_scenario_config = yaml.full_load(f)
start_monitoring(kill_scenarios=cont_scenario_config["scenarios"], pool=pool)
for cont_scenario in cont_scenario_config["scenarios"]:
# capture start time
start_time = int(time.time())
try:
killed_containers = container_killing_in_pod(cont_scenario, kubecli)
if len(container_scenario_config) > 1:
failed_post_scenarios = post_actions.check_recovery(
kubeconfig_path,
container_scenario_config,
failed_post_scenarios,
pre_action_output
)
else:
failed_post_scenarios = check_failed_containers(
killed_containers, cont_scenario.get("retry_wait", 120), kubecli
)
logging.info(f"killed containers: {str(killed_containers)}")
result = pool.join()
if result.error:
raise Exception(f"pods failed to recovery: {result.error}")
scenario_telemetry.affected_pods = result
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
@@ -117,18 +116,29 @@ def container_run(kubeconfig_path,
# publish cerberus status
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
except (RuntimeError, Exception):
pool.cancel()
failed_scenarios.append(container_scenario_config[0])
log_exception(container_scenario_config[0])
scenario_telemetry.exitStatus = 1
scenario_telemetry.exit_status = 1
# removed_exit
# sys.exit(1)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
def start_monitoring(kill_scenarios: list[Any], pool: PodsMonitorPool):
for kill_scenario in kill_scenarios:
namespace_pattern = f"^{kill_scenario['namespace']}$"
label_selector = kill_scenario["label_selector"]
recovery_time = kill_scenario["expected_recovery_time"]
pool.select_and_monitor_by_namespace_pattern_and_label(
namespace_pattern=namespace_pattern,
label_selector=label_selector,
max_timeout=recovery_time)
def container_killing_in_pod(cont_scenario, kubecli: KrknKubernetes):
scenario_name = get_yaml_item_value(cont_scenario, "name", "")

View File

@@ -1,10 +1,13 @@
import datetime
import os.path
from typing import Optional
import urllib3
import logging
import sys
import yaml
from krkn_lib.models.krkn import ChaosRunAlertSummary, ChaosRunAlert
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
def alerts(prom_cli: KrknPrometheus, start_time, end_time, alert_profile):
@@ -27,4 +30,59 @@ def alerts(prom_cli: KrknPrometheus, start_time, end_time, alert_profile):
prom_cli.process_alert(alert,
datetime.datetime.fromtimestamp(start_time),
datetime.datetime.fromtimestamp(end_time))
datetime.datetime.fromtimestamp(end_time))
def critical_alerts(prom_cli: KrknPrometheus,
summary: ChaosRunAlertSummary,
run_id,
scenario,
start_time,
end_time):
summary.scenario = scenario
summary.run_id = run_id
query = r"""ALERTS{severity="critical"}"""
logging.info("Checking for critical alerts firing post chaos")
during_critical_alerts = prom_cli.process_prom_query_in_range(
query,
start_time=datetime.datetime.fromtimestamp(start_time),
end_time=datetime.datetime.fromtimestamp(end_time)
)
for alert in during_critical_alerts:
if "metric" in alert:
alertname = alert["metric"]["alertname"] if "alertname" in alert["metric"] else "none"
alertstate = alert["metric"]["alertstate"] if "alertstate" in alert["metric"] else "none"
namespace = alert["metric"]["namespace"] if "namespace" in alert["metric"] else "none"
severity = alert["metric"]["severity"] if "severity" in alert["metric"] else "none"
alert = ChaosRunAlert(alertname, alertstate, namespace, severity)
summary.chaos_alerts.append(alert)
post_critical_alerts = prom_cli.process_query(
query
)
for alert in post_critical_alerts:
if "metric" in alert:
alertname = alert["metric"]["alertname"] if "alertname" in alert["metric"] else "none"
alertstate = alert["metric"]["alertstate"] if "alertstate" in alert["metric"] else "none"
namespace = alert["metric"]["namespace"] if "namespace" in alert["metric"] else "none"
severity = alert["metric"]["severity"] if "severity" in alert["metric"] else "none"
alert = ChaosRunAlert(alertname, alertstate, namespace, severity)
summary.post_chaos_alerts.append(alert)
during_critical_alerts_count = len(during_critical_alerts)
post_critical_alerts_count = len(post_critical_alerts)
firing_alerts = False
if during_critical_alerts_count > 0:
firing_alerts = True
if post_critical_alerts_count > 0:
firing_alerts = True
if not firing_alerts:
logging.info("No critical alerts are firing!!")

View File

@@ -11,7 +11,7 @@ from krkn_lib.utils.functions import get_yaml_item_value, log_exception
# krkn_lib
def run(scenarios_list, config, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
"""
Reads the scenario config and creates a temp file to fill up the PVC
"""
@@ -21,7 +21,7 @@ def run(scenarios_list, config, kubecli: KrknKubernetes, telemetry: KrknTelemetr
for app_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = app_config
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, app_config)
try:
if len(app_config) > 1:
@@ -305,7 +305,9 @@ def run(scenarios_list, config, kubecli: KrknKubernetes, telemetry: KrknTelemetr
file_size_kb,
kubecli
)
logging.info("End of scenario. Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(
config,
@@ -314,11 +316,11 @@ def run(scenarios_list, config, kubecli: KrknKubernetes, telemetry: KrknTelemetr
end_time
)
except (RuntimeError, Exception):
scenario_telemetry.exitStatus = 1
scenario_telemetry.exit_status = 1
failed_scenarios.append(app_config)
log_exception(app_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.exit_status = 0
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -165,7 +165,7 @@ def run(
for scenario_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario_config[0]
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario_config[0])
try:
if len(scenario_config) > 1:
@@ -249,12 +249,12 @@ def run(
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
except (Exception, RuntimeError):
scenario_telemetry.exitStatus = 1
scenario_telemetry.exit_status = 1
failed_scenarios.append(scenario_config[0])
log_exception(scenario_config[0])
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

View File

@@ -0,0 +1,90 @@
import logging
import time
import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
def run(scenarios_list: list[str],wait_duration: int, krkn_lib: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries= list[ScenarioTelemetry]()
failed_post_scenarios = []
for scenario in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario)
with open(scenario) as stream:
scenario_config = yaml.safe_load(stream)
service_name = scenario_config['service_name']
service_namespace = scenario_config['service_namespace']
plan = scenario_config["plan"]
image = scenario_config["image"]
target_port = scenario_config["service_target_port"]
chaos_duration = scenario_config["chaos_duration"]
logging.info(f"checking service {service_name} in namespace: {service_namespace}")
if not krkn_lib.service_exists(service_name, service_namespace):
logging.error(f"service: {service_name} not found in namespace: {service_namespace}, failed to run scenario.")
fail(scenario_telemetry, scenario_telemetries)
failed_post_scenarios.append(scenario)
break
try:
logging.info(f"service: {service_name} found in namespace: {service_namespace}")
logging.info(f"creating webservice and initializing test plan...")
# both named ports and port numbers can be used
if isinstance(target_port, int):
logging.info(f"webservice will listen on port {target_port}")
webservice = krkn_lib.deploy_service_hijacking(service_namespace, plan, image, port_number=target_port)
else:
logging.info(f"traffic will be redirected to named port: {target_port}")
webservice = krkn_lib.deploy_service_hijacking(service_namespace, plan, image, port_name=target_port)
logging.info(f"successfully deployed pod: {webservice.pod_name} "
f"in namespace:{service_namespace} with selector {webservice.selector}!"
)
logging.info(f"patching service: {service_name} to hijack traffic towards: {webservice.pod_name}")
original_service = krkn_lib.replace_service_selector([webservice.selector], service_name, service_namespace)
if original_service is None:
logging.error(f"failed to patch service: {service_name}, namespace: {service_namespace} with selector {webservice.selector}")
fail(scenario_telemetry, scenario_telemetries)
failed_post_scenarios.append(scenario)
break
logging.info(f"service: {service_name} successfully patched!")
logging.info(f"original service manifest:\n\n{yaml.dump(original_service)}")
logging.info(f"waiting {chaos_duration} before restoring the service")
time.sleep(chaos_duration)
selectors = ["=".join([key, original_service["spec"]["selector"][key]]) for key in original_service["spec"]["selector"].keys()]
logging.info(f"restoring the service selectors {selectors}")
original_service = krkn_lib.replace_service_selector(selectors, service_name, service_namespace)
if original_service is None:
logging.error(f"failed to restore original service: {service_name}, namespace: {service_namespace} with selectors: {selectors}")
fail(scenario_telemetry, scenario_telemetries)
failed_post_scenarios.append(scenario)
break
logging.info("selectors successfully restored")
logging.info("undeploying service-hijacking resources...")
krkn_lib.undeploy_service_hijacking(webservice)
logging.info("End of scenario. Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
logging.info("success")
except Exception as e:
logging.error(f"scenario {scenario} failed with exception: {e}")
fail(scenario_telemetry, scenario_telemetries)
failed_post_scenarios.append(scenario)
return failed_post_scenarios, scenario_telemetries
def fail(scenario_telemetry: ScenarioTelemetry, scenario_telemetries: list[ScenarioTelemetry]):
scenario_telemetry.exit_status = 1
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
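Pieced together from the keys the plugin reads above, a service_hijacking scenario file would look roughly like this (values are illustrative; the plan structure is consumed by the hijacking web service image and is only stubbed here):
```yaml
service_name: payment
service_namespace: robot-shop
service_target_port: http   # a named port or a port number both work
image: quay.io/krkn-chaos/krkn-service-hijacking:v0.1.0  # assumed image reference
chaos_duration: 120         # seconds before the original selectors are restored
plan: []                    # test plan served by the hijacking pod; format not shown here
```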

View File

@@ -147,7 +147,7 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = config_path
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, config_path)
with open(config_path, "r") as f:
@@ -175,11 +175,11 @@ def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetr
except (RuntimeError, Exception):
log_exception(config_path)
failed_scenarios.append(config_path)
scenario_telemetry.exitStatus = 1
scenario_telemetry.exit_status = 1
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.exit_status = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -0,0 +1 @@
from .syn_flood import *

View File

@@ -0,0 +1,132 @@
import logging
import os.path
import time
from typing import Any, List
import krkn_lib.utils
import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
def run(scenarios_list: list[str], krkn_kubernetes: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
failed_post_scenarios = []
for scenario in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario)
try:
pod_names = []
config = parse_config(scenario)
if config["target-service-label"]:
target_services = krkn_kubernetes.select_service_by_label(config["namespace"], config["target-service-label"])
else:
target_services = [config["target-service"]]
for target in target_services:
if not krkn_kubernetes.service_exists(target, config["namespace"]):
raise Exception(f"{target} service not found")
for i in range(config["number-of-pods"]):
pod_name = "syn-flood-" + krkn_lib.utils.get_random_string(10)
krkn_kubernetes.deploy_syn_flood(pod_name,
config["namespace"],
config["image"],
target,
config["target-port"],
config["packet-size"],
config["window-size"],
config["duration"],
config["attacker-nodes"]
)
pod_names.append(pod_name)
logging.info("waiting all the attackers to finish:")
did_finish = False
finished_pods = []
while not did_finish:
for pod_name in pod_names:
if not krkn_kubernetes.is_pod_running(pod_name, config["namespace"]):
finished_pods.append(pod_name)
if set(pod_names) == set(finished_pods):
did_finish = True
time.sleep(1)
except Exception as e:
logging.error(f"Failed to run syn flood scenario {scenario}: {e}")
failed_post_scenarios.append(scenario)
scenario_telemetry.exit_status = 1
else:
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_post_scenarios, scenario_telemetries
def parse_config(scenario_file: str) -> dict[str, Any]:
if not os.path.exists(scenario_file):
raise Exception(f"failed to load scenario file {scenario_file}")
try:
with open(scenario_file) as stream:
config = yaml.safe_load(stream)
except Exception:
raise Exception(f"{scenario_file} is not a valid yaml file")
missing = []
if not check_key_value(config, "packet-size"):
missing.append("packet-size")
if not check_key_value(config, "window-size"):
missing.append("window-size")
if not check_key_value(config, "duration"):
missing.append("duration")
if not check_key_value(config, "namespace"):
missing.append("namespace")
if not check_key_value(config, "number-of-pods"):
missing.append("number-of-pods")
if not check_key_value(config, "target-port"):
missing.append("target-port")
if not check_key_value(config, "image"):
missing.append("image")
if "target-service" not in config.keys():
missing.append("target-service")
if "target-service-label" not in config.keys():
missing.append("target-service-label")
if len(missing) > 0:
raise Exception(f"{(',').join(missing)} parameter(s) are missing")
if not config["target-service"] and not config["target-service-label"]:
raise Exception("you have either to set a target service or a label")
if config["target-service"] and config["target-service-label"]:
raise Exception("you cannot select both target-service and target-service-label")
if 'attacker-nodes' in config and not is_node_affinity_correct(config['attacker-nodes']):
raise Exception("attacker-nodes format is not correct")
return config
def check_key_value(dictionary, key):
if key in dictionary:
value = dictionary[key]
if value is not None and value != '':
return True
return False
def is_node_affinity_correct(obj) -> bool:
if not isinstance(obj, dict):
return False
for key in obj.keys():
if not isinstance(key, str):
return False
if not isinstance(obj[key], list):
return False
return True

View File

@@ -354,7 +354,7 @@ def run(scenarios_list, config, wait_duration, kubecli:KrknKubernetes, telemetry
for time_scenario_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = time_scenario_config
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, time_scenario_config)
try:
with open(time_scenario_config, "r") as f:
@@ -377,12 +377,12 @@ def run(scenarios_list, config, wait_duration, kubecli:KrknKubernetes, telemetry
end_time
)
except (RuntimeError, Exception):
scenario_telemetry.exitStatus = 1
scenario_telemetry.exit_status = 1
log_exception(time_scenario_config)
failed_scenarios.append(time_scenario_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -19,7 +19,7 @@ def run(scenarios_list, config, wait_duration, telemetry: KrknTelemetryKubernete
for zone_outage_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = zone_outage_config
scenario_telemetry.startTimeStamp = time.time()
scenario_telemetry.start_timestamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, zone_outage_config)
try:
if len(zone_outage_config) > 1:
@@ -110,12 +110,12 @@ def run(scenarios_list, config, wait_duration, telemetry: KrknTelemetryKubernete
end_time
)
except (RuntimeError, Exception):
scenario_telemetry.exitStatus = 1
scenario_telemetry.exit_status = 1
failed_scenarios.append(zone_outage_config)
log_exception(zone_outage_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.exit_status = 0
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -1,40 +1,40 @@
PyYAML>=5.1
aliyun-python-sdk-core==2.13.36
aliyun-python-sdk-ecs==4.24.25
arcaflow==0.9.0
arcaflow-plugin-sdk==0.10.0
azure-identity
azure-keyvault
azure-mgmt-compute
arcaflow-plugin-sdk==0.14.0
arcaflow==0.17.2
boto3==1.28.61
coverage
datetime
docker
docker-compose
git+https://github.com/redhat-chaos/arcaflow-plugin-kill-pod.git
git+https://github.com/vmware/vsphere-automation-sdk-python.git@v8.0.0.0
gitpython
google-api-python-client
ibm_cloud_sdk_core
ibm_vpc
azure-identity==1.16.1
azure-keyvault==4.2.0
azure-mgmt-compute==30.5.0
itsdangerous==2.0.1
jinja2==3.1.3
krkn-lib >= 1.4.6
kubernetes
lxml >= 4.3.0
oauth2client>=4.1.3
openshift-client
paramiko
podman-compose
pyVmomi >= 6.7
pyfiglet
pytest
python-ipmi
python-openstackclient
requests
service_identity
setuptools==65.5.1
werkzeug==3.0.1
wheel
coverage==7.4.1
datetime==5.4
docker==7.0.0
gitpython==3.1.41
google-api-python-client==2.116.0
ibm_cloud_sdk_core==3.18.0
ibm_vpc==0.20.0
jinja2==3.1.4
krkn-lib==2.1.7
lxml==5.1.0
kubernetes==28.1.0
oauth2client==4.1.3
pandas==2.2.0
openshift-client==1.0.21
paramiko==3.4.0
pyVmomi==8.0.2.0.1
pyfiglet==1.0.2
pytest==8.0.0
python-ipmi==0.5.4
python-openstackclient==6.5.0
requests==2.32.2
service_identity==24.1.0
PyYAML==6.0.1
setuptools==70.0.0
werkzeug==3.0.3
wheel==0.42.0
zope.interface==5.4.0
pandas<2.0.0
git+https://github.com/krkn-chaos/arcaflow-plugin-kill-pod.git@v0.1.0
git+https://github.com/vmware/vsphere-automation-sdk-python.git@v8.0.0.0
cryptography>=42.0.4 # not directly required, pinned by Snyk to avoid a vulnerability

View File

@@ -9,6 +9,8 @@ import optparse
import pyfiglet
import uuid
import time
from krkn_lib.models.krkn import ChaosRunOutput, ChaosRunAlertSummary
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus
import kraken.time_actions.common_time_functions as time_actions
import kraken.performance_dashboards.setup as performance_dashboards
@@ -23,10 +25,12 @@ import kraken.pvc.pvc_scenario as pvc_scenario
import kraken.network_chaos.actions as network_chaos
import kraken.arcaflow_plugin as arcaflow_plugin
import kraken.prometheus as prometheus_plugin
import kraken.service_hijacking.service_hijacking as service_hijacking_plugin
import server as server
from kraken import plugins
from kraken import plugins, syn_flood
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.ocp import KrknOpenshift
from krkn_lib.telemetry.elastic import KrknElastic
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.models.telemetry import ChaosRunTelemetry
@@ -34,7 +38,7 @@ from krkn_lib.utils import SafeLogger
from krkn_lib.utils.functions import get_yaml_item_value
report_file = ""
# Main function
def main(cfg):
@@ -94,6 +98,9 @@ def main(cfg):
config["performance_monitoring"], "check_critical_alerts", False
)
telemetry_api_url = config["telemetry"].get("api_url")
elastic_config = get_yaml_item_value(config,"elastic",{})
elastic_url = get_yaml_item_value(elastic_config,"elastic_url","")
elastic_index = get_yaml_item_value(elastic_config,"elastic_index","")
# Initialize clients
if (not os.path.isfile(kubeconfig_path) and
@@ -129,8 +136,6 @@ def main(cfg):
except:
kubecli.initialize_clients(None)
# find node kraken might be running on
kubecli.find_kraken_node()
@@ -156,12 +161,22 @@ def main(cfg):
# Cluster info
logging.info("Fetching cluster info")
cv = ""
if config["kraken"]["distribution"] == "openshift":
if distribution == "openshift":
cv = ocpcli.get_clusterversion_string()
if prometheus_url is None:
connection_data = ocpcli.get_prometheus_api_connection_data()
prometheus_url = connection_data.endpoint
prometheus_bearer_token = connection_data.token
try:
connection_data = ocpcli.get_prometheus_api_connection_data()
if connection_data:
prometheus_url = connection_data.endpoint
prometheus_bearer_token = connection_data.token
else:
# If can't make a connection, set alerts to false
enable_alerts = False
critical_alerts = False
except Exception:
logging.error("invalid distribution selected, running openshift scenarios against kubernetes cluster."
"Please set 'kubernetes' in config.yaml krkn.platform and try again")
sys.exit(1)
if cv != "":
logging.info(cv)
else:
@@ -170,9 +185,9 @@ def main(cfg):
# KrknTelemetry init
telemetry_k8s = KrknTelemetryKubernetes(safe_logger, kubecli)
telemetry_ocp = KrknTelemetryOpenshift(safe_logger, ocpcli)
if enable_alerts:
telemetry_elastic = KrknElastic(safe_logger,elastic_url)
summary = ChaosRunAlertSummary()
if enable_alerts or check_critical_alerts:
prometheus = KrknPrometheus(prometheus_url, prometheus_bearer_token)
logging.info("Server URL: %s" % kubecli.get_host())
@@ -203,7 +218,8 @@ def main(cfg):
# Capture the start time
start_time = int(time.time())
post_critical_alerts = 0
chaos_output = ChaosRunOutput()
chaos_telemetry = ChaosRunTelemetry()
chaos_telemetry.run_uuid = run_uuid
# Loop to run the chaos starts here
@@ -249,7 +265,9 @@ def main(cfg):
kraken_config,
failed_post_scenarios,
wait_duration,
telemetry_k8s
telemetry_k8s,
kubecli,
run_uuid
)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# krkn_lib
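
The call above now threads the Kubernetes client and the run UUID into the scenario plugin. A sketch of the widened entry point it implies; the parameter names are assumed from the call site and the body is a stub, not the plugin's real logic:

```python
# Hypothetical plugin entry point matching the call shown above.
def run(scenarios_list, config, kraken_config, failed_post_scenarios,
        wait_duration, telemetry_k8s, kubecli, run_uuid):
    scenario_telemetries = []
    for scenario in scenarios_list:
        # execute the scenario against kubecli, tagging results with run_uuid
        pass
    return failed_post_scenarios, scenario_telemetries
```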
@@ -317,14 +335,14 @@ def main(cfg):
elif scenario_type == "application_outages":
logging.info("Injecting application outage")
failed_post_scenarios, scenario_telemetries = application_outage.run(
scenarios_list, config, wait_duration, telemetry_k8s)
scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# PVC scenarios
# krkn_lib
elif scenario_type == "pvc_scenarios":
logging.info("Running PVC scenario")
failed_post_scenarios, scenario_telemetries = pvc_scenario.run(scenarios_list, config, kubecli, telemetry_k8s)
failed_post_scenarios, scenario_telemetries = pvc_scenario.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Network scenarios
@@ -332,27 +350,31 @@ def main(cfg):
elif scenario_type == "network_chaos":
logging.info("Running Network Chaos")
failed_post_scenarios, scenario_telemetries = network_chaos.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
elif scenario_type == "service_hijacking":
logging.info("Running Service Hijacking Chaos")
failed_post_scenarios, scenario_telemetries = service_hijacking_plugin.run(scenarios_list, wait_duration, kubecli, telemetry_k8s)
chaos_telemetry.scenarios.extend(scenario_telemetries)
elif scenario_type == "syn_flood":
logging.info("Running Syn Flood Chaos")
failed_post_scenarios, scenario_telemetries = syn_flood.run(scenarios_list, kubecli, telemetry_k8s)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Check for critical alerts when enabled
if check_critical_alerts:
    ##PROM
    query = r"""ALERTS{severity="critical"}"""
    end_time = datetime.datetime.now()
    critical_alerts = prometheus.process_prom_query_in_range(
        query,
        start_time = datetime.datetime.fromtimestamp(start_time),
        end_time = end_time
    )
    critical_alerts_count = len(critical_alerts)
    if critical_alerts_count > 0:
        logging.error("Critical alerts are firing: %s", critical_alerts)
        logging.error("Please check, exiting")
        sys.exit(1)
    else:
        logging.info("No critical alerts are firing!!")
if enable_alerts and check_critical_alerts:
    logging.info("Checking for critical alerts firing post chaos")
    post_critical_alerts = 0
    prometheus_plugin.critical_alerts(prometheus,
                                      summary,
                                      run_uuid,
                                      scenario_type,
                                      start_time,
                                      datetime.datetime.now())
    chaos_output.critical_alerts = summary
    post_critical_alerts = len(summary.post_chaos_alerts)
    if post_critical_alerts > 0:
        logging.error("Post chaos critical alerts firing please check, exiting")
        break
iteration += 1
logging.info("")
@@ -366,27 +388,52 @@ def main(cfg):
# if platform is openshift will be collected
# Cloud platform and network plugins metadata
# through OCP specific APIs
if config["kraken"]["distribution"] == "openshift":
if distribution == "openshift":
telemetry_ocp.collect_cluster_metadata(chaos_telemetry)
else:
telemetry_k8s.collect_cluster_metadata(chaos_telemetry)
decoded_chaos_run_telemetry = ChaosRunTelemetry(json.loads(chaos_telemetry.to_json()))
logging.info(f"Telemetry data:\n{decoded_chaos_run_telemetry.to_json()}")
chaos_output.telemetry = decoded_chaos_run_telemetry
logging.info(f"Chaos data:\n{chaos_output.to_json()}")
telemetry_elastic.upload_data_to_elasticsearch(decoded_chaos_run_telemetry.to_json(), elastic_index)
if config["telemetry"]["enabled"]:
logging.info(f"telemetry data will be stored on s3 bucket folder: {telemetry_api_url}/download/{telemetry_request_id}")
logging.info(f'telemetry data will be stored on s3 bucket folder: {telemetry_api_url}/files/'
f'{(config["telemetry"]["telemetry_group"] if config["telemetry"]["telemetry_group"] else "default")}/'
f'{telemetry_request_id}')
logging.info(f"telemetry upload log: {safe_logger.log_file_name}")
try:
telemetry_k8s.send_telemetry(config["telemetry"], telemetry_request_id, chaos_telemetry)
telemetry_k8s.put_cluster_events(telemetry_request_id, config["telemetry"], start_time, end_time)
telemetry_k8s.put_critical_alerts(telemetry_request_id, config["telemetry"], summary)
# prometheus data collection is available only on Openshift
if config["telemetry"]["prometheus_backup"] and config["kraken"]["distribution"] == "openshift":
safe_logger.info("archives download started:")
prometheus_archive_files = telemetry_ocp.get_ocp_prometheus_data(config["telemetry"], telemetry_request_id)
safe_logger.info("archives upload started:")
telemetry_k8s.put_prometheus_data(config["telemetry"], prometheus_archive_files, telemetry_request_id)
if config["telemetry"]["logs_backup"]:
if config["telemetry"]["prometheus_backup"]:
prometheus_archive_files = ''
if distribution == "openshift" :
prometheus_archive_files = telemetry_ocp.get_ocp_prometheus_data(config["telemetry"], telemetry_request_id)
else:
if (config["telemetry"]["prometheus_namespace"] and
config["telemetry"]["prometheus_pod_name"] and
config["telemetry"]["prometheus_container_name"]):
try:
prometheus_archive_files = telemetry_k8s.get_prometheus_pod_data(
config["telemetry"],
telemetry_request_id,
config["telemetry"]["prometheus_pod_name"],
config["telemetry"]["prometheus_container_name"],
config["telemetry"]["prometheus_namespace"]
)
except Exception as e:
logging.error(f"failed to get prometheus backup with exception {str(e)}")
else:
logging.warning("impossible to backup prometheus,"
"check if config contains telemetry.prometheus_namespace, "
"telemetry.prometheus_pod_name and "
"telemetry.prometheus_container_name")
if prometheus_archive_files:
safe_logger.info("starting prometheus archive upload:")
telemetry_k8s.put_prometheus_data(config["telemetry"], prometheus_archive_files, telemetry_request_id)
if config["telemetry"]["logs_backup"] and distribution == "openshift":
telemetry_ocp.put_ocp_logs(telemetry_request_id, config["telemetry"], start_time, end_time)
except Exception as e:
logging.error(f"failed to send telemetry data: {str(e)}")
@@ -408,16 +455,19 @@ def main(cfg):
logging.error("Alert profile is not defined")
sys.exit(1)
if post_critical_alerts > 0:
logging.error("Critical alerts are firing, please check; exiting")
sys.exit(2)
if failed_post_scenarios:
logging.error(
"Post scenarios are still failing at the end of all iterations"
)
sys.exit(1)
sys.exit(2)
run_dir = os.getcwd() + "/kraken.report"
logging.info(
"Successfully finished running Kraken. UUID for the run: "
"%s. Report generated at %s. Exiting" % (run_uuid, run_dir)
"%s. Report generated at %s. Exiting" % (run_uuid, report_file)
)
else:
logging.error("Cannot find a config at %s, please check" % (cfg))
@@ -434,12 +484,21 @@ if __name__ == "__main__":
help="config location",
default="config/config.yaml",
)
parser.add_option(
"-o",
"--output",
dest="output",
help="output report location",
default="kraken.report",
)
(options, args) = parser.parse_args()
report_file = options.output
logging.basicConfig(
level=logging.INFO,
format="%(asctime)s [%(levelname)s] %(message)s",
handlers=[
logging.FileHandler("kraken.report", mode="w"),
logging.FileHandler(report_file, mode="w"),
logging.StreamHandler(),
],
)
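
The new -o/--output flag feeds the report path into the file log handler instead of the hardcoded kraken.report. A self-contained sketch of the same optparse and logging wiring, with defaults mirroring the diff:

```python
# Sketch of the --output wiring: parse the flag, then point the file
# handler at the chosen report path. Real runs parse sys.argv; [] is
# passed here only to keep the example self-contained.
import logging
import optparse

parser = optparse.OptionParser()
parser.add_option("-o", "--output", dest="output",
                  help="output report location", default="kraken.report")
(options, args) = parser.parse_args([])

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    handlers=[logging.FileHandler(options.output, mode="w"),
              logging.StreamHandler()],
)
logging.info("report will be written to %s", options.output)
```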

View File

@@ -4,7 +4,7 @@ deployers:
connection: {}
deployer_name: kubernetes
log:
level: debug
level: error
logged_outputs:
error:
level: error

View File

@@ -1,8 +1,13 @@
input_list:
- cpu_count: 1
cpu_load_percentage: 80
cpu_method: all
duration: 1s
kubeconfig: ''
namespace: default
node_selector: {}
- cpu_count: 1
cpu_load_percentage: 80
cpu_method: all
duration: 30
kubeconfig: ''
namespace: default
# set the node selector as a key-value pair eg.
# node_selector:
# kubernetes.io/hostname: kind-worker2
node_selector: {}
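
The inputs above drop suffix-style durations ("1s") in favor of plain integer seconds, matching the schema's type_id change from string to integer. A hedged normalizer for configs that still carry suffixes (the suffix table is assumed from the schema's own description of s, m, h, d, y units):

```python
# Assumption: suffix-to-seconds mapping follows the schema description.
SUFFIX_SECONDS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "y": 31536000}

def duration_seconds(value):
    # Accept either the new integer form or an old "30s"-style string.
    if isinstance(value, int):
        return value
    value = str(value).strip()
    if value and value[-1] in SUFFIX_SECONDS:
        return int(value[:-1]) * SUFFIX_SECONDS[value[-1]]
    return int(value)

assert duration_seconds(30) == 30
assert duration_seconds("1s") == 1
assert duration_seconds("2m") == 120
```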

View File

@@ -1,9 +1,9 @@
version: v0.2.0
input:
root: RootObject
root: SubRootObject
objects:
RootObject:
id: input_item
SubRootObject:
id: SubRootObject
properties:
kubeconfig:
display:
@@ -35,7 +35,7 @@ input:
description: stop stress test after T seconds. One can also specify the units of time in
seconds, minutes, hours, days or years with the suffix s, m, h, d or y
type:
type_id: string
type_id: integer
required: true
cpu_count:
display:
@@ -68,18 +68,18 @@ steps:
kubeconfig: !expr $.input.kubeconfig
stressng:
plugin:
src: quay.io/arcalot/arcaflow-plugin-stressng:0.5.0
src: quay.io/arcalot/arcaflow-plugin-stressng:0.6.0
deployment_type: image
step: workload
input:
cleanup: "true"
StressNGParams:
timeout: !expr $.input.duration
stressors:
- stressor: cpu
cpu_count: !expr $.input.cpu_count
cpu_method: !expr $.input.cpu_method
cpu_load: !expr $.input.cpu_load_percentage
timeout: !expr $.input.duration
stressors:
- stressor: cpu
workers: !expr $.input.cpu_count
cpu-method: "all"
cpu-load: !expr $.input.cpu_load_percentage
deploy:
deployer_name: kubernetes
connection: !expr $.steps.kubeconfig.outputs.success.connection
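
The bump to arcaflow-plugin-stressng:0.6.0 renames the stressor-specific fields (cpu_count to workers, cpu_method to cpu-method, cpu_load to cpu-load) and nests timeout and stressors under StressNGParams. A hedged translation helper for old-style inputs, with key names taken from the two workflow versions shown:

```python
# Sketch: translate a 0.5.0-style cpu stressor input into the 0.6.0 shape
# used in the workflow above. Key names come from the diff; this is an
# illustration, not part of the krkn code base.
def upgrade_cpu_stressor(old):
    return {
        "stressor": "cpu",
        "workers": old["cpu_count"],
        "cpu-method": old.get("cpu_method", "all"),
        "cpu-load": old["cpu_load_percentage"],
    }

old_input = {"cpu_count": 1, "cpu_method": "all", "cpu_load_percentage": 80}
print(upgrade_cpu_stressor(old_input))
# {'stressor': 'cpu', 'workers': 1, 'cpu-method': 'all', 'cpu-load': 80}
```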

View File

@@ -9,62 +9,10 @@ input:
type:
type_id: list
items:
id: input_item
type_id: object
properties:
kubeconfig:
display:
description: The complete kubeconfig file as a string
name: Kubeconfig file contents
type:
type_id: string
required: true
namespace:
display:
description: The namespace where the container will be deployed
name: Namespace
type:
type_id: string
required: true
node_selector:
display:
description: kubernetes node name where the plugin must be deployed
type:
type_id: map
values:
type_id: string
keys:
type_id: string
required: true
duration:
display:
name: duration the scenario expressed in seconds
description: stop stress test after T seconds. One can also specify the units of time in
seconds, minutes, hours, days or years with the suffix s, m, h, d or y
type:
type_id: string
required: true
cpu_count:
display:
description: Number of CPU cores to be used (0 means all)
name: number of CPUs
type:
type_id: integer
required: true
cpu_method:
display:
description: CPU stress method
name: fine grained control of which cpu stressors to use (ackermann, cfloat etc.)
type:
type_id: string
required: true
cpu_load_percentage:
display:
description: load CPU by percentage
name: CPU load
type:
type_id: integer
required: true
id: SubRootObject
type_id: ref
namespace: $.steps.workload_loop.execute.inputs.items
steps:
workload_loop:
kind: foreach

View File

@@ -3,7 +3,7 @@ deployers:
connection: {}
deployer_name: kubernetes
log:
level: debug
level: error
logged_outputs:
error:
level: error

View File

@@ -1,10 +1,13 @@
input_list:
- duration: 30s
- duration: 30
io_block_size: 1m
io_workers: 1
io_write_bytes: 10m
kubeconfig: ''
namespace: default
# set the node selector as a key-value pair eg.
# node_selector:
# kubernetes.io/hostname: kind-worker2
node_selector: {}
target_pod_folder: /hog-data
target_pod_volume:

View File

@@ -1,9 +1,25 @@
version: v0.2.0
input:
root: RootObject
root: SubRootObject
objects:
RootObject:
id: input_item
hostPath:
id: HostPathVolumeSource
properties:
path:
type:
type_id: string
Volume:
id: Volume
properties:
name:
type:
type_id: string
hostPath:
type:
id: hostPath
type_id: ref
SubRootObject:
id: SubRootObject
properties:
kubeconfig:
display:
@@ -35,7 +51,7 @@ input:
description: stop stress test after T seconds. One can also specify the units of time in
seconds, minutes, hours, days or years with the suffix s, m, h, d or y
type:
type_id: string
type_id: integer
required: true
io_workers:
display:
@@ -73,25 +89,8 @@ input:
description: the volume that will be attached to the pod. In order to stress
the node storage only hostPath mode is currently supported
type:
type_id: object
id: k8s_volume
properties:
name:
display:
description: name of the volume (must match the name in pod definition)
type:
type_id: string
required: true
hostPath:
display:
description: hostPath options expressed as string map (key-value)
type:
type_id: map
values:
type_id: string
keys:
type_id: string
required: true
type_id: ref
id: Volume
required: true
steps:
@@ -103,19 +102,18 @@ steps:
kubeconfig: !expr $.input.kubeconfig
stressng:
plugin:
src: quay.io/arcalot/arcaflow-plugin-stressng:0.5.0
src: quay.io/arcalot/arcaflow-plugin-stressng:0.6.0
deployment_type: image
step: workload
input:
cleanup: "true"
StressNGParams:
timeout: !expr $.input.duration
workdir: !expr $.input.target_pod_folder
stressors:
- stressor: hdd
hdd: !expr $.input.io_workers
hdd_bytes: !expr $.input.io_write_bytes
hdd_write_size: !expr $.input.io_block_size
timeout: !expr $.input.duration
workdir: !expr $.input.target_pod_folder
stressors:
- stressor: hdd
workers: !expr $.input.io_workers
hdd-bytes: !expr $.input.io_write_bytes
hdd-write-size: !expr $.input.io_block_size
deploy:
deployer_name: kubernetes

View File

@@ -9,97 +9,9 @@ input:
type:
type_id: list
items:
id: input_item
type_id: object
properties:
kubeconfig:
display:
description: The complete kubeconfig file as a string
name: Kubeconfig file contents
type:
type_id: string
required: true
namespace:
display:
description: The namespace where the container will be deployed
name: Namespace
type:
type_id: string
required: true
node_selector:
display:
description: kubernetes node name where the plugin must be deployed
type:
type_id: map
values:
type_id: string
keys:
type_id: string
required: true
duration:
display:
name: duration the scenario expressed in seconds
description: stop stress test after T seconds. One can also specify the units of time in
seconds, minutes, hours, days or years with the suffix s, m, h, d or y
type:
type_id: string
required: true
io_workers:
display:
description: number of workers
name: start N workers continually writing, reading and removing temporary files
type:
type_id: integer
required: true
io_block_size:
display:
description: single write size
name: specify size of each write in bytes. Size can be from 1 byte to 4MB.
type:
type_id: string
required: true
io_write_bytes:
display:
description: Total number of bytes written
name: write N bytes for each hdd process, the default is 1 GB. One can specify the size
as % of free space on the file system or in units of Bytes, KBytes, MBytes and
GBytes using the suffix b, k, m or g
type:
type_id: string
required: true
target_pod_folder:
display:
description: Target Folder
name: Folder in the pod where the test will be executed and the test files will be written
type:
type_id: string
required: true
target_pod_volume:
display:
name: kubernetes volume definition
description: the volume that will be attached to the pod. In order to stress
the node storage only hostPath mode is currently supported
type:
type_id: object
id: k8s_volume
properties:
name:
display:
description: name of the volume (must match the name in pod definition)
type:
type_id: string
required: true
hostPath:
display:
description: hostPath options expressed as string map (key-value)
type:
type_id: map
values:
type_id: string
keys:
type_id: string
required: true
required: true
id: SubRootObject
type_id: ref
namespace: $.steps.workload_loop.execute.inputs.items
steps:
workload_loop:
kind: foreach

View File

@@ -4,7 +4,7 @@ deployers:
connection: {}
deployer_name: kubernetes
log:
level: debug
level: error
logged_outputs:
error:
level: error

View File

@@ -1,11 +1,11 @@
input_list:
- duration: 30s
- duration: 30
vm_bytes: 10%
vm_workers: 2
node_selector: { }
# node selector example
# set the node selector as a key-value pair eg.
# node_selector:
# kubernetes.io/hostname: master
# kubernetes.io/hostname: kind-worker2
node_selector: { }
kubeconfig: ""
namespace: default

View File

@@ -1,9 +1,9 @@
version: v0.2.0
input:
root: RootObject
root: SubRootObject
objects:
RootObject:
id: input_item
SubRootObject:
id: SubRootObject
properties:
kubeconfig:
display:
@@ -34,7 +34,7 @@ input:
name: duration the scenario expressed in seconds
description: stop stress test after T seconds. One can also specify the units of time in seconds, minutes, hours, days or years with the suffix s, m, h, d or y
type:
type_id: string
type_id: integer
required: true
vm_workers:
display:
@@ -60,17 +60,16 @@ steps:
kubeconfig: !expr $.input.kubeconfig
stressng:
plugin:
src: quay.io/arcalot/arcaflow-plugin-stressng:0.5.0
src: quay.io/arcalot/arcaflow-plugin-stressng:0.6.0
deployment_type: image
step: workload
input:
cleanup: "true"
StressNGParams:
timeout: !expr $.input.duration
stressors:
- stressor: vm
vm: !expr $.input.vm_workers
vm_bytes: !expr $.input.vm_bytes
timeout: !expr $.input.duration
stressors:
- stressor: vm
workers: !expr $.input.vm_workers
vm-bytes: !expr $.input.vm_bytes
deploy:
deployer_name: kubernetes
connection: !expr $.steps.kubeconfig.outputs.success.connection

View File

@@ -9,54 +9,10 @@ input:
type:
type_id: list
items:
id: input_item
type_id: object
properties:
kubeconfig:
display:
description: The complete kubeconfig file as a string
name: Kubeconfig file contents
type:
type_id: string
required: true
namespace:
display:
description: The namespace where the container will be deployed
name: Namespace
type:
type_id: string
required: true
node_selector:
display:
description: kubernetes node name where the plugin must be deployed
type:
type_id: map
values:
type_id: string
keys:
type_id: string
required: true
duration:
display:
name: duration the scenario expressed in seconds
description: stop stress test after T seconds. One can also specify the units of time in seconds, minutes, hours, days or years with the suffix s, m, h, d or y
type:
type_id: string
required: true
vm_workers:
display:
description: Number of VM stressors to be run (0 means 1 stressor per CPU)
name: Number of VM stressors
type:
type_id: integer
required: true
vm_bytes:
display:
description: N bytes per vm process, the default is 256MB. The size can be expressed in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
name: Kubeconfig file contents
type:
type_id: string
required: true
id: SubRootObject
type_id: ref
namespace: $.steps.workload_loop.execute.inputs.items
steps:
workload_loop:
kind: foreach

View File

@@ -3,8 +3,4 @@
config:
namespace_pattern: ^kube-system$
label_selector: component=kube-scheduler
- id: wait-for-pods
config:
namespace_pattern: ^kube-system$
label_selector: component=kube-scheduler
count: 3
krkn_pod_recovery_time: 120
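
The scenario above replaces the separate wait-for-pods step with a krkn_pod_recovery_time setting. A sketch of the polling behaviour that setting suggests; pods_ready is a hypothetical callable standing in for the real readiness check, not a krkn API:

```python
# Sketch: wait up to recovery_time seconds for killed pods to come back,
# polling a caller-supplied readiness check.
import time

def wait_for_recovery(pods_ready, recovery_time=120, interval=5):
    deadline = time.time() + recovery_time
    while time.time() < deadline:
        if pods_ready():
            return True
        time.sleep(interval)
    return False

if __name__ == "__main__":
    print(wait_for_recovery(lambda: True, recovery_time=120))
```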

Some files were not shown because too many files have changed in this diff.