Compare commits

...

96 Commits

Author SHA1 Message Date
Tullio Sebastiani
4f7c58106d Dockerfile v1.5.4 (#552)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-15 19:22:52 +01:00
Tullio Sebastiani
a7e5ae6c80 Replaced oc debug command execution on node with a native version (#547)
* native time skew feature

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* fixed podname conflict issue

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* updated krkn-lib to v1.4.6

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* fixed pod conflict issue

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-15 12:15:38 -05:00
Tullio Sebastiani
aa030a21d3 Fixes the critical alerts exception when start_time > end_time
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-15 11:11:45 -05:00
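A minimal Python sketch of the kind of guard this fix implies, assuming the alert query takes a start/end timestamp pair (illustrative only, not the actual patch):
````
# Illustrative guard only -- not the actual krkn patch.
# Assumption: the alert query receives UNIX-timestamp bounds and should not
# raise when the recorded start is later than the end (e.g. a very short run).
def clamp_alert_window(start_time: int, end_time: int):
    if start_time > end_time:
        # Fall back to a zero-length window instead of raising an exception.
        return end_time, end_time
    return start_time, end_time
````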
Paige Rubendall
631f12bdff Adding push to both red hat and krkn chaos quay (#550)
* adding push to both red hat and krkn chaos quay

* tag redhat chaos from krkn-chaos image

* login to both quays
2024-01-12 13:58:50 -05:00
Naga Ravi Chaitanya Elluri
2525982c55 Rename repo name and update workflow
This commit also removes OpenShift references and updates the source
in the Dockerfile.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-01-12 13:21:37 -05:00
dependabot[bot]
9760d7d97d Bump jinja2 from 3.0.3 to 3.1.3
Bumps [jinja2](https://github.com/pallets/jinja) from 3.0.3 to 3.1.3.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.0.3...3.1.3)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-11 15:40:09 -05:00
Naga Ravi Chaitanya Elluri
720488c159 Add new blogs to the useful resources list (#546)
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-01-10 15:45:36 -05:00
Naga Ravi Chaitanya Elluri
487a9f464c Deprecate long term metrics collection
This will be added back soon via native prometheus integration.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-01-10 15:08:58 -05:00
Tullio Sebastiani
d9e137e85a fixes prometheus url check on Kubernetes
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-10 11:23:02 -05:00
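A Python sketch of the kind of check described above (an assumption, not the actual code): on plain Kubernetes the Prometheus route cannot be auto-discovered, so the run should fail early unless an explicit URL is configured.
````
from typing import Optional

# Illustrative only -- not the actual krkn implementation.
def validate_prometheus_url(distribution: str, prometheus_url: Optional[str]) -> Optional[str]:
    # On OpenShift the route/token can be discovered automatically; on plain
    # Kubernetes an explicit prometheus_url must be present in the config.
    if distribution != "openshift" and not prometheus_url:
        raise ValueError("prometheus_url must be set in the config when the distribution is kubernetes")
    return prometheus_url
````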
Tullio Sebastiani
d6c8054275 changed docker files (#543)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-10 12:22:42 +01:00
Paige Rubendall
462f93ad87 updating scenarios to have deployers (#537)
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-10 12:06:15 +01:00
Mark McLoughlin
c200f0774f Fix some links in README.md (#542)
* Fix github.io link in README.md

Signed-off-by: Mark McLoughlin <markmc@redhat.com>

* Fix krknChaos-hub link in README.md

Signed-off-by: Mark McLoughlin <markmc@redhat.com>

* Fix kube-burner link in README.md

Signed-off-by: Mark McLoughlin <markmc@redhat.com>

---------

Signed-off-by: Mark McLoughlin <markmc@redhat.com>
2024-01-09 11:49:52 -05:00
Tullio Sebastiani
f2d7f88cb8 Krkn lib prometheus client + kube_burner references removed
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2024-01-09 10:43:32 -05:00
Naga Ravi Chaitanya Elluri
93f1f19411 Focus on Kubernetes in the chaos testing guide
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-01-08 20:09:12 -05:00
Naga Ravi Chaitanya Elluri
83c6058816 Use CNCF code of conduct
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2024-01-03 10:53:47 -05:00
Naga Ravi Chaitanya Elluri
ee34d08f41 Rename Krkn to KrknChaos (#536)
This change helps reflect the tool's use case more clearly.
2023-12-18 16:40:34 -08:00
Tullio Sebastiani
41f9573563 Fixes cluster shutdown issue with single entry in scenario config (#535)
* fixed cluster shutdown issue

* fixed config file list parsing
2023-12-15 14:22:25 -05:00
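A minimal Python sketch of tolerating a single entry where a list of scenario files is expected, which is the shape of the parsing issue described above (illustrative, not the actual fix):
````
# Illustrative only -- not the actual fix in #535.
def as_list(value):
    # A config entry may be parsed as a bare string when only one scenario
    # file is listed; normalize to a list so downstream loops keep working.
    if value is None:
        return []
    return value if isinstance(value, list) else [value]

# as_list("some_scenario.yml") -> ["some_scenario.yml"]   (example path)
````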
Tullio Sebastiani
c00328cc2b v1.4.5 (#534) 2023-12-15 11:00:41 +01:00
Tullio Sebastiani
c2431d548f functional tests adapted to newer version of crc-cloud + OCP 4.14.1 (#532) 2023-12-11 12:48:42 -05:00
Paige Rubendall
b03511850b taking out more litmus references 2023-12-03 13:10:52 +05:30
Sahil Shah
82db2fca75 Removing Litmus Scenario 2023-11-16 09:50:04 -05:00
Naga Ravi Chaitanya Elluri
afe8d817a9 Print telemetry data location to stdout
This commit also deprecates litmus integration.
2023-11-13 10:01:17 -05:00
Tullio Sebastiani
dbf02a6c22 updated krkn-lib to fix log filtering in prow (#527) 2023-11-09 17:47:00 +01:00
Naga Ravi Chaitanya Elluri
94bec8dc9b Add missing import to get values from yaml (#526)
* Add missing import to get values from yaml

* Update Dockerfile

* Update Dockerfile-ppc64le

---------

Co-authored-by: Tullio Sebastiani <tsebastiani@users.noreply.github.com>
2023-11-07 11:07:17 +01:00
yogananth-subramanian
2111bab9a4 Pod ingress network shaping Chaos scenario
The scenario introduces network latency, packet loss, and bandwidth restriction in the Pod's network interface. The purpose of this scenario is to observe faults caused by random variations in the network.

The example config below applies ingress traffic shaping to the OpenShift console.
````
- id: pod_ingress_shaping
  config:
    namespace: openshift-console   # Required - Namespace of the pod to which the filter needs to be applied.
    label_selector: 'component=ui' # Applies traffic shaping to the OpenShift console pods.
    network_params:
        latency: 500ms             # Add 500ms latency to ingress traffic to the pod.
````
2023-11-06 23:34:17 -05:00
Kamesh Akella
b734f1dd05 Updating the chaos recommender README to point to accurate python version 2023-11-03 11:23:43 -04:00
Tullio Sebastiani
7a966a71d0 krkn integration of telemetry events collection (#523)
* function package refactoring in krkn-lib

* cluster events collection flag

* krkn-lib version bump

requirements

* dockerfile bump
2023-10-31 14:31:33 -04:00
Naga Ravi Chaitanya Elluri
43d891afd3 Bump telemetry archive default size to 500MB
This commit also removes litmus configs as they are not maintained.
2023-10-30 12:50:04 -04:00
Tullio Sebastiani
27fabfd4af OCP/K8S functionalities and packages splitting in krkn-lib (#507)
* krkn-lib ocp/k8s split adaptation

* library reference updated

* requirements update

* rebase with main + fix
2023-10-30 17:31:48 +01:00
Tullio Sebastiani
724068a978 Chaos recommender refactoring (#516)
* basic structure working

* config and options refactoring

nits and changes

* removed unused function with typo + fixed duration

* removed unused arguments

* minor fixes
2023-10-30 15:51:09 +01:00
Tullio Sebastiani
c9778474f1 arcaflow version bump (#520)
arcaflow version bump

stressng version typo
2023-10-27 18:09:46 +02:00
dependabot[bot]
6efdb2eb84 Bump werkzeug from 2.2.3 to 3.0.1
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.2.3 to 3.0.1.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/2.2.3...3.0.1)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-26 11:09:29 -04:00
Naga Ravi Chaitanya Elluri
0e852da7d4 Deprecate kubernetes method of deploying Krkn
This will ensure users use the recommended methods (standalone or containerized)
of installing and running Krkn.
2023-10-25 12:32:46 -04:00
jtydlack
86d1fda325 Fix container scenario to accept only signal number (#350) (#485) 2023-10-24 16:51:48 -04:00
Naga Ravi Chaitanya Elluri
fc6344176b Add pointer to the CNCF sandbox discussion (#517)
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2023-10-24 16:07:40 -04:00
jtydlack
ff469579e9 Use function get_yaml_item_value
Enables using the default even when the value was loaded as None.
2023-10-24 14:55:49 -04:00
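A Python sketch of the behaviour described, assuming a dict-like config: unlike plain dict.get(), the default also applies when a key is present but was parsed as None (illustrative, not the krkn-lib implementation):
````
# Illustrative only -- not the krkn-lib implementation.
def get_yaml_item_value(cont: dict, item: str, default=None):
    value = cont.get(item, default)
    # An explicitly empty YAML key loads as None; still fall back to the default.
    return default if value is None else value

# get_yaml_item_value({"wait_duration": None}, "wait_duration", 60) -> 60
````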
Naga Ravi Chaitanya Elluri
8cbd1c5e7f Add docs for installing chaos-recommender dependencies
This commit also updates roadmap around chaos-recommender.
2023-10-18 08:56:33 -04:00
Mudit Verma
5953e53b46 chaos recommendation entry in README (#510) 2023-10-16 11:26:32 -04:00
Mudit Verma
23f1fc044b Chaos Recommendation Utility (#508)
* application profiling based chaos recommendation

* deleted unused dir

* Update requirements.txt

Signed-off-by: Mudit Verma <mudiverm@in.ibm.com>

* Update config.ini

Signed-off-by: Mudit Verma <mudiverm@in.ibm.com>

* Update Makefile

Signed-off-by: Mudit Verma <mudiverm@in.ibm.com>

* Update Dockerfile

Signed-off-by: Mudit Verma <mudiverm@in.ibm.com>

* Update README.md

Signed-off-by: Mudit Verma <mudiverm@in.ibm.com>

---------

Signed-off-by: Mudit Verma <mudiverm@in.ibm.com>
2023-10-16 10:06:02 -04:00
Naga Ravi Chaitanya Elluri
69e386db53 Update roadmap with upcoming integrations and enhancements 2023-10-11 09:24:34 -04:00
Tullio Sebastiani
fef77cfc0e dockerfiles version update 2023-10-09 09:26:57 -04:00
Naga Ravi Chaitanya Elluri
eb2eabe029 Update community slack channel 2023-10-06 17:47:25 -04:00
Paige Rubendall
f7f1b2dfb0 Service disruption (#494)
* adding service disruption

* fixing kill services

* service log changes

* removing extra logging

* adding daemon set

* adding service disruption name changes

* cerberus config back

* bad string
2023-10-06 12:51:10 -04:00
Tullio Sebastiani
61356fd70b Added log telemetry piece to Krkn (#500)
* config

* log collection and upload

dictionary key fix

* escape regex in config.yaml

* bump krkn-lib version

* updated funtest github cli command

* update krkn-lib version to 1.3.2

* fixed requirements.txt
2023-10-06 10:08:46 -04:00
Tullio Sebastiani
067969a81a docker version update (#502) 2023-10-03 11:49:20 -04:00
Naga Ravi Chaitanya Elluri
972ac12921 Bump krkn-lib version (#499) 2023-10-03 17:00:07 +02:00
Tullio Sebastiani
ea813748ae added OCP/K8S functionalities split in the roadmap (#498) 2023-09-26 00:59:20 -04:00
Tullio Sebastiani
782d04c1b1 Prints the telemetry json after sending it to the webservice (#479)
* prints telemetry json after sending it to the service


deserialized base64 parameters

* json output even if telemetry collection is disabled.
2023-09-25 12:00:08 -04:00
Naga Ravi Chaitanya Elluri
2fb58f9897 Update roadmap with completed and new items 2023-09-21 15:33:44 -04:00
Tullio Sebastiani
5712721410 bumped docker version (#493)
Co-authored-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2023-09-19 17:38:44 +02:00
Tullio Sebastiani
5567c06cd0 reinstated io-hog documentation (#492) 2023-09-19 17:27:59 +02:00
Sahil Shah
0ad4c11356 Fix for time scenario (#490) 2023-09-14 12:36:08 -04:00
Tullio Sebastiani
f6f686e8fe fixed io-hog scenario 2023-09-13 09:57:00 -04:00
Pratyusha Thammineni
3a66f8a5a3 Added Docker image build workflow status badge
This allows users to track the docker-build action in README.md
without navigating to the Actions tab on GitHub.
2023-09-11 15:16:28 -04:00
Sahil Shah
585d519687 Adding Prometheus Disruption Scenario (#484) 2023-09-11 11:18:29 -04:00
yogananth-subramanian
e40fedcd44 Update etcd metrics 2023-09-08 11:11:42 -04:00
Paige Rubendall
1bb5b8ad04 adding comment 2023-08-29 21:54:17 -04:00
Paige Rubendall
725d58c8ce adding docs update again 2023-08-25 14:37:07 -04:00
Paige Rubendall
c6058da7a7 adding comment 2023-08-25 12:19:03 -04:00
Naga Ravi Chaitanya Elluri
06a8ed220c Bump release version to v1.4.4 2023-08-24 13:28:39 -04:00
Dustin Black
2c6b50bcdc bump arcaflow stressng plugin to 0.3.1 for bug fix 2023-08-24 12:50:28 -04:00
Naga Ravi Chaitanya Elluri
ed97c8df2b Bump release version to v1.4.3 2023-08-23 11:56:39 -04:00
Tullio Sebastiani
1baa68bcee engine bump to v0.6.1 2023-08-23 11:38:23 -04:00
Naga Ravi Chaitanya Elluri
ab84f09448 Use release tags vs latest for kubeconfig arca plugins (#473) 2023-08-23 09:59:33 -04:00
Dustin Black
6ace3c952b update to plugin release stressng:0.3.0 (#472) 2023-08-23 09:15:30 -04:00
Tullio Sebastiani
cee5259fd3 arcaflow scenarios removed from config.yaml 2023-08-23 08:50:19 -04:00
Tullio Sebastiani
f868000ebd Switched from krkn_lib_kubernetes to krkn_lib v1.0.0 (#469)
* changed all the references to krkn_lib_kubernetes to the new krkn_lib


changed all the references

* added krkn-lib pointer in documentation
2023-08-22 12:41:40 -04:00
pratyusha
d2d80be241 Updated config.yaml file with more scenarios (#468) 2023-08-21 11:26:33 -04:00
Naga Ravi Chaitanya Elluri
da464859c4 Bump release version to v1.4.2 2023-08-21 09:06:28 -04:00
Naga Ravi Chaitanya Elluri
ef88005985 Use images tagged with a release for hog scenarios
This commit switches from using latest images to a specific release
to review changes and update configs before using the latest bits.
2023-08-18 01:47:17 -04:00
Sahil Shah
102bdfdc96 Bump the release version to v1.4.1 (#465) 2023-08-17 10:18:11 -04:00
Sahil Shah
b569e6a9d5 Fixing pvc scenario 2023-08-16 16:05:18 -04:00
Tullio Sebastiani
dba38668b7 Dockerfile version bump 2023-08-11 11:12:56 -04:00
Tullio Sebastiani
39c0152b7b Krkn telemetry integration (#435)
* adapted config.yaml to the new feature

* temporarily pointing requirements.txt to the lib feature branch

* run_kraken.py + arcaflow scenarios refactoring


typo

* plugin scenario

* node scenarios


return failed scenarios

* container scenarios


fix

* time scenarios

* cluster shutdown  scenarios

* namespace scenarios

* zone outage scenarios

* app outage scenarios

* pvc scenarios

* network chaos scenarios

* run_kraken.py adaptation to telemetry

* prometheus telemetry upload + config.yaml


some fixes


typos and logs


max retries in config


telemetry id with run_uuid


safe_logger

* catch send_telemetry exception

* scenario collection bug fixes

* telemetry enabled check

* telemetry run tag

* requirements pointing to main + archive_size

* requirements.txt and config.yaml update

* added telemetry config to common config

* fixed scenario array elements for telemetry
2023-08-10 14:42:53 -04:00
jtydlack
491dc17267 Slo via http (#459)
* Fix typo

* Enable loading SLO profile via URL (#438)
2023-08-10 11:02:33 -04:00
yogananth-subramanian
b2b5002f45 Pod egress network shaping Chaos scenario
The scenario introduces network latency, packet loss, and bandwidth restriction in the Pod's network interface.
The purpose of this scenario is to observe faults caused by random variations in the network.

The example config below applies egress traffic shaping to the OpenShift console.
````
- id: pod_egress_shaping
  config:
    namespace: openshift-console   # Required - Namespace of the pod to which the filter needs to be applied.
    label_selector: 'component=ui' # Applies traffic shaping to the OpenShift console pods.
    network_params:
        latency: 500ms             # Add 500ms latency to egress traffic from the pod.
````
2023-08-08 11:45:03 -04:00
Sahil Shah
fccd701dee Changed the image in volume_scenario.yml to a public one (#458) 2023-08-02 00:11:38 -04:00
José Castillo Lema
570631ebfc Widen except (#457)
Signed-off-by: José Castillo Lema <josecastillolema@gmail.com>
2023-07-26 18:53:52 +02:00
Naga Ravi Chaitanya Elluri
3ab9ca4319 Bump release version to v1.3.6 2023-07-24 14:06:37 -04:00
Naga Ravi Chaitanya Elluri
4084ffd9c6 Bake in virtualenv in krkn images
This is needed to pin the Python version being used in case multiple
versions are installed.
2023-07-24 12:52:20 -04:00
Sahil Shah
19cc2c047f Fix for pvc scenario 2023-07-21 15:41:28 -04:00
Paige Rubendall
6197fc6722 separating build and test workflows (#448)
* separating build and test workflows

* only run build on pull request
2023-07-20 16:01:50 -04:00
Naga Ravi Chaitanya Elluri
2a8ac41ebf Bump release version to v1.3.5 2023-07-20 15:24:56 -04:00
Naga Ravi Chaitanya Elluri
b4d235d31c Bake in yq dependency in Kraken container images (#450)
This commit also updates ppc64le image to have the latest bits.
2023-07-20 13:17:52 -04:00
Naga Ravi Chaitanya Elluri
e4e4620d10 Bump release version to 1.3.4 (#447) 2023-06-28 16:30:28 -04:00
Naga Ravi Chaitanya Elluri
a2c24ab7ed Install latest version of krkn-lib-kubernetes (#446) 2023-06-28 15:21:19 -04:00
Naga Ravi Chaitanya Elluri
fe892fd9bf Switch from centos to redhat ubi base image
This replaces the base image for Kraken container images with the
Red Hat UBI image, which is more secure and stable.
2023-06-22 12:10:51 -04:00
Naga Ravi Chaitanya Elluri
74613fdb4b Install oc and kubectl clients from stable releases
This makes sure the latest clients are installed and used:
- This will avoid compatibility issues with the server
- Fixes security vulnerabilities and CVEs
2023-06-20 15:39:53 -04:00
Naga Ravi Chaitanya Elluri
28c37c9353 Bump release version to v1.3.3 2023-06-16 09:42:46 -04:00
Naga Ravi Chaitanya Elluri
de0567b067 Tweak the etcd alert severity 2023-06-16 09:19:17 -04:00
Naga Ravi Chaitanya Elluri
83486557f1 Bump release version to v1.3.2 (#439) 2023-06-15 12:12:42 -04:00
Naga Ravi Chaitanya Elluri
ce409ea6fb Update kube-burner dependency version to 1.7.0 2023-06-15 11:55:17 -04:00
Naga Ravi Chaitanya Elluri
0eb8d38596 Expand SLOs profile to cover monitoring for more alerts
This commit:
- Also sets appropriate severities to avoid false failures for the
  test cases, especially given that these are monitored during the chaos
  rather than post chaos. Critical alerts are all monitored post chaos, with a
  few that represent the overall health and performance of the service
  monitored during the chaos.
- Renames Alerts to SLOs validation

Metrics reference: f09a492b13/cmd/kube-burner/ocp-config/alerts.yml
2023-06-14 16:58:36 -04:00
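A Python sketch of severity-based handling consistent with the description above and with the config comment "exits 1 when severity=error" (names and structure are hypothetical; not the actual krkn code):
````
import logging
import sys

# Illustrative only -- names and structure are hypothetical.
def evaluate_slo_results(results: list) -> None:
    failed = False
    for result in results:
        severity = result.get("severity", "info")
        description = result.get("description", "")
        if severity == "error":
            logging.error("SLO violated: %s", description)
            failed = True
        elif severity == "warning":
            logging.warning("SLO violated: %s", description)
        else:
            logging.info("SLO check: %s", description)
    if failed:
        # Error-severity violations fail the run with a non-zero exit code.
        sys.exit(1)
````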
Tullio Sebastiani
68dc17bc44 krkn-lib-kubernetes refactoring proposal (#400)
* run_kraken.py updated + renamed kubernetes library folder


unstaged files


kubecli marker

* container scenarios updated

* node scenarios updated


typo


injected kubecli

* managed cluster scenarios updated

* time scenarios updated

* litmus scenarios updated

* cluster scenarios updated

* namespace scenarios updated

* pvc scenarios updated

* network chaos scenarios updated

* common_managed_cluster functions updated

* switched draft library to official one

* regression on rebase
2023-06-13 10:02:35 -04:00
Naga Ravi Chaitanya Elluri
572eeefaf4 Minor fixes
This commit fixes a few typos and duplicate logs
2023-06-12 21:05:27 -04:00
Naga Ravi Chaitanya Elluri
81376bad56 Bump release version to v1.3.1
This updates the Krkn container images to use the latest v1.3.1
minor release: https://github.com/redhat-chaos/krkn/releases.
2023-06-07 14:41:09 -04:00
110 changed files with 4716 additions and 2407 deletions

View File

@@ -1,8 +1,5 @@
name: Build Krkn
on:
push:
branches:
- main
pull_request:
jobs:
@@ -51,20 +48,4 @@ jobs:
if-no-files-found: error
- name: Check CI results
run: grep Fail CI/results.markdown && false || true
- name: Build the Docker images
run: docker build --no-cache -t quay.io/redhat-chaos/krkn containers/
- name: Login in quay
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}
- name: Push the Docker images
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker push quay.io/redhat-chaos/krkn
- name: Rebuild krkn-hub
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
uses: redhat-chaos/actions/krkn-hub@main
with:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}

.github/workflows/docker-image.yml (new file, 41 lines)
View File

@@ -0,0 +1,41 @@
name: Docker Image CI
on:
push:
branches:
- main
pull_request:
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Build the Docker images
run: |
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/
docker tag quay.io/krkn-chaos/krkn quay.io/redhat-chaos/krkn
- name: Login in quay
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USERNAME }}
QUAY_TOKEN: ${{ secrets.QUAY_PASSWORD }}
- name: Push the KrknChaos Docker images
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker push quay.io/krkn-chaos/krkn
- name: Login in to redhat-chaos quay
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}
- name: Push the RedHat Chaos Docker images
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker push quay.io/redhat-chaos/krkn
- name: Rebuild krkn-hub
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
uses: redhat-chaos/actions/krkn-hub@main
with:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}

View File

@@ -34,7 +34,7 @@ jobs:
- name: Check out Kraken
uses: actions/checkout@v3
- name: Checkout Pull Request
run: hub pr checkout ${{ github.event.issue.number }}
run: gh pr checkout ${{ github.event.issue.number }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Install OC CLI
@@ -44,47 +44,65 @@ jobs:
- name: Install python 3.9
uses: actions/setup-python@v4
with:
python-version: '3.9'
python-version: '3.9'
- name: Setup kraken dependencies
run: pip install -r requirements.txt
- name: Create Workdir & export the path
run: |
mkdir workdir
echo "WORKDIR_PATH=`pwd`/workdir" >> $GITHUB_ENV
- name: Generate run id
run: |
echo "RUN_ID=`date +%s`" > $GITHUB_ENV
echo "Run Id: ${RUN_ID}"
- name: Write Pull Secret
env:
PULLSECRET_BASE64: ${{ secrets.PS_64 }}
run: |
echo "$PULLSECRET_BASE64" | base64 --decode > pullsecret.txt
- name: Write Boot Private Key
env:
BOOT_KEY: ${{ secrets.CRC_KEY_FILE }}
run: |
echo -n "$BOOT_KEY" > key.txt
- name: Teardown CRC (Post Action)
uses: webiny/action-post-run@3.0.0
id: post-run-command
with:
# currently using image coming from tsebastiani quay.io repo
# waiting that a fix is merged in the upstream one
# post action run cannot (apparently) be properly indented
run: docker run -v "${{ env.WORKDIR_PATH }}:/workdir" -e WORKING_MODE=T -e AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }} -e AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }} -e AWS_DEFAULT_REGION=us-west-2 -e TEARDOWN_RUN_ID=crc quay.io/tsebastiani/crc-cloud
- name: Run CRC
# currently using image coming from tsebastiani quay.io repo
# waiting that a fix is merged in the upstream one
run: podman run --rm -v "${{ github.workspace }}:/workspace:z" -e AWS_ACCESS_KEY_ID="${{ secrets.AWS_ACCESS_KEY_ID }}" -e AWS_SECRET_ACCESS_KEY="${{ secrets.AWS_SECRET_ACCESS_KEY }}" -e AWS_DEFAULT_REGION=us-west-2 quay.io/crcont/crc-cloud:v0.0.2 destroy --project-name "chaos-funtest-${{ env.RUN_ID }}" --backed-url "s3://krkn-crc-state/${{ env.RUN_ID }}" --provider "aws"
- name: Create cluster
run: |
docker run -v "${{ env.WORKDIR_PATH }}:/workdir" \
-e WORKING_MODE=C \
-e PULL_SECRET="${{ secrets.PULL_SECRET }}" \
-e AWS_ACCESS_KEY_ID="${{ secrets.AWS_ACCESS_KEY_ID }}" \
-e AWS_SECRET_ACCESS_KEY="${{ secrets.AWS_SECRET_ACCESS_KEY }}" \
-e AWS_DEFAULT_REGION=us-west-2 \
-e CREATE_RUN_ID=crc \
-e PASS_KUBEADMIN="${{ secrets.KUBEADMIN_PWD }}" \
-e PASS_REDHAT="${{ secrets.REDHAT_PWD }}" \
-e PASS_DEVELOPER="${{ secrets.DEVELOPER_PWD }}" \
quay.io/tsebastiani/crc-cloud
- name: OpenShift login and example deployment, GitHub Action env init
podman run --name crc-cloud-create --rm \
-v ${PWD}:/workspace:z \
-e AWS_ACCESS_KEY_ID="${{ secrets.AWS_ACCESS_KEY_ID }}" \
-e AWS_SECRET_ACCESS_KEY="${{ secrets.AWS_SECRET_ACCESS_KEY }}" \
-e AWS_DEFAULT_REGION="us-west-2" \
quay.io/crcont/crc-cloud:v0.0.2 \
create aws \
--project-name "chaos-funtest-${RUN_ID}" \
--backed-url "s3://krkn-crc-state/${RUN_ID}" \
--output "/workspace" \
--aws-ami-id "ami-00f5eaf98cf42ef9f" \
--pullsecret-filepath /workspace/pullsecret.txt \
--key-filepath /workspace/key.txt
- name: Setup kubeconfig
continue-on-error: true
run: |
ssh -o StrictHostKeyChecking=no -i id_rsa core@$(cat host) "cat /opt/kubeconfig" > kubeconfig
sed -i "s/https:\/\/api.crc.testing:6443/https:\/\/`cat host`.nip.io:6443/g" kubeconfig
echo "KUBECONFIG=${PWD}/kubeconfig" > $GITHUB_ENV
- name: Example deployment, GitHub Action env init
env:
NAMESPACE: test-namespace
DEPLOYMENT_NAME: test-nginx
KUBEADMIN_PWD: '${{ secrets.KUBEADMIN_PWD }}'
run: ./CI/CRC/init_github_action.sh
- name: Setup test suite
run: |
yq -i '.kraken.port="8081"' CI/config/common_test_config.yaml
yq -i '.kraken.signal_address="0.0.0.0"' CI/config/common_test_config.yaml
yq -i '.kraken.kubeconfig_path="'${KUBECONFIG}'"' CI/config/common_test_config.yaml
echo "test_app_outages_gh" > ./CI/tests/my_tests
echo "test_container" >> ./CI/tests/my_tests
echo "test_namespace" >> ./CI/tests/my_tests
@@ -106,6 +124,6 @@ jobs:
echo "# Test results" > $GITHUB_STEP_SUMMARY
cat CI/results.markdown >> $GITHUB_STEP_SUMMARY
echo "# Test coverage" >> $GITHUB_STEP_SUMMARY
python -m coverage report --format=markdown >> $GITHUB_STEP_SUMMARY
python -m coverage report --format=markdown >> $GITHUB_STEP_SUMMARY

View File

@@ -1,16 +1,11 @@
#!/bin/bash
SCRIPT_PATH=./CI/CRC
DEPLOYMENT_PATH=$SCRIPT_PATH/deployment.yaml
CLUSTER_INFO=cluster_infos.json
[[ -z $WORKDIR_PATH ]] && echo "[ERROR] please set \$WORKDIR_PATH environment variable" && exit 1
CLUSTER_INFO_PATH=$WORKDIR_PATH/crc/$CLUSTER_INFO
[[ ! -f $DEPLOYMENT_PATH ]] && echo "[ERROR] please run $0 from GitHub action root directory" && exit 1
[[ -z $KUBEADMIN_PWD ]] && echo "[ERROR] kubeadmin password not set, please check the repository secrets" && exit 1
[[ -z $DEPLOYMENT_NAME ]] && echo "[ERROR] please set \$DEPLOYMENT_NAME environment variable" && exit 1
[[ -z $NAMESPACE ]] && echo "[ERROR] please set \$NAMESPACE environment variable" && exit 1
[[ ! -f $CLUSTER_INFO_PATH ]] && echo "[ERROR] cluster_info.json not found in $CLUSTER_INFO_PATH" && exit 1
OPENSSL=`which openssl 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: openssl missing, please install it and try again" && exit 1
@@ -23,20 +18,11 @@ JQ=`which jq 2>/dev/null`
ENVSUBST=`which envsubst 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: envsubst missing, please install it and try again" && exit 1
API_ADDRESS="$($JQ -r '.api.address' $CLUSTER_INFO_PATH)"
API_PORT="$($JQ -r '.api.port' $CLUSTER_INFO_PATH)"
BASE_HOST=`$JQ -r '.api.address' $CLUSTER_INFO_PATH | sed -r 's#https:\/\/api\.(.+\.nip\.io)#\1#'`
FQN=$DEPLOYMENT_NAME.apps.$BASE_HOST
echo "[INF] logging on $API_ADDRESS:$API_PORT"
COUNTER=1
until `$OC login --insecure-skip-tls-verify -u kubeadmin -p $KUBEADMIN_PWD $API_ADDRESS:$API_PORT > /dev/null 2>&1`
do
echo "[INF] login attempt $COUNTER"
[[ $COUNTER == 20 ]] && echo "[ERR] maximum login attempts exceeded, failing" && exit 1
((COUNTER++))
sleep 10
done
API_PORT="6443"
API_ADDRESS="https://api.`cat host`.nip.io:${API_PORT}"
FQN=$DEPLOYMENT_NAME.apps.$API_ADDRESS
echo "[INF] deploying example deployment: $DEPLOYMENT_NAME in namespace: $NAMESPACE"
$ENVSUBST < $DEPLOYMENT_PATH | $OC apply -f - > /dev/null 2>&1

View File

@@ -15,17 +15,27 @@ cerberus:
performance_monitoring:
deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift.
repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
kube_burner_binary_url: "https://github.com/cloud-bulldozer/kube-burner/releases/download/v0.9.1/kube-burner-0.9.1-Linux-x86_64.tar.gz"
capture_metrics: False
config_path: config/kube_burner.yaml # Define the Elasticsearch url and index name in this config.
metrics_profile_path: config/metrics-aggregated.yaml
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set.
enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error.
alert_profile: config/alerts # Path to alert profile with the prometheus queries.
alert_profile: config/alerts.yaml # Path to alert profile with the prometheus queries.
tunings:
wait_duration: 6 # Duration to wait between each chaos scenario.
iterations: 1 # Number of times to execute the scenarios.
daemon_mode: False # Iterations are set to infinity which means that the kraken will cause chaos forever.
telemetry:
enabled: False # enable/disables the telemetry collection feature
api_url: https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production #telemetry service endpoint
username: username # telemetry service username
password: password # telemetry service password
prometheus_backup: True # enables/disables prometheus data collection
full_prometheus_backup: False # if is set to False only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
archive_path: /tmp # local path where the archive files will be temporarly stored
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 10000 # the size of the prometheus data archive size in KB. The lower the size of archive is

View File

@@ -1,34 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: litmus
spec:
# It can be true/false
annotationCheck: 'false'
# It can be active/stop
engineState: 'active'
chaosServiceAccount: litmus-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
experiments:
- name: node-cpu-hog
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '10'
# Number of cores of node CPU to be consumed
- name: NODE_CPU_CORE
value: '1'
# percentage of total nodes to target
- name: NODES_AFFECTED_PERC
value: '30'
# ENTER THE COMMA SEPARATED TARGET NODES NAME
- name: TARGET_NODES
value: $WORKER_NODE

View File

@@ -1,34 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: litmus
spec:
# It can be true/false
annotationCheck: 'false'
# It can be active/stop
engineState: 'active'
chaosServiceAccount: litmus-sa
monitoring: false
# It can be delete/retain
jobCleanUpPolicy: 'delete'
experiments:
- name: node-cpu-hog
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '10'
# Number of cores of node CPU to be consumed
- name: NODE_CPU_CORE
value: '1'
# percentage of total nodes to target
- name: NODES_AFFECTED_PERC
value: '30'
# ENTER THE COMMA SEPARATED TARGET NODES NAME
- name: TARGET_NODES
value:

View File

@@ -1,35 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: litmus
spec:
# It can be delete/retain
jobCleanUpPolicy: 'retain'
# It can be active/stop
engineState: 'active'
chaosServiceAccount: litmus-sa
experiments:
- name: node-io-stress
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '10'
## specify the size as percentage of free space on the file system
- name: FILESYSTEM_UTILIZATION_PERCENTAGE
value: '100'
## Number of core of CPU
- name: CPU
value: '1'
## Total number of workers default value is 4
- name: NUMBER_OF_WORKERS
value: '3'
## enter the comma separated target nodes name
- name: TARGET_NODES
value: $WORKER_NODE

View File

@@ -1,35 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: litmus
spec:
# It can be delete/retain
jobCleanUpPolicy: 'retain'
# It can be active/stop
engineState: 'active'
chaosServiceAccount: litmus-sa
experiments:
- name: node-io-stress
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '10'
## specify the size as percentage of free space on the file system
- name: FILESYSTEM_UTILIZATION_PERCENTAGE
value: '100'
## Number of core of CPU
- name: CPU
value: '1'
## Total number of workers default value is 4
- name: NUMBER_OF_WORKERS
value: '3'
## enter the comma separated target nodes name
- name: TARGET_NODES
value:

View File

@@ -1,28 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: litmus
spec:
# It can be delete/retain
jobCleanUpPolicy: 'retain'
# It can be active/stop
engineState: 'active'
chaosServiceAccount: litmus-sa
experiments:
- name: node-memory-hog
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '10'
## Specify the size as percent of total node capacity Ex: '30'
## Note: For consuming memory in mebibytes change the variable to MEMORY_CONSUMPTION_MEBIBYTES
- name: MEMORY_CONSUMPTION_PERCENTAGE
value: '30'
# ENTER THE COMMA SEPARATED TARGET NODES NAME
- name: TARGET_NODES
value: $WORKER_NODE

View File

@@ -1,28 +0,0 @@
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
name: nginx-chaos
namespace: litmus
spec:
# It can be delete/retain
jobCleanUpPolicy: 'retain'
# It can be active/stop
engineState: 'active'
chaosServiceAccount: litmus-sa
experiments:
- name: node-memory-hog
spec:
components:
env:
# set chaos duration (in sec) as desired
- name: TOTAL_CHAOS_DURATION
value: '10'
## Specify the size as percent of total node capacity Ex: '30'
## Note: For consuming memory in mebibytes change the variable to MEMORY_CONSUMPTION_MEBIBYTES
- name: MEMORY_CONSUMPTION_PERCENTAGE
value: '30'
# ENTER THE COMMA SEPARATED TARGET NODES NAME
- name: TARGET_NODES
value:

View File

@@ -51,7 +51,7 @@ spec:
claimName: kraken-test-pvc
containers:
- name: kraken-test-container
image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest'
image: 'quay.io/centos7/httpd-24-centos7:latest'
volumeMounts:
- mountPath: "/home/krake-dir/"
name: kraken-test-pv

View File

@@ -1,20 +0,0 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_litmus_cpu {
export scenario_type="litmus_scenarios"
export scenario_file="- scenarios/templates/litmus-rbac.yaml"
export post_config="- CI/scenarios/node_cpu_hog_engine_node.yaml"
envsubst < CI/config/common_test_config.yaml > CI/config/litmus_config.yaml
envsubst < CI/scenarios/node_cpu_hog_engine.yaml > CI/scenarios/node_cpu_hog_engine_node.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario test: Success"
}
functional_test_litmus_cpu

View File

@@ -1,20 +0,0 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_litmus_cpu {
[ -z $NODE_NAME ] && echo "[ERR] NODE_NAME variable not set, failing." && exit 1
yq -i ' .spec.experiments = [{"name": "node-cpu-hog", "spec":{"components":{"env":[{"name":"TOTAL_CHAOS_DURATION","value":"10"},{"name":"NODE_CPU_CORE","value":"1"},{"name":"NODES_AFFECTED_PERC","value":"30"},{"name":"TARGET_NODES","value":"'$NODE_NAME'"}]}}}]' CI/scenarios/node_cpu_hog_engine_node.yaml
cp CI/config/common_test_config.yaml CI/config/litmus_config.yaml
yq '.kraken.chaos_scenarios = [{"litmus_scenarios":[["scenarios/openshift/templates/litmus-rbac.yaml","CI/scenarios/node_cpu_hog_engine_node.yaml"]]}]' -i CI/config/litmus_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario test: Success"
}
functional_test_litmus_cpu

View File

@@ -1,20 +0,0 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_litmus_io {
export scenario_type="litmus_scenarios"
export scenario_file="- scenarios/templates/litmus-rbac.yaml"
export post_config="- CI/scenarios/node_io_engine_node.yaml"
envsubst < CI/config/common_test_config.yaml > CI/config/litmus_config.yaml
envsubst < CI/scenarios/node_io_engine.yaml > CI/scenarios/node_io_engine_node.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario test: Success"
}
functional_test_litmus_io

View File

@@ -1,19 +0,0 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_litmus_io {
[ -z $NODE_NAME ] && echo "[ERR] NODE_NAME variable not set, failing." && exit 1
yq -i ' .spec.experiments = [{"name": "node-io-stress", "spec":{"components":{"env":[{"name":"TOTAL_CHAOS_DURATION","value":"10"},{"name":"FILESYSTEM_UTILIZATION_PERCENTAGE","value":"100"},{"name":"CPU","value":"1"},{"name":"NUMBER_OF_WORKERS","value":"3"},{"name":"TARGET_NODES","value":"'$NODE_NAME'"}]}}}]' CI/scenarios/node_io_engine_node.yaml
cp CI/config/common_test_config.yaml CI/config/litmus_config.yaml
yq '.kraken.chaos_scenarios = [{"litmus_scenarios":[["scenarios/openshift/templates/litmus-rbac.yaml","CI/scenarios/node_io_engine_node.yaml"]]}]' -i CI/config/litmus_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario test: Success"
}
functional_test_litmus_io

View File

@@ -1,20 +0,0 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_litmus_mem {
export scenario_type="litmus_scenarios"
export scenario_file="- scenarios/templates/litmus-rbac.yaml"
export post_config="- CI/scenarios/node_mem_engine_node.yaml"
envsubst < CI/config/common_test_config.yaml > CI/config/litmus_config.yaml
envsubst < CI/scenarios/node_mem_engine.yaml > CI/scenarios/node_mem_engine_node.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario $1 test: Success"
}
functional_test_litmus_mem "- CI/scenarios/node_mem_engine.yaml"

View File

@@ -1,19 +0,0 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_litmus_mem {
[ -z $NODE_NAME ] && echo "[ERR] NODE_NAME variable not set, failing." && exit 1
yq -i ' .spec.experiments = [{"name": "node-io-stress", "spec":{"components":{"env":[{"name":"TOTAL_CHAOS_DURATION","value":"10"},{"name":"CPU","value":"1"},{"name":"TARGET_NODES","value":"'$NODE_NAME'"}]}}}]' CI/scenarios/node_mem_engine_node.yaml
cp CI/config/common_test_config.yaml CI/config/litmus_config.yaml
yq '.kraken.chaos_scenarios = [{"litmus_scenarios":[["scenarios/openshift/templates/litmus-rbac.yaml","CI/scenarios/node_mem_engine_node.yaml"]]}]' -i CI/config/litmus_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario test: Success"
}
functional_test_litmus_mem

View File

@@ -1,21 +1,50 @@
# Contributor Covenant Code of Conduct
## CNCF Community Code of Conduct v1.3
## Our Pledge
Other languages available:
- [Arabic/العربية](code-of-conduct-languages/ar.md)
- [Bulgarian/Български](code-of-conduct-languages/bg.md)
- [Chinese/中文](code-of-conduct-languages/zh.md)
- [Czech/Česky](code-of-conduct-languages/cs.md)
- [Farsi/فارسی](code-of-conduct-languages/fa.md)
- [French/Français](code-of-conduct-languages/fr.md)
- [German/Deutsch](code-of-conduct-languages/de.md)
- [Hindi/हिन्दी](code-of-conduct-languages/hi.md)
- [Indonesian/Bahasa Indonesia](code-of-conduct-languages/id.md)
- [Italian/Italiano](code-of-conduct-languages/it.md)
- [Japanese/日本語](code-of-conduct-languages/jp.md)
- [Korean/한국어](code-of-conduct-languages/ko.md)
- [Polish/Polski](code-of-conduct-languages/pl.md)
- [Portuguese/Português](code-of-conduct-languages/pt.md)
- [Russian/Русский](code-of-conduct-languages/ru.md)
- [Spanish/Español](code-of-conduct-languages/es.md)
- [Turkish/Türkçe](code-of-conduct-languages/tr.md)
- [Ukrainian/Українська](code-of-conduct-languages/uk.md)
- [Vietnamese/Tiếng Việt](code-of-conduct-languages/vi.md)
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
### Community Code of Conduct
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
As contributors, maintainers, and participants in the CNCF community, and in the interest of fostering
an open and welcoming community, we pledge to respect all people who participate or contribute
through reporting issues, posting feature requests, updating documentation,
submitting pull requests or patches, attending conferences or events, or engaging in other community or project activities.
We are committed to making participation in the CNCF community a harassment-free experience for everyone, regardless of age, body size, caste, disability, ethnicity, level of experience, family status, gender, gender identity and expression, marital status, military or veteran status, nationality, personal appearance, race, religion, sexual orientation, socioeconomic status, tribe, or any other dimension of diversity.
## Scope
This code of conduct applies:
* within project and community spaces,
* in other spaces when an individual CNCF community participant's words or actions are directed at or are about a CNCF project, the CNCF community, or another CNCF community participant.
### CNCF Events
CNCF events that are produced by the Linux Foundation with professional events staff are governed by the Linux Foundation [Events Code of Conduct](https://events.linuxfoundation.org/code-of-conduct/) available on the event page. This is designed to be used in conjunction with the CNCF Code of Conduct.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
The CNCF Community is open, inclusive and respectful. Every member of our community has the right to have their identity respected.
Examples of behavior that contributes to a positive environment include but are not limited to:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
@@ -24,104 +53,52 @@ community include:
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
* Using welcoming and inclusive language
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
Examples of unacceptable behavior include but are not limited to:
* The use of sexualized language or imagery
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Public or private harassment in any form
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Violence, threatening violence, or encouraging others to engage in violent behavior
* Stalking or following someone without their consent
* Unwelcome physical contact
* Unwelcome sexual or romantic attention or advances
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
The following behaviors are also prohibited:
* Providing knowingly false or misleading information in connection with a Code of Conduct investigation or otherwise intentionally tampering with an investigation.
* Retaliating against a person because they reported an incident or provided information about an incident as a witness.
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct.
By adopting this Code of Conduct, project maintainers commit themselves to fairly and consistently applying these principles to every aspect
of managing a CNCF project.
Project maintainers who do not follow or enforce the Code of Conduct may be temporarily or permanently removed from the project team.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Reporting
## Scope
For incidents occurring in the Kubernetes community, contact the [Kubernetes Code of Conduct Committee](https://git.k8s.io/community/committee-code-of-conduct) via <conduct@kubernetes.io>. You can expect a response within three business days.
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
For other projects, or for incidents that are project-agnostic or impact multiple CNCF projects, please contact the [CNCF Code of Conduct Committee](https://www.cncf.io/conduct/committee/) via <conduct@cncf.io>. Alternatively, you can contact any of the individual members of the [CNCF Code of Conduct Committee](https://www.cncf.io/conduct/committee/) to submit your report. For more detailed instructions on how to submit a report, including how to submit a report anonymously, please see our [Incident Resolution Procedures](https://github.com/cncf/foundation/blob/main/code-of-conduct/coc-incident-resolution-procedures.md). You can expect a response within three business days.
For incidents occurring at CNCF event that is produced by the Linux Foundation, please contact <eventconduct@cncf.io>.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement.
All complaints will be reviewed and investigated promptly and fairly.
Upon review and investigation of a reported incident, the CoC response team that has jurisdiction will determine what action is appropriate based on this Code of Conduct and its related documentation.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
For information about which Code of Conduct incidents are handled by project leadership, which incidents are handled by the CNCF Code of Conduct Committee, and which incidents are handled by the Linux Foundation (including its events team), see our [Jurisdiction Policy](https://github.com/cncf/foundation/blob/main/code-of-conduct/coc-committee-jurisdiction-policy.md).
## Enforcement Guidelines
## Amendments
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
Consistent with the CNCF Charter, any substantive changes to this Code of Conduct must be approved by the Technical Oversight Committee.
### 1. Correction
## Acknowledgements
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
This Code of Conduct is adapted from the Contributor Covenant
(http://contributor-covenant.org), version 2.0 available at
http://contributor-covenant.org/version/2/0/code_of_conduct/

View File

@@ -1,10 +1,11 @@
# Krkn aka Kraken
[![Docker Repository on Quay](https://quay.io/repository/redhat-chaos/krkn/status "Docker Repository on Quay")](https://quay.io/repository/redhat-chaos/krkn?tab=tags&tag=latest)
![Workflow-Status](https://github.com/redhat-chaos/krkn/actions/workflows/docker-image.yml/badge.svg)
![Krkn logo](media/logo.png)
Chaos and resiliency testing tool for Kubernetes and OpenShift.
Kraken injects deliberate failures into Kubernetes/OpenShift clusters to check if it is resilient to turbulent conditions.
Chaos and resiliency testing tool for Kubernetes.
Kraken injects deliberate failures into Kubernetes clusters to check if it is resilient to turbulent conditions.
### Workflow
@@ -17,25 +18,27 @@ Kraken injects deliberate failures into Kubernetes/OpenShift clusters to check i
### Chaos Testing Guide
[Guide](docs/index.md) encapsulates:
- Test methodology that needs to be embraced.
- Best practices that an OpenShift cluster, platform and applications running on top of it should take into account for best user experience, performance, resilience and reliability.
- Best practices that a Kubernetes cluster, platform and applications running on top of it should take into account for best user experience, performance, resilience and reliability.
- Tooling.
- Scenarios supported.
- Test environment recommendations as to how and where to run chaos tests.
- Chaos testing in practice.
The guide is hosted at https://redhat-chaos.github.io/krkn.
The guide is hosted at https://krkn-chaos.github.io/krkn.
### How to Get Started
Instructions on how to setup, configure and run Kraken can be found at [Installation](docs/installation.md).
You may consider utilizing the chaos recommendation tool prior to initiating the chaos runs to profile the application service(s) under test. This tool discovers a list of Krkn scenarios with a high probability of causing failures or disruptions to your application service(s). The tool can be accessed at [Chaos-Recommender](utils/chaos_recommender/README.md).
See the [getting started doc](docs/getting_started.md) on support on how to get started with your own custom scenario or editing current scenarios for your specific usage.
After installation, refer back to the below sections for supported scenarios and how to tweak the kraken config to load them on your cluster.
#### Running Kraken with minimal configuration tweaks
For cases where you want to run Kraken with minimal configuration changes, refer to [Kraken-hub](https://github.com/redhat-chaos/krkn-hub). One use case is CI integration where you do not want to carry around different configuration files for the scenarios.
For cases where you want to run Kraken with minimal configuration changes, refer to [Kraken-hub](https://github.com/redhat-chaos/krknChaos-hub). One use case is CI integration where you do not want to carry around different configuration files for the scenarios.
### Setting up infrastructure dependencies
Kraken indexes the metrics specified in the profile into Elasticsearch in addition to leveraging Cerberus for understanding the health of the Kubernetes/OpenShift cluster under test. More information on the features is documented below. The infrastructure pieces can be easily installed and uninstalled by running:
@@ -54,29 +57,30 @@ This will manage the Cerberus and Elasticsearch containers on the host on which
Instructions on how to setup the config and the options supported can be found at [Config](docs/config.md).
### Kubernetes/OpenShift chaos scenarios supported
### Kubernetes chaos scenarios supported
Scenario type | Kubernetes | OpenShift
--------------------------- | ------------- |--------------------|
[Pod Scenarios](docs/pod_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Container Scenarios](docs/container_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Node Scenarios](docs/node_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Time Scenarios](docs/time_scenarios.md) | :x: | :heavy_check_mark: |
[Hog Scenarios](docs/arcaflow_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Cluster Shut Down Scenarios](docs/cluster_shut_down_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Namespace Scenarios](docs/namespace_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Zone Outage Scenarios](docs/zone_outage.md) | :heavy_check_mark: | :heavy_check_mark: |
[Application_outages](docs/application_outages.md) | :heavy_check_mark: | :heavy_check_mark: |
[PVC scenario](docs/pvc_scenario.md) | :heavy_check_mark: | :heavy_check_mark: |
[Network_Chaos](docs/network_chaos.md) | :heavy_check_mark: | :heavy_check_mark: |
[ManagedCluster Scenarios](docs/managedcluster_scenarios.md) | :heavy_check_mark: | :question: |
Scenario type | Kubernetes
--------------------------- | ------------- |
[Pod Scenarios](docs/pod_scenarios.md) | :heavy_check_mark: |
[Pod Network Scenarios](docs/pod_network_scenarios.md) | :x: |
[Container Scenarios](docs/container_scenarios.md) | :heavy_check_mark: |
[Node Scenarios](docs/node_scenarios.md) | :heavy_check_mark: |
[Time Scenarios](docs/time_scenarios.md) | :x: |
[Hog Scenarios: CPU, Memory](docs/arcaflow_scenarios.md) | :heavy_check_mark: |
[Cluster Shut Down Scenarios](docs/cluster_shut_down_scenarios.md) | :heavy_check_mark: |
[Service Disruption Scenarios](docs/service_disruption_scenarios.md) | :heavy_check_mark: |
[Zone Outage Scenarios](docs/zone_outage.md) | :heavy_check_mark: |
[Application_outages](docs/application_outages.md) | :heavy_check_mark: |
[PVC scenario](docs/pvc_scenario.md) | :heavy_check_mark: |
[Network_Chaos](docs/network_chaos.md) | :heavy_check_mark: |
[ManagedCluster Scenarios](docs/managedcluster_scenarios.md) | :heavy_check_mark: |
### Kraken scenario pass/fail criteria and report
It is important to make sure to check if the targeted component recovered from the chaos injection and also if the Kubernetes/OpenShift cluster is healthy as failures in one component can have an adverse impact on other components. Kraken does this by:
- Having built in checks for pod and node based scenarios to ensure the expected number of replicas and nodes are up. It also supports running custom scripts with the checks.
- Leveraging [Cerberus](https://github.com/openshift-scale/cerberus) to monitor the cluster under test and consuming the aggregated go/no-go signal to determine pass/fail post chaos. It is highly recommended to turn on the Cerberus health check feature available in Kraken. Instructions on installing and setting up Cerberus can be found [here](https://github.com/openshift-scale/cerberus#installation), or it can be installed from Kraken using the [instructions](https://github.com/redhat-chaos/krkn#setting-up-infrastructure-dependencies). Once Cerberus is up and running, set cerberus_enabled to True and cerberus_url to the URL where Cerberus publishes the go/no-go signal in the Kraken config file. Cerberus can monitor [application routes](https://github.com/redhat-chaos/cerberus/blob/main/docs/config.md#watch-routes) during the chaos and fails the run if it encounters downtime, since that represents potential downtime in a customer's or user's environment as well. This is especially important during control plane chaos scenarios targeting the API server, Etcd, Ingress, etc. It can be enabled by setting `check_applicaton_routes: True` in the [Kraken config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) provided application routes are being monitored in the [cerberus config](https://github.com/redhat-chaos/krkn/blob/main/config/cerberus.yaml).
- Leveraging the [kube-burner](docs/alerts.md) alerting feature to fail the runs in case of critical alerts.
- Leveraging [Cerberus](https://github.com/redhat-chaos/cerberus) to monitor the cluster under test and consuming the aggregated go/no-go signal to determine pass/fail post chaos. It is highly recommended to turn on the Cerberus health check feature available in Kraken. Instructions on installing and setting up Cerberus can be found [here](https://github.com/openshift-scale/cerberus#installation), or it can be installed from Kraken using the [instructions](https://github.com/redhat-chaos/krkn#setting-up-infrastructure-dependencies). Once Cerberus is up and running, set cerberus_enabled to True and cerberus_url to the URL where Cerberus publishes the go/no-go signal in the Kraken config file ( a minimal sketch follows this list ). Cerberus can monitor [application routes](https://github.com/redhat-chaos/cerberus/blob/main/docs/config.md#watch-routes) during the chaos and fails the run if it encounters downtime, since that represents potential downtime in a customer's or user's environment as well. This is especially important during control plane chaos scenarios targeting the API server, Etcd, Ingress, etc. It can be enabled by setting `check_applicaton_routes: True` in the [Kraken config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) provided application routes are being monitored in the [cerberus config](https://github.com/redhat-chaos/krkn/blob/main/config/cerberus.yaml).
- Leveraging the built-in alert collection feature to fail the runs in case of critical alerts.
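A minimal sketch of the cerberus section of the Kraken config, assuming only the keys referenced above; the URL and port are illustrative placeholders:
```
cerberus:
    cerberus_enabled: True                 # consume the Cerberus go/no-go signal to determine pass/fail post chaos
    cerberus_url: http://0.0.0.0:8080      # URL where Cerberus publishes the go/no-go signal (illustrative value)
    check_applicaton_routes: False         # fail the run if monitored application routes saw downtime during chaos
```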
### Signaling
In CI runs or any external job it is useful to stop Kraken once a certain test or state is reached. We created a way to signal Kraken to pause the chaos or stop it completely using a signal posted to a port of your choice.
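A sketch of the signal-related keys as they appear in the config files later in this compare ( values illustrative ):
```
kraken:
    publish_kraken_status: True    # current status can be accessed at http://0.0.0.0:8081
    signal_state: RUN              # set to PAUSE to make Kraken wait for a RUN signal before starting the scenarios
    signal_address: 0.0.0.0        # signal listening address
    port: 8081                     # signal port
```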
@@ -90,23 +94,26 @@ More detailed information on enabling and leveraging this feature can be found [
Monitoring the Kubernetes/OpenShift cluster to observe the impact of Kraken chaos scenarios on various components is key to finding the bottlenecks, as it is important to make sure the cluster is healthy in terms of both recovery and performance during/after the failure has been injected. Instructions on enabling it can be found [here](docs/performance_dashboards.md).
### Scraping and storing metrics long term
Kraken supports capturing metrics for the duration of the scenarios defined in the config and indexes them into Elasticsearch to be able to store and evaluate the state of the runs long term. The indexed metrics can be visualized with the help of Grafana. It uses [Kube-burner](https://github.com/cloud-bulldozer/kube-burner) under the hood. The metrics to capture need to be defined in a metrics profile which Kraken consumes to query prometheus ( installed by default in OpenShift ) with the start and end timestamp of the run. Information on enabling and leveraging this feature can be found [here](docs/metrics.md).
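As a rough sketch, the relevant performance_monitoring keys look like the following ( key names are taken from the config files shown later in this compare; the values are illustrative and may need adjusting for your environment ):
```
performance_monitoring:
    capture_metrics: True                                   # scrape metrics for the duration of the run
    config_path: config/kube_burner.yaml                    # Elasticsearch url and index name are defined here
    metrics_profile_path: config/metrics-aggregated.yaml    # promql queries to capture
    prometheus_url:                                         # auto-detected on OpenShift, set it when the distribution is Kubernetes
    prometheus_bearer_token:                                # auto-detected on OpenShift, set it when the distribution is Kubernetes
```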
### SLOs validation during and post chaos
- In addition to checking the recovery and health of the cluster and components under test, Kraken takes in a profile with Prometheus expressions to validate and alert on, and exits with a non-zero return code depending on the severity set. This feature can be used to determine pass/fail or to alert on abnormalities observed in the cluster based on the metrics.
- Kraken also provides the ability to check if any critical alerts are firing in the cluster post chaos and passes or fails the run accordingly.
Information on enabling and leveraging this feature can be found [here](docs/SLOs_validation.md).
### Alerts
In addition to checking the recovery and health of the cluster and components under test, Kraken takes in a profile with Prometheus expressions to validate and alert on, and exits with a non-zero return code depending on the severity set. This feature can be used to determine pass/fail or to alert on abnormalities observed in the cluster based on the metrics. Information on enabling and leveraging this feature can be found [here](docs/alerts.md).
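A similar sketch for the alert-related keys ( names taken from the config files in this compare; values illustrative ):
```
performance_monitoring:
    enable_alerts: True                  # evaluate the alert profile and exit 1 when a severity=error expression fires
    alert_profile: config/alerts.yaml    # path or URL to the alert profile with the prometheus queries
    check_critical_alerts: True          # also check prometheus for critical alerts firing post chaos
```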
### OCM / ACM integration
Kraken supports injecting faults into [Open Cluster Management (OCM)](https://open-cluster-management.io/) and [Red Hat Advanced Cluster Management for Kubernetes (ACM)](https://www.redhat.com/en/technologies/management/advanced-cluster-management) managed clusters through [ManagedCluster Scenarios](docs/managedcluster_scenarios.md).
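As a rough, hypothetical sketch of how such a scenario could be wired into the config; the `managedcluster_scenarios` key name and the scenario file path below are assumptions, so check [ManagedCluster Scenarios](docs/managedcluster_scenarios.md) for the exact form:
```
chaos_scenarios:
    - managedcluster_scenarios:                                 # key name assumed
        - scenarios/kube/managedcluster_scenarios_example.yml   # hypothetical scenario file path
```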
### Blogs and other useful resources
- Blog post on introduction to Kraken: https://www.openshift.com/blog/introduction-to-kraken-a-chaos-tool-for-openshift/kubernetes
- Discussion and demo on how Kraken can be leveraged to ensure OpenShift is reliable, performant and scalable: https://www.youtube.com/watch?v=s1PvupI5sD0&ab_channel=OpenShift
- Blog post emphasizing the importance of making Chaos part of Performance and Scale runs to mimic the production environments: https://www.openshift.com/blog/making-chaos-part-of-kubernetes/openshift-performance-and-scalability-tests
- Blog post on findings from Chaos test runs: https://cloud.redhat.com/blog/openshift/kubernetes-chaos-stories
- Discussion with CNCF TAG App Delivery on Krkn workflow, features and addition to CNCF sandbox: [Github](https://github.com/cncf/sandbox/issues/44), [Tracker](https://github.com/cncf/tag-app-delivery/issues/465), [recording](https://www.youtube.com/watch?v=nXQkBFK_MWc&t=722s)
- Blog post on supercharging chaos testing using AI integration in Krkn: https://www.redhat.com/en/blog/supercharging-chaos-testing-using-ai
- Blog post announcing Krkn joining CNCF Sandbox: https://www.redhat.com/en/blog/krknchaos-joining-cncf-sandbox
### Roadmap
Enhancements being planned can be found in the [roadmap](ROADMAP.md).
@@ -123,5 +130,5 @@ Please read [this file](CI/README.md#adding-a-test-case) for more information
### Community
Key Members(slack_usernames/full name): paigerube14/Paige Rubendall, mffiedler/Mike Fiedler, ravielluri/Naga Ravi Chaitanya Elluri.
* [**#sig-scalability on Kubernetes Slack**](https://kubernetes.slack.com)
* [**#krkn on Kubernetes Slack**](https://kubernetes.slack.com)
* [**#forum-chaos on CoreOS Slack internal to Red Hat**](https://coreos.slack.com)


@@ -2,10 +2,14 @@
Following is a list of enhancements that we are planning to add support for in Krkn. Of course, any help/contributions are greatly appreciated.
- [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/redhat-chaos/krkn/issues/424)
- [Centralized storage for chaos experiments artifacts](https://github.com/redhat-chaos/krkn/issues/423)
- [Support for causing DNS outages](https://github.com/redhat-chaos/krkn/issues/394)
- [Support for pod level network traffic shaping](https://github.com/redhat-chaos/krkn/issues/393)
- [Ability to visualize the metrics that are being captured by Kraken and stored in Elasticsearch](https://github.com/redhat-chaos/krkn/issues/124)
- Support for running all the scenarios of Kraken on Kubernetes distribution - see https://github.com/redhat-chaos/krkn/issues/185, https://github.com/redhat-chaos/krkn/issues/186
- Continue to improve [Chaos Testing Guide](https://redhat-chaos.github.io/krkn) in terms of adding best practices, test environment recommendations and scenarios to make sure the OpenShift platform, as well as the applications running on top of it, are resilient and performant under chaotic conditions.
- [ ] [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/redhat-chaos/krkn/issues/424)
- [x] [Centralized storage for chaos experiments artifacts](https://github.com/redhat-chaos/krkn/issues/423)
- [ ] [Support for causing DNS outages](https://github.com/redhat-chaos/krkn/issues/394)
- [x] [Chaos recommender](https://github.com/redhat-chaos/krkn/tree/main/utils/chaos-recommender) to suggest scenarios having probability of impacting the service under test using profiling results
- [ ] Chaos AI integration to improve and automate test coverage
- [x] [Support for pod level network traffic shaping](https://github.com/redhat-chaos/krkn/issues/393)
- [ ] [Ability to visualize the metrics that are being captured by Kraken and stored in Elasticsearch](https://github.com/redhat-chaos/krkn/issues/124)
- [ ] Support for running all the scenarios of Kraken on Kubernetes distribution - see https://github.com/redhat-chaos/krkn/issues/185, https://github.com/redhat-chaos/krkn/issues/186
- [ ] Continue to improve [Chaos Testing Guide](https://redhat-chaos.github.io/krkn) in terms of adding best practices, test environment recommendations and scenarios to make sure the OpenShift platform, as well as the applications running on top of it, are resilient and performant under chaotic conditions.
- [ ] [Switch documentation references to Kubernetes](https://github.com/redhat-chaos/krkn/issues/495)
- [ ] [OCP and Kubernetes functionalities segregation](https://github.com/redhat-chaos/krkn/issues/497)


@@ -1,11 +0,0 @@
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[5m:]) > 0.01
description: 5 minutes avg. etcd fsync latency on {{$labels.pod}} higher than 10ms {{$value}}
severity: error
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[5m]))[5m:]) > 0.1
description: 5 minutes avg. etcd network peer round trip on {{$labels.pod}} higher than 100ms {{$value}}
severity: info
- expr: increase(etcd_server_leader_changes_seen_total[2m]) > 0
description: etcd leader changes observed
severity: error

config/alerts.yaml Normal file

@@ -0,0 +1,90 @@
# etcd
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[10m:]) > 0.01
description: 10 minutes avg. 99th etcd fsync latency on {{$labels.pod}} higher than 10ms. {{$value}}s
severity: warning
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[10m:]) > 1
description: 10 minutes avg. 99th etcd fsync latency on {{$labels.pod}} higher than 1s. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket[2m]))[10m:]) > 0.007
description: 10 minutes avg. 99th etcd commit latency on {{$labels.pod}} higher than 7ms. {{$value}}s
severity: warning
- expr: rate(etcd_server_leader_changes_seen_total[2m]) > 0
description: etcd leader changes observed
severity: warning
- expr: (last_over_time(etcd_mvcc_db_total_size_in_bytes[5m]) / last_over_time(etcd_server_quota_backend_bytes[5m]))*100 > 95
description: etcd cluster database is running full.
severity: critical
- expr: (last_over_time(etcd_mvcc_db_total_size_in_use_in_bytes[5m]) / last_over_time(etcd_mvcc_db_total_size_in_bytes[5m])) < 0.5
description: etcd database size in use is less than 50% of the actual allocated storage.
severity: warning
- expr: rate(etcd_server_proposals_failed_total{job=~".*etcd.*"}[15m]) > 5
description: etcd cluster has high number of proposal failures.
severity: warning
- expr: histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket{job=~".*etcd.*"}[5m])) > 0.15
description: etcd cluster member communication is slow.
severity: warning
- expr: histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job=~".*etcd.*", grpc_method!="Defragment", grpc_type="unary"}[5m])) without(grpc_type)) > 0.15
description: etcd grpc requests are slow.
severity: critical
- expr: 100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code=~"Unknown|FailedPrecondition|ResourceExhausted|Internal|Unavailable|DataLoss|DeadlineExceeded"}[5m])) without (grpc_type, grpc_code) / sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) without (grpc_type, grpc_code) > 5
description: etcd cluster has high number of failed grpc requests.
severity: critical
- expr: etcd_server_has_leader{job=~".*etcd.*"} == 0
description: etcd cluster has no leader.
severity: warning
- expr: sum(up{job=~".*etcd.*"} == bool 1) without (instance) < ((count(up{job=~".*etcd.*"}) without (instance) + 1) / 2)
description: etcd cluster has insufficient number of members.
severity: warning
- expr: max without (endpoint) ( sum without (instance) (up{job=~".*etcd.*"} == bool 0) or count without (To) ( sum without (instance) (rate(etcd_network_peer_sent_failures_total{job=~".*etcd.*"}[120s])) > 0.01 )) > 0
description: etcd cluster members are down.
severity: warning
# API server
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"POST|PUT|DELETE|PATCH", subresource!~"log|exec|portforward|attach|proxy"}[2m])) by (le, resource, verb))[10m:]) > 1
description: 10 minutes avg. 99th mutating API call latency for {{$labels.verb}}/{{$labels.resource}} higher than 1 second. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"LIST|GET", subresource!~"log|exec|portforward|attach|proxy", scope="resource"}[2m])) by (le, resource, verb, scope))[5m:]) > 1
description: 5 minutes avg. 99th read-only API call latency for {{$labels.verb}}/{{$labels.resource}} in scope {{$labels.scope}} higher than 1 second. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"LIST|GET", subresource!~"log|exec|portforward|attach|proxy", scope="namespace"}[2m])) by (le, resource, verb, scope))[5m:]) > 5
description: 5 minutes avg. 99th read-only API call latency for {{$labels.verb}}/{{$labels.resource}} in scope {{$labels.scope}} higher than 5 seconds. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"LIST|GET", subresource!~"log|exec|portforward|attach|proxy", scope="cluster"}[2m])) by (le, resource, verb, scope))[5m:]) > 30
description: 5 minutes avg. 99th read-only API call latency for {{$labels.verb}}/{{$labels.resource}} in scope {{$labels.scope}} higher than 30 seconds. {{$value}}s
severity: error
# Control plane pods
- expr: up{job=~"crio|kubelet"} == 0
description: "{{$labels.node}}/{{$labels.job}} down"
severity: warning
- expr: up{job="ovnkube-node"} == 0
description: "{{$labels.instance}}/{{$labels.pod}} {{$labels.job}} down"
severity: warning
# Service sync latency
- expr: histogram_quantile(0.99, sum(rate(kubeproxy_network_programming_duration_seconds_bucket[2m])) by (le)) > 10
description: 99th Kubeproxy network programming latency higher than 10 seconds. {{$value}}s
severity: warning
# Prometheus alerts
- expr: ALERTS{severity="critical", alertstate="firing"} > 0
description: Critical prometheus alert. {{$labels.alertname}}
severity: warning


@@ -0,0 +1,101 @@
# etcd
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[10m:]) > 0.01
description: 10 minutes avg. 99th etcd fsync latency on {{$labels.pod}} higher than 10ms. {{$value}}s
severity: warning
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[10m:]) > 1
description: 10 minutes avg. 99th etcd fsync latency on {{$labels.pod}} higher than 1s. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket[2m]))[10m:]) > 0.03
description: 10 minutes avg. 99th etcd commit latency on {{$labels.pod}} higher than 30ms. {{$value}}s
severity: warning
- expr: rate(etcd_server_leader_changes_seen_total[2m]) > 0
description: etcd leader changes observed
severity: warning
- expr: (last_over_time(etcd_mvcc_db_total_size_in_bytes[5m]) / last_over_time(etcd_server_quota_backend_bytes[5m]))*100 > 95
description: etcd cluster database is running full.
severity: critical
- expr: (last_over_time(etcd_mvcc_db_total_size_in_use_in_bytes[5m]) / last_over_time(etcd_mvcc_db_total_size_in_bytes[5m])) < 0.5
description: etcd database size in use is less than 50% of the actual allocated storage.
severity: warning
- expr: rate(etcd_server_proposals_failed_total{job=~".*etcd.*"}[15m]) > 5
description: etcd cluster has high number of proposal failures.
severity: warning
- expr: histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket{job=~".*etcd.*"}[5m])) > 0.15
description: etcd cluster member communication is slow.
severity: warning
- expr: histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job=~".*etcd.*", grpc_method!="Defragment", grpc_type="unary"}[5m])) without(grpc_type)) > 0.15
description: etcd grpc requests are slow.
severity: critical
- expr: 100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code=~"Unknown|FailedPrecondition|ResourceExhausted|Internal|Unavailable|DataLoss|DeadlineExceeded"}[5m])) without (grpc_type, grpc_code) / sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) without (grpc_type, grpc_code) > 5
description: etcd cluster has high number of failed grpc requests.
severity: critical
- expr: etcd_server_has_leader{job=~".*etcd.*"} == 0
description: etcd cluster has no leader.
severity: warning
- expr: sum(up{job=~".*etcd.*"} == bool 1) without (instance) < ((count(up{job=~".*etcd.*"}) without (instance) + 1) / 2)
description: etcd cluster has insufficient number of members.
severity: warning
- expr: max without (endpoint) ( sum without (instance) (up{job=~".*etcd.*"} == bool 0) or count without (To) ( sum without (instance) (rate(etcd_network_peer_sent_failures_total{job=~".*etcd.*"}[120s])) > 0.01 )) > 0
description: etcd cluster members are down.
severity: warning
# API server
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"POST|PUT|DELETE|PATCH", subresource!~"log|exec|portforward|attach|proxy"}[2m])) by (le, resource, verb))[10m:]) > 1
description: 10 minutes avg. 99th mutating API call latency for {{$labels.verb}}/{{$labels.resource}} higher than 1 second. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"LIST|GET", subresource!~"log|exec|portforward|attach|proxy", scope="resource"}[2m])) by (le, resource, verb, scope))[5m:]) > 1
description: 5 minutes avg. 99th read-only API call latency for {{$labels.verb}}/{{$labels.resource}} in scope {{$labels.scope}} higher than 1 second. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"LIST|GET", subresource!~"log|exec|portforward|attach|proxy", scope="namespace"}[2m])) by (le, resource, verb, scope))[5m:]) > 5
description: 5 minutes avg. 99th read-only API call latency for {{$labels.verb}}/{{$labels.resource}} in scope {{$labels.scope}} higher than 5 seconds. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"LIST|GET", subresource!~"log|exec|portforward|attach|proxy", scope="cluster"}[2m])) by (le, resource, verb, scope))[5m:]) > 30
description: 5 minutes avg. 99th read-only API call latency for {{$labels.verb}}/{{$labels.resource}} in scope {{$labels.scope}} higher than 30 seconds. {{$value}}s
severity: error
# Control plane pods
- expr: up{apiserver=~"kube-apiserver|openshift-apiserver"} == 0
description: "{{$labels.apiserver}} {{$labels.instance}} down"
severity: warning
- expr: up{namespace=~"openshift-etcd"} == 0
description: "{{$labels.namespace}}/{{$labels.pod}} down"
severity: warning
- expr: up{namespace=~"openshift-.*(kube-controller-manager|scheduler|controller-manager|sdn|ovn-kubernetes|dns)"} == 0
description: "{{$labels.namespace}}/{{$labels.pod}} down"
severity: warning
- expr: up{job=~"crio|kubelet"} == 0
description: "{{$labels.node}}/{{$labels.job}} down"
severity: warning
- expr: up{job="ovnkube-node"} == 0
description: "{{$labels.instance}}/{{$labels.pod}} {{$labels.job}} down"
severity: warning
# Service sync latency
- expr: histogram_quantile(0.99, sum(rate(kubeproxy_network_programming_duration_seconds_bucket[2m])) by (le)) > 10
description: 99th Kubeproxy network programming latency higher than 10 seconds. {{$value}}s
severity: warning
# Prometheus alerts
- expr: ALERTS{severity="critical", alertstate="firing"} > 0
description: Critical prometheus alert. {{$labels.alertname}}
severity: warning


@@ -6,40 +6,41 @@ kraken:
signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
signal_address: 0.0.0.0 # Signal listening address
port: 8081 # Signal port
chaos_scenarios: # List of policies/chaos scenarios to load
- arcaflow_scenarios:
- scenarios/arcaflow/cpu-hog/input.yaml
- scenarios/arcaflow/memory-hog/input.yaml
- container_scenarios: # List of chaos pod scenarios to load
chaos_scenarios:
# List of policies/chaos scenarios to load
- arcaflow_scenarios:
- scenarios/arcaflow/cpu-hog/input.yaml
- scenarios/arcaflow/memory-hog/input.yaml
- scenarios/arcaflow/io-hog/input.yaml
- application_outages:
- scenarios/openshift/app_outage.yaml
- container_scenarios: # List of chaos pod scenarios to load
- - scenarios/openshift/container_etcd.yml
- plugin_scenarios:
- plugin_scenarios:
- scenarios/openshift/etcd.yml
- scenarios/openshift/regex_openshift_pod_kill.yml
- scenarios/openshift/vmware_node_scenarios.yml
- scenarios/openshift/ibmcloud_node_scenarios.yml
- scenarios/openshift/network_chaos_ingress.yml
- scenarios/openshift/pod_network_outage.yml
- node_scenarios: # List of chaos node scenarios to load
- scenarios/openshift/prom_kill.yml
- node_scenarios: # List of chaos node scenarios to load
- scenarios/openshift/node_scenarios_example.yml
- plugin_scenarios:
- plugin_scenarios:
- scenarios/openshift/openshift-apiserver.yml
- scenarios/openshift/openshift-kube-apiserver.yml
- time_scenarios: # List of chaos time scenarios to load
- time_scenarios: # List of chaos time scenarios to load
- scenarios/openshift/time_scenarios_example.yml
- cluster_shut_down_scenarios:
- cluster_shut_down_scenarios:
- - scenarios/openshift/cluster_shut_down_scenario.yml
- scenarios/openshift/post_action_shut_down.py
- namespace_scenarios:
- service_disruption_scenarios:
- - scenarios/openshift/regex_namespace.yaml
- - scenarios/openshift/ingress_namespace.yaml
- scenarios/openshift/post_action_namespace.py
- zone_outages:
- zone_outages:
- scenarios/openshift/zone_outage.yaml
- application_outages:
- scenarios/openshift/app_outage.yaml
- pvc_scenarios:
- pvc_scenarios:
- scenarios/openshift/pvc_scenario.yaml
- network_chaos:
- network_chaos:
- scenarios/openshift/network_chaos.yaml
cerberus:
@@ -50,17 +51,41 @@ cerberus:
performance_monitoring:
deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
kube_burner_binary_url: "https://github.com/cloud-bulldozer/kube-burner/releases/download/v0.9.1/kube-burner-0.9.1-Linux-x86_64.tar.gz"
capture_metrics: False
config_path: config/kube_burner.yaml # Define the Elasticsearch url and index name in this config
metrics_profile_path: config/metrics-aggregated.yaml
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set
enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
alert_profile: config/alerts # Path to alert profile with the prometheus queries
alert_profile: config/alerts.yaml # Path or URL to alert profile with the prometheus queries
check_critical_alerts: False # When enabled will check prometheus for critical alerts firing post chaos
tunings:
wait_duration: 60 # Duration to wait between each chaos scenario
iterations: 1 # Number of times to execute the scenarios
daemon_mode: False # Iterations are set to infinity which means that the kraken will cause chaos forever
telemetry:
enabled: False # enable/disables the telemetry collection feature
api_url: https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production #telemetry service endpoint
username: username # telemetry service username
password: password # telemetry service password
prometheus_backup: True # enables/disables prometheus data collection
full_prometheus_backup: False # if set to False only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
archive_path: /tmp # local path where the archive files will be temporarily stored
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 500000 # the size of the prometheus data archive in KB. The smaller the archive size,
# the higher the number of archive files that will be produced and uploaded (and processed by backup_threads
# simultaneously).
# For unstable/slow connections it is better to keep this value low and increase the number of
# backup_threads; this way, on upload failure, the retry will happen only on the
# failed chunk without affecting the whole upload.
logs_backup: True
logs_filter_patterns:
- "(\\w{3}\\s\\d{1,2}\\s\\d{2}:\\d{2}:\\d{2}\\.\\d+).+" # Sep 9 11:20:36.123425532
- "kinit (\\d+/\\d+/\\d+\\s\\d{2}:\\d{2}:\\d{2})\\s+" # kinit 2023/09/15 11:20:36 log
- "(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d+Z).+" # 2023-09-15T11:20:36.123425532Z log
oc_cli_path: /usr/bin/oc # optional, if not specified it will be searched in $PATH
events_backup: True # enables/disables cluster events collection


@@ -6,11 +6,7 @@ kraken:
publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
signal_address: 0.0.0.0 # Signal listening address
litmus_install: True # Installs specified version, set to False if it's already setup
litmus_version: v1.13.6 # Litmus version to install
litmus_uninstall: False # If you want to uninstall litmus if failure
litmus_uninstall_before_run: True # If you want to uninstall litmus before a new run starts
chaos_scenarios: # List of policies/chaos scenarios to load
chaos_scenarios: # List of policies/chaos scenarios to load
- plugin_scenarios:
- scenarios/kind/scheduler.yml
- node_scenarios:
@@ -24,15 +20,11 @@ cerberus:
performance_monitoring:
deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
kube_burner_binary_url: "https://github.com/cloud-bulldozer/kube-burner/releases/download/v0.9.1/kube-burner-0.9.1-Linux-x86_64.tar.gz"
capture_metrics: False
config_path: config/kube_burner.yaml # Define the Elasticsearch url and index name in this config
metrics_profile_path: config/metrics-aggregated.yaml
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set
enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
alert_profile: config/alerts # Path to alert profile with the prometheus queries
alert_profile: config/alerts.yaml # Path to alert profile with the prometheus queries
tunings:
wait_duration: 60 # Duration to wait between each chaos scenario


@@ -5,10 +5,6 @@ kraken:
port: 8081
publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
litmus_install: True # Installs specified version, set to False if it's already setup
litmus_version: v1.13.6 # Litmus version to install
litmus_uninstall: False # If you want to uninstall litmus if failure
litmus_uninstall_before_run: True # If you want to uninstall litmus before a new run starts
chaos_scenarios: # List of policies/chaos scenarios to load
- container_scenarios: # List of chaos pod scenarios to load
- - scenarios/kube/container_dns.yml
@@ -23,15 +19,11 @@ cerberus:
performance_monitoring:
deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
kube_burner_binary_url: "https://github.com/cloud-bulldozer/kube-burner/releases/download/v0.9.1/kube-burner-0.9.1-Linux-x86_64.tar.gz"
capture_metrics: False
config_path: config/kube_burner.yaml # Define the Elasticsearch url and index name in this config
metrics_profile_path: config/metrics-aggregated.yaml
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set
enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
alert_profile: config/alerts # Path to alert profile with the prometheus queries
alert_profile: config/alerts.yaml # Path to alert profile with the prometheus queries
check_critical_alerts: False # When enabled will check prometheus for critical alerts firing post chaos after soak time for the cluster to settle down
tunings:
wait_duration: 60 # Duration to wait between each chaos scenario


@@ -6,13 +6,11 @@ kraken:
signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
signal_address: 0.0.0.0 # Signal listening address
port: 8081 # Signal port
litmus_version: v1.13.6 # Litmus version to install
litmus_uninstall: False # If you want to uninstall litmus if failure
litmus_uninstall_before_run: True # If you want to uninstall litmus before a new run starts
chaos_scenarios: # List of policies/chaos scenarios to load
- plugin_scenarios: # List of chaos pod scenarios to load
- scenarios/openshift/etcd.yml
- scenarios/openshift/regex_openshift_pod_kill.yml
- scenarios/openshift/prom_kill.yml
- node_scenarios: # List of chaos node scenarios to load
- scenarios/openshift/node_scenarios_example.yml
- plugin_scenarios:
@@ -20,13 +18,10 @@ kraken:
- scenarios/openshift/openshift-kube-apiserver.yml
- time_scenarios: # List of chaos time scenarios to load
- scenarios/openshift/time_scenarios_example.yml
- litmus_scenarios: # List of litmus scenarios to load
- - https://hub.litmuschaos.io/api/chaos/1.10.0?file=charts/generic/node-cpu-hog/rbac.yaml
- scenarios/openshift/node_cpu_hog_engine.yaml
- cluster_shut_down_scenarios:
- - scenarios/openshift/cluster_shut_down_scenario.yml
- scenarios/openshift/post_action_shut_down.py
- namespace_scenarios:
- service_disruption_scenarios:
- scenarios/openshift/regex_namespace.yaml
- scenarios/openshift/ingress_namespace.yaml
- zone_outages:
@@ -46,17 +41,39 @@ cerberus:
performance_monitoring:
deploy_dashboards: True # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
kube_burner_binary_url: "https://github.com/cloud-bulldozer/kube-burner/releases/download/v0.9.1/kube-burner-0.9.1-Linux-x86_64.tar.gz"
capture_metrics: True
config_path: config/kube_burner.yaml # Define the Elasticsearch url and index name in this config
metrics_profile_path: config/metrics-aggregated.yaml
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set
enable_alerts: True # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
alert_profile: config/alerts # Path to alert profile with the prometheus queries
alert_profile: config/alerts.yaml # Path to alert profile with the prometheus queries
tunings:
wait_duration: 60 # Duration to wait between each chaos scenario
iterations: 1 # Number of times to execute the scenarios
daemon_mode: False # Iterations are set to infinity which means that the kraken will cause chaos forever
telemetry:
enabled: False # enable/disables the telemetry collection feature
api_url: https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production #telemetry service endpoint
username: username # telemetry service username
password: password # telemetry service password
prometheus_backup: True # enables/disables prometheus data collection
full_prometheus_backup: False # if set to False only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
archive_path: /tmp # local path where the archive files will be temporarily stored
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 500000 # the size of the prometheus data archive in KB. The smaller the archive size,
# the higher the number of archive files that will be produced and uploaded (and processed by backup_threads
# simultaneously).
# For unstable/slow connections it is better to keep this value low and increase the number of
# backup_threads; this way, on upload failure, the retry will happen only on the
# failed chunk without affecting the whole upload.
logs_backup: True
logs_filter_patterns:
- "(\\w{3}\\s\\d{1,2}\\s\\d{2}:\\d{2}:\\d{2}\\.\\d+).+" # Sep 9 11:20:36.123425532
- "kinit (\\d+/\\d+/\\d+\\s\\d{2}:\\d{2}:\\d{2})\\s+" # kinit 2023/09/15 11:20:36 log
- "(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d+Z).+" # 2023-09-15T11:20:36.123425532Z log
oc_cli_path: /usr/bin/oc # optional, if not specified it will be searched in $PATH


@@ -1,15 +0,0 @@
---
global:
writeToFile: true
metricsDirectory: collected-metrics
measurements:
- name: podLatency
esIndex: kraken
indexerConfig:
enabled: true
esServers: [http://0.0.0.0:9200] # Please change this to the respective Elasticsearch in use if you haven't run the podman-compose command to setup the infrastructure containers
insecureSkipVerify: true
defaultIndex: kraken
type: elastic


@@ -139,6 +139,39 @@ metrics:
- query: histogram_quantile(0.99,sum(rate(etcd_request_duration_seconds_bucket[2m])) by (le,operation,apiserver)) > 0
metricName: P99APIEtcdRequestLatency
- query: sum(grpc_server_started_total{namespace="openshift-etcd",grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{namespace="openshift-etcd",grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"})
metricName: ActiveWatchStreams
- query: sum(grpc_server_started_total{namespace="openshift-etcd",grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{namespace="openshift-etcd",grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"})
metricName: ActiveLeaseStreams
- query: sum(rate(etcd_debugging_snap_save_total_duration_seconds_sum{namespace="openshift-etcd"}[2m]))
metricName: snapshotSaveLatency
- query: sum(rate(etcd_server_heartbeat_send_failures_total{namespace="openshift-etcd"}[2m]))
metricName: HeartBeatFailures
- query: sum(rate(etcd_server_health_failures{namespace="openshift-etcd"}[2m]))
metricName: HealthFailures
- query: sum(rate(etcd_server_slow_apply_total{namespace="openshift-etcd"}[2m]))
metricName: SlowApplies
- query: sum(rate(etcd_server_slow_read_indexes_total{namespace="openshift-etcd"}[2m]))
metricName: SlowIndexRead
- query: sum(etcd_server_proposals_pending)
metricName: PendingProposals
- query: histogram_quantile(1.0, sum(rate(etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket[1m])) by (le, instance))
metricName: CompactionMaxPause
- query: sum by (instance) (apiserver_storage_objects)
metricName: etcdTotalObjectCount
- query: topk(500, max by(resource) (apiserver_storage_objects))
metricName: etcdTopObectCount
# Cluster metrics
- query: count(kube_namespace_created)
metricName: namespaceCount


@@ -0,0 +1,29 @@
application: openshift-etcd
namespace: openshift-etcd
labels: app=openshift-etcd
kubeconfig: ~/.kube/config.yaml
prometheus_endpoint: <Prometheus_Endpoint>
auth_token: <Auth_Token>
scrape_duration: 10m
chaos_library: "kraken"
log_level: INFO
# for output purposes only, do not change if not needed
chaos_tests:
GENERIC:
- pod_failure
- container_failure
- node_failure
- zone_outage
- time_skew
- namespace_failure
- power_outage
CPU:
- node_cpu_hog
NETWORK:
- application_outage
- node_network_chaos
- pod_network_chaos
MEM:
- node_memory_hog
- pvc_disk_fill


@@ -1,29 +1,26 @@
# Dockerfile for kraken
FROM quay.io/openshift/origin-tests:latest as origintests
FROM mcr.microsoft.com/azure-cli:latest as azure-cli
FROM quay.io/centos/centos:stream9
LABEL org.opencontainers.image.authors="Red Hat OpenShift Chaos Engineering"
FROM registry.access.redhat.com/ubi8/ubi:latest
ENV KUBECONFIG /root/.kube/config
# Copy OpenShift CLI, Kubernetes CLI from origin-tests image
COPY --from=origintests /usr/bin/oc /usr/bin/oc
COPY --from=origintests /usr/bin/kubectl /usr/bin/kubectl
# Copy azure client binary from azure-cli image
COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
# Install dependencies
RUN yum install epel-release -y && \
yum install -y git python39 python3-pip jq gettext && \
RUN yum install -y git python39 python3-pip jq gettext wget && \
python3.9 -m pip install -U pip && \
git clone https://github.com/redhat-chaos/krkn.git --branch v1.3.0 /root/kraken && \
git clone https://github.com/krkn-chaos/krkn.git --branch v1.5.4 /root/kraken && \
mkdir -p /root/.kube && cd /root/kraken && \
pip3.9 install -r requirements.txt
pip3.9 install -r requirements.txt && \
pip3.9 install virtualenv && \
wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && chmod +x /usr/bin/yq
# Get Kubernetes and OpenShift clients from stable releases
WORKDIR /tmp
RUN wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz && tar -xvf openshift-client-linux.tar.gz && cp oc /usr/local/bin/oc && cp kubectl /usr/local/bin/kubectl
WORKDIR /root/kraken


@@ -2,24 +2,28 @@
FROM ppc64le/centos:8
MAINTAINER Red Hat OpenShift Performance and Scale
FROM mcr.microsoft.com/azure-cli:latest as azure-cli
LABEL org.opencontainers.image.authors="Red Hat OpenShift Chaos Engineering"
ENV KUBECONFIG /root/.kube/config
RUN curl -L -o kubernetes-client-linux-ppc64le.tar.gz https://dl.k8s.io/v1.19.0/kubernetes-client-linux-ppc64le.tar.gz \
&& tar xf kubernetes-client-linux-ppc64le.tar.gz && mv kubernetes/client/bin/kubectl /usr/bin/ && rm -rf kubernetes-client-linux-ppc64le.tar.gz
RUN curl -L -o openshift-client-linux.tar.gz https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable/openshift-client-linux.tar.gz \
&& tar xf openshift-client-linux.tar.gz -C /usr/bin && rm -rf openshift-client-linux.tar.gz
# Copy azure client binary from azure-cli image
COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
# Install dependencies
RUN yum install epel-release -y && \
yum install -y git python36 python3-pip gcc libffi-devel python36-devel openssl-devel gcc-c++ make jq gettext && \
git clone https://github.com/redhat-chaos/krkn.git --branch main /root/kraken && \
mkdir -p /root/.kube && cd /root/kraken && \
pip3 install cryptography==3.3.2 && \
pip3 install -r requirements.txt setuptools==40.3.0 urllib3==1.25.4
RUN yum install -y git python39 python3-pip jq gettext wget && \
python3.9 -m pip install -U pip && \
git clone https://github.com/redhat-chaos/krkn.git --branch v1.5.4 /root/kraken && \
mkdir -p /root/.kube && cd /root/kraken && \
pip3.9 install -r requirements.txt && \
pip3.9 install virtualenv && \
wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && chmod +x /usr/bin/yq
# Get Kubernetes and OpenShift clients from stable releases
WORKDIR /tmp
RUN wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz && tar -xvf openshift-client-linux.tar.gz && cp oc /usr/local/bin/oc && cp kubectl /usr/local/bin/kubectl
WORKDIR /root/kraken
ENTRYPOINT python3 run_kraken.py --config=config/config.yaml
ENTRYPOINT python3.9 run_kraken.py --config=config/config.yaml


@@ -1,27 +1,20 @@
### Kraken image
Container image gets automatically built by quay.io at [Kraken image](https://quay.io/redhat-chaos/krkn).
### Run containerized version
Refer to the [instructions](https://github.com/redhat-chaos/krkn/blob/main/docs/installation.md#run-containerized-version) for information on how to run the containerized version of kraken.
### Run Custom Kraken Image
Refer to [instructions](https://github.com/redhat-chaos/krkn/blob/main/containers/build_own_image-README.md) for information on how to run a custom containerized version of kraken using podman.
### Kraken as a KubeApp
### Kraken as a KubeApp ( Unsupported and not recommended )
#### GENERAL NOTES:
@@ -50,4 +43,4 @@ To run containerized Kraken as a Kubernetes/OpenShift Deployment, follow these s
8. Create a ConfigMap named scenarios-kube-config using `kubectl create configmap scenarios-kube-config --from-file=<path_to_kraken>/scenarios/kube`
9. Create a service account to run the kraken pod `kubectl create serviceaccount useroot`.
10. In OpenShift, add privileges to the service account and execute `oc adm policy add-scc-to-user privileged -z useroot`.
11. Create a Job using `kubectl apply -f <path_to_kraken>/containers/kraken.yml` and monitor the status using `oc get jobs` and `oc get pods`.
11. Create a Job using `kubectl apply -f <path_to_kraken>/containers/kraken.yml` and monitor the status using `oc get jobs` and `oc get pods`.


@@ -1,13 +1,13 @@
# Building your own Kraken image
1. Git clone the Kraken repository using `git clone https://github.com/openshift-scale/kraken.git`.
1. Git clone the Kraken repository using `git clone https://github.com/redhat-chaos/krkn.git`.
2. Modify the python code and yaml files to address your needs.
3. Execute `podman build -t <new_image_name>:latest .` in the containers directory within kraken to build an image from a Dockerfile.
4. Execute `podman run --detach --name <container_name> <new_image_name>:latest` to start a container based on your new image.
# Building the Kraken image on IBM Power (ppc64le)
1. Git clone the Kraken repository using `git clone https://github.com/cloud-bulldozer/kraken.git` on an IBM Power Systems server.
1. Git clone the Kraken repository using `git clone https://github.com/redhat-chaos/krkn.git` on an IBM Power Systems server.
2. Modify the python code and yaml files to address your needs.
3. Execute `podman build -t <new_image_name>:latest -f Dockerfile-ppc64le` in the containers directory within kraken to build an image from the Dockerfile for Power.
4. Execute `podman run --detach --name <container_name> <new_image_name>:latest` to start a container based on your new image.


@@ -1,29 +1,28 @@
## Alerts
## SLOs validation
Pass/fail based on metrics captured from the cluster is important in addition to checking the health status and recovery. Kraken supports:
### Checking for critical alerts
If enabled, the check runs at the end of each scenario and Kraken exits in case critical alerts are firing to allow user to debug. You can enable it in the config:
### Checking for critical alerts post chaos
If enabled, the check runs at the end of each scenario ( post chaos ) and Kraken exits in case critical alerts are firing to allow the user to debug. You can enable it in the config:
```
performance_monitoring:
check_critical_alerts: False # When enabled will check prometheus for critical alerts firing post chaos
```
### Alerting based on the queries defined by the user
Takes PromQL queries as input and modifies the return code of the run to determine pass/fail. It's especially useful in case of automated runs in CI where user won't be able to monitor the system. It uses [Kube-burner](https://kube-burner.readthedocs.io/en/latest/) under the hood. This feature can be enabled in the [config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) by setting the following:
### Validation and alerting based on the queries defined by the user during chaos
Takes PromQL queries as input and modifies the return code of the run to determine pass/fail. It's especially useful in the case of automated runs in CI where the user won't be able to monitor the system. This feature can be enabled in the [config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) by setting the following:
```
performance_monitoring:
kube_burner_binary_url: "https://github.com/cloud-bulldozer/kube-burner/releases/download/v0.9.1/kube-burner-0.9.1-Linux-x86_64.tar.gz"
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
enable_alerts: True # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error.
alert_profile: config/alerts # Path to alert profile with the prometheus queries.
alert_profile: config/alerts.yaml # Path to alert profile with the prometheus queries.
```
#### Alert profile
A couple of [alert profiles](https://github.com/redhat-chaos/krkn/tree/main/config) [alerts](https://github.com/redhat-chaos/krkn/blob/main/config/alerts) are shipped by default and can be tweaked to add more queries to alert on. The following are a few alerts examples:
A couple of [alert profiles](https://github.com/redhat-chaos/krkn/tree/main/config), such as [alerts](https://github.com/redhat-chaos/krkn/blob/main/config/alerts.yaml), are shipped by default and can be tweaked to add more queries to alert on. The user can provide a URL or path to the file in the [config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml). The following are a few example alerts:
```
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[5m:]) > 0.01


@@ -7,6 +7,7 @@ The engine uses containers to execute plugins and runs them either locally in Do
#### Hog scenarios:
- [CPU Hog](arcaflow_scenarios/cpu_hog.md)
- [Memory Hog](arcaflow_scenarios/memory_hog.md)
- [I/O Hog](arcaflow_scenarios/io_hog.md)
### Prerequisites
@@ -64,4 +65,6 @@ Each step is represented by a container that will be executed from the deployer
Note that we provide the scenarios as a template, but they can be manipulated to define more complex workflows.
For more details regarding the Arcaflow workflow architecture and syntax, refer to the [Arcaflow Documentation](https://arcalot.io/arcaflow/).
This edit is no longer in the quay image.
Working on a fix in ticket: https://issues.redhat.com/browse/CHAOS-494
This will affect all versions 4.12 and higher of OpenShift


@@ -0,0 +1,21 @@
# I/O Hog
This scenario is based on the arcaflow [arcaflow-plugin-stressng](https://github.com/arcalot/arcaflow-plugin-stressng) plugin.
The purpose of this scenario is to create disk pressure on a particular node of the Kubernetes/OpenShift cluster for a time span.
The scenario allows attaching a node path to the pod as a `hostPath` volume.
To enable this plugin, add a pointer to the scenario input file `scenarios/arcaflow/io-hog/input.yaml` as described in the
Usage section.
This scenario takes a list of objects named `input_list` with the following properties:
- **kubeconfig :** *string* the kubeconfig needed by the deployer to deploy the stress-ng plugin in the target cluster
- **namespace :** *string* the namespace where the scenario container will be deployed
**Note:** this parameter will be automatically filled by kraken if the `kubeconfig_path` property is correctly set
- **node_selector :** *key-value map* the node label that will be used as `nodeSelector` by the pod to target a specific cluster node
- **duration :** *string* stop stress test after N seconds. One can also specify the units of time in seconds, minutes, hours, days or years with the suffix s, m, h, d or y.
- **target_pod_folder :** *string* the path in the pod where the volume is mounted
- **target_pod_volume :** *object* the `hostPath` volume definition in the [Kubernetes/OpenShift](https://docs.openshift.com/container-platform/3.11/install_config/persistent_storage/using_hostpath.html) format, that will be attached to the pod as a volume
- **io_write_bytes :** *string* writes N bytes for each hdd process. The size can be expressed as % of free space on the file system or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g
- **io_block_size :** *string* size of each write in bytes. Size can be from 1 byte to 4m.
To perform several load tests in the same run simultaneously (e.g. stress two or more nodes in the same run), add another item
to the `input_list` with the same properties (and possibly different values, e.g. different node_selectors
to schedule the pod on different nodes). To reduce (or increase) the parallelism, change the `parallelism` value in the `workload.yaml` file. A sketch of such an input file follows.
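For illustration, a hypothetical `input_list` entry combining the properties above; the concrete values and the exact shape of the `hostPath` volume are assumptions, so compare against the shipped `scenarios/arcaflow/io-hog/input.yaml`:
```
input_list:
  - kubeconfig: ""                        # filled automatically by kraken when kubeconfig_path is set
    namespace: default
    node_selector:
      kubernetes.io/hostname: worker-0    # schedule the stress pod on this node
    duration: 120s
    target_pod_folder: /hog-data          # path inside the pod where the volume is mounted
    target_pod_volume:                    # hostPath volume attached to the pod (assumed layout)
      name: hog-data
      hostPath:
        path: /tmp
    io_write_bytes: 10%                   # write 10% of the free space on the mounted filesystem
    io_block_size: 1m
```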


@@ -8,14 +8,14 @@ The following are the components of Kubernetes/OpenShift for which a basic chaos
```
scenarios:
- name: "<Name of scenario>"
- name: "<name of scenario>"
namespace: "<specific namespace>" # can specify "*" if you want to find in all namespaces
label_selector: "<label of pod(s)>"
container_name: "<specific container name>" # This is optional, can take out and will kill all containers in all pods found under namespace and label
pod_names: # This is optional, can take out and will select all pods with given namespace and label
- <pod_name>
count: <number of containers to disrupt, default=1>
action: <Action to run. For example kill 1 ( hang up ) or kill 9. Default is set to kill 1>
action: <kill signal to run. For example 1 ( hang up ) or 9. Default is set to 1>
expected_recovery_time: <number of seconds to wait for container to be running again> (defaults to 120seconds)
```
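For illustration, a hypothetical filled-in version of the template above targeting an etcd container; all names and values are examples only:
```
scenarios:
  - name: "kill etcd container"
    namespace: "openshift-etcd"
    label_selector: "k8s-app=etcd"
    container_name: "etcd"
    count: 1
    action: 9                          # SIGKILL
    expected_recovery_time: 120
```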


@@ -62,7 +62,7 @@ If changes go into the main repository while you're working on your code it is b
If not already configured, set the upstream url for kraken.
```
git remote add upstream https://github.com/cloud-bulldozer/kraken.git
git remote add upstream https://github.com/redhat-chaos/krkn.git
```
Rebase to upstream master branch.


@@ -48,7 +48,7 @@ Failures in production are costly. To help mitigate risk to service health, cons
### Best Practices
Now that we understand the test methodology, let us take a look at the best practices for an OpenShift cluster. On that platform there are user applications and cluster workloads that need to be designed for stability and to provide the best user experience possible:
Now that we understand the test methodology, let us take a look at the best practices for a Kubernetes cluster. On that platform there are user applications and cluster workloads that need to be designed for stability and to provide the best user experience possible:
- Alerts with appropriate severity should get fired.
- Alerts are key to identify when a component starts degrading, and can help focus the investigation effort on affected system components.
@@ -77,11 +77,11 @@ We want to look at this in terms of CPU, Memory, Disk, Throughput, Network etc.
- The controller watching the component should recognize a failure as soon as possible. The component needs to have minimal initialization time to avoid extended downtime or overloading the replicas if it is a highly available configuration. The cause of failure can be because of issues with the infrastructure on top of which it is running, application failures, or because of service failures that it depends on.
- High Availability deployment strategy.
- There should be multiple replicas ( both OpenShift and application control planes ) running preferably in different availability zones to survive outages while still serving the user/system requests. Avoid single points of failure.
- There should be multiple replicas ( both Kubernetes and application control planes ) running preferably in different availability zones to survive outages while still serving the user/system requests. Avoid single points of failure.
- Backed by persistent storage
- It is important to have the system/application backed by persistent storage. This is especially important in cases where the application is a database or a stateful application given that a node, pod, or container failure will wipe off the data.
- There should be fallback routes to the backend in case of using CDN, for example, Akamai in case of console.redhat.com - a managed service deployed on top of OpenShift dedicated:
- There should be fallback routes to the backend in case of using a CDN, for example, Akamai in the case of console.redhat.com - a managed service deployed on top of a managed Kubernetes platform:
- Content delivery networks (CDNs) are commonly used to host resources such as images, JavaScript files, and CSS. The average web page is nearly 2 MB in size, and offloading heavy resources to third-parties is extremely effective for reducing backend server traffic and latency. However, this makes each CDN an additional point of failure for every site that relies on it. If the CDN fails, its customers could also fail.
- To test how the application reacts to failures, drop all network traffic between the system and CDN. The application should still serve the content to the user irrespective of the failure.
@@ -92,10 +92,10 @@ We want to look at this in terms of CPU, Memory, Disk, Throughput, Network etc.
### Tooling
Now that we have looked at the best practices, in this section we will go through how [Kraken](https://github.com/redhat-chaos/krkn) - a chaos testing framework - can help test the resilience of OpenShift and make sure the applications and services are following the best practices.
Now that we have looked at the best practices, in this section we will go through how [Kraken](https://github.com/redhat-chaos/krkn) - a chaos testing framework - can help test the resilience of Kubernetes and make sure the applications and services are following the best practices.
#### Workflow
Let us start by understanding the workflow of kraken: the user starts by running kraken pointed to a specific OpenShift cluster using a kubeconfig, so that it can talk to the platform on top of which the OpenShift cluster is hosted. This can be done through either the oc/kubectl API or the cloud API. Based on the configuration of kraken, it will inject specific chaos scenarios as shown below, talk to [Cerberus](https://github.com/redhat-chaos/cerberus) to get the go/no-go signal representing the overall health of the cluster ( optional - can be turned off ), scrape metrics from in-cluster prometheus given a metrics profile with the promql queries and store them long term in the configured Elasticsearch ( optional - can be turned off ), evaluate the promql expressions specified in the alerts profile ( optional - can be turned off ) and aggregate everything to set the pass/fail, i.e. exit 0 or 1. More about the metrics collection, cerberus and metrics evaluation can be found in the next section.
Let us start by understanding the workflow of kraken: the user starts by running kraken pointed to a specific Kubernetes cluster using a kubeconfig, so that it can talk to the platform on top of which the Kubernetes cluster is hosted. This can be done through either the oc/kubectl API or the cloud API. Based on the configuration of kraken, it will inject specific chaos scenarios as shown below, talk to [Cerberus](https://github.com/redhat-chaos/cerberus) to get the go/no-go signal representing the overall health of the cluster ( optional - can be turned off ), scrape metrics from in-cluster prometheus given a metrics profile with the promql queries and store them long term in the configured Elasticsearch ( optional - can be turned off ), evaluate the promql expressions specified in the alerts profile ( optional - can be turned off ) and aggregate everything to set the pass/fail, i.e. exit 0 or 1. More about the metrics collection, cerberus and metrics evaluation can be found in the next section.
![Kraken workflow](../media/kraken-workflow.png)
@@ -112,15 +112,15 @@ If the monitoring tool, cerberus is enabled it will consume the signal and conti
### Scenarios
Let us take a look at how to run the chaos scenarios on your OpenShift clusters using Kraken-hub - a lightweight wrapper around Kraken to ease the runs by providing the ability to run them by just running container images using podman with parameters set as environment variables. This eliminates the need to carry around and edit configuration files and makes it easy for any CI framework integration. Here are the scenarios supported:
Let us take a look at how to run the chaos scenarios on your Kubernetes clusters using Kraken-hub - a lightweight wrapper around Kraken that eases the runs by letting you launch them as container images using podman, with parameters set as environment variables. This eliminates the need to carry around and edit configuration files and makes it easy to integrate with any CI framework. Here are the scenarios supported:
- Pod Scenarios ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/pod-scenarios.md))
- Disrupts OpenShift/Kubernetes and applications deployed as pods:
- Disrupts Kubernetes and applications deployed as pods:
- Helps understand the availability of the application, the initialization timing and recovery status.
- [Demo](https://asciinema.org/a/452351?speed=3&theme=solarized-dark)
- Container Scenarios ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/container-scenarios.md))
- Disrupts OpenShift/Kubernetes and applications deployed as containers running as part of a pod(s) using a specified kill signal to mimic failures:
- Disrupts Kubernetes and applications deployed as containers running as part of a pod(s) using a specified kill signal to mimic failures:
- Helps understand the impact and recovery timing when the program/process running in the containers is disrupted - hung, paused, killed etc., using various kill signals, i.e. SIGHUP, SIGTERM, SIGKILL etc.
- [Demo](https://asciinema.org/a/BXqs9JSGDSEKcydTIJ5LpPZBM?speed=3&theme=solarized-dark)
@@ -134,8 +134,8 @@ Let us take a look at how to run the chaos scenarios on your OpenShift clusters
- [Demo](https://asciinema.org/a/ANZY7HhPdWTNaWt4xMFanF6Q5)
- Zone Outages ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/zone-outages.md))
- Creates outage of availability zone(s) in a targeted region in the public cloud where the OpenShift cluster is running by tweaking the network acl of the zone to simulate the failure, and that in turn will stop both ingress and egress traffic from all nodes in a particular zone for the specified duration and reverts it back to the previous state.
- Helps understand the impact on both Kubernetes/OpenShift control plane as well as applications and services running on the worker nodes in that zone.
- Creates an outage of availability zone(s) in a targeted region in the public cloud where the Kubernetes cluster is running by tweaking the zone's network ACL to simulate the failure; this stops both ingress and egress traffic from all nodes in that zone for the specified duration and then reverts it back to the previous state.
- Helps understand the impact on both the Kubernetes control plane as well as applications and services running on the worker nodes in that zone.
- Currently, only set up for the AWS cloud platform: 1 VPC and multiple subnets within the VPC can be specified.
- [Demo](https://asciinema.org/a/452672?speed=3&theme=solarized-dark)
@@ -200,7 +200,7 @@ Let us take a look at few recommendations on how and where to run the chaos test
- Enable Observability:
- Chaos Engineering Without Observability ... Is Just Chaos.
- Make sure logging and monitoring are installed on the cluster to help understand the behaviour and why it is happening. When running the tests in CI, where it is not humanly possible to monitor the cluster all the time, it is recommended to leverage Cerberus to capture the state during the runs and the metrics collection in Kraken to store metrics long term even after the cluster is gone.
- Kraken ships with dashboards that will help understand API, Etcd and OpenShift cluster level stats and performance metrics.
- Kraken ships with dashboards that will help understand API, Etcd and Kubernetes cluster level stats and performance metrics.
- Pay attention to Prometheus alerts. Check if they are firing as expected.
- Run multiple chaos tests at once to mimic the production outages:

View File

@@ -3,14 +3,19 @@
The following ways are supported to run Kraken:
- Standalone python program through Git.
- Containerized version using either Podman or Docker as the runtime.
- Kubernetes or OpenShift deployment.
- Containerized version using either Podman or Docker as the runtime via [Krkn-hub](https://github.com/redhat-chaos/krkn-hub)
- Kubernetes or OpenShift deployment ( unsupported )
**NOTE**: It is recommended to run Kraken external to the cluster ( Standalone or Containerized ) hitting the Kubernetes/OpenShift API, as running it inside the cluster might disrupt itself and might not report back the results if the chaos leads to the cluster's API server becoming unstable.
**NOTE**: To run Kraken on Power (ppc64le) architecture, build and run a containerized version by following the
instructions given [here](https://github.com/redhat-chaos/krkn/blob/main/containers/build_own_image-README.md).
**NOTE**: Helper functions for interactions in Krkn are part of [krkn-lib](https://github.com/redhat-chaos/krkn-lib).
Please feel free to reuse and expand them as you see fit when adding a new scenario or expanding
the capabilities of the current supported scenarios.
### Git
#### Clone the repository
@@ -35,26 +40,12 @@ $ python3.9 run_kraken.py --config <config_file_location>
```
### Run containerized version
Assuming that the latest docker ( 17.05 or greater with multi-build support ) is installed on the host, run:
```
$ docker pull quay.io/redhat-chaos/krkn:latest
$ docker run --name=kraken --net=host -v <path_to_kubeconfig>:/root/.kube/config:Z -v <path_to_kraken_config>:/root/kraken/config/config.yaml:Z -d quay.io/redhat-chaos/krkn:latest
$ docker run --name=kraken --net=host -v <path_to_kubeconfig>:/root/.kube/config:Z -v <path_to_kraken_config>:/root/kraken/config/config.yaml:Z -v <path_to_scenarios_directory>:/root/kraken/scenarios:Z -d quay.io/redhat-chaos/krkn:latest #custom or tweaked scenario configs
$ docker logs -f kraken
```
[Krkn-hub](https://github.com/redhat-chaos/krkn-hub) is a wrapper that allows running Krkn chaos scenarios via podman or docker runtime with scenario parameters/configuration defined as environment variables.
Similarly, podman can be used to achieve the same:
```
$ podman pull quay.io/redhat-chaos/krkn
$ podman run --name=kraken --net=host -v <path_to_kubeconfig>:/root/.kube/config:Z -v <path_to_kraken_config>:/root/kraken/config/config.yaml:Z -d quay.io/redhat-chaos/krkn:latest
$ podman run --name=kraken --net=host -v <path_to_kubeconfig>:/root/.kube/config:Z -v <path_to_kraken_config>:/root/kraken/config/config.yaml:Z -v <path_to_scenarios_directory>:/root/kraken/scenarios:Z -d quay.io/redhat-chaos/krkn:latest #custom or tweaked scenario configs
$ podman logs -f kraken
```
If you want to build your own kraken image see [here](https://github.com/redhat-chaos/krkn/blob/main/containers/build_own_image-README.md)
Refer [instructions](https://github.com/redhat-chaos/krkn-hub#supported-chaos-scenarios) to get started.
### Run Kraken as a Kubernetes deployment
### Run Kraken as a Kubernetes deployment ( unsupported option - standalone or containerized deployers are recommended )
Refer [Instructions](https://github.com/redhat-chaos/krkn/blob/main/containers/README.md) on how to deploy and run Kraken as a Kubernetes/OpenShift deployment.

View File

@@ -1,51 +0,0 @@
## Scraping and storing metrics for the run
There are cases where the state of the cluster and metrics on the cluster during the chaos test run need to be stored long term to review after the cluster is terminated, for example CI and automation test runs. To help with this, Kraken supports capturing metrics for the duration of the scenarios defined in the config and indexes them into Elasticsearch. The indexed metrics can be visualized with the help of Grafana.
It uses [Kube-burner](https://github.com/cloud-bulldozer/kube-burner) under the hood. The metrics to capture need to be defined in a metrics profile which Kraken consumes to query prometheus ( installed by default in OpenShift ) with the start and end timestamp of the run. Each run has a unique identifier ( uuid ) and all the metrics/documents in Elasticsearch will be associated with it. The uuid is generated automatically if not set in the config. This feature can be enabled in the [config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) by setting the following:
```
performance_monitoring:
kube_burner_binary_url: "https://github.com/cloud-bulldozer/kube-burner/releases/download/v0.9.1/kube-burner-0.9.1-Linux-x86_64.tar.gz"
capture_metrics: True
config_path: config/kube_burner.yaml # Define the Elasticsearch url and index name in this config.
metrics_profile_path: config/metrics-aggregated.yaml
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set.
```
### Metrics profile
A couple of [metric profiles](https://github.com/redhat-chaos/krkn/tree/main/config), [metrics.yaml](https://github.com/redhat-chaos/krkn/blob/main/config/metrics.yaml), and [metrics-aggregated.yaml](https://github.com/redhat-chaos/krkn/blob/main/config/metrics-aggregated.yaml) are shipped by default and can be tweaked to add more metrics to capture during the run. The following are the API server metrics for example:
```
metrics:
# API server
- query: histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb!~"WATCH", subresource!="log"}[2m])) by (verb,resource,subresource,instance,le)) > 0
metricName: API99thLatency
- query: sum(irate(apiserver_request_total{apiserver="kube-apiserver",verb!="WATCH",subresource!="log"}[2m])) by (verb,instance,resource,code) > 0
metricName: APIRequestRate
- query: sum(apiserver_current_inflight_requests{}) by (request_kind) > 0
metricName: APIInflightRequests
```
### Indexing
Define the Elasticsearch and index to store the metrics/documents in the kube_burner config:
```
global:
writeToFile: true
metricsDirectory: collected-metrics
measurements:
- name: podLatency
esIndex: kube-burner
indexerConfig:
enabled: true
esServers: [https://elastic.example.com:9200]
insecureSkipVerify: true
defaultIndex: kraken
type: elastic
```

View File

@@ -12,9 +12,9 @@ network_chaos: # Scenario to create an outage
- "ens5" # Interface name would be the Kernel host network interface name.
execution: serial|parallel # Execute each of the egress options as a single scenario(parallel) or as separate scenario(serial).
egress:
latency: 50ms
loss: 0.02 # percentage
bandwidth: 100mbit
latency: 500ms
loss: 50% # percentage
bandwidth: 10mbit
```
##### Sample scenario config for ingress traffic shaping (using a plugin)
@@ -30,9 +30,9 @@ network_chaos: # Scenario to create an outage
kubeconfig_path: ~/.kube/config # Path to kubernetes config file. If not specified, it defaults to ~/.kube/config
execution_type: parallel # Execute each of the ingress options as a single scenario(parallel) or as separate scenario(serial).
network_params:
latency: 50ms
loss: '0.02'
bandwidth: 100mbit
latency: 500ms
loss: '50%'
bandwidth: 10mbit
wait_duration: 120
test_duration: 60
'''

View File

@@ -1,15 +0,0 @@
### Pod outage
Scenario to block the traffic ( Ingress/Egress ) of a pod matching the labels for the specified duration of time to understand the behavior of the service/other services which depend on it during downtime. This helps with planning the requirements accordingly, be it improving the timeouts or tweaking the alerts etc.
With the current network policies, it is not possible to explicitly block ports which are enabled by allowed network policy rule. This chaos scenario addresses this issue by using OVS flow rules to block ports related to the pod. It supports OpenShiftSDN and OVNKubernetes based networks.
##### Sample scenario config (using a plugin)
```
- id: pod_network_outage
config:
namespace: openshift-console # Required - Namespace of the pod to which filter need to be applied
direction: # Optional - List of directions to apply filters
- ingress # Blocks ingress traffic, Default both egress and ingress
ingress_ports: # Optional - List of ports to block traffic on
- 8443 # Blocks 8443, Default [], i.e. all ports.
label_selector: 'component=ui' # Blocks access to openshift console
```

View File

@@ -0,0 +1,46 @@
## Pod network Scenarios
### Pod outage
Scenario to block the traffic ( Ingress/Egress ) of a pod matching the labels for the specified duration of time to understand the behavior of the service/other services which depend on it during downtime. This helps with planning the requirements accordingly, be it improving the timeouts or tweaking the alerts etc.
With the current network policies, it is not possible to explicitly block ports which are enabled by an allowed network policy rule. This chaos scenario addresses this issue by using OVS flow rules to block ports related to the pod. It supports OpenShiftSDN and OVNKubernetes based networks.
##### Sample scenario config (using a plugin)
```
- id: pod_network_outage
config:
namespace: openshift-console # Required - Namespace of the pod to which filter need to be applied
direction: # Optional - List of directions to apply filters
- ingress # Blocks ingress traffic, Default both egress and ingress
ingress_ports: # Optional - List of ports to block traffic on
- 8443 # Blocks 8443, Default [], i.e. all ports.
label_selector: 'component=ui' # Blocks access to openshift console
```
### Pod Network shaping
Scenario to introduce network latency, packet loss, and bandwidth restriction in the Pod's network interface. The purpose of this scenario is to observe faults caused by random variations in the network.
##### Sample scenario config for egress traffic shaping (using plugin)
```
- id: pod_egress_shaping
config:
namespace: openshift-console # Required - Namespace of the pod to which filter need to be applied.
label_selector: 'component=ui' # Applies traffic shaping to access openshift console.
network_params:
latency: 500ms # Add 500ms latency to egress traffic from the pod.
```
##### Sample scenario config for ingress traffic shaping (using plugin)
```
- id: pod_ingress_shaping
config:
namespace: openshift-console # Required - Namespace of the pod to which filter need to be applied.
label_selector: 'component=ui' # Applies traffic shaping to access openshift console.
network_params:
latency: 500ms # Add 500ms latency to ingress traffic to the pod.
```
##### Steps
- Pick the pods to introduce the network anomaly either from label_selector or pod_name.
- Identify the pod interface name on the node.
- Set traffic shaping config on pod's interface using tc and netem.
- Wait for the duration time.
- Remove traffic shaping config on pod's interface.
- Remove the job that spawned the pod.
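Under the hood, the shaping step above amounts to `tc`/`netem` commands run against the pod's interface on the node. A rough sketch of that single step, assuming the interface name has already been resolved (the interface name and duration below are placeholders; in Kraken this is executed from a helper job on the node):
```
import subprocess
import time

iface = "eth0"  # placeholder: the pod interface name resolved on the node

# Apply latency and packet loss to the interface.
subprocess.run(
    ["tc", "qdisc", "add", "dev", iface, "root", "netem",
     "delay", "500ms", "loss", "50%"],
    check=True,
)
try:
    time.sleep(60)  # placeholder for test_duration
finally:
    # Remove the shaping so the pod's traffic returns to normal.
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)
```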

View File

@@ -1,6 +1,6 @@
### Delete Namespace Scenarios
### Service Disruption Scenarios (Previously Delete Namespace Scenario)
Using this type of scenario configuration one is able to delete a specific namespace, or a namespace matching a certain regex string.
Using this type of scenario configuration one is able to delete crucial objects in a specific namespace, or a namespace matching a certain regex string.
Configuration Options:
@@ -27,12 +27,20 @@ scenarios:
sleep: 15
```
**NOTE:** Many openshift namespaces have built-in finalizers that protect the namespace from being fully deleted: see documentation [here](https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/).
The namespaces that do have finalizers enabled will be left in a terminating state, but all the pods running in that namespace will get deleted.
### Steps
This scenario will select a namespace (or multiple namespaces) depending on the configuration, kill all of the object types below in that namespace, and wait for them to be Running in the post action:
1. Services
2. Daemonsets
3. Statefulsets
4. Replicasets
5. Deployments
#### Post Action
In all scenarios we do a post chaos check to wait and verify the specific component.
We do a post chaos check to wait and verify the specific objects in each namespace are Ready
Here there are two options:
@@ -47,8 +55,8 @@ See [scenarios/post_action_namespace.py](https://github.com/cloud-bulldozer/krak
```
2. Allow kraken to wait and check the killed namespaces become 'Active' again. Kraken keeps a list of the specific
namespaces that were killed to verify all that were affected recover properly.
1. Allow kraken to wait and check all killed objects in the namespaces become 'Running' again. Kraken keeps a list of the specific
objects in namespaces that were killed to verify all that were affected recover properly.
```
wait_time: <seconds to wait for namespace to recover>

View File

@@ -4,25 +4,43 @@ import time
import kraken.cerberus.setup as cerberus
from jinja2 import Template
import kraken.invoke.command as runcommand
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
# Reads the scenario config, applies and deletes a network policy to
# block the traffic for the specified duration
def run(scenarios_list, config, wait_duration):
def run(scenarios_list, config, wait_duration, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
failed_post_scenarios = ""
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
for app_outage_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = app_outage_config
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, app_outage_config)
if len(app_outage_config) > 1:
with open(app_outage_config, "r") as f:
app_outage_config_yaml = yaml.full_load(f)
scenario_config = app_outage_config_yaml["application_outage"]
pod_selector = scenario_config.get("pod_selector", "{}")
traffic_type = scenario_config.get("block", "[Ingress, Egress]")
namespace = scenario_config.get("namespace", "")
duration = scenario_config.get("duration", 60)
try:
with open(app_outage_config, "r") as f:
app_outage_config_yaml = yaml.full_load(f)
scenario_config = app_outage_config_yaml["application_outage"]
pod_selector = get_yaml_item_value(
scenario_config, "pod_selector", "{}"
)
traffic_type = get_yaml_item_value(
scenario_config, "block", "[Ingress, Egress]"
)
namespace = get_yaml_item_value(
scenario_config, "namespace", ""
)
duration = get_yaml_item_value(
scenario_config, "duration", 60
)
start_time = int(time.time())
start_time = int(time.time())
network_policy_template = """---
network_policy_template = """---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
@@ -31,28 +49,38 @@ spec:
podSelector:
matchLabels: {{ pod_selector }}
policyTypes: {{ traffic_type }}
"""
t = Template(network_policy_template)
rendered_spec = t.render(pod_selector=pod_selector, traffic_type=traffic_type)
# Write the rendered template to a file
with open("kraken_network_policy.yaml", "w") as f:
f.write(rendered_spec)
# Block the traffic by creating network policy
logging.info("Creating the network policy")
runcommand.invoke(
"kubectl create -f %s -n %s --validate=false" % ("kraken_network_policy.yaml", namespace)
)
"""
t = Template(network_policy_template)
rendered_spec = t.render(pod_selector=pod_selector, traffic_type=traffic_type)
# Write the rendered template to a file
with open("kraken_network_policy.yaml", "w") as f:
f.write(rendered_spec)
# Block the traffic by creating network policy
logging.info("Creating the network policy")
runcommand.invoke(
"kubectl create -f %s -n %s --validate=false" % ("kraken_network_policy.yaml", namespace)
)
# wait for the specified duration
logging.info("Waiting for the specified duration in the config: %s" % (duration))
time.sleep(duration)
# wait for the specified duration
logging.info("Waiting for the specified duration in the config: %s" % (duration))
time.sleep(duration)
# unblock the traffic by deleting the network policy
logging.info("Deleting the network policy")
runcommand.invoke("kubectl delete -f %s -n %s" % ("kraken_network_policy.yaml", namespace))
# unblock the traffic by deleting the network policy
logging.info("Deleting the network policy")
runcommand.invoke("kubectl delete -f %s -n %s" % ("kraken_network_policy.yaml", namespace))
logging.info("End of scenario. Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
logging.info("End of scenario. Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
except Exception as e :
scenario_telemetry.exitStatus = 1
failed_scenarios.append(app_outage_config)
log_exception(app_outage_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
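For reference, a minimal sketch of what the rendered policy from the flow above might look like for hypothetical scenario values; the metadata name below is made up, since the template's name field is elided in this diff, and the selector/traffic type stand in for a scenario's `pod_selector` and `block` settings:
```
from jinja2 import Template

# Hypothetical name and values; mirrors the template rendered by run() above.
network_policy_template = """---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: kraken-deny   # placeholder name
spec:
  podSelector:
    matchLabels: {{ pod_selector }}
  policyTypes: {{ traffic_type }}
"""
print(Template(network_policy_template).render(
    pod_selector="{app: frontend}",    # the scenario's pod_selector
    traffic_type="[Ingress, Egress]",  # the scenario's block list
))
# run() writes this manifest to kraken_network_policy.yaml, applies it with
# kubectl for `duration` seconds, then deletes it to restore traffic.
```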

View File

@@ -1,27 +1,37 @@
import time
import arcaflow
import os
import yaml
import logging
import sys
from pathlib import Path
from typing import List
from .context_auth import ContextAuth
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
def run(scenarios_list: List[str], kubeconfig_path: str):
def run(scenarios_list: List[str], kubeconfig_path: str, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
failed_post_scenarios = []
for scenario in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry,scenario)
engine_args = build_args(scenario)
run_workflow(engine_args, kubeconfig_path)
status_code = run_workflow(engine_args, kubeconfig_path)
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.exitStatus = status_code
scenario_telemetries.append(scenario_telemetry)
if status_code != 0:
failed_post_scenarios.append(scenario)
return failed_post_scenarios, scenario_telemetries
def run_workflow(engine_args: arcaflow.EngineArgs, kubeconfig_path: str):
def run_workflow(engine_args: arcaflow.EngineArgs, kubeconfig_path: str) -> int:
set_arca_kubeconfig(engine_args, kubeconfig_path)
exit_status = arcaflow.run(engine_args)
if exit_status != 0:
logging.error(
f"failed to run arcaflow scenario {engine_args.input}"
)
sys.exit(exit_status)
return exit_status
def build_args(input_file: str) -> arcaflow.EngineArgs:
@@ -86,9 +96,9 @@ def set_arca_kubeconfig(engine_args: arcaflow.EngineArgs, kubeconfig_path: str):
with open(engine_args.config, "r") as stream:
config_file = yaml.safe_load(stream)
if config_file["deployer"]["type"] == "kubernetes":
kube_connection = set_kubernetes_deployer_auth(config_file["deployer"]["connection"], context_auth)
config_file["deployer"]["connection"]=kube_connection
if config_file["deployers"]["image"]["deployer_name"] == "kubernetes":
kube_connection = set_kubernetes_deployer_auth(config_file["deployers"]["image"]["connection"], context_auth)
config_file["deployers"]["image"]["connection"]=kube_connection
with open(engine_args.config, "w") as stream:
yaml.safe_dump(config_file, stream,explicit_start=True, width=4096)

View File

@@ -0,0 +1,3 @@
from .analysis import *
from .kraken_tests import *
from .prometheus import *

View File

@@ -0,0 +1,90 @@
import logging
import pandas as pd
import kraken.chaos_recommender.kraken_tests as kraken_tests
import time
threshold = .7 # Adjust the threshold as needed
heatmap_cpu_threshold = .5
heatmap_mem_threshold = .5
KRAKEN_TESTS_PATH = "./kraken_chaos_tests.txt"
#Placeholder, this should be done with topology
def return_critical_services():
return ["web", "cart"]
def load_telemetry_data(file_path):
data = pd.read_csv(file_path, delimiter=r"\s+")
return data
def calculate_zscores(data):
zscores = pd.DataFrame()
zscores["Service"] = data["service"]
zscores["CPU"] = (data["CPU"] - data["CPU"].mean()) / data["CPU"].std()
zscores["Memory"] = (data["MEM"] - data["MEM"].mean()) / data["MEM"].std()
zscores["Network"] = (data["NETWORK"] - data["NETWORK"].mean()) / data["NETWORK"].std()
return zscores
def identify_outliers(data):
outliers_cpu = data[data["CPU"] > threshold]["Service"].tolist()
outliers_memory = data[data["Memory"] > threshold]["Service"].tolist()
outliers_network = data[data["Network"] > threshold]["Service"].tolist()
return outliers_cpu, outliers_memory, outliers_network
def get_services_above_heatmap_threshold(dataframe, cpu_threshold, mem_threshold):
# Filter the DataFrame based on CPU_HEATMAP and MEM_HEATMAP thresholds
filtered_df = dataframe[((dataframe['CPU']/dataframe['CPU_LIMITS']) > cpu_threshold)]
# Get the lists of services
cpu_services = filtered_df['service'].tolist()
filtered_df = dataframe[((dataframe['MEM']/dataframe['MEM_LIMITS']) > mem_threshold)]
mem_services = filtered_df['service'].tolist()
return cpu_services, mem_services
def analysis(file_path, chaos_tests_config):
# Load the telemetry data from file
data = load_telemetry_data(file_path)
# Calculate Z-scores for CPU, Memory, and Network columns
zscores = calculate_zscores(data)
# Identify outliers
outliers_cpu, outliers_memory, outliers_network = identify_outliers(zscores)
cpu_services, mem_services = get_services_above_heatmap_threshold(data, heatmap_cpu_threshold, heatmap_mem_threshold)
# Display the identified outliers
logging.info("======================== Profiling ==================================")
logging.info(f"CPU outliers: {outliers_cpu}")
logging.info(f"Memory outliers: {outliers_memory}")
logging.info(f"Network outliers: {outliers_network}")
logging.info("===================== HeatMap Analysis ==============================")
if cpu_services:
logging.info("Services with CPU_HEATMAP above threshold:", cpu_services)
else:
logging.info("There are no services that are using siginificant CPU compared to their assigned limits (infinite in case no limits are set).")
if mem_services:
logging.info("Services with MEM_HEATMAP above threshold:", mem_services)
else:
logging.info("There are no services that are using siginificant MEMORY compared to their assigned limits (infinite in case no limits are set).")
time.sleep(2)
logging.info("======================= Recommendations =============================")
if cpu_services:
logging.info(f"Recommended tests for {str(cpu_services)} :\n {chaos_tests_config['CPU']}")
logging.info("\n")
if mem_services:
logging.info(f"Recommended tests for {str(mem_services)} :\n {chaos_tests_config['MEM']}")
logging.info("\n")
if outliers_network:
logging.info(f"Recommended tests for str(outliers_network) :\n {chaos_tests_config['NETWORK']}")
logging.info("\n")
logging.info("\n")
logging.info("Please check data in utilisation.txt for further analysis")

View File

@@ -0,0 +1,30 @@
def get_entries_by_category(filename, category):
# Read the file
with open(filename, 'r') as file:
content = file.read()
# Split the content into sections based on the square brackets
sections = content.split('\n\n')
# Define the categories
valid_categories = ['CPU', 'NETWORK', 'MEM', 'GENERIC']
# Validate the provided category
if category not in valid_categories:
return []
# Find the section corresponding to the specified category
target_section = None
for section in sections:
if section.startswith(f"[{category}]"):
target_section = section
break
# If the category section was not found, return an empty list
if target_section is None:
return []
# Extract the entries from the category section
entries = [entry.strip() for entry in target_section.split('\n') if entry and not entry.startswith('[')]
return entries
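A minimal sketch of the file layout this helper expects: blank-line-separated `[CATEGORY]` sections, one entry per line. The test names below are made up:
```
# Hypothetical kraken_chaos_tests.txt layout assumed by get_entries_by_category.
sample = """[CPU]
node-cpu-hog
pod-cpu-hog

[MEM]
node-memory-hog

[NETWORK]
network-latency
"""
with open("kraken_chaos_tests.txt", "w") as f:
    f.write(sample)

print(get_entries_by_category("kraken_chaos_tests.txt", "CPU"))
# -> ['node-cpu-hog', 'pod-cpu-hog']
```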

View File

@@ -0,0 +1,96 @@
import logging
import pandas
from prometheus_api_client import PrometheusConnect
import pandas as pd
import urllib3
saved_metrics_path = "./utilisation.txt"
def convert_data_to_dataframe(data, label):
df = pd.DataFrame()
df['service'] = [item['metric']['pod'] for item in data]
df[label] = [item['value'][1] for item in data]
return df
def convert_data(data, service):
result = {}
for entry in data:
pod_name = entry['metric']['pod']
value = entry['value'][1]
result[pod_name] = value
return result.get(service, '100000000000') # pods whose limits are not defined can take as many resources as available, so assign a very high value
def save_utilization_to_file(cpu_data, cpu_limits_result, mem_data, mem_limits_result, network_data, filename):
df_cpu = convert_data_to_dataframe(cpu_data, "CPU")
merged_df = pd.DataFrame(columns=['service','CPU','CPU_LIMITS','MEM','MEM_LIMITS','NETWORK'])
services = df_cpu.service.unique()
logging.info(services)
for s in services:
new_row_df = pd.DataFrame( {"service": s, "CPU" : convert_data(cpu_data, s),
"CPU_LIMITS" : convert_data(cpu_limits_result, s),
"MEM" : convert_data(mem_data, s), "MEM_LIMITS" : convert_data(mem_limits_result, s),
"NETWORK" : convert_data(network_data, s)}, index=[0])
merged_df = pd.concat([merged_df, new_row_df], ignore_index=True)
# Convert columns to string
merged_df['CPU'] = merged_df['CPU'].astype(str)
merged_df['MEM'] = merged_df['MEM'].astype(str)
merged_df['CPU_LIMITS'] = merged_df['CPU_LIMITS'].astype(str)
merged_df['MEM_LIMITS'] = merged_df['MEM_LIMITS'].astype(str)
merged_df['NETWORK'] = merged_df['NETWORK'].astype(str)
# Extract integer part before the decimal point
merged_df['CPU'] = merged_df['CPU'].str.split('.').str[0]
merged_df['MEM'] = merged_df['MEM'].str.split('.').str[0]
merged_df['CPU_LIMITS'] = merged_df['CPU_LIMITS'].str.split('.').str[0]
merged_df['MEM_LIMITS'] = merged_df['MEM_LIMITS'].str.split('.').str[0]
merged_df['NETWORK'] = merged_df['NETWORK'].str.split('.').str[0]
merged_df.to_csv(filename, sep='\t', index=False)
def fetch_utilization_from_prometheus(prometheus_endpoint, auth_token, namespace, scrape_duration):
urllib3.disable_warnings()
prometheus = PrometheusConnect(url=prometheus_endpoint, headers={'Authorization':'Bearer {}'.format(auth_token)}, disable_ssl=True)
# Fetch CPU utilization
cpu_query = 'sum (rate (container_cpu_usage_seconds_total{image!="", namespace="%s"}[%s])) by (pod) *1000' % (namespace,scrape_duration)
logging.info(cpu_query)
cpu_result = prometheus.custom_query(cpu_query)
cpu_data = cpu_result
cpu_limits_query = '(sum by (pod) (kube_pod_container_resource_limits{resource="cpu", namespace="%s"}))*1000' %(namespace)
logging.info(cpu_limits_query)
cpu_limits_result = prometheus.custom_query(cpu_limits_query)
mem_query = 'sum by (pod) (avg_over_time(container_memory_usage_bytes{image!="", namespace="%s"}[%s]))' % (namespace, scrape_duration)
logging.info(mem_query)
mem_result = prometheus.custom_query(mem_query)
mem_data = mem_result
mem_limits_query = 'sum by (pod) (kube_pod_container_resource_limits{resource="memory", namespace="%s"}) ' %(namespace)
logging.info(mem_limits_query)
mem_limits_result = prometheus.custom_query(mem_limits_query)
network_query = 'sum by (pod) ((avg_over_time(container_network_transmit_bytes_total{namespace="%s"}[%s])) + \
(avg_over_time(container_network_receive_bytes_total{namespace="%s"}[%s])))' % (namespace, scrape_duration, namespace, scrape_duration)
network_result = prometheus.custom_query(network_query)
logging.info(network_query)
network_data = network_result
save_utilization_to_file(cpu_data, cpu_limits_result, mem_data, mem_limits_result, network_data, saved_metrics_path)
return saved_metrics_path
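A hedged end-to-end sketch of the recommender flow, assuming the helpers above plus `get_entries_by_category` and `analysis` from this package are importable; the endpoint, token, namespace and file name are placeholders:
```
# Placeholder values for illustration only.
metrics_file = fetch_utilization_from_prometheus(
    prometheus_endpoint="https://prometheus.example.com",
    auth_token="<bearer-token>",
    namespace="robot-shop",
    scrape_duration="10m",
)
chaos_tests_config = {
    cat: get_entries_by_category("kraken_chaos_tests.txt", cat)
    for cat in ("CPU", "MEM", "NETWORK", "GENERIC")
}
analysis(metrics_file, chaos_tests_config)  # logs profiling, heatmap analysis and recommendations
```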

View File

@@ -1,105 +0,0 @@
import subprocess
import logging
import urllib.request
import shutil
import sys
import kraken.prometheus.client as prometheus
def setup(url):
"""
Downloads and unpacks kube-burner binary
"""
filename = "kube_burner.tar"
try:
logging.info("Fetching kube-burner binary")
urllib.request.urlretrieve(url, filename)
except Exception as e:
logging.error("Failed to download kube-burner binary located at %s" % url, e)
sys.exit(1)
try:
logging.info("Unpacking kube-burner tar ball")
shutil.unpack_archive(filename)
except Exception as e:
logging.error("Failed to unpack the kube-burner binary tarball: %s" % e)
sys.exit(1)
def scrape_metrics(
distribution, uuid, prometheus_url, prometheus_bearer_token, start_time, end_time, config_path, metrics_profile
):
"""
Scrapes metrics defined in the profile from Prometheus and indexes them into Elasticsearch
"""
if not prometheus_url:
if distribution == "openshift":
logging.info("Looks like prometheus_url is not defined, trying to use the default instance on the cluster")
prometheus_url, prometheus_bearer_token = prometheus.instance(
distribution, prometheus_url, prometheus_bearer_token
)
else:
logging.error("Looks like proemtheus url is not defined, exiting")
sys.exit(1)
command = (
"./kube-burner index --uuid "
+ str(uuid)
+ " -u "
+ str(prometheus_url)
+ " -t "
+ str(prometheus_bearer_token)
+ " -m "
+ str(metrics_profile)
+ " --start "
+ str(start_time)
+ " --end "
+ str(end_time)
+ " -c "
+ str(config_path)
)
try:
logging.info("Running kube-burner to capture the metrics: %s" % command)
logging.info("UUID for the run: %s" % uuid)
subprocess.run(command, shell=True, universal_newlines=True)
except Exception as e:
logging.error("Failed to run kube-burner, error: %s" % (e))
sys.exit(1)
def alerts(distribution, prometheus_url, prometheus_bearer_token, start_time, end_time, alert_profile):
"""
Scrapes metrics defined in the profile from Prometheus and alerts based on the severity defined
"""
if not prometheus_url:
if distribution == "openshift":
logging.info("Looks like prometheus_url is not defined, trying to use the default instance on the cluster")
prometheus_url, prometheus_bearer_token = prometheus.instance(
distribution, prometheus_url, prometheus_bearer_token
)
else:
logging.error("Looks like proemtheus url is not defined, exiting")
sys.exit(1)
command = (
"./kube-burner check-alerts "
+ " -u "
+ str(prometheus_url)
+ " -t "
+ str(prometheus_bearer_token)
+ " -a "
+ str(alert_profile)
+ " --start "
+ str(start_time)
+ " --end "
+ str(end_time)
)
try:
logging.info("Running kube-burner to capture the metrics: %s" % command)
output = subprocess.run(command, shell=True, universal_newlines=True)
if output.returncode != 0:
logging.error("command exited with a non-zero rc, please check the logs for errors or critical alerts")
sys.exit(output.returncode)
except Exception as e:
logging.error("Failed to run kube-burner, error: %s" % (e))
sys.exit(1)

View File

@@ -1,202 +0,0 @@
import kraken.invoke.command as runcommand
import kraken.kubernetes.client as kubecli
import logging
import time
import sys
import requests
import yaml
import kraken.cerberus.setup as cerberus
# Inject litmus scenarios defined in the config
def run(scenarios_list, config, litmus_uninstall, wait_duration, litmus_namespace):
# Loop to run the scenarios starts here
for l_scenario in scenarios_list:
start_time = int(time.time())
try:
for item in l_scenario:
runcommand.invoke("kubectl apply -f %s -n %s" % (item, litmus_namespace))
if "http" in item:
f = requests.get(item)
yaml_item = list(yaml.safe_load_all(f.content))[0]
else:
with open(item, "r") as f:
yaml_item = list(yaml.safe_load_all(f))[0]
if yaml_item["kind"] == "ChaosEngine":
engine_name = yaml_item["metadata"]["name"]
experiment_names = yaml_item["spec"]["experiments"]
experiment_namespace = yaml_item["metadata"]["namespace"]
if experiment_namespace != "litmus":
logging.error(
"Specified namespace: %s in the scenario: %s is not supported, please switch it to litmus"
% (experiment_namespace, l_scenario)
)
sys.exit(1)
for expr in experiment_names:
expr_name = expr["name"]
experiment_result = check_experiment(engine_name, expr_name, litmus_namespace)
if experiment_result:
logging.info("Scenario: %s has been successfully injected!" % item)
else:
logging.info("Scenario: %s was not successfully injected, please check" % item)
if litmus_uninstall:
delete_chaos(litmus_namespace)
sys.exit(1)
if litmus_uninstall:
delete_chaos(litmus_namespace)
logging.info("Waiting for the specified duration: %s" % wait_duration)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.get_status(config, start_time, end_time)
except Exception as e:
logging.error("Failed to run litmus scenario: %s. Encountered " "the following exception: %s" % (item, e))
sys.exit(1)
# Install litmus and wait until pod is running
def install_litmus(version, namespace):
logging.info("Installing version %s of litmus in namespace %s" % (version, namespace))
litmus_install = runcommand.invoke(
"kubectl -n %s apply -f " "https://litmuschaos.github.io/litmus/litmus-operator-%s.yaml" % (namespace, version)
)
if "unable" in litmus_install:
logging.info("Unable to install litmus because " + str(litmus_install))
sys.exit(1)
runcommand.invoke(
"oc patch -n %s deployment.apps/chaos-operator-ce --type=json --patch ' "
'[ { "op": "add", "path": "/spec/template/spec/containers/0/env/-", '
'"value": { "name": "ANALYTICS", "value": "FALSE" } } ]\'' % namespace
)
logging.info("Waiting for litmus operator to become available")
runcommand.invoke("oc wait deploy -n %s chaos-operator-ce --for=condition=Available" % namespace)
def deploy_all_experiments(version_string, namespace):
if not version_string.startswith("v"):
logging.error("Incorrect version string for litmus, needs to start with 'v' " "followed by a number")
sys.exit(1)
version = version_string[1:]
logging.info("Installing all litmus experiments")
runcommand.invoke(
"kubectl -n %s apply -f "
"https://hub.litmuschaos.io/api/chaos/%s?file=charts/generic/experiments.yaml" % (namespace, version)
)
def wait_for_initialized(engine_name, experiment_name, namespace):
chaos_engine = kubecli.get_litmus_chaos_object(kind='chaosengine', name=engine_name,
namespace=namespace).engineStatus
engine_status = chaos_engine.strip()
max_tries = 30
engine_counter = 0
while engine_status.lower() != "initialized":
time.sleep(10)
logging.info("Waiting for " + experiment_name + " to be initialized")
chaos_engine = kubecli.get_litmus_chaos_object(kind='chaosengine', name=engine_name,
namespace=namespace).engineStatus
engine_status = chaos_engine.strip()
if engine_counter >= max_tries:
logging.error("Chaos engine " + experiment_name + " took longer than 5 minutes to be initialized")
return False
engine_counter += 1
# need to see if error in run
if "notfound" in engine_status.lower():
logging.info("Chaos engine was not found")
return False
return True
def wait_for_status(engine_name, expected_status, experiment_name, namespace):
if expected_status == "running":
response = wait_for_initialized(engine_name, experiment_name, namespace)
if not response:
logging.info("Chaos engine never initialized, exiting")
return False
chaos_engine = kubecli.get_litmus_chaos_object(kind='chaosengine', name=engine_name,
namespace=namespace).expStatus
engine_status = chaos_engine.strip()
max_tries = 30
engine_counter = 0
while engine_status.lower() != expected_status:
time.sleep(10)
logging.info("Waiting for " + experiment_name + " to be " + expected_status)
chaos_engine = kubecli.get_litmus_chaos_object(kind='chaosengine', name=engine_name,
namespace=namespace).expStatus
engine_status = chaos_engine.strip()
if engine_counter >= max_tries:
logging.error("Chaos engine " + experiment_name + " took longer than 5 minutes to be " + expected_status)
return False
engine_counter += 1
# need to see if error in run
if "notfound" in engine_status.lower():
logging.info("Chaos engine was not found")
return False
return True
# Check status of experiment
def check_experiment(engine_name, experiment_name, namespace):
wait_response = wait_for_status(engine_name, "running", experiment_name, namespace)
if wait_response:
wait_for_status(engine_name, "completed", experiment_name, namespace)
else:
sys.exit(1)
chaos_result = kubecli.get_litmus_chaos_object(kind='chaosresult', name=engine_name+'-'+experiment_name,
namespace=namespace).verdict
if chaos_result == "Pass":
logging.info("Engine " + str(engine_name) + " finished with status " + str(chaos_result))
return True
else:
chaos_result = kubecli.get_litmus_chaos_object(kind='chaosresult', name=engine_name+'-'+experiment_name,
namespace=namespace).failStep
logging.info("Chaos scenario:" + engine_name + " failed with error: " + str(chaos_result))
logging.info(
"See 'kubectl get chaosresult %s"
"-%s -n %s -o yaml' for full results" % (engine_name, experiment_name, namespace)
)
return False
# Delete all chaos engines in a given namespace
def delete_chaos_experiments(namespace):
if kubecli.check_if_namespace_exists(namespace):
chaos_exp_exists = runcommand.invoke_no_exit("kubectl get chaosexperiment")
if "returned non-zero exit status 1" not in chaos_exp_exists:
logging.info("Deleting all litmus experiments")
runcommand.invoke("kubectl delete chaosexperiment --all -n " + str(namespace))
# Delete all chaos engines in a given namespace
def delete_chaos(namespace):
if kubecli.check_if_namespace_exists(namespace):
logging.info("Deleting all litmus run objects")
chaos_engine_exists = runcommand.invoke_no_exit("kubectl get chaosengine")
if "returned non-zero exit status 1" not in chaos_engine_exists:
runcommand.invoke("kubectl delete chaosengine --all -n " + str(namespace))
chaos_result_exists = runcommand.invoke_no_exit("kubectl get chaosresult")
if "returned non-zero exit status 1" not in chaos_result_exists:
runcommand.invoke("kubectl delete chaosresult --all -n " + str(namespace))
else:
logging.info(namespace + " namespace doesn't exist")
def uninstall_litmus(version, litmus_namespace):
if kubecli.check_if_namespace_exists(litmus_namespace):
logging.info("Uninstalling Litmus operator")
runcommand.invoke_no_exit(
"kubectl delete -n %s -f "
"https://litmuschaos.github.io/litmus/litmus-operator-%s.yaml" % (litmus_namespace, version)
)
logging.info("Deleting litmus crd")
runcommand.invoke_no_exit("kubectl get crds | grep litmus | awk '{print $1}' | xargs -I {} oc delete crd/{}")

View File

@@ -1,10 +1,15 @@
import random
import logging
import kraken.kubernetes.client as kubecli
from krkn_lib.k8s import KrknKubernetes
# krkn_lib
# Pick a random managedcluster with specified label selector
def get_managedcluster(managedcluster_name, label_selector, instance_kill_count):
def get_managedcluster(
managedcluster_name,
label_selector,
instance_kill_count,
kubecli: KrknKubernetes):
if managedcluster_name in kubecli.list_killable_managedclusters():
return [managedcluster_name]
elif managedcluster_name:
@@ -25,10 +30,12 @@ def get_managedcluster(managedcluster_name, label_selector, instance_kill_count)
# Wait until the managedcluster status becomes Available
def wait_for_available_status(managedcluster, timeout):
# krkn_lib
def wait_for_available_status(managedcluster, timeout, kubecli: KrknKubernetes):
kubecli.watch_managedcluster_status(managedcluster, "True", timeout)
# Wait until the managedcluster status becomes Not Available
def wait_for_unavailable_status(managedcluster, timeout):
# krkn_lib
def wait_for_unavailable_status(managedcluster, timeout, kubecli: KrknKubernetes):
kubecli.watch_managedcluster_status(managedcluster, "Unknown", timeout)

View File

@@ -4,18 +4,18 @@ import time
import logging
import sys
import yaml
import html
import kraken.kubernetes.client as kubecli
import kraken.managedcluster_scenarios.common_managedcluster_functions as common_managedcluster_functions
from krkn_lib.k8s import KrknKubernetes
class GENERAL:
def __init__(self):
pass
# krkn_lib
class managedcluster_scenarios():
def __init__(self):
kubecli: KrknKubernetes
def __init__(self, kubecli: KrknKubernetes):
self.kubecli = kubecli
self.general = GENERAL()
# managedcluster scenario to start the managedcluster
@@ -31,16 +31,16 @@ class managedcluster_scenarios():
args="""kubectl scale deployment.apps/klusterlet --replicas 3 &
kubectl scale deployment.apps/klusterlet-registration-agent --replicas 1 -n open-cluster-management-agent""")
)
kubecli.create_manifestwork(body, managedcluster)
self.kubecli.create_manifestwork(body, managedcluster)
logging.info("managedcluster_start_scenario has been successfully injected!")
logging.info("Waiting for the specified timeout: %s" % timeout)
common_managedcluster_functions.wait_for_available_status(managedcluster, timeout)
common_managedcluster_functions.wait_for_available_status(managedcluster, timeout, self.kubecli)
except Exception as e:
logging.error("managedcluster scenario exiting due to Exception %s" % e)
sys.exit(1)
finally:
logging.info("Deleting manifestworks")
kubecli.delete_manifestwork(managedcluster)
self.kubecli.delete_manifestwork(managedcluster)
# managedcluster scenario to stop the managedcluster
def managedcluster_stop_scenario(self, instance_kill_count, managedcluster, timeout):
@@ -55,16 +55,16 @@ class managedcluster_scenarios():
args="""kubectl scale deployment.apps/klusterlet --replicas 0 &&
kubectl scale deployment.apps/klusterlet-registration-agent --replicas 0 -n open-cluster-management-agent""")
)
kubecli.create_manifestwork(body, managedcluster)
self.kubecli.create_manifestwork(body, managedcluster)
logging.info("managedcluster_stop_scenario has been successfully injected!")
logging.info("Waiting for the specified timeout: %s" % timeout)
common_managedcluster_functions.wait_for_unavailable_status(managedcluster, timeout)
common_managedcluster_functions.wait_for_unavailable_status(managedcluster, timeout, self.kubecli)
except Exception as e:
logging.error("managedcluster scenario exiting due to Exception %s" % e)
sys.exit(1)
finally:
logging.info("Deleting manifestworks")
kubecli.delete_manifestwork(managedcluster)
self.kubecli.delete_manifestwork(managedcluster)
# managedcluster scenario to stop and then start the managedcluster
def managedcluster_stop_start_scenario(self, instance_kill_count, managedcluster, timeout):
@@ -94,7 +94,7 @@ class managedcluster_scenarios():
template.render(managedcluster_name=managedcluster,
args="""kubectl scale deployment.apps/klusterlet --replicas 3""")
)
kubecli.create_manifestwork(body, managedcluster)
self.kubecli.create_manifestwork(body, managedcluster)
logging.info("start_klusterlet_scenario has been successfully injected!")
time.sleep(30) # until https://github.com/open-cluster-management-io/OCM/issues/118 gets solved
except Exception as e:
@@ -102,7 +102,7 @@ class managedcluster_scenarios():
sys.exit(1)
finally:
logging.info("Deleting manifestworks")
kubecli.delete_manifestwork(managedcluster)
self.kubecli.delete_manifestwork(managedcluster)
# managedcluster scenario to stop the klusterlet
def stop_klusterlet_scenario(self, instance_kill_count, managedcluster, timeout):
@@ -116,7 +116,7 @@ class managedcluster_scenarios():
template.render(managedcluster_name=managedcluster,
args="""kubectl scale deployment.apps/klusterlet --replicas 0""")
)
kubecli.create_manifestwork(body, managedcluster)
self.kubecli.create_manifestwork(body, managedcluster)
logging.info("stop_klusterlet_scenario has been successfully injected!")
time.sleep(30) # until https://github.com/open-cluster-management-io/OCM/issues/118 gets solved
except Exception as e:
@@ -124,7 +124,7 @@ class managedcluster_scenarios():
sys.exit(1)
finally:
logging.info("Deleting manifestworks")
kubecli.delete_manifestwork(managedcluster)
self.kubecli.delete_manifestwork(managedcluster)
# managedcluster scenario to stop and start the klusterlet
def stop_start_klusterlet_scenario(self, instance_kill_count, managedcluster, timeout):

View File

@@ -4,23 +4,26 @@ import time
from kraken.managedcluster_scenarios.managedcluster_scenarios import managedcluster_scenarios
import kraken.managedcluster_scenarios.common_managedcluster_functions as common_managedcluster_functions
import kraken.cerberus.setup as cerberus
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.utils.functions import get_yaml_item_value
# Get the managedcluster scenarios object of specified cloud type
def get_managedcluster_scenario_object(managedcluster_scenario):
return managedcluster_scenarios()
# krkn_lib
def get_managedcluster_scenario_object(managedcluster_scenario, kubecli: KrknKubernetes):
return managedcluster_scenarios(kubecli)
# Run defined scenarios
def run(scenarios_list, config, wait_duration):
# krkn_lib
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes):
for managedcluster_scenario_config in scenarios_list:
with open(managedcluster_scenario_config, "r") as f:
managedcluster_scenario_config = yaml.full_load(f)
for managedcluster_scenario in managedcluster_scenario_config["managedcluster_scenarios"]:
managedcluster_scenario_object = get_managedcluster_scenario_object(managedcluster_scenario)
managedcluster_scenario_object = get_managedcluster_scenario_object(managedcluster_scenario, kubecli)
if managedcluster_scenario["actions"]:
for action in managedcluster_scenario["actions"]:
start_time = int(time.time())
inject_managedcluster_scenario(action, managedcluster_scenario, managedcluster_scenario_object)
inject_managedcluster_scenario(action, managedcluster_scenario, managedcluster_scenario_object, kubecli)
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
end_time = int(time.time())
@@ -29,20 +32,29 @@ def run(scenarios_list, config, wait_duration):
# Inject the specified managedcluster scenario
def inject_managedcluster_scenario(action, managedcluster_scenario, managedcluster_scenario_object):
# krkn_lib
def inject_managedcluster_scenario(action, managedcluster_scenario, managedcluster_scenario_object, kubecli: KrknKubernetes):
# Get the managedcluster scenario configurations
run_kill_count = managedcluster_scenario.get("runs", 1)
instance_kill_count = managedcluster_scenario.get("instance_count", 1)
managedcluster_name = managedcluster_scenario.get("managedcluster_name", "")
label_selector = managedcluster_scenario.get("label_selector", "")
timeout = managedcluster_scenario.get("timeout", 120)
run_kill_count = get_yaml_item_value(
managedcluster_scenario, "runs", 1
)
instance_kill_count = get_yaml_item_value(
managedcluster_scenario, "instance_count", 1
)
managedcluster_name = get_yaml_item_value(
managedcluster_scenario, "managedcluster_name", ""
)
label_selector = get_yaml_item_value(
managedcluster_scenario, "label_selector", ""
)
timeout = get_yaml_item_value(managedcluster_scenario, "timeout", 120)
# Get the managedcluster to apply the scenario
if managedcluster_name:
managedcluster_name_list = managedcluster_name.split(",")
else:
managedcluster_name_list = [managedcluster_name]
for single_managedcluster_name in managedcluster_name_list:
managedclusters = common_managedcluster_functions.get_managedcluster(single_managedcluster_name, label_selector, instance_kill_count)
managedclusters = common_managedcluster_functions.get_managedcluster(single_managedcluster_name, label_selector, instance_kill_count, kubecli)
for single_managedcluster in managedclusters:
if action == "managedcluster_start_scenario":
managedcluster_scenario_object.managedcluster_start_scenario(run_kill_count, single_managedcluster, timeout)

View File

@@ -1,98 +0,0 @@
import time
import random
import logging
import kraken.kubernetes.client as kubecli
import kraken.cerberus.setup as cerberus
import kraken.post_actions.actions as post_actions
import yaml
import sys
def run(scenarios_list, config, wait_duration, failed_post_scenarios, kubeconfig_path):
for scenario_config in scenarios_list:
if len(scenario_config) > 1:
pre_action_output = post_actions.run(kubeconfig_path, scenario_config[1])
else:
pre_action_output = ""
with open(scenario_config[0], "r") as f:
scenario_config_yaml = yaml.full_load(f)
for scenario in scenario_config_yaml["scenarios"]:
scenario_namespace = scenario.get("namespace", "")
scenario_label = scenario.get("label_selector", "")
if scenario_namespace is not None and scenario_namespace.strip() != "":
if scenario_label is not None and scenario_label.strip() != "":
logging.error("You can only have namespace or label set in your namespace scenario")
logging.error(
"Current scenario config has namespace '%s' and label selector '%s'"
% (scenario_namespace, scenario_label)
)
logging.error(
"Please set either namespace to blank ('') or label_selector to blank ('') to continue"
)
sys.exit(1)
delete_count = scenario.get("delete_count", 1)
run_count = scenario.get("runs", 1)
run_sleep = scenario.get("sleep", 10)
wait_time = scenario.get("wait_time", 30)
killed_namespaces = []
start_time = int(time.time())
for i in range(run_count):
namespaces = kubecli.check_namespaces([scenario_namespace], scenario_label)
for j in range(delete_count):
if len(namespaces) == 0:
logging.error(
"Couldn't delete %s namespaces, not enough namespaces matching %s with label %s"
% (str(run_count), scenario_namespace, str(scenario_label))
)
sys.exit(1)
selected_namespace = namespaces[random.randint(0, len(namespaces) - 1)]
killed_namespaces.append(selected_namespace)
try:
kubecli.delete_namespace(selected_namespace)
logging.info("Delete on namespace %s was successful" % str(selected_namespace))
except Exception as e:
logging.info("Delete on namespace %s was unsuccessful" % str(selected_namespace))
logging.info("Namespace action error: " + str(e))
sys.exit(1)
namespaces.remove(selected_namespace)
logging.info("Waiting %s seconds between namespace deletions" % str(run_sleep))
time.sleep(run_sleep)
logging.info("Waiting for the specified duration: %s" % wait_duration)
time.sleep(wait_duration)
if len(scenario_config) > 1:
try:
failed_post_scenarios = post_actions.check_recovery(
kubeconfig_path, scenario_config, failed_post_scenarios, pre_action_output
)
except Exception as e:
logging.error("Failed to run post action checks: %s" % e)
sys.exit(1)
else:
failed_post_scenarios = check_active_namespace(killed_namespaces, wait_time)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
def check_active_namespace(killed_namespaces, wait_time):
active_namespace = []
timer = 0
while timer < wait_time and killed_namespaces:
for namespace_name in killed_namespaces:
if namespace_name in kubecli.list_namespaces():
response = kubecli.get_namespace_status(namespace_name).strip()
if response != "Active":
continue
else:
active_namespace.append(namespace_name)
killed_namespaces = set(killed_namespaces) - set(active_namespace)
if len(killed_namespaces) == 0:
return []
timer += 5
time.sleep(5)
logging.info("Waiting 5 seconds for namespaces to become active")
logging.error("Namespaces are still not active after waiting " + str(wait_time) + "seconds")
logging.error("Non active namespaces " + str(killed_namespaces))
return killed_namespaces

View File

@@ -1,97 +1,130 @@
import yaml
import logging
import time
import sys
import os
import random
from jinja2 import Environment, FileSystemLoader
import kraken.cerberus.setup as cerberus
import kraken.kubernetes.client as kubecli
import kraken.node_actions.common_node_functions as common_node_functions
from jinja2 import Environment, FileSystemLoader
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
# krkn_lib
# Reads the scenario config and introduces traffic variations in Node's host network interface.
def run(scenarios_list, config, wait_duration):
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
failed_post_scenarios = ""
logging.info("Runing the Network Chaos tests")
failed_post_scenarios = ""
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
for net_config in scenarios_list:
with open(net_config, "r") as file:
param_lst = ["latency", "loss", "bandwidth"]
test_config = yaml.safe_load(file)
test_dict = test_config["network_chaos"]
test_duration = int(test_dict.get("duration", 300))
test_interface = test_dict.get("interfaces", [])
test_node = test_dict.get("node_name", "")
test_node_label = test_dict.get("label_selector", "node-role.kubernetes.io/master")
test_execution = test_dict.get("execution", "serial")
test_instance_count = test_dict.get("instance_count", 1)
test_egress = test_dict.get("egress", {"bandwidth": "100mbit"})
if test_node:
node_name_list = test_node.split(",")
else:
node_name_list = [test_node]
nodelst = []
for single_node_name in node_name_list:
nodelst.extend(common_node_functions.get_node(single_node_name, test_node_label, test_instance_count))
file_loader = FileSystemLoader(os.path.abspath(os.path.dirname(__file__)))
env = Environment(loader=file_loader, autoescape=True)
pod_template = env.get_template("pod.j2")
test_interface = verify_interface(test_interface, nodelst, pod_template)
joblst = []
egress_lst = [i for i in param_lst if i in test_egress]
chaos_config = {
"network_chaos": {
"duration": test_duration,
"interfaces": test_interface,
"node_name": ",".join(nodelst),
"execution": test_execution,
"instance_count": test_instance_count,
"egress": test_egress,
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = net_config
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, net_config)
try:
with open(net_config, "r") as file:
param_lst = ["latency", "loss", "bandwidth"]
test_config = yaml.safe_load(file)
test_dict = test_config["network_chaos"]
test_duration = int(
get_yaml_item_value(test_dict, "duration", 300)
)
test_interface = get_yaml_item_value(
test_dict, "interfaces", []
)
test_node = get_yaml_item_value(test_dict, "node_name", "")
test_node_label = get_yaml_item_value(
test_dict, "label_selector",
"node-role.kubernetes.io/master"
)
test_execution = get_yaml_item_value(
test_dict, "execution", "serial"
)
test_instance_count = get_yaml_item_value(
test_dict, "instance_count", 1
)
test_egress = get_yaml_item_value(
test_dict, "egress", {"bandwidth": "100mbit"}
)
if test_node:
node_name_list = test_node.split(",")
else:
node_name_list = [test_node]
nodelst = []
for single_node_name in node_name_list:
nodelst.extend(common_node_functions.get_node(single_node_name, test_node_label, test_instance_count, kubecli))
file_loader = FileSystemLoader(os.path.abspath(os.path.dirname(__file__)))
env = Environment(loader=file_loader, autoescape=True)
pod_template = env.get_template("pod.j2")
test_interface = verify_interface(test_interface, nodelst, pod_template, kubecli)
joblst = []
egress_lst = [i for i in param_lst if i in test_egress]
chaos_config = {
"network_chaos": {
"duration": test_duration,
"interfaces": test_interface,
"node_name": ",".join(nodelst),
"execution": test_execution,
"instance_count": test_instance_count,
"egress": test_egress,
}
}
}
logging.info("Executing network chaos with config \n %s" % yaml.dump(chaos_config))
job_template = env.get_template("job.j2")
try:
for i in egress_lst:
for node in nodelst:
exec_cmd = get_egress_cmd(
test_execution, test_interface, i, test_dict["egress"], duration=test_duration
)
logging.info("Executing %s on node %s" % (exec_cmd, node))
job_body = yaml.safe_load(
job_template.render(jobname=i + str(hash(node))[:5], nodename=node, cmd=exec_cmd)
)
joblst.append(job_body["metadata"]["name"])
api_response = kubecli.create_job(job_body)
if api_response is None:
raise Exception("Error creating job")
if test_execution == "serial":
logging.info("Waiting for serial job to finish")
logging.info("Executing network chaos with config \n %s" % yaml.dump(chaos_config))
job_template = env.get_template("job.j2")
try:
for i in egress_lst:
for node in nodelst:
exec_cmd = get_egress_cmd(
test_execution, test_interface, i, test_dict["egress"], duration=test_duration
)
logging.info("Executing %s on node %s" % (exec_cmd, node))
job_body = yaml.safe_load(
job_template.render(jobname=i + str(hash(node))[:5], nodename=node, cmd=exec_cmd)
)
joblst.append(job_body["metadata"]["name"])
api_response = kubecli.create_job(job_body)
if api_response is None:
raise Exception("Error creating job")
if test_execution == "serial":
logging.info("Waiting for serial job to finish")
start_time = int(time.time())
wait_for_job(joblst[:], kubecli, test_duration + 300)
logging.info("Waiting for wait_duration %s" % wait_duration)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
if test_execution == "parallel":
break
if test_execution == "parallel":
logging.info("Waiting for parallel job to finish")
start_time = int(time.time())
wait_for_job(joblst[:], test_duration + 300)
wait_for_job(joblst[:], kubecli, test_duration + 300)
logging.info("Waiting for wait_duration %s" % wait_duration)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
if test_execution == "parallel":
break
if test_execution == "parallel":
logging.info("Waiting for parallel job to finish")
start_time = int(time.time())
wait_for_job(joblst[:], test_duration + 300)
logging.info("Waiting for wait_duration %s" % wait_duration)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
except Exception as e:
logging.error("Network Chaos exiting due to Exception %s" % e)
sys.exit(1)
finally:
logging.info("Deleting jobs")
delete_job(joblst[:])
except Exception as e:
logging.error("Network Chaos exiting due to Exception %s" % e)
raise RuntimeError()
finally:
logging.info("Deleting jobs")
delete_job(joblst[:], kubecli)
except (RuntimeError, Exception):
scenario_telemetry.exitStatus = 1
failed_scenarios.append(net_config)
log_exception(net_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
def verify_interface(test_interface, nodelst, template):
# krkn_lib
def verify_interface(test_interface, nodelst, template, kubecli: KrknKubernetes):
pod_index = random.randint(0, len(nodelst) - 1)
pod_body = yaml.safe_load(template.render(nodename=nodelst[pod_index]))
logging.info("Creating pod to query interface on node %s" % nodelst[pod_index])
@@ -108,21 +141,24 @@ def verify_interface(test_interface, nodelst, template):
for interface in test_interface:
if interface not in interface_lst:
logging.error("Interface %s not found in node %s interface list %s" % (interface, nodelst[pod_index], interface_lst))
sys.exit(1)
#sys.exit(1)
raise RuntimeError()
return test_interface
finally:
logging.info("Deleteing pod to query interface on node")
kubecli.delete_pod("fedtools", "default")
def get_job_pods(api_response):
# krkn_lib
def get_job_pods(api_response, kubecli: KrknKubernetes):
controllerUid = api_response.metadata.labels["controller-uid"]
pod_label_selector = "controller-uid=" + controllerUid
pods_list = kubecli.list_pods(label_selector=pod_label_selector, namespace="default")
return pods_list[0]
def wait_for_job(joblst, timeout=300):
# krkn_lib
def wait_for_job(joblst, kubecli: KrknKubernetes, timeout=300):
waittime = time.time() + timeout
count = 0
joblen = len(joblst)
@@ -134,26 +170,27 @@ def wait_for_job(joblst, timeout=300):
count += 1
joblst.remove(jobname)
except Exception:
logging.warn("Exception in getting job status")
logging.warning("Exception in getting job status")
if time.time() > waittime:
raise Exception("Starting pod failed")
time.sleep(5)
def delete_job(joblst):
# krkn_lib
def delete_job(joblst, kubecli: KrknKubernetes):
for jobname in joblst:
try:
api_response = kubecli.get_job_status(jobname, namespace="default")
if api_response.status.failed is not None:
pod_name = get_job_pods(api_response)
pod_name = get_job_pods(api_response, kubecli)
pod_stat = kubecli.read_pod(name=pod_name, namespace="default")
logging.error(pod_stat.status.container_statuses)
pod_log_response = kubecli.get_pod_log(name=pod_name, namespace="default")
pod_log = pod_log_response.data.decode("utf-8")
logging.error(pod_log)
except Exception:
logging.warn("Exception in getting job status")
api_response = kubecli.delete_job(name=jobname, namespace="default")
logging.warning("Exception in getting job status")
kubecli.delete_job(name=jobname, namespace="default")
def get_egress_cmd(execution, test_interface, mod, vallst, duration=30):
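For reference, the keys this module reads from a scenario file are exactly the ones extracted above: duration, interfaces, node_name, label_selector, execution, instance_count and an egress block containing any of latency, loss and bandwidth. The snippet below builds such a config in Python; it is a hedged illustration with placeholder values, not a complete schema.

import yaml

# Hypothetical network_chaos scenario matching the keys parsed in run() above.
scenario = {
    "network_chaos": {
        "duration": 120,                                     # seconds the traffic shaping stays applied
        "node_name": "",                                     # empty: fall back to the label selector
        "label_selector": "node-role.kubernetes.io/worker",  # placeholder selector
        "instance_count": 1,                                 # number of matching nodes to target
        "execution": "parallel",                             # or "serial"
        "interfaces": ["ens5"],                              # must exist on the node, see verify_interface()
        "egress": {"latency": "50ms", "loss": "0.02", "bandwidth": "100mbit"},
    }
}

print(yaml.dump(scenario, default_flow_style=False))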

View File

@@ -2,10 +2,13 @@ import sys
import logging
import kraken.invoke.command as runcommand
import kraken.node_actions.common_node_functions as nodeaction
from krkn_lib.k8s import KrknKubernetes
# krkn_lib
class abstract_node_scenarios:
kubecli: KrknKubernetes
def __init__(self, kubecli: KrknKubernetes):
self.kubecli = kubecli
# Node scenario to start the node
def node_start_scenario(self, instance_kill_count, node, timeout):
pass
@@ -42,7 +45,7 @@ class abstract_node_scenarios:
logging.info("Starting stop_kubelet_scenario injection")
logging.info("Stopping the kubelet of the node %s" % (node))
runcommand.run("oc debug node/" + node + " -- chroot /host systemctl stop kubelet")
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
logging.info("The kubelet of the node %s has been stopped" % (node))
logging.info("stop_kubelet_scenario has been successfuly injected!")
except Exception as e:
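The change above makes the scenario base class the single owner of the shared krkn-lib client: every provider constructor now calls super().__init__(kubecli) and the wait helpers receive self.kubecli explicitly instead of relying on a module-level import. A short sketch of the pattern, with a made-up provider name purely for illustration:

from krkn_lib.k8s import KrknKubernetes
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
import kraken.node_actions.common_node_functions as nodeaction

class example_node_scenarios(abstract_node_scenarios):
    def __init__(self, kubecli: KrknKubernetes):
        # super() stores the client on self.kubecli for every scenario method.
        super().__init__(kubecli)

    def node_reboot_scenario(self, instance_kill_count, node, timeout):
        # The krkn-lib client is passed explicitly to the node-status wait helpers.
        nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
        nodeaction.wait_for_ready_status(node, timeout, self.kubecli)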

View File

@@ -1,13 +1,14 @@
import sys
import time
import logging
import kraken.node_actions.common_node_functions as nodeaction
import os
import json
from aliyunsdkcore.client import AcsClient
from aliyunsdkecs.request.v20140526 import DescribeInstancesRequest, DeleteInstanceRequest
from aliyunsdkecs.request.v20140526 import StopInstanceRequest, StartInstanceRequest, RebootInstanceRequest
import logging
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
import os
import json
from krkn_lib.k8s import KrknKubernetes
class Alibaba:
@@ -179,9 +180,9 @@ class Alibaba:
logging.info("ECS %s is released" % instance_id)
return True
# krkn_lib
class alibaba_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.alibaba = Alibaba()
# Node scenario to start the node
@@ -193,7 +194,7 @@ class alibaba_node_scenarios(abstract_node_scenarios):
logging.info("Starting the node %s with instance ID: %s " % (node, vm_id))
self.alibaba.start_instances(vm_id)
self.alibaba.wait_until_running(vm_id, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s is in running state" % node)
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -213,7 +214,7 @@ class alibaba_node_scenarios(abstract_node_scenarios):
self.alibaba.stop_instances(vm_id)
self.alibaba.wait_until_stopped(vm_id, timeout)
logging.info("Node with instance ID: %s is in stopped state" % vm_id)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % e)
logging.error("node_stop_scenario injection failed!")
@@ -248,8 +249,8 @@ class alibaba_node_scenarios(abstract_node_scenarios):
instance_id = self.alibaba.get_instance_id(node)
logging.info("Rebooting the node with instance ID: %s " % (instance_id))
self.alibaba.reboot_instances(instance_id)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s has been rebooted" % (instance_id))
logging.info("node_reboot_scenario has been successfully injected!")
except Exception as e:

View File

@@ -2,10 +2,9 @@ import sys
import time
import boto3
import logging
import kraken.kubernetes.client as kubecli
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from krkn_lib.k8s import KrknKubernetes
class AWS:
def __init__(self):
@@ -27,7 +26,9 @@ class AWS:
logging.error(
"Failed to start node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Stop the node instance
def stop_instances(self, instance_id):
@@ -36,7 +37,9 @@ class AWS:
logging.info("EC2 instance: " + str(instance_id) + " stopped")
except Exception as e:
logging.error("Failed to stop node instance %s. Encountered following " "exception: %s." % (instance_id, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Terminate the node instance
def terminate_instances(self, instance_id):
@@ -47,7 +50,9 @@ class AWS:
logging.error(
"Failed to terminate node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Reboot the node instance
def reboot_instances(self, instance_id):
@@ -58,7 +63,9 @@ class AWS:
logging.error(
"Failed to reboot node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Below functions poll EC2.Client.describe_instances() every 15 seconds
# until a successful state is reached. An error is returned after 40 failed checks
@@ -102,7 +109,9 @@ class AWS:
"Failed to create the default network_acl: %s"
"Make sure you have aws cli configured on the host and set for the region of your vpc/subnet" % (e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
return acl_id
# Replace network acl association
@@ -114,7 +123,9 @@ class AWS:
new_association_id = status["NewAssociationId"]
except Exception as e:
logging.error("Failed to replace network acl association: %s" % (e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
return new_association_id
# Describe network acl
@@ -131,7 +142,9 @@ class AWS:
"Failed to describe network acl: %s."
"Make sure you have aws cli configured on the host and set for the region of your vpc/subnet" % (e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
associations = response["NetworkAcls"][0]["Associations"]
# grab the current network_acl in use
original_acl_id = response["NetworkAcls"][0]["Associations"][0]["NetworkAclId"]
@@ -148,11 +161,14 @@ class AWS:
"Make sure you have aws cli configured on the host and set for the region of your vpc/subnet"
% (acl_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# krkn_lib
class aws_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.aws = AWS()
# Node scenario to start the node
@@ -164,7 +180,7 @@ class aws_node_scenarios(abstract_node_scenarios):
logging.info("Starting the node %s with instance ID: %s " % (node, instance_id))
self.aws.start_instances(instance_id)
self.aws.wait_until_running(instance_id)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s is in running state" % (instance_id))
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -172,7 +188,9 @@ class aws_node_scenarios(abstract_node_scenarios):
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("node_start_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to stop the node
def node_stop_scenario(self, instance_kill_count, node, timeout):
@@ -184,11 +202,13 @@ class aws_node_scenarios(abstract_node_scenarios):
self.aws.stop_instances(instance_id)
self.aws.wait_until_stopped(instance_id)
logging.info("Node with instance ID: %s is in stopped state" % (instance_id))
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % (e))
logging.error("node_stop_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to terminate the node
def node_termination_scenario(self, instance_kill_count, node, timeout):
@@ -200,10 +220,10 @@ class aws_node_scenarios(abstract_node_scenarios):
self.aws.terminate_instances(instance_id)
self.aws.wait_until_terminated(instance_id)
for _ in range(timeout):
if node not in kubecli.list_nodes():
if node not in self.kubecli.list_nodes():
break
time.sleep(1)
if node in kubecli.list_nodes():
if node in self.kubecli.list_nodes():
raise Exception("Node could not be terminated")
logging.info("Node with instance ID: %s has been terminated" % (instance_id))
logging.info("node_termination_scenario has been successfuly injected!")
@@ -212,7 +232,9 @@ class aws_node_scenarios(abstract_node_scenarios):
"Failed to terminate node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_termination_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
@@ -222,8 +244,8 @@ class aws_node_scenarios(abstract_node_scenarios):
instance_id = self.aws.get_instance_id(node)
logging.info("Rebooting the node %s with instance ID: %s " % (node, instance_id))
self.aws.reboot_instances(instance_id)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s has been rebooted" % (instance_id))
logging.info("node_reboot_scenario has been successfuly injected!")
except Exception as e:
@@ -231,4 +253,6 @@ class aws_node_scenarios(abstract_node_scenarios):
"Failed to reboot node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_reboot_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()

View File

@@ -1,13 +1,14 @@
import sys
import time
from azure.mgmt.compute import ComputeManagementClient
from azure.identity import DefaultAzureCredential
import yaml
import kraken.invoke.command as runcommand
import logging
import kraken.kubernetes.client as kubecli
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
import kraken.invoke.command as runcommand
import yaml
from azure.mgmt.compute import ComputeManagementClient
from azure.identity import DefaultAzureCredential
from krkn_lib.k8s import KrknKubernetes
class Azure:
@@ -39,7 +40,9 @@ class Azure:
logging.info("vm name " + str(vm_name) + " started")
except Exception as e:
logging.error("Failed to start node instance %s. Encountered following " "exception: %s." % (vm_name, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Stop the node instance
def stop_instances(self, group_name, vm_name):
@@ -48,7 +51,9 @@ class Azure:
logging.info("vm name " + str(vm_name) + " stopped")
except Exception as e:
logging.error("Failed to stop node instance %s. Encountered following " "exception: %s." % (vm_name, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Terminate the node instance
def terminate_instances(self, group_name, vm_name):
@@ -59,7 +64,9 @@ class Azure:
logging.error(
"Failed to terminate node instance %s. Encountered following " "exception: %s." % (vm_name, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Reboot the node instance
def reboot_instances(self, group_name, vm_name):
@@ -68,7 +75,9 @@ class Azure:
logging.info("vm name " + str(vm_name) + " rebooted")
except Exception as e:
logging.error("Failed to reboot node instance %s. Encountered following " "exception: %s." % (vm_name, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
def get_vm_status(self, resource_group, vm_name):
statuses = self.compute_client.virtual_machines.instance_view(resource_group, vm_name).statuses
@@ -121,9 +130,10 @@ class Azure:
logging.info("Vm %s is terminated" % vm_name)
return True
# krkn_lib
class azure_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
logging.info("init in azure")
self.azure = Azure()
@@ -136,7 +146,7 @@ class azure_node_scenarios(abstract_node_scenarios):
logging.info("Starting the node %s with instance ID: %s " % (vm_name, resource_group))
self.azure.start_instances(resource_group, vm_name)
self.azure.wait_until_running(resource_group, vm_name, timeout)
nodeaction.wait_for_ready_status(vm_name, timeout)
nodeaction.wait_for_ready_status(vm_name, timeout, self.kubecli)
logging.info("Node with instance ID: %s is in running state" % node)
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -144,7 +154,9 @@ class azure_node_scenarios(abstract_node_scenarios):
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("node_start_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to stop the node
def node_stop_scenario(self, instance_kill_count, node, timeout):
@@ -156,11 +168,13 @@ class azure_node_scenarios(abstract_node_scenarios):
self.azure.stop_instances(resource_group, vm_name)
self.azure.wait_until_stopped(resource_group, vm_name, timeout)
logging.info("Node with instance ID: %s is in stopped state" % vm_name)
nodeaction.wait_for_unknown_status(vm_name, timeout)
nodeaction.wait_for_unknown_status(vm_name, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % e)
logging.error("node_stop_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to terminate the node
def node_termination_scenario(self, instance_kill_count, node, timeout):
@@ -172,10 +186,10 @@ class azure_node_scenarios(abstract_node_scenarios):
self.azure.terminate_instances(resource_group, vm_name)
self.azure.wait_until_terminated(resource_group, vm_name, timeout)
for _ in range(timeout):
if vm_name not in kubecli.list_nodes():
if vm_name not in self.kubecli.list_nodes():
break
time.sleep(1)
if vm_name in kubecli.list_nodes():
if vm_name in self.kubecli.list_nodes():
raise Exception("Node could not be terminated")
logging.info("Node with instance ID: %s has been terminated" % node)
logging.info("node_termination_scenario has been successfully injected!")
@@ -184,7 +198,9 @@ class azure_node_scenarios(abstract_node_scenarios):
"Failed to terminate node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_termination_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
@@ -194,8 +210,8 @@ class azure_node_scenarios(abstract_node_scenarios):
vm_name, resource_group = self.azure.get_instance_id(node)
logging.info("Rebooting the node %s with instance ID: %s " % (vm_name, resource_group))
self.azure.reboot_instances(resource_group, vm_name)
nodeaction.wait_for_unknown_status(vm_name, timeout)
nodeaction.wait_for_ready_status(vm_name, timeout)
nodeaction.wait_for_unknown_status(vm_name, timeout, self.kubecli)
nodeaction.wait_for_ready_status(vm_name, timeout, self.kubecli)
logging.info("Node with instance ID: %s has been rebooted" % (vm_name))
logging.info("node_reboot_scenario has been successfully injected!")
except Exception as e:
@@ -203,4 +219,6 @@ class azure_node_scenarios(abstract_node_scenarios):
"Failed to reboot node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_reboot_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()

View File

@@ -7,7 +7,7 @@ import pyipmi.interfaces
import sys
import time
import traceback
from krkn_lib.k8s import KrknKubernetes
class BM:
def __init__(self, bm_info, user, passwd):
@@ -104,9 +104,10 @@ class BM:
while self.get_ipmi_connection(bmc_addr, node_name).get_chassis_status().power_on:
time.sleep(1)
# krkn_lib
class bm_node_scenarios(abstract_node_scenarios):
def __init__(self, bm_info, user, passwd):
def __init__(self, bm_info, user, passwd, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.bm = BM(bm_info, user, passwd)
# Node scenario to start the node
@@ -118,7 +119,7 @@ class bm_node_scenarios(abstract_node_scenarios):
logging.info("Starting the node %s with bmc address: %s " % (node, bmc_addr))
self.bm.start_instances(bmc_addr, node)
self.bm.wait_until_running(bmc_addr, node)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with bmc address: %s is in running state" % (bmc_addr))
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -140,7 +141,7 @@ class bm_node_scenarios(abstract_node_scenarios):
self.bm.stop_instances(bmc_addr, node)
self.bm.wait_until_stopped(bmc_addr, node)
logging.info("Node with bmc address: %s is in stopped state" % (bmc_addr))
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
except Exception as e:
logging.error(
"Failed to stop node instance. Encountered following exception: %s. "
@@ -163,8 +164,8 @@ class bm_node_scenarios(abstract_node_scenarios):
logging.info("BMC Addr: %s" % (bmc_addr))
logging.info("Rebooting the node %s with bmc address: %s " % (node, bmc_addr))
self.bm.reboot_instances(bmc_addr, node)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with bmc address: %s has been rebooted" % (bmc_addr))
logging.info("node_reboot_scenario has been successfuly injected!")
except Exception as e:

View File

@@ -2,14 +2,13 @@ import time
import random
import logging
import paramiko
import kraken.kubernetes.client as kubecli
import kraken.invoke.command as runcommand
from krkn_lib.k8s import KrknKubernetes
node_general = False
# Pick a random node with specified label selector
def get_node(node_name, label_selector, instance_kill_count):
def get_node(node_name, label_selector, instance_kill_count, kubecli: KrknKubernetes):
if node_name in kubecli.list_killable_nodes():
return [node_name]
elif node_name:
@@ -29,20 +28,21 @@ def get_node(node_name, label_selector, instance_kill_count):
return nodes_to_return
# krkn_lib
# Wait until the node status becomes Ready
def wait_for_ready_status(node, timeout):
def wait_for_ready_status(node, timeout, kubecli: KrknKubernetes):
resource_version = kubecli.get_node_resource_version(node)
kubecli.watch_node_status(node, "True", timeout, resource_version)
# krkn_lib
# Wait until the node status becomes Not Ready
def wait_for_not_ready_status(node, timeout):
def wait_for_not_ready_status(node, timeout, kubecli: KrknKubernetes):
resource_version = kubecli.get_node_resource_version(node)
kubecli.watch_node_status(node, "False", timeout, resource_version)
# krkn_lib
# Wait until the node status becomes Unknown
def wait_for_unknown_status(node, timeout):
def wait_for_unknown_status(node, timeout, kubecli: KrknKubernetes):
resource_version = kubecli.get_node_resource_version(node)
kubecli.watch_node_status(node, "Unknown", timeout, resource_version)

View File

@@ -3,7 +3,7 @@ from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
import logging
import sys
import docker
from krkn_lib.k8s import KrknKubernetes
class Docker:
def __init__(self):
@@ -36,7 +36,8 @@ class Docker:
class docker_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.docker = Docker()
# Node scenario to start the node
@@ -47,7 +48,7 @@ class docker_node_scenarios(abstract_node_scenarios):
container_id = self.docker.get_container_id(node)
logging.info("Starting the node %s with container ID: %s " % (node, container_id))
self.docker.start_instances(node)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with container ID: %s is in running state" % (container_id))
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -66,7 +67,7 @@ class docker_node_scenarios(abstract_node_scenarios):
logging.info("Stopping the node %s with container ID: %s " % (node, container_id))
self.docker.stop_instances(node)
logging.info("Node with container ID: %s is in stopped state" % (container_id))
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % (e))
logging.error("node_stop_scenario injection failed!")
@@ -97,8 +98,8 @@ class docker_node_scenarios(abstract_node_scenarios):
container_id = self.docker.get_container_id(node)
logging.info("Rebooting the node %s with container ID: %s " % (node, container_id))
self.docker.reboot_instances(node)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with container ID: %s has been rebooted" % (container_id))
logging.info("node_reboot_scenario has been successfuly injected!")
except Exception as e:

View File

@@ -1,13 +1,12 @@
import sys
import time
import logging
import kraken.kubernetes.client as kubecli
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import kraken.invoke.command as runcommand
from krkn_lib.k8s import KrknKubernetes
class GCP:
def __init__(self):
@@ -45,7 +44,9 @@ class GCP:
logging.error(
"Failed to start node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Stop the node instance
def stop_instances(self, zone, instance_id):
@@ -54,7 +55,9 @@ class GCP:
logging.info("vm name " + str(instance_id) + " stopped")
except Exception as e:
logging.error("Failed to stop node instance %s. Encountered following " "exception: %s." % (instance_id, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Suspend the node instance
def suspend_instances(self, zone, instance_id):
@@ -65,7 +68,9 @@ class GCP:
logging.error(
"Failed to suspend node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Terminate the node instance
def terminate_instances(self, zone, instance_id):
@@ -76,7 +81,9 @@ class GCP:
logging.error(
"Failed to start node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Reboot the node instance
def reboot_instances(self, zone, instance_id):
@@ -87,7 +94,9 @@ class GCP:
logging.error(
"Failed to start node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Get instance status
def get_instance_status(self, zone, instance_id, expected_status, timeout):
@@ -133,8 +142,10 @@ class GCP:
return True
# krkn_lib
class gcp_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.gcp = GCP()
# Node scenario to start the node
@@ -146,7 +157,7 @@ class gcp_node_scenarios(abstract_node_scenarios):
logging.info("Starting the node %s with instance ID: %s " % (node, instance_id))
self.gcp.start_instances(zone, instance_id)
self.gcp.wait_until_running(zone, instance_id, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s is in running state" % instance_id)
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -154,7 +165,9 @@ class gcp_node_scenarios(abstract_node_scenarios):
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("node_start_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to stop the node
def node_stop_scenario(self, instance_kill_count, node, timeout):
@@ -167,11 +180,13 @@ class gcp_node_scenarios(abstract_node_scenarios):
self.gcp.stop_instances(zone, instance_id)
self.gcp.wait_until_stopped(zone, instance_id, timeout)
logging.info("Node with instance ID: %s is in stopped state" % instance_id)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % (e))
logging.error("node_stop_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to terminate the node
def node_termination_scenario(self, instance_kill_count, node, timeout):
@@ -183,10 +198,10 @@ class gcp_node_scenarios(abstract_node_scenarios):
self.gcp.terminate_instances(zone, instance_id)
self.gcp.wait_until_terminated(zone, instance_id, timeout)
for _ in range(timeout):
if node not in kubecli.list_nodes():
if node not in self.kubecli.list_nodes():
break
time.sleep(1)
if node in kubecli.list_nodes():
if node in self.kubecli.list_nodes():
raise Exception("Node could not be terminated")
logging.info("Node with instance ID: %s has been terminated" % instance_id)
logging.info("node_termination_scenario has been successfuly injected!")
@@ -195,7 +210,9 @@ class gcp_node_scenarios(abstract_node_scenarios):
"Failed to terminate node instance. Encountered following exception:" " %s. Test Failed" % e
)
logging.error("node_termination_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
@@ -205,7 +222,7 @@ class gcp_node_scenarios(abstract_node_scenarios):
instance_id, zone = self.gcp.get_instance_id(node)
logging.info("Rebooting the node %s with instance ID: %s " % (node, instance_id))
self.gcp.reboot_instances(zone, instance_id)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s has been rebooted" % instance_id)
logging.info("node_reboot_scenario has been successfuly injected!")
except Exception as e:
@@ -213,4 +230,6 @@ class gcp_node_scenarios(abstract_node_scenarios):
"Failed to reboot node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_reboot_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()

View File

@@ -1,14 +1,15 @@
import logging
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from krkn_lib.k8s import KrknKubernetes
class GENERAL:
def __init__(self):
pass
# krkn_lib
class general_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.general = GENERAL()
# Node scenario to start the node

View File

@@ -4,7 +4,7 @@ import logging
import kraken.invoke.command as runcommand
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from krkn_lib.k8s import KrknKubernetes
class OPENSTACKCLOUD:
def __init__(self):
@@ -23,7 +23,9 @@ class OPENSTACKCLOUD:
logging.info("Instance: " + str(node) + " started")
except Exception as e:
logging.error("Failed to start node instance %s. Encountered following " "exception: %s." % (node, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Stop the node instance
def stop_instances(self, node):
@@ -32,7 +34,9 @@ class OPENSTACKCLOUD:
logging.info("Instance: " + str(node) + " stopped")
except Exception as e:
logging.error("Failed to stop node instance %s. Encountered following " "exception: %s." % (node, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Reboot the node instance
def reboot_instances(self, node):
@@ -41,7 +45,9 @@ class OPENSTACKCLOUD:
logging.info("Instance: " + str(node) + " rebooted")
except Exception as e:
logging.error("Failed to reboot node instance %s. Encountered following " "exception: %s." % (node, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Wait until the node instance is running
def wait_until_running(self, node, timeout):
@@ -86,9 +92,9 @@ class OPENSTACKCLOUD:
return node_name
counter += 1
# krkn_lib
class openstack_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.openstackcloud = OPENSTACKCLOUD()
# Node scenario to start the node
@@ -100,7 +106,7 @@ class openstack_node_scenarios(abstract_node_scenarios):
openstack_node_name = self.openstackcloud.get_instance_id(node)
self.openstackcloud.start_instances(openstack_node_name)
self.openstackcloud.wait_until_running(openstack_node_name, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s is in running state" % (node))
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -108,7 +114,9 @@ class openstack_node_scenarios(abstract_node_scenarios):
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("node_start_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to stop the node
def node_stop_scenario(self, instance_kill_count, node, timeout):
@@ -120,11 +128,13 @@ class openstack_node_scenarios(abstract_node_scenarios):
self.openstackcloud.stop_instances(openstack_node_name)
self.openstackcloud.wait_until_stopped(openstack_node_name, timeout)
logging.info("Node with instance name: %s is in stopped state" % (node))
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % (e))
logging.error("node_stop_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
@@ -134,8 +144,8 @@ class openstack_node_scenarios(abstract_node_scenarios):
logging.info("Rebooting the node %s" % (node))
openstack_node_name = self.openstackcloud.get_instance_id(node)
self.openstackcloud.reboot_instances(openstack_node_name)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance name: %s has been rebooted" % (node))
logging.info("node_reboot_scenario has been successfuly injected!")
except Exception as e:
@@ -143,7 +153,9 @@ class openstack_node_scenarios(abstract_node_scenarios):
"Failed to reboot node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_reboot_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to start the node
def helper_node_start_scenario(self, instance_kill_count, node_ip, timeout):
@@ -161,7 +173,9 @@ class openstack_node_scenarios(abstract_node_scenarios):
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("helper_node_start_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to stop the node
def helper_node_stop_scenario(self, instance_kill_count, node_ip, timeout):
@@ -176,7 +190,9 @@ class openstack_node_scenarios(abstract_node_scenarios):
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % (e))
logging.error("helper_node_stop_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
def helper_node_service_status(self, node_ip, service, ssh_private_key, timeout):
try:
@@ -187,4 +203,6 @@ class openstack_node_scenarios(abstract_node_scenarios):
except Exception as e:
logging.error("Failed to check service status. Encountered following exception:" " %s. Test Failed" % (e))
logging.error("helper_node_service_status injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()

View File

@@ -12,33 +12,38 @@ from kraken.node_actions.bm_node_scenarios import bm_node_scenarios
from kraken.node_actions.docker_node_scenarios import docker_node_scenarios
import kraken.node_actions.common_node_functions as common_node_functions
import kraken.cerberus.setup as cerberus
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
node_general = False
# Get the node scenarios object of the specified cloud type
def get_node_scenario_object(node_scenario):
# krkn_lib
def get_node_scenario_object(node_scenario, kubecli: KrknKubernetes):
if "cloud_type" not in node_scenario.keys() or node_scenario["cloud_type"] == "generic":
global node_general
node_general = True
return general_node_scenarios()
return general_node_scenarios(kubecli)
if node_scenario["cloud_type"] == "aws":
return aws_node_scenarios()
return aws_node_scenarios(kubecli)
elif node_scenario["cloud_type"] == "gcp":
return gcp_node_scenarios()
return gcp_node_scenarios(kubecli)
elif node_scenario["cloud_type"] == "openstack":
return openstack_node_scenarios()
return openstack_node_scenarios(kubecli)
elif node_scenario["cloud_type"] == "azure" or node_scenario["cloud_type"] == "az":
return azure_node_scenarios()
return azure_node_scenarios(kubecli)
elif node_scenario["cloud_type"] == "alibaba" or node_scenario["cloud_type"] == "alicloud":
return alibaba_node_scenarios()
return alibaba_node_scenarios(kubecli)
elif node_scenario["cloud_type"] == "bm":
return bm_node_scenarios(
node_scenario.get("bmc_info"), node_scenario.get("bmc_user", None), node_scenario.get("bmc_password", None)
node_scenario.get("bmc_info"), node_scenario.get("bmc_user", None), node_scenario.get("bmc_password", None),
kubecli
)
elif node_scenario["cloud_type"] == "docker":
return docker_node_scenarios()
return docker_node_scenarios(kubecli)
else:
logging.error(
"Cloud type " + node_scenario["cloud_type"] + " is not currently supported; "
@@ -49,41 +54,64 @@ def get_node_scenario_object(node_scenario):
# Run defined scenarios
def run(scenarios_list, config, wait_duration):
# krkn_lib
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
for node_scenario_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = node_scenario_config
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, node_scenario_config)
with open(node_scenario_config, "r") as f:
node_scenario_config = yaml.full_load(f)
for node_scenario in node_scenario_config["node_scenarios"]:
node_scenario_object = get_node_scenario_object(node_scenario)
node_scenario_object = get_node_scenario_object(node_scenario, kubecli)
if node_scenario["actions"]:
for action in node_scenario["actions"]:
start_time = int(time.time())
inject_node_scenario(action, node_scenario, node_scenario_object)
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.get_status(config, start_time, end_time)
logging.info("")
try:
inject_node_scenario(action, node_scenario, node_scenario_object, kubecli)
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.get_status(config, start_time, end_time)
logging.info("")
except (RuntimeError, Exception) as e:
scenario_telemetry.exitStatus = 1
failed_scenarios.append(node_scenario_config)
log_exception(node_scenario_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
# Inject the specified node scenario
def inject_node_scenario(action, node_scenario, node_scenario_object):
def inject_node_scenario(action, node_scenario, node_scenario_object, kubecli: KrknKubernetes):
generic_cloud_scenarios = ("stop_kubelet_scenario", "node_crash_scenario")
# Get the node scenario configurations
run_kill_count = node_scenario.get("runs", 1)
instance_kill_count = node_scenario.get("instance_count", 1)
node_name = node_scenario.get("node_name", "")
label_selector = node_scenario.get("label_selector", "")
timeout = node_scenario.get("timeout", 120)
service = node_scenario.get("service", "")
ssh_private_key = node_scenario.get("ssh_private_key", "~/.ssh/id_rsa")
run_kill_count = get_yaml_item_value(node_scenario, "runs", 1)
instance_kill_count = get_yaml_item_value(
node_scenario, "instance_count", 1
)
node_name = get_yaml_item_value(node_scenario, "node_name", "")
label_selector = get_yaml_item_value(node_scenario, "label_selector", "")
timeout = get_yaml_item_value(node_scenario, "timeout", 120)
service = get_yaml_item_value(node_scenario, "service", "")
ssh_private_key = get_yaml_item_value(
node_scenario, "ssh_private_key", "~/.ssh/id_rsa"
)
# Get the node to apply the scenario
if node_name:
node_name_list = node_name.split(",")
else:
node_name_list = [node_name]
for single_node_name in node_name_list:
nodes = common_node_functions.get_node(single_node_name, label_selector, instance_kill_count)
nodes = common_node_functions.get_node(single_node_name, label_selector, instance_kill_count, kubecli)
for single_node in nodes:
if node_general and action not in generic_cloud_scenarios:
logging.info("Scenario: " + action + " is not set up for generic cloud type, skipping action")

View File

@@ -12,6 +12,11 @@ import kraken.plugins.node_scenarios.ibmcloud_plugin as ibmcloud_plugin
from kraken.plugins.run_python_plugin import run_python_file
from kraken.plugins.network.ingress_shaping import network_chaos
from kraken.plugins.pod_network_outage.pod_network_outage_plugin import pod_outage
from kraken.plugins.pod_network_outage.pod_network_outage_plugin import pod_egress_shaping
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from kraken.plugins.pod_network_outage.pod_network_outage_plugin import pod_ingress_shaping
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import log_exception
@dataclasses.dataclass
@@ -213,21 +218,42 @@ PLUGINS = Plugins(
[
"error"
]
)
),
PluginStep(
pod_egress_shaping,
[
"error"
]
),
PluginStep(
pod_ingress_shaping,
[
"error"
]
)
]
)
def run(scenarios: List[str], kubeconfig_path: str, kraken_config: str, failed_post_scenarios: List[str], wait_duration: int) -> List[str]:
def run(scenarios: List[str], kubeconfig_path: str, kraken_config: str, failed_post_scenarios: List[str], wait_duration: int, telemetry: KrknTelemetryKubernetes) -> (List[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
for scenario in scenarios:
logging.info('scenario '+ str(scenario))
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario)
logging.info('scenario ' + str(scenario))
try:
PLUGINS.run(scenario, kubeconfig_path, kraken_config)
except Exception as e:
scenario_telemetry.exitStatus = 1
failed_post_scenarios.append(scenario)
logging.error("Error while running {}: {}".format(scenario, e))
return failed_post_scenarios
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
log_exception(scenario)
else:
scenario_telemetry.exitStatus = 0
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
scenario_telemetries.append(scenario_telemetry)
scenario_telemetry.endTimeStamp = time.time()
return failed_post_scenarios
return failed_post_scenarios, scenario_telemetries

File diff suppressed because it is too large

View File

@@ -1,15 +1,16 @@
import logging
from arcaflow_plugin_sdk import serialization
import arcaflow_plugin_kill_pod
import kraken.cerberus.setup as cerberus
import kraken.post_actions.actions as post_actions
import kraken.kubernetes.client as kubecli
import time
import yaml
import sys
import random
import arcaflow_plugin_kill_pod
import kraken.cerberus.setup as cerberus
import kraken.post_actions.actions as post_actions
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from arcaflow_plugin_sdk import serialization
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
# Run pod based scenarios
@@ -67,8 +68,23 @@ def run(kubeconfig_path, scenarios_list, config, failed_post_scenarios, wait_dur
return failed_post_scenarios
def container_run(kubeconfig_path, scenarios_list, config, failed_post_scenarios, wait_duration):
# krkn_lib
def container_run(kubeconfig_path,
scenarios_list,
config,
failed_post_scenarios,
wait_duration,
kubecli: KrknKubernetes,
telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
failed_scenarios = []
scenario_telemetries: list[ScenarioTelemetry] = []
for container_scenario_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = container_scenario_config[0]
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, container_scenario_config[0])
if len(container_scenario_config) > 1:
pre_action_output = post_actions.run(kubeconfig_path, container_scenario_config[1])
else:
@@ -78,43 +94,63 @@ def container_run(kubeconfig_path, scenarios_list, config, failed_post_scenarios
for cont_scenario in cont_scenario_config["scenarios"]:
# capture start time
start_time = int(time.time())
killed_containers = container_killing_in_pod(cont_scenario)
if len(container_scenario_config) > 1:
try:
try:
killed_containers = container_killing_in_pod(cont_scenario, kubecli)
if len(container_scenario_config) > 1:
failed_post_scenarios = post_actions.check_recovery(
kubeconfig_path, container_scenario_config, failed_post_scenarios, pre_action_output
kubeconfig_path,
container_scenario_config,
failed_post_scenarios,
pre_action_output
)
except Exception as e:
logging.error("Failed to run post action checks: %s" % e)
sys.exit(1)
else:
failed_post_scenarios = check_failed_containers(
killed_containers, cont_scenario.get("retry_wait", 120), kubecli
)
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
# capture end time
end_time = int(time.time())
# publish cerberus status
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
except (RuntimeError, Exception):
failed_scenarios.append(container_scenario_config[0])
log_exception(container_scenario_config[0])
scenario_telemetry.exitStatus = 1
# removed_exit
# sys.exit(1)
else:
failed_post_scenarios = check_failed_containers(
killed_containers, cont_scenario.get("retry_wait", 120)
)
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetries.append(scenario_telemetry)
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
# capture end time
end_time = int(time.time())
# publish cerberus status
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
logging.info("")
return failed_scenarios, scenario_telemetries
def container_killing_in_pod(cont_scenario):
scenario_name = cont_scenario.get("name", "")
namespace = cont_scenario.get("namespace", "*")
label_selector = cont_scenario.get("label_selector", None)
pod_names = cont_scenario.get("pod_names", [])
container_name = cont_scenario.get("container_name", "")
kill_action = cont_scenario.get("action", "kill 1")
kill_count = cont_scenario.get("count", 1)
def container_killing_in_pod(cont_scenario, kubecli: KrknKubernetes):
scenario_name = get_yaml_item_value(cont_scenario, "name", "")
namespace = get_yaml_item_value(cont_scenario, "namespace", "*")
label_selector = get_yaml_item_value(cont_scenario, "label_selector", None)
pod_names = get_yaml_item_value(cont_scenario, "pod_names", [])
container_name = get_yaml_item_value(cont_scenario, "container_name", "")
kill_action = get_yaml_item_value(cont_scenario, "action", 1)
kill_count = get_yaml_item_value(cont_scenario, "count", 1)
if not isinstance(kill_action, int):
logging.error("Please make sure the action parameter defined in the "
"config is an integer")
raise RuntimeError()
if (kill_action < 1) or (kill_action > 15):
logging.error("Only 1-15 kill signals are supported.")
raise RuntimeError()
kill_action = "kill " + str(kill_action)
if type(pod_names) != list:
logging.error("Please make sure your pod_names are in a list format")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
if len(pod_names) == 0:
if namespace == "*":
# returns double array of pod name and namespace
@@ -126,7 +162,9 @@ def container_killing_in_pod(cont_scenario):
if namespace == "*":
logging.error("You must specify the namespace to kill a container in a specific pod")
logging.error("Scenario " + scenario_name + " failed")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
pods = pod_names
# get container and pod name
container_pod_list = []
@@ -147,17 +185,19 @@ def container_killing_in_pod(cont_scenario):
if len(container_pod_list) == 0:
logging.error("Trying to kill more containers than were found, try lowering kill count")
logging.error("Scenario " + scenario_name + " failed")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
selected_container_pod = container_pod_list[random.randint(0, len(container_pod_list) - 1)]
for c_name in selected_container_pod[2]:
if container_name != "":
if c_name == container_name:
killed_container_list.append([selected_container_pod[0], selected_container_pod[1], c_name])
retry_container_killing(kill_action, selected_container_pod[0], selected_container_pod[1], c_name)
retry_container_killing(kill_action, selected_container_pod[0], selected_container_pod[1], c_name, kubecli)
break
else:
killed_container_list.append([selected_container_pod[0], selected_container_pod[1], c_name])
retry_container_killing(kill_action, selected_container_pod[0], selected_container_pod[1], c_name)
retry_container_killing(kill_action, selected_container_pod[0], selected_container_pod[1], c_name, kubecli)
break
container_pod_list.remove(selected_container_pod)
killed_count += 1
@@ -165,7 +205,7 @@ def container_killing_in_pod(cont_scenario):
return killed_container_list
def retry_container_killing(kill_action, podname, namespace, container_name):
def retry_container_killing(kill_action, podname, namespace, container_name, kubecli: KrknKubernetes):
i = 0
while i < 5:
logging.info("Killing container %s in pod %s (ns %s)" % (str(container_name), str(podname), str(namespace)))
@@ -178,10 +218,11 @@ def retry_container_killing(kill_action, podname, namespace, container_name):
time.sleep(2)
continue
else:
logging.warning(response)
continue
def check_failed_containers(killed_container_list, wait_time):
def check_failed_containers(killed_container_list, wait_time, kubecli: KrknKubernetes):
container_ready = []
timer = 0

View File

@@ -0,0 +1 @@
from .client import *

View File

@@ -1,49 +1,30 @@
import datetime
import os.path
import urllib3
import logging
import prometheus_api_client
import sys
import kraken.invoke.command as runcommand
import yaml
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
def alerts(prom_cli: KrknPrometheus, start_time, end_time, alert_profile):
# Initialize the client
def initialize_prom_client(distribution, prometheus_url, prometheus_bearer_token):
global prom_cli
prometheus_url, prometheus_bearer_token = instance(distribution, prometheus_url, prometheus_bearer_token)
if prometheus_url and prometheus_bearer_token:
bearer = "Bearer " + prometheus_bearer_token
headers = {"Authorization": bearer}
try:
prom_cli = prometheus_api_client.PrometheusConnect(url=prometheus_url, headers=headers, disable_ssl=True)
except Exception as e:
logging.error("Not able to initialize the client %s" % e)
if alert_profile is None or os.path.exists(alert_profile) is False:
logging.error(f"{alert_profile} alert profile does not exist")
sys.exit(1)
with open(alert_profile) as profile:
profile_yaml = yaml.safe_load(profile)
if not isinstance(profile_yaml, list):
logging.error(f"{alert_profile} wrong file format, alert profile must be "
f"a valid yaml file containing a list of items with 3 properties: "
f"expr, description, severity" )
sys.exit(1)
else:
prom_cli = None
for alert in profile_yaml:
if sorted(alert.keys()) != sorted(["expr", "description", "severity"]):
logging.error(f"wrong alert {alert}, skipping")
# Process custom prometheus query
def process_prom_query(query):
if prom_cli:
try:
return prom_cli.custom_query(query=query, params=None)
except Exception as e:
logging.error("Failed to get the metrics: %s" % e)
sys.exit(1)
else:
logging.info("Skipping the prometheus query as the prometheus client couldn't " "be initilized\n")
# Get prometheus details
def instance(distribution, prometheus_url, prometheus_bearer_token):
if distribution == "openshift" and not prometheus_url:
url = runcommand.invoke(
r"""oc get routes -n openshift-monitoring -o=jsonpath='{.items[?(@.metadata.name=="prometheus-k8s")].spec.host}'""" # noqa
)
prometheus_url = "https://" + url
if distribution == "openshift" and not prometheus_bearer_token:
prometheus_bearer_token = runcommand.invoke(
"oc create token -n openshift-monitoring prometheus-k8s --duration=12h "
"|| oc -n openshift-monitoring sa get-token prometheus-k8s "
"|| oc sa new-token -n openshift-monitoring prometheus-k8s"
)
return prometheus_url, prometheus_bearer_token
prom_cli.process_alert(alert,
datetime.datetime.fromtimestamp(start_time),
datetime.datetime.fromtimestamp(end_time))
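# --- illustrative sketch, not part of the change above ---
# One alert-profile entry of the form the loader above validates (a YAML list
# whose items carry exactly expr, description and severity); the expression and
# wording are made up for illustration. Each entry is then handed to
# prom_cli.process_alert() as shown above.
import yaml
example_profile = yaml.safe_load("""
- expr: up{job="apiserver"} == 0
  description: API server down (hypothetical example)
  severity: critical
""")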

View File

@@ -1,157 +1,247 @@
import logging
import random
import re
import sys
import time
import yaml
from ..cerberus import setup as cerberus
from ..kubernetes import client as kubecli
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
def run(scenarios_list, config):
# krkn_lib
def run(scenarios_list, config, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
"""
Reads the scenario config and creates a temp file to fill up the PVC
"""
failed_post_scenarios = ""
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
for app_config in scenarios_list:
if len(app_config) > 1:
with open(app_config, "r") as f:
config_yaml = yaml.full_load(f)
scenario_config = config_yaml["pvc_scenario"]
pvc_name = scenario_config.get("pvc_name", "")
pod_name = scenario_config.get("pod_name", "")
namespace = scenario_config.get("namespace", "")
target_fill_percentage = scenario_config.get(
"fill_percentage", "50"
)
duration = scenario_config.get("duration", 60)
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = app_config
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, app_config)
try:
if len(app_config) > 1:
with open(app_config, "r") as f:
config_yaml = yaml.full_load(f)
scenario_config = config_yaml["pvc_scenario"]
pvc_name = get_yaml_item_value(
scenario_config, "pvc_name", ""
)
pod_name = get_yaml_item_value(
scenario_config, "pod_name", ""
)
namespace = get_yaml_item_value(
scenario_config, "namespace", ""
)
target_fill_percentage = get_yaml_item_value(
scenario_config, "fill_percentage", "50"
)
duration = get_yaml_item_value(
scenario_config, "duration", 60
)
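# --- illustrative sketch, not part of the change above ---
# A pvc_scenario config file of the shape read here; the names and sizes are
# hypothetical. Either pvc_name or pod_name must be set, plus the namespace.
import yaml
example_pvc_scenario = yaml.safe_load("""
pvc_scenario:
  pvc_name: example-pvc        # hypothetical
  pod_name: ""                 # ignored when pvc_name is set
  namespace: example-ns        # hypothetical
  fill_percentage: 80
  duration: 60
""")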
logging.info(
"Input params:\n"
"pvc_name: '%s'\n"
"pod_name: '%s'\n"
"namespace: '%s'\n"
"target_fill_percentage: '%s%%'\nduration: '%ss'"
% (
str(pvc_name),
str(pod_name),
str(namespace),
str(target_fill_percentage),
str(duration)
)
)
# Check input params
if namespace is None:
logging.error(
"You must specify the namespace where the PVC is"
)
sys.exit(1)
if pvc_name is None and pod_name is None:
logging.error(
"You must specify the pvc_name or the pod_name"
)
sys.exit(1)
if pvc_name and pod_name:
logging.info(
"pod_name will be ignored, pod_name used will be "
"a retrieved from the pod used in the pvc_name"
"Input params:\n"
"pvc_name: '%s'\n"
"pod_name: '%s'\n"
"namespace: '%s'\n"
"target_fill_percentage: '%s%%'\nduration: '%ss'"
% (
str(pvc_name),
str(pod_name),
str(namespace),
str(target_fill_percentage),
str(duration)
)
)
# Get pod name
if pvc_name:
if pod_name:
logging.info(
"pod_name '%s' will be overridden with one of "
"the pods mounted in the PVC" % (str(pod_name))
)
pvc = kubecli.get_pvc_info(pvc_name, namespace)
try:
# random generator not used for
# security/cryptographic purposes.
pod_name = random.choice(pvc.podNames) # nosec
logging.info("Pod name: %s" % pod_name)
except Exception:
# Check input params
if namespace is None:
logging.error(
"Pod associated with %s PVC, on namespace %s, "
"not found" % (str(pvc_name), str(namespace))
"You must specify the namespace where the PVC is"
)
sys.exit(1)
# Get volume name
pod = kubecli.get_pod_info(name=pod_name, namespace=namespace)
if pod is None:
logging.error(
"Exiting as pod '%s' doesn't exist "
"in namespace '%s'" % (
str(pod_name),
str(namespace)
#sys.exit(1)
raise RuntimeError()
if pvc_name is None and pod_name is None:
logging.error(
"You must specify the pvc_name or the pod_name"
)
# sys.exit(1)
raise RuntimeError()
if pvc_name and pod_name:
logging.info(
"pod_name will be ignored, pod_name used will be "
"a retrieved from the pod used in the pvc_name"
)
)
sys.exit(1)
for volume in pod.volumes:
if volume.pvcName is not None:
volume_name = volume.name
pvc_name = volume.pvcName
# Get pod name
if pvc_name:
if pod_name:
logging.info(
"pod_name '%s' will be overridden with one of "
"the pods mounted in the PVC" % (str(pod_name))
)
pvc = kubecli.get_pvc_info(pvc_name, namespace)
break
if 'pvc' not in locals():
logging.error(
"Pod '%s' in namespace '%s' does not use a pvc" % (
try:
# random generator not used for
# security/cryptographic purposes.
pod_name = random.choice(pvc.podNames) # nosec
logging.info("Pod name: %s" % pod_name)
except Exception:
logging.error(
"Pod associated with %s PVC, on namespace %s, "
"not found" % (str(pvc_name), str(namespace))
)
# sys.exit(1)
raise RuntimeError()
# Get volume name
pod = kubecli.get_pod_info(name=pod_name, namespace=namespace)
if pod is None:
logging.error(
"Exiting as pod '%s' doesn't exist "
"in namespace '%s'" % (
str(pod_name),
str(namespace)
)
)
# sys.exit(1)
raise RuntimeError()
for volume in pod.volumes:
if volume.pvcName is not None:
volume_name = volume.name
pvc_name = volume.pvcName
pvc = kubecli.get_pvc_info(pvc_name, namespace)
break
if 'pvc' not in locals():
logging.error(
"Pod '%s' in namespace '%s' does not use a pvc" % (
str(pod_name),
str(namespace)
)
)
# sys.exit(1)
raise RuntimeError()
logging.info("Volume name: %s" % volume_name)
logging.info("PVC name: %s" % pvc_name)
# Get container name and mount path
for container in pod.containers:
for vol in container.volumeMounts:
if vol.name == volume_name:
mount_path = vol.mountPath
container_name = container.name
break
logging.info("Container path: %s" % container_name)
logging.info("Mount path: %s" % mount_path)
# Get PVC capacity and used bytes
command = "df %s -B 1024 | sed 1d" % (str(mount_path))
command_output = (
kubecli.exec_cmd_in_pod(
command,
pod_name,
namespace,
container_name
)
).split()
pvc_used_kb = int(command_output[2])
pvc_capacity_kb = pvc_used_kb + int(command_output[3])
logging.info("PVC used: %s KB" % pvc_used_kb)
logging.info("PVC capacity: %s KB" % pvc_capacity_kb)
# Check valid fill percentage
current_fill_percentage = pvc_used_kb / pvc_capacity_kb
if not (
current_fill_percentage * 100
< float(target_fill_percentage)
<= 99
):
logging.error(
"Target fill percentage (%.2f%%) is lower than "
"current fill percentage (%.2f%%) "
"or higher than 99%%" % (
target_fill_percentage,
current_fill_percentage * 100
)
)
# sys.exit(1)
raise RuntimeError()
# Calculate file size
file_size_kb = int(
(
float(
target_fill_percentage / 100
) * float(pvc_capacity_kb)
) - float(pvc_used_kb)
)
logging.debug("File size: %s KB" % file_size_kb)
file_name = "kraken.tmp"
logging.info(
"Creating %s file, %s KB size, in pod %s at %s (ns %s)"
% (
str(file_name),
str(file_size_kb),
str(pod_name),
str(mount_path),
str(namespace)
)
)
sys.exit(1)
logging.info("Volume name: %s" % volume_name)
logging.info("PVC name: %s" % pvc_name)
# Get container name and mount path
for container in pod.containers:
for vol in container.volumeMounts:
if vol.name == volume_name:
mount_path = vol.mountPath
container_name = container.name
break
logging.info("Container path: %s" % container_name)
logging.info("Mount path: %s" % mount_path)
# Get PVC capacity and used bytes
command = "df %s -B 1024 | sed 1d" % (str(mount_path))
command_output = (
start_time = int(time.time())
# Create temp file in the PVC
full_path = "%s/%s" % (str(mount_path), str(file_name))
command = "fallocate -l $((%s*1024)) %s" % (
str(file_size_kb),
str(full_path)
)
logging.debug(
"Create temp file in the PVC command:\n %s" % command
)
kubecli.exec_cmd_in_pod(
command,
pod_name,
namespace,
container_name,
"sh"
)
).split()
pvc_used_kb = int(command_output[2])
pvc_capacity_kb = pvc_used_kb + int(command_output[3])
logging.info("PVC used: %s KB" % pvc_used_kb)
logging.info("PVC capacity: %s KB" % pvc_capacity_kb)
# Check valid fill percentage
current_fill_percentage = pvc_used_kb / pvc_capacity_kb
if not (
current_fill_percentage * 100
< float(target_fill_percentage)
<= 99
):
logging.error(
"Target fill percentage (%.2f%%) is lower than "
"current fill percentage (%.2f%%) "
"or higher than 99%%" % (
target_fill_percentage,
current_fill_percentage * 100
)
# Check if file is created
command = "ls -lh %s" % (str(mount_path))
logging.debug("Check file is created command:\n %s" % command)
response = kubecli.exec_cmd_in_pod(
command, pod_name, namespace, container_name
)
sys.exit(1)
logging.info("\n" + str(response))
if str(file_name).lower() in str(response).lower():
logging.info(
"%s file successfully created" % (str(full_path))
)
else:
logging.error(
"Failed to create tmp file with %s size" % (
str(file_size_kb)
)
)
remove_temp_file(
file_name,
full_path,
pod_name,
namespace,
container_name,
mount_path,
file_size_kb,
kubecli
)
# sys.exit(1)
raise RuntimeError()
# Calculate file size
file_size_kb = int(
@@ -186,26 +276,25 @@ def run(scenarios_list, config):
"Create temp file in the PVC command:\n %s" % command
)
kubecli.exec_cmd_in_pod(
command, pod_name, namespace, container_name, "sh"
command, pod_name, namespace, container_name
)
# Check if file is created
command = "ls -lh %s" % (str(mount_path))
logging.debug("Check file is created command:\n %s" % command)
response = kubecli.exec_cmd_in_pod(
command, pod_name, namespace, container_name, "sh"
command, pod_name, namespace, container_name
)
logging.info("\n" + str(response))
if str(file_name).lower() in str(response).lower():
logging.info(
"%s file successfully created" % (str(full_path))
)
else:
logging.error(
"Failed to create tmp file with %s size" % (
str(file_size_kb)
"Waiting for the specified duration in the config: %ss" % (
duration
)
)
time.sleep(duration)
logging.info("Finish waiting")
remove_temp_file(
file_name,
full_path,
@@ -213,38 +302,31 @@ def run(scenarios_list, config):
namespace,
container_name,
mount_path,
file_size_kb
file_size_kb,
kubecli
)
sys.exit(1)
# Wait for the specified duration
logging.info(
"Waiting for the specified duration in the config: %ss" % (
duration
end_time = int(time.time())
cerberus.publish_kraken_status(
config,
failed_post_scenarios,
start_time,
end_time
)
)
time.sleep(duration)
logging.info("Finish waiting")
except (RuntimeError, Exception):
scenario_telemetry.exitStatus = 1
failed_scenarios.append(app_config)
log_exception(app_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetries.append(scenario_telemetry)
remove_temp_file(
file_name,
full_path,
pod_name,
namespace,
container_name,
mount_path,
file_size_kb
)
end_time = int(time.time())
cerberus.publish_kraken_status(
config,
failed_post_scenarios,
start_time,
end_time
)
return failed_scenarios, scenario_telemetries
# krkn_lib
def remove_temp_file(
file_name,
full_path,
@@ -252,19 +334,19 @@ def remove_temp_file(
namespace,
container_name,
mount_path,
file_size_kb
file_size_kb,
kubecli: KrknKubernetes
):
command = "rm -f %s" % (str(full_path))
logging.debug("Remove temp file from the PVC command:\n %s" % command)
kubecli.exec_cmd_in_pod(command, pod_name, namespace, container_name, "sh")
kubecli.exec_cmd_in_pod(command, pod_name, namespace, container_name)
command = "ls -lh %s" % (str(mount_path))
logging.debug("Check temp file is removed command:\n %s" % command)
response = kubecli.exec_cmd_in_pod(
command,
pod_name,
namespace,
container_name,
"sh"
container_name
)
logging.info("\n" + str(response))
if not (str(file_name).lower() in str(response).lower()):
@@ -273,7 +355,7 @@ def remove_temp_file(
logging.error(
"Failed to delete tmp file with %s size" % (str(file_size_kb))
)
sys.exit(1)
raise RuntimeError()
def toKbytes(value):
@@ -282,7 +364,7 @@ def toKbytes(value):
"PVC capacity %s does not match expression "
"regexp '^[0-9]+[K|M|G|T]i$'"
)
sys.exit(1)
raise RuntimeError()
unit = {"K": 0, "M": 1, "G": 2, "T": 3}
base = 1024 if ("i" in value) else 1000
exp = unit[value[-2:-1]]

View File

@@ -0,0 +1,325 @@
import time
import random
import logging
import kraken.cerberus.setup as cerberus
import kraken.post_actions.actions as post_actions
import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception
def delete_objects(kubecli, namespace):
services = delete_all_services_namespace(kubecli, namespace)
daemonsets = delete_all_daemonset_namespace(kubecli, namespace)
statefulsets = delete_all_statefulsets_namespace(kubecli, namespace)
replicasets = delete_all_replicaset_namespace(kubecli, namespace)
deployments = delete_all_deployment_namespace(kubecli, namespace)
objects = { "daemonsets": daemonsets,
"deployments": deployments,
"replicasets": replicasets,
"statefulsets": statefulsets,
"services": services
}
return objects
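# --- illustrative sketch, not part of the change above ---
# Shape of the bookkeeping dict returned above (all names hypothetical);
# check_all_running_deployment() later compares these per-kind counts against
# what the cluster reports to decide whether a namespace has recovered.
example_killed_objects = {
    "daemonsets": ["example-ds"],
    "deployments": ["example-api", "example-worker"],
    "replicasets": ["example-api-7c9f"],
    "statefulsets": [],
    "services": ["example-api"],
}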
def get_list_running_pods(kubecli: KrknKubernetes, namespace: str):
running_pods = []
pods = kubecli.list_pods(namespace)
for pod in pods:
pod_status = kubecli.get_pod_info(pod, namespace)
if pod_status and pod_status.status == "Running":
running_pods.append(pod)
logging.info('All running pods: ' + str(running_pods))
return running_pods
def delete_all_deployment_namespace(kubecli: KrknKubernetes, namespace: str):
"""
Delete all the deployments in the specified namespace
:param kubecli: krkn kubernetes python package
:param namespace: namespace
"""
try:
deployments = kubecli.get_deployment_ns(namespace)
for deployment in deployments:
logging.info("Deleting deployment" + deployment)
kubecli.delete_deployment(deployment, namespace)
except Exception as e:
logging.error(
"Exception when calling delete_all_deployment_namespace: %s\n",
str(e),
)
raise e
return deployments
def delete_all_daemonset_namespace(kubecli: KrknKubernetes, namespace: str):
"""
Delete all the daemonset in the specified namespace
:param kubecli: krkn kubernetes python package
:param namespace: namespace
"""
try:
daemonsets = kubecli.get_daemonset(namespace)
for daemonset in daemonsets:
logging.info("Deleting daemonset" + daemonset)
kubecli.delete_daemonset(daemonset, namespace)
except Exception as e:
logging.error(
"Exception when calling delete_all_daemonset_namespace: %s\n",
str(e),
)
raise e
return daemonsets
def delete_all_statefulsets_namespace(kubecli: KrknKubernetes, namespace: str):
"""
Delete all the statefulsets in the specified namespace
:param kubecli: krkn kubernetes python package
:param namespace: namespace
"""
try:
statefulsets = kubecli.get_all_statefulset(namespace)
for statefulset in statefulsets:
logging.info("Deleting statefulsets" + statefulsets)
kubecli.delete_statefulset(statefulset, namespace)
except Exception as e:
logging.error(
"Exception when calling delete_all_statefulsets_namespace: %s\n",
str(e),
)
raise e
return statefulsets
def delete_all_replicaset_namespace(kubecli: KrknKubernetes, namespace: str):
"""
Delete all the replicasets in the specified namespace
:param kubecli: krkn kubernetes python package
:param namespace: namespace
"""
try:
replicasets = kubecli.get_all_replicasets(namespace)
for replicaset in replicasets:
logging.info("Deleting replicaset" + replicaset)
kubecli.delete_replicaset(replicaset, namespace)
except Exception as e:
logging.error(
"Exception when calling delete_all_replicaset_namespace: %s\n",
str(e),
)
raise e
return replicasets
def delete_all_services_namespace(kubecli: KrknKubernetes, namespace: str):
"""
Delete all the services in the specified namespace
:param kubecli: krkn kubernetes python package
:param namespace: namespace
"""
try:
services = kubecli.get_all_services(namespace)
for service in services:
logging.info("Deleting services" + service)
kubecli.delete_services(service, namespace)
except Exception as e:
logging.error(
"Exception when calling delete_all_services_namespace: %s\n",
str(e),
)
raise e
return services
# krkn_lib
def run(
scenarios_list,
config,
wait_duration,
failed_post_scenarios,
kubeconfig_path,
kubecli: KrknKubernetes,
telemetry: KrknTelemetryKubernetes
) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
for scenario_config in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario_config[0]
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario_config[0])
try:
if len(scenario_config) > 1:
pre_action_output = post_actions.run(kubeconfig_path, scenario_config[1])
else:
pre_action_output = ""
with open(scenario_config[0], "r") as f:
scenario_config_yaml = yaml.full_load(f)
for scenario in scenario_config_yaml["scenarios"]:
scenario_namespace = get_yaml_item_value(
scenario, "namespace", ""
)
scenario_label = get_yaml_item_value(
scenario, "label_selector", ""
)
if scenario_namespace is not None and scenario_namespace.strip() != "":
if scenario_label is not None and scenario_label.strip() != "":
logging.error("You can only have namespace or label set in your namespace scenario")
logging.error(
"Current scenario config has namespace '%s' and label selector '%s'"
% (scenario_namespace, scenario_label)
)
logging.error(
"Please set either namespace to blank ('') or label_selector to blank ('') to continue"
)
# removed_exit
# sys.exit(1)
raise RuntimeError()
delete_count = get_yaml_item_value(
scenario, "delete_count", 1
)
run_count = get_yaml_item_value(scenario, "runs", 1)
run_sleep = get_yaml_item_value(scenario, "sleep", 10)
wait_time = get_yaml_item_value(scenario, "wait_time", 30)
logging.info(str(scenario_namespace) + str(scenario_label) + str(delete_count) + str(run_count) + str(run_sleep) + str(wait_time))
logging.info("done")
start_time = int(time.time())
for i in range(run_count):
killed_namespaces = {}
namespaces = kubecli.check_namespaces([scenario_namespace], scenario_label)
for j in range(delete_count):
if len(namespaces) == 0:
logging.error(
"Couldn't delete %s namespaces, not enough namespaces matching %s with label %s"
% (str(run_count), scenario_namespace, str(scenario_label))
)
# removed_exit
# sys.exit(1)
raise RuntimeError()
selected_namespace = namespaces[random.randint(0, len(namespaces) - 1)]
logging.info('Delete objects in selected namespace: ' + selected_namespace )
try:
# delete all pods in namespace
objects = delete_objects(kubecli,selected_namespace)
killed_namespaces[selected_namespace] = objects
logging.info("Deleted all objects in namespace %s was successful" % str(selected_namespace))
except Exception as e:
logging.info("Delete all objects in namespace %s was unsuccessful" % str(selected_namespace))
logging.info("Namespace action error: " + str(e))
raise RuntimeError()
namespaces.remove(selected_namespace)
logging.info("Waiting %s seconds between namespace deletions" % str(run_sleep))
time.sleep(run_sleep)
logging.info("Waiting for the specified duration: %s" % wait_duration)
time.sleep(wait_duration)
if len(scenario_config) > 1:
try:
failed_post_scenarios = post_actions.check_recovery(
kubeconfig_path, scenario_config, failed_post_scenarios, pre_action_output
)
except Exception as e:
logging.error("Failed to run post action checks: %s" % e)
# removed_exit
# sys.exit(1)
raise RuntimeError()
else:
failed_post_scenarios = check_all_running_deployment(killed_namespaces, wait_time, kubecli)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
except (Exception, RuntimeError):
scenario_telemetry.exitStatus = 1
failed_scenarios.append(scenario_config[0])
log_exception(scenario_config[0])
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
def check_all_running_pods(kubecli: KrknKubernetes, namespace_name, wait_time):
timer = 0
while timer < wait_time:
pod_list = kubecli.list_pods(namespace_name)
pods_running = 0
for pod in pod_list:
pod_info = kubecli.get_pod_info(pod, namespace_name)
if pod_info.status != "Running" and pod_info.status != "Succeeded":
logging.info("Pods %s still not running or completed" % pod_info.name)
break
pods_running += 1
if len(pod_list) == pods_running:
break
timer += 5
time.sleep(5)
logging.info("Waiting 5 seconds for pods to become active")
# krkn_lib
def check_all_running_deployment(killed_namespaces, wait_time, kubecli: KrknKubernetes):
timer = 0
while timer < wait_time and killed_namespaces:
still_missing_ns = killed_namespaces.copy()
for namespace_name, objects in killed_namespaces.items():
still_missing_obj = objects.copy()
for obj_name, obj_list in objects.items():
if "deployments" == obj_name:
deployments = kubecli.get_deployment_ns(namespace_name)
if len(obj_list) == len(deployments):
still_missing_obj.pop(obj_name)
elif "replicasets" == obj_name:
replicasets = kubecli.get_all_replicasets(namespace_name)
if len(obj_list) == len(replicasets):
still_missing_obj.pop(obj_name)
elif "statefulsets" == obj_name:
statefulsets = kubecli.get_all_statefulset(namespace_name)
if len(obj_list) == len(statefulsets):
still_missing_obj.pop(obj_name)
elif "services" == obj_name:
services = kubecli.get_all_services(namespace_name)
if len(obj_list) == len(services):
still_missing_obj.pop(obj_name)
elif "daemonsets" == obj_name:
daemonsets = kubecli.get_daemonset(namespace_name)
if len(obj_list) == len(daemonsets):
still_missing_obj.pop(obj_name)
logging.info("Still missing objects " + str(still_missing_obj))
killed_namespaces[namespace_name] = still_missing_obj.copy()
if len(killed_namespaces[namespace_name].keys()) == 0:
logging.info("Wait for pods to become running for namespace: " + namespace_name)
check_all_running_pods(kubecli, namespace_name, wait_time)
still_missing_ns.pop(namespace_name)
killed_namespaces = still_missing_ns
if len(killed_namespaces.keys()) == 0:
return []
timer += 10
time.sleep(10)
logging.info("Waiting 10 seconds for objects in namespaces to become active")
logging.error("Objects are still not ready after waiting " + str(wait_time) + "seconds")
logging.error("Non active namespaces " + str(killed_namespaces))
return killed_namespaces

View File

@@ -1,19 +1,18 @@
#!/usr/bin/env python
import sys
import yaml
import logging
import time
from multiprocessing.pool import ThreadPool
from ..cerberus import setup as cerberus
from ..kubernetes import client as kubecli
from ..post_actions import actions as post_actions
from ..node_actions.aws_node_scenarios import AWS
from ..node_actions.openstack_node_scenarios import OPENSTACKCLOUD
from ..node_actions.az_node_scenarios import Azure
from ..node_actions.gcp_node_scenarios import GCP
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import log_exception
def multiprocess_nodes(cloud_object_function, nodes):
try:
@@ -40,7 +39,8 @@ def multiprocess_nodes(cloud_object_function, nodes):
# Inject the cluster shut down scenario
def cluster_shut_down(shut_down_config):
# krkn_lib
def cluster_shut_down(shut_down_config, kubecli: KrknKubernetes):
runs = shut_down_config["runs"]
shut_down_duration = shut_down_config["shut_down_duration"]
cloud_type = shut_down_config["cloud_type"]
@@ -58,7 +58,9 @@ def cluster_shut_down(shut_down_config):
"Cloud type %s is not currently supported for cluster shut down" %
cloud_type
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
nodes = kubecli.list_nodes()
node_id = []
@@ -125,31 +127,60 @@ def cluster_shut_down(shut_down_config):
logging.info("Successfully injected cluster_shut_down scenario!")
# krkn_lib
def run(scenarios_list, config, wait_duration):
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
failed_post_scenarios = []
failed_scenarios = []
scenario_telemetries: list[ScenarioTelemetry] = []
for shut_down_config in scenarios_list:
if len(shut_down_config) > 1:
pre_action_output = post_actions.run("", shut_down_config[1])
else:
pre_action_output = ""
with open(shut_down_config[0], "r") as f:
config_path = shut_down_config
pre_action_output = ""
if isinstance(shut_down_config, list):
if len(shut_down_config) == 0:
raise Exception("bad config file format for shutdown scenario")
config_path = shut_down_config[0]
if len(shut_down_config) > 1:
pre_action_output = post_actions.run("", shut_down_config[1])
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = config_path
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, config_path)
with open(config_path, "r") as f:
shut_down_config_yaml = yaml.full_load(f)
shut_down_config_scenario = \
shut_down_config_yaml["cluster_shut_down_scenario"]
start_time = int(time.time())
cluster_shut_down(shut_down_config_scenario)
logging.info(
"Waiting for the specified duration: %s" % (wait_duration)
)
time.sleep(wait_duration)
failed_post_scenarios = post_actions.check_recovery(
"", shut_down_config, failed_post_scenarios, pre_action_output
)
end_time = int(time.time())
cerberus.publish_kraken_status(
config,
failed_post_scenarios,
start_time,
end_time
)
try:
cluster_shut_down(shut_down_config_scenario, kubecli)
logging.info(
"Waiting for the specified duration: %s" % (wait_duration)
)
time.sleep(wait_duration)
failed_post_scenarios = post_actions.check_recovery(
"", shut_down_config, failed_post_scenarios, pre_action_output
)
end_time = int(time.time())
cerberus.publish_kraken_status(
config,
failed_post_scenarios,
start_time,
end_time
)
except (RuntimeError, Exception):
log_exception(config_path)
failed_scenarios.append(config_path)
scenario_telemetry.exitStatus = 1
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -2,17 +2,22 @@ import datetime
import time
import logging
import re
import sys
import yaml
import random
from krkn_lib import utils
from kubernetes.client import ApiException
from ..cerberus import setup as cerberus
from ..kubernetes import client as kubecli
from ..invoke import command as runcommand
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import get_yaml_item_value, log_exception, get_random_string
def pod_exec(pod_name, command, namespace, container_name):
i = 0
# krkn_lib
def pod_exec(pod_name, command, namespace, container_name, kubecli:KrknKubernetes):
for i in range(5):
response = kubecli.exec_cmd_in_pod(
command,
@@ -34,14 +39,8 @@ def pod_exec(pod_name, command, namespace, container_name):
return response
def node_debug(node_name, command):
response = runcommand.invoke(
"oc debug node/" + node_name + " -- chroot /host " + command
)
return response
def get_container_name(pod_name, namespace, container_name=""):
# krkn_lib
def get_container_name(pod_name, namespace, kubecli:KrknKubernetes, container_name=""):
container_names = kubecli.get_containers_in_pod(pod_name, namespace)
if container_name != "":
@@ -63,14 +62,46 @@ def get_container_name(pod_name, namespace, container_name=""):
return container_name
def skew_time(scenario):
skew_command = "date --set "
if scenario["action"] == "skew_date":
skewed_date = "00-01-01"
skew_command += skewed_date
elif scenario["action"] == "skew_time":
skewed_time = "01:01:01"
skew_command += skewed_time
def skew_node(node_name: str, action: str, kubecli: KrknKubernetes):
pod_namespace = "default"
status_pod_name = f"time-skew-pod-{get_random_string(5)}"
skew_pod_name = f"time-skew-pod-{get_random_string(5)}"
ntp_enabled = True
logging.info(f'Creating pod to skew {"time" if action == "skew_time" else "date"} on node {node_name}')
status_command = ["timedatectl"]
param = "2001-01-01"
skew_command = ["timedatectl", "set-time"]
if action == "skew_time":
skew_command.append("01:01:01")
else:
skew_command.append("2001-01-01")
try:
status_response = kubecli.exec_command_on_node(node_name, status_command, status_pod_name, pod_namespace)
if "Network time on: no" in status_response:
ntp_enabled = False
logging.warning(f'NTP inactive on node {node_name}, skewing {"time" if action == "skew_time" else "date"} to {param}')
pod_exec(skew_pod_name, skew_command, pod_namespace, None, kubecli)
else:
logging.info(f'NTP is active on node {node_name}; {"time" if action == "skew_time" else "date"} skewing will have no effect, skipping')
except ApiException:
pass
except Exception as e:
logging.error(f"failed to execute skew command in pod: {e}")
finally:
kubecli.delete_pod(status_pod_name, pod_namespace)
if not ntp_enabled:
kubecli.delete_pod(skew_pod_name, pod_namespace)
# krkn_lib
def skew_time(scenario, kubecli:KrknKubernetes):
if scenario["action"] not in ["skew_date","skew_time"]:
raise RuntimeError(f'{scenario["action"]} is not a valid time skew action')
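# --- illustrative sketch, not part of the change above ---
# Two time scenario entries of the shape this function walks (all values
# hypothetical): one node-targeted, one pod-targeted.
example_time_scenarios = [
    {"action": "skew_time", "object_type": "node",
     "label_selector": "node-role.kubernetes.io/worker"},
    {"action": "skew_date", "object_type": "pod",
     "object_name": ["example-pod"], "namespace": "example-ns",
     "container_name": ""},
]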
if "node" in scenario["object_type"]:
node_names = []
if "object_name" in scenario.keys() and scenario["object_name"]:
@@ -80,20 +111,28 @@ def skew_time(scenario):
scenario["label_selector"]
):
node_names = kubecli.list_nodes(scenario["label_selector"])
for node in node_names:
node_debug(node, skew_command)
skew_node(node, scenario["action"], kubecli)
logging.info("Reset date/time on node " + str(node))
return "node", node_names
elif "pod" in scenario["object_type"]:
container_name = scenario.get("container_name", "")
skew_command = "date --date "
if scenario["action"] == "skew_date":
skewed_date = "00-01-01"
skew_command += skewed_date
elif scenario["action"] == "skew_time":
skewed_time = "01:01:01"
skew_command += skewed_time
container_name = get_yaml_item_value(scenario, "container_name", "")
pod_names = []
if "object_name" in scenario.keys() and scenario["object_name"]:
for name in scenario["object_name"]:
if "namespace" not in scenario.keys():
logging.error("Need to set namespace when using pod name")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
pod_names.append([name, scenario["namespace"]])
elif "namespace" in scenario.keys() and scenario["namespace"]:
if "label_selector" not in scenario.keys():
@@ -127,20 +166,26 @@ def skew_time(scenario):
"Cannot find pods matching the namespace/label_selector, "
"please check"
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
pod_counter = 0
for pod in pod_names:
if len(pod) > 1:
selected_container_name = get_container_name(
pod[0],
pod[1],
container_name
kubecli,
container_name,
)
pod_exec_response = pod_exec(
pod[0],
skew_command,
pod[1],
selected_container_name
selected_container_name,
kubecli,
)
if pod_exec_response is False:
logging.error(
@@ -148,19 +193,23 @@ def skew_time(scenario):
"in pod %s in namespace %s"
% (selected_container_name, pod[0], pod[1])
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
pod_names[pod_counter].append(selected_container_name)
else:
selected_container_name = get_container_name(
pod,
scenario["namespace"],
kubecli,
container_name
)
pod_exec_response = pod_exec(
pod,
skew_command,
scenario["namespace"],
selected_container_name
selected_container_name,
kubecli
)
if pod_exec_response is False:
logging.error(
@@ -172,7 +221,9 @@ def skew_time(scenario):
scenario["namespace"]
)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
pod_names[pod_counter].append(selected_container_name)
logging.info("Reset date/time on pod " + str(pod[0]))
pod_counter += 1
@@ -216,14 +267,16 @@ def string_to_date(obj_datetime):
return datetime.datetime(datetime.MINYEAR, 1, 1)
def check_date_time(object_type, names):
# krkn_lib
def check_date_time(object_type, names, kubecli:KrknKubernetes):
skew_command = "date"
not_reset = []
max_retries = 30
if object_type == "node":
for node_name in names:
first_date_time = datetime.datetime.utcnow()
node_datetime_string = node_debug(node_name, skew_command)
check_pod_name = f"time-skew-pod-{get_random_string(5)}"
node_datetime_string = kubecli.exec_command_on_node(node_name, [skew_command], check_pod_name)
node_datetime = string_to_date(node_datetime_string)
counter = 0
while not (
@@ -234,7 +287,8 @@ def check_date_time(object_type, names):
"Date/time on node %s still not reset, "
"waiting 10 seconds and retrying" % node_name
)
node_datetime_string = node_debug(node_name, skew_command)
node_datetime_string = kubecli.exec_cmd_in_pod([skew_command], check_pod_name, "default")
node_datetime = string_to_date(node_datetime_string)
counter += 1
if counter > max_retries:
@@ -248,6 +302,8 @@ def check_date_time(object_type, names):
logging.info(
"Date in node " + str(node_name) + " reset properly"
)
kubecli.delete_pod(check_pod_name)
elif object_type == "pod":
for pod_name in names:
first_date_time = datetime.datetime.utcnow()
@@ -256,7 +312,8 @@ def check_date_time(object_type, names):
pod_name[0],
skew_command,
pod_name[1],
pod_name[2]
pod_name[2],
kubecli
)
pod_datetime = string_to_date(pod_datetime_string)
while not (
@@ -271,7 +328,8 @@ def check_date_time(object_type, names):
pod_name[0],
skew_command,
pod_name[1],
pod_name[2]
pod_name[2],
kubecli
)
pod_datetime = string_to_date(pod_datetime)
counter += 1
@@ -289,24 +347,42 @@ def check_date_time(object_type, names):
return not_reset
def run(scenarios_list, config, wait_duration):
# krkn_lib
def run(scenarios_list, config, wait_duration, kubecli:KrknKubernetes, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]):
failed_scenarios = []
scenario_telemetries: list[ScenarioTelemetry] = []
for time_scenario_config in scenarios_list:
with open(time_scenario_config, "r") as f:
scenario_config = yaml.full_load(f)
for time_scenario in scenario_config["time_scenarios"]:
start_time = int(time.time())
object_type, object_names = skew_time(time_scenario)
not_reset = check_date_time(object_type, object_names)
if len(not_reset) > 0:
logging.info("Object times were not reset")
logging.info(
"Waiting for the specified duration: %s" % (wait_duration)
)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(
config,
not_reset,
start_time,
end_time
)
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = time_scenario_config
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, time_scenario_config)
try:
with open(time_scenario_config, "r") as f:
scenario_config = yaml.full_load(f)
for time_scenario in scenario_config["time_scenarios"]:
start_time = int(time.time())
object_type, object_names = skew_time(time_scenario, kubecli)
not_reset = check_date_time(object_type, object_names, kubecli)
if len(not_reset) > 0:
logging.info("Object times were not reset")
logging.info(
"Waiting for the specified duration: %s" % (wait_duration)
)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(
config,
not_reset,
start_time,
end_time
)
except (RuntimeError, Exception):
scenario_telemetry.exitStatus = 1
log_exception(time_scenario_config)
failed_scenarios.append(time_scenario_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries

View File

@@ -1,100 +1,121 @@
import yaml
import sys
import logging
import time
from ..node_actions.aws_node_scenarios import AWS
from ..cerberus import setup as cerberus
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.utils.functions import log_exception
def run(scenarios_list, config, wait_duration):
def run(scenarios_list, config, wait_duration, telemetry: KrknTelemetryKubernetes) -> (list[str], list[ScenarioTelemetry]) :
"""
filters the subnet of interest and applies the network acl
to create zone outage
"""
failed_post_scenarios = ""
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
for zone_outage_config in scenarios_list:
if len(zone_outage_config) > 1:
with open(zone_outage_config, "r") as f:
zone_outage_config_yaml = yaml.full_load(f)
scenario_config = zone_outage_config_yaml["zone_outage"]
vpc_id = scenario_config["vpc_id"]
subnet_ids = scenario_config["subnet_id"]
duration = scenario_config["duration"]
cloud_type = scenario_config["cloud_type"]
ids = {}
acl_ids_created = []
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = zone_outage_config
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, zone_outage_config)
try:
if len(zone_outage_config) > 1:
with open(zone_outage_config, "r") as f:
zone_outage_config_yaml = yaml.full_load(f)
scenario_config = zone_outage_config_yaml["zone_outage"]
vpc_id = scenario_config["vpc_id"]
subnet_ids = scenario_config["subnet_id"]
duration = scenario_config["duration"]
cloud_type = scenario_config["cloud_type"]
ids = {}
acl_ids_created = []
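# --- illustrative sketch, not part of the change above ---
# A zone_outage config of the shape read above; the IDs are placeholders, and
# only cloud_type "aws" is handled by this scenario.
import yaml
example_zone_outage = yaml.safe_load("""
zone_outage:
  cloud_type: aws
  vpc_id: vpc-0example
  subnet_id:
    - subnet-0example
  duration: 600
""")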
if cloud_type.lower() == "aws":
cloud_object = AWS()
else:
logging.error(
"Cloud type %s is not currently supported for "
"zone outage scenarios"
% cloud_type
)
sys.exit(1)
start_time = int(time.time())
for subnet_id in subnet_ids:
logging.info("Targeting subnet_id")
network_association_ids = []
associations, original_acl_id = \
cloud_object.describe_network_acls(vpc_id, subnet_id)
for entry in associations:
if entry["SubnetId"] == subnet_id:
network_association_ids.append(
entry["NetworkAclAssociationId"]
)
logging.info(
"Network association ids associated with "
"the subnet %s: %s"
% (subnet_id, network_association_ids)
)
acl_id = cloud_object.create_default_network_acl(vpc_id)
new_association_id = \
cloud_object.replace_network_acl_association(
network_association_ids[0], acl_id
if cloud_type.lower() == "aws":
cloud_object = AWS()
else:
logging.error(
"Cloud type %s is not currently supported for "
"zone outage scenarios"
% cloud_type
)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# capture the original_acl_id, created_acl_id and
# new association_id to use during the recovery
ids[new_association_id] = original_acl_id
acl_ids_created.append(acl_id)
start_time = int(time.time())
# wait for the specified duration
logging.info(
"Waiting for the specified duration "
"in the config: %s" % (duration)
)
time.sleep(duration)
for subnet_id in subnet_ids:
logging.info("Targeting subnet_id")
network_association_ids = []
associations, original_acl_id = \
cloud_object.describe_network_acls(vpc_id, subnet_id)
for entry in associations:
if entry["SubnetId"] == subnet_id:
network_association_ids.append(
entry["NetworkAclAssociationId"]
)
logging.info(
"Network association ids associated with "
"the subnet %s: %s"
% (subnet_id, network_association_ids)
)
acl_id = cloud_object.create_default_network_acl(vpc_id)
new_association_id = \
cloud_object.replace_network_acl_association(
network_association_ids[0], acl_id
)
# replace the applied acl with the previous acl in use
for new_association_id, original_acl_id in ids.items():
cloud_object.replace_network_acl_association(
new_association_id,
original_acl_id
# capture the original_acl_id, created_acl_id and
# new association_id to use during the recovery
ids[new_association_id] = original_acl_id
acl_ids_created.append(acl_id)
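# --- illustrative sketch, not part of the change above ---
# After one loop iteration the bookkeeping might look like this (IDs are
# hypothetical); the recovery code below walks ids to restore each subnet's
# original ACL and then deletes every temporary ACL in acl_ids_created.
example_ids = {"aclassoc-0example": "acl-0original"}
example_acl_ids_created = ["acl-0blocking"]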
# wait for the specified duration
logging.info(
"Waiting for the specified duration "
"in the config: %s" % (duration)
)
logging.info(
"Wating for 60 seconds to make sure "
"the changes are in place"
)
time.sleep(60)
time.sleep(duration)
# delete the network acl created for the run
for acl_id in acl_ids_created:
cloud_object.delete_network_acl(acl_id)
# replace the applied acl with the previous acl in use
for new_association_id, original_acl_id in ids.items():
cloud_object.replace_network_acl_association(
new_association_id,
original_acl_id
)
logging.info(
"Wating for 60 seconds to make sure "
"the changes are in place"
)
time.sleep(60)
logging.info(
"End of scenario. "
"Waiting for the specified duration: %s" % (wait_duration)
)
time.sleep(wait_duration)
# delete the network acl created for the run
for acl_id in acl_ids_created:
cloud_object.delete_network_acl(acl_id)
logging.info(
"End of scenario. "
"Waiting for the specified duration: %s" % (wait_duration)
)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(
config,
failed_post_scenarios,
start_time,
end_time
)
except (RuntimeError, Exception):
scenario_telemetry.exitStatus = 1
failed_scenarios.append(zone_outage_config)
log_exception(zone_outage_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
end_time = int(time.time())
cerberus.publish_kraken_status(
config,
failed_post_scenarios,
start_time,
end_time
)

Binary image file changed (248 KiB before, 283 KiB after); contents not shown.

View File

@@ -1,39 +1,40 @@
coverage
datetime
pyfiglet
PyYAML>=5.1
requests
boto3
google-api-python-client
azure-mgmt-compute
azure-keyvault
azure-identity
kubernetes
oauth2client>=4.1.3
python-openstackclient
gitpython
paramiko
setuptools==65.5.1
openshift-client
python-ipmi
podman-compose
docker-compose
docker
jinja2==3.0.3
itsdangerous==2.0.1
werkzeug==2.2.3
lxml >= 4.3.0
pyVmomi >= 6.7
zope.interface==5.4.0
aliyun-python-sdk-core==2.13.36
aliyun-python-sdk-ecs==4.24.25
arcaflow-plugin-sdk>=0.9.0
wheel
service_identity
git+https://github.com/vmware/vsphere-automation-sdk-python.git@v8.0.0.0
arcaflow==0.9.0
arcaflow-plugin-sdk==0.10.0
azure-identity
azure-keyvault
azure-mgmt-compute
boto3==1.28.61
coverage
datetime
docker
docker-compose
git+https://github.com/redhat-chaos/arcaflow-plugin-kill-pod.git
arcaflow >= 0.4.1
prometheus_api_client
git+https://github.com/vmware/vsphere-automation-sdk-python.git@v8.0.0.0
gitpython
google-api-python-client
ibm_cloud_sdk_core
ibm_vpc
itsdangerous==2.0.1
jinja2==3.1.3
krkn-lib >= 1.4.6
kubernetes
lxml >= 4.3.0
oauth2client>=4.1.3
openshift-client
paramiko
podman-compose
pyVmomi >= 6.7
pyfiglet
pytest
python-ipmi
python-openstackclient
requests
service_identity
setuptools==65.5.1
werkzeug==3.0.1
wheel
zope.interface==5.4.0
pandas<2.0.0

View File

@@ -1,5 +1,6 @@
#!/usr/bin/env python
import datetime
import json
import os
import sys
import yaml
@@ -8,30 +9,31 @@ import optparse
import pyfiglet
import uuid
import time
import kraken.kubernetes.client as kubecli
import kraken.litmus.common_litmus as common_litmus
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus
import kraken.time_actions.common_time_functions as time_actions
import kraken.performance_dashboards.setup as performance_dashboards
import kraken.pod_scenarios.setup as pod_scenarios
import kraken.namespace_actions.common_namespace_functions as namespace_actions
import kraken.service_disruption.common_service_disruption_functions as service_disruption
import kraken.shut_down.common_shut_down_func as shut_down
import kraken.node_actions.run as nodeaction
import kraken.managedcluster_scenarios.run as managedcluster_scenarios
import kraken.kube_burner.client as kube_burner
import kraken.zone_outage.actions as zone_outages
import kraken.application_outage.actions as application_outage
import kraken.pvc.pvc_scenario as pvc_scenario
import kraken.network_chaos.actions as network_chaos
import kraken.arcaflow_plugin as arcaflow_plugin
import kraken.prometheus as prometheus_plugin
import server as server
import kraken.prometheus.client as promcli
from kraken import plugins
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.ocp import KrknOpenshift
from krkn_lib.telemetry.k8s import KrknTelemetryKubernetes
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.models.telemetry import ChaosRunTelemetry
from krkn_lib.utils import SafeLogger
from krkn_lib.utils.functions import get_yaml_item_value
KUBE_BURNER_URL = (
"https://github.com/cloud-bulldozer/kube-burner/"
"releases/download/v{version}/kube-burner-{version}-Linux-x86_64.tar.gz"
)
KUBE_BURNER_VERSION = "0.9.1"
# Main function
@@ -45,51 +47,54 @@ def main(cfg):
with open(cfg, "r") as f:
config = yaml.full_load(f)
global kubeconfig_path, wait_duration, kraken_config
distribution = config["kraken"].get("distribution", "openshift")
distribution = get_yaml_item_value(
config["kraken"], "distribution", "openshift"
)
kubeconfig_path = os.path.expanduser(
config["kraken"].get("kubeconfig_path", "")
get_yaml_item_value(config["kraken"], "kubeconfig_path", "")
)
kraken_config = cfg
chaos_scenarios = config["kraken"].get("chaos_scenarios", [])
publish_running_status = config["kraken"].get("publish_kraken_status", False)
port = config["kraken"].get("port")
signal_address = config["kraken"].get("signal_address")
run_signal = config["kraken"].get("signal_state", "RUN")
litmus_install = config["kraken"].get("litmus_install", True)
litmus_version = config["kraken"].get("litmus_version", "v1.9.1")
litmus_uninstall = config["kraken"].get("litmus_uninstall", False)
litmus_uninstall_before_run = config["kraken"].get(
"litmus_uninstall_before_run", True
chaos_scenarios = get_yaml_item_value(
config["kraken"], "chaos_scenarios", []
)
wait_duration = config["tunings"].get("wait_duration", 60)
iterations = config["tunings"].get("iterations", 1)
daemon_mode = config["tunings"].get("daemon_mode", False)
deploy_performance_dashboards = config["performance_monitoring"].get(
"deploy_dashboards", False
publish_running_status = get_yaml_item_value(
config["kraken"], "publish_kraken_status", False
)
dashboard_repo = config["performance_monitoring"].get(
"repo", "https://github.com/cloud-bulldozer/performance-dashboards.git"
port = get_yaml_item_value(config["kraken"], "port", 8081)
signal_address = get_yaml_item_value(
config["kraken"], "signal_address", "0.0.0.0")
run_signal = get_yaml_item_value(
config["kraken"], "signal_state", "RUN"
)
capture_metrics = config["performance_monitoring"].get("capture_metrics", False)
kube_burner_url = config["performance_monitoring"].get(
"kube_burner_binary_url",
KUBE_BURNER_URL.format(version=KUBE_BURNER_VERSION),
wait_duration = get_yaml_item_value(
config["tunings"], "wait_duration", 60
)
config_path = config["performance_monitoring"].get(
"config_path", "config/kube_burner.yaml"
iterations = get_yaml_item_value(config["tunings"], "iterations", 1)
daemon_mode = get_yaml_item_value(
config["tunings"], "daemon_mode", False
)
metrics_profile = config["performance_monitoring"].get(
"metrics_profile_path", "config/metrics-aggregated.yaml"
deploy_performance_dashboards = get_yaml_item_value(
config["performance_monitoring"], "deploy_dashboards", False
)
prometheus_url = config["performance_monitoring"].get("prometheus_url", "")
prometheus_bearer_token = config["performance_monitoring"].get(
"prometheus_bearer_token", ""
dashboard_repo = get_yaml_item_value(
config["performance_monitoring"], "repo",
"https://github.com/cloud-bulldozer/performance-dashboards.git"
)
run_uuid = config["performance_monitoring"].get("uuid", "")
enable_alerts = config["performance_monitoring"].get("enable_alerts", False)
alert_profile = config["performance_monitoring"].get("alert_profile", "")
check_critical_alerts = config["performance_monitoring"].get("check_critical_alerts", False)
prometheus_url = config["performance_monitoring"].get("prometheus_url")
prometheus_bearer_token = config["performance_monitoring"].get(
"prometheus_bearer_token"
)
run_uuid = config["performance_monitoring"].get("uuid")
enable_alerts = get_yaml_item_value(
config["performance_monitoring"], "enable_alerts", False
)
alert_profile = config["performance_monitoring"].get("alert_profile")
check_critical_alerts = get_yaml_item_value(
config["performance_monitoring"], "check_critical_alerts", False
)
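# --- illustrative sketch, not part of the change above ---
# Rough stand-in for how get_yaml_item_value is used throughout this refactor:
# presumably it returns the configured value, falling back to the given
# default when the key is missing or unset. The real krkn_lib implementation
# may differ; this helper name is hypothetical.
def _get_yaml_item_value_sketch(cont, item, default):
    value = cont.get(item) if cont else None
    return default if value is None else value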
telemetry_api_url = config["telemetry"].get("api_url")
# Initialize clients
if (not os.path.isfile(kubeconfig_path) and
not os.path.isfile("/var/run/secrets/kubernetes.io/serviceaccount/token")):
@@ -98,13 +103,34 @@ def main(cfg):
)
sys.exit(1)
logging.info("Initializing client to talk to the Kubernetes cluster")
# Generate uuid for the run
if run_uuid:
logging.info(
"Using the uuid defined by the user for the run: %s" % run_uuid
)
else:
run_uuid = str(uuid.uuid4())
logging.info("Generated a uuid for the run: %s" % run_uuid)
# request_id for telemetry is generated once here and used everywhere
telemetry_request_id = f"{int(time.time())}-{run_uuid}"
if config["telemetry"].get("run_tag"):
telemetry_request_id = f"{telemetry_request_id}-{config['telemetry']['run_tag']}"
telemetry_log_file = f'{config["telemetry"]["archive_path"]}/{telemetry_request_id}.log'
safe_logger = SafeLogger(filename=telemetry_log_file)
try:
kubeconfig_path
os.environ["KUBECONFIG"] = str(kubeconfig_path)
kubecli.initialize_clients(kubeconfig_path)
except NameError:
# krkn-lib-kubernetes init
kubecli = KrknKubernetes(kubeconfig_path=kubeconfig_path)
ocpcli = KrknOpenshift(kubeconfig_path=kubeconfig_path)
except:
kubecli.initialize_clients(None)
# find node kraken might be running on
kubecli.find_kraken_node()
@@ -125,33 +151,37 @@ def main(cfg):
logging.info(
"Publishing kraken status at http://%s:%s" % (server_address, port)
)
logging.info(
"Publishing kraken status at http://%s:%s" % (server_address, port)
)
server.start_server(address, run_signal)
# Cluster info
logging.info("Fetching cluster info")
cv = kubecli.get_clusterversion_string()
cv = ""
if config["kraken"]["distribution"] == "openshift":
cv = ocpcli.get_clusterversion_string()
if prometheus_url is None:
connection_data = ocpcli.get_prometheus_api_connection_data()
prometheus_url = connection_data.endpoint
prometheus_bearer_token = connection_data.token
if cv != "":
logging.info(cv)
else:
logging.info("Cluster version CRD not detected, skipping")
# KrknTelemetry init
telemetry_k8s = KrknTelemetryKubernetes(safe_logger, kubecli)
telemetry_ocp = KrknTelemetryOpenshift(safe_logger, ocpcli)
if enable_alerts:
prometheus = KrknPrometheus(prometheus_url, prometheus_bearer_token)
logging.info("Server URL: %s" % kubecli.get_host())
# Deploy performance dashboards
if deploy_performance_dashboards:
performance_dashboards.setup(dashboard_repo, distribution)
# Generate uuid for the run
if run_uuid:
logging.info(
"Using the uuid defined by the user for the run: %s" % run_uuid
)
else:
run_uuid = str(uuid.uuid4())
logging.info("Generated a uuid for the run: %s" % run_uuid)
# Initialize the start iteration to 0
iteration = 0
@@ -173,8 +203,9 @@ def main(cfg):
# Capture the start time
start_time = int(time.time())
litmus_installed = False
chaos_telemetry = ChaosRunTelemetry()
chaos_telemetry.run_uuid = run_uuid
# Loop to run the chaos starts here
while int(iteration) < iterations and run_signal != "STOP":
# Inject chaos scenarios specified in the config
@@ -206,133 +237,115 @@ def main(cfg):
)
sys.exit(1)
elif scenario_type == "arcaflow_scenarios":
failed_post_scenarios = arcaflow_plugin.run(
scenarios_list, kubeconfig_path
failed_post_scenarios, scenario_telemetries = arcaflow_plugin.run(
scenarios_list, kubeconfig_path, telemetry_k8s
)
chaos_telemetry.scenarios.extend(scenario_telemetries)
elif scenario_type == "plugin_scenarios":
failed_post_scenarios = plugins.run(
failed_post_scenarios, scenario_telemetries = plugins.run(
scenarios_list,
kubeconfig_path,
kraken_config,
failed_post_scenarios,
wait_duration,
telemetry_k8s
)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# krkn_lib
elif scenario_type == "container_scenarios":
logging.info("Running container scenarios")
failed_post_scenarios = pod_scenarios.container_run(
failed_post_scenarios, scenario_telemetries = pod_scenarios.container_run(
kubeconfig_path,
scenarios_list,
config,
failed_post_scenarios,
wait_duration,
kubecli,
telemetry_k8s
)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Inject node chaos scenarios specified in the config
# krkn_lib
elif scenario_type == "node_scenarios":
logging.info("Running node scenarios")
nodeaction.run(scenarios_list, config, wait_duration)
failed_post_scenarios, scenario_telemetries = nodeaction.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Inject managedcluster chaos scenarios specified in the config
# krkn_lib
elif scenario_type == "managedcluster_scenarios":
logging.info("Running managedcluster scenarios")
managedcluster_scenarios.run(
scenarios_list, config, wait_duration
scenarios_list, config, wait_duration, kubecli
)
# Inject time skew chaos scenarios specified
# in the config
# krkn_lib
elif scenario_type == "time_scenarios":
if distribution == "openshift":
logging.info("Running time skew scenarios")
time_actions.run(scenarios_list, config, wait_duration)
else:
logging.error(
"Litmus scenarios are currently "
"supported only on openshift"
)
sys.exit(1)
# Inject litmus based chaos scenarios
elif scenario_type == "litmus_scenarios":
if distribution == "openshift":
logging.info("Running litmus scenarios")
litmus_namespace = "litmus"
if litmus_install:
# Remove Litmus resources
# before running the scenarios
common_litmus.delete_chaos(litmus_namespace)
common_litmus.delete_chaos_experiments(
litmus_namespace
)
if litmus_uninstall_before_run:
common_litmus.uninstall_litmus(
litmus_version, litmus_namespace
)
common_litmus.install_litmus(
litmus_version, litmus_namespace
)
common_litmus.deploy_all_experiments(
litmus_version, litmus_namespace
)
litmus_installed = True
common_litmus.run(
scenarios_list,
config,
litmus_uninstall,
wait_duration,
litmus_namespace,
)
else:
logging.error(
"Litmus scenarios are currently "
"only supported on openshift"
)
sys.exit(1)
failed_post_scenarios, scenario_telemetries = time_actions.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Inject cluster shutdown scenarios
# krkn_lib
elif scenario_type == "cluster_shut_down_scenarios":
shut_down.run(scenarios_list, config, wait_duration)
failed_post_scenarios, scenario_telemetries = shut_down.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Inject namespace chaos scenarios
elif scenario_type == "namespace_scenarios":
logging.info("Running namespace scenarios")
namespace_actions.run(
# krkn_lib
elif scenario_type == "service_disruption_scenarios":
logging.info("Running service disruption scenarios")
failed_post_scenarios, scenario_telemetries = service_disruption.run(
scenarios_list,
config,
wait_duration,
failed_post_scenarios,
kubeconfig_path,
kubecli,
telemetry_k8s
)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Inject zone failures
elif scenario_type == "zone_outages":
logging.info("Inject zone outages")
zone_outages.run(scenarios_list, config, wait_duration)
failed_post_scenarios, scenario_telemetries = zone_outages.run(scenarios_list, config, wait_duration, telemetry_k8s)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Application outages
elif scenario_type == "application_outages":
logging.info("Injecting application outage")
application_outage.run(
scenarios_list, config, wait_duration
)
failed_post_scenarios, scenario_telemetries = application_outage.run(
scenarios_list, config, wait_duration, telemetry_k8s)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# PVC scenarios
# krkn_lib
elif scenario_type == "pvc_scenarios":
logging.info("Running PVC scenario")
pvc_scenario.run(scenarios_list, config)
failed_post_scenarios, scenario_telemetries = pvc_scenario.run(scenarios_list, config, kubecli, telemetry_k8s)
chaos_telemetry.scenarios.extend(scenario_telemetries)
# Network scenarios
# krkn_lib
elif scenario_type == "network_chaos":
logging.info("Running Network Chaos")
network_chaos.run(scenarios_list, config, wait_duration)
failed_post_scenarios, scenario_telemetries = network_chaos.run(scenarios_list, config, wait_duration, kubecli, telemetry_k8s)
# Check for critical alerts when enabled
if check_critical_alerts:
if enable_alerts and check_critical_alerts:
logging.info("Checking for critical alerts firing post chaos")
promcli.initialize_prom_client(distribution, prometheus_url, prometheus_bearer_token)
##PROM
query = r"""ALERTS{severity="critical"}"""
critical_alerts = promcli.process_prom_query(query)
end_time = datetime.datetime.now()
critical_alerts = prometheus.process_prom_query_in_range(
query,
start_time = datetime.datetime.fromtimestamp(start_time),
end_time = end_time
)
critical_alerts_count = len(critical_alerts)
if critical_alerts_count > 0:
logging.error("Critical alerts are firing: %s", critical_alerts)
@@ -344,33 +357,49 @@ def main(cfg):
iteration += 1
logging.info("")
# Capture the end time
# telemetry
# in order to print decoded telemetry data even if telemetry collection
# is disabled, it's necessary to serialize the ChaosRunTelemetry object
# to json, and recreate a new object from it.
end_time = int(time.time())
# Capture metrics for the run
if capture_metrics:
logging.info("Capturing metrics")
kube_burner.setup(kube_burner_url)
kube_burner.scrape_metrics(
distribution,
run_uuid,
prometheus_url,
prometheus_bearer_token,
start_time,
end_time,
config_path,
metrics_profile,
)
# if the platform is OpenShift, cloud platform and network plugin
# metadata will be collected through OCP-specific APIs
if config["kraken"]["distribution"] == "openshift":
telemetry_ocp.collect_cluster_metadata(chaos_telemetry)
else:
telemetry_k8s.collect_cluster_metadata(chaos_telemetry)
decoded_chaos_run_telemetry = ChaosRunTelemetry(json.loads(chaos_telemetry.to_json()))
logging.info(f"Telemetry data:\n{decoded_chaos_run_telemetry.to_json()}")
if config["telemetry"]["enabled"]:
logging.info(f"telemetry data will be stored on s3 bucket folder: {telemetry_api_url}/download/{telemetry_request_id}")
logging.info(f"telemetry upload log: {safe_logger.log_file_name}")
try:
telemetry_k8s.send_telemetry(config["telemetry"], telemetry_request_id, chaos_telemetry)
telemetry_k8s.put_cluster_events(telemetry_request_id, config["telemetry"], start_time, end_time)
# prometheus data collection is available only on Openshift
if config["telemetry"]["prometheus_backup"] and config["kraken"]["distribution"] == "openshift":
safe_logger.info("archives download started:")
prometheus_archive_files = telemetry_ocp.get_ocp_prometheus_data(config["telemetry"], telemetry_request_id)
safe_logger.info("archives upload started:")
telemetry_k8s.put_prometheus_data(config["telemetry"], prometheus_archive_files, telemetry_request_id)
if config["telemetry"]["logs_backup"]:
telemetry_ocp.put_ocp_logs(telemetry_request_id, config["telemetry"], start_time, end_time)
except Exception as e:
logging.error(f"failed to send telemetry data: {str(e)}")
else:
logging.info("telemetry collection disabled, skipping.")
# Check for the alerts specified
if enable_alerts:
logging.info("Alerts checking is enabled")
kube_burner.setup(kube_burner_url)
if alert_profile:
kube_burner.alerts(
distribution,
prometheus_url,
prometheus_bearer_token,
prometheus_plugin.alerts(
prometheus,
start_time,
end_time,
alert_profile,
@@ -378,11 +407,6 @@ def main(cfg):
else:
logging.error("Alert profile is not defined")
sys.exit(1)
if litmus_uninstall and litmus_installed:
common_litmus.delete_chaos(litmus_namespace)
common_litmus.delete_chaos_experiments(litmus_namespace)
common_litmus.uninstall_litmus(litmus_version, litmus_namespace)
if failed_post_scenarios:
logging.error(

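Taken together, the scenario branches in this hunk converge on a single pattern: every runner returns both the failed post scenarios and a list of per-scenario telemetry objects, which main() accumulates on chaos_telemetry before the JSON round-trip and the optional upload shown above. A rough sketch of that pattern with placeholder types (ScenarioTelemetry and run_some_scenario below are stand-ins, not krkn or krkn-lib APIs):

# Sketch of the aggregation pattern introduced above; the types and the runner
# are placeholders, not krkn or krkn-lib APIs.
import json
from dataclasses import dataclass, field, asdict
from typing import List, Tuple

@dataclass
class ScenarioTelemetry:          # placeholder for krkn-lib's telemetry type
    scenario: str
    exit_status: int = 0

@dataclass
class ChaosRunTelemetryStub:      # placeholder mirroring how chaos_telemetry is used
    scenarios: List[ScenarioTelemetry] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

def run_some_scenario(scenarios_list, failed_post_scenarios) -> Tuple[list, list]:
    telemetries = [ScenarioTelemetry(scenario=s) for s in scenarios_list]
    return failed_post_scenarios, telemetries

chaos_telemetry = ChaosRunTelemetryStub()
failed_post_scenarios: list = []

failed_post_scenarios, scenario_telemetries = run_some_scenario(
    ["scenarios/cpu-hog.yaml"], failed_post_scenarios
)
chaos_telemetry.scenarios.extend(scenario_telemetries)

# Serializing and re-parsing, as main() does, lets the decoded telemetry be
# printed even when uploading is disabled.
decoded = json.loads(chaos_telemetry.to_json())
print(json.dumps(decoded, indent=2))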
View File

@@ -1,7 +1,8 @@
---
deployer:
connection: {}
type: kubernetes
deployers:
image:
connection: {}
deployer_name: kubernetes
log:
level: debug
logged_outputs:

View File

@@ -2,13 +2,7 @@ input_list:
- cpu_count: 1
cpu_load_percentage: 80
cpu_method: all
duration: 30s
node_selector: {}
# node selector example
# node_selector:
# kubernetes.io/hostname: master
kubeconfig: ""
duration: 1s
kubeconfig: ''
namespace: default
# duplicate this section to run simultaneous stressors in the same run
node_selector: {}
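Per the comment in the new input file, running several stressors in the same run is a matter of appending more entries to input_list. A small illustrative helper for doing that programmatically (the file name and node label value are assumptions, not part of the scenario):

# Sketch only: duplicate an input_list entry so the same stressor runs on a
# second node in the same run. "input.yaml" and the hostname are examples.
import copy
import yaml  # PyYAML

with open("input.yaml") as f:        # the cpu-hog input file shown above
    data = yaml.safe_load(f)

first = data["input_list"][0]
second = copy.deepcopy(first)
second["node_selector"] = {"kubernetes.io/hostname": "worker-1"}  # example label

data["input_list"].append(second)

with open("input.yaml", "w") as f:
    yaml.safe_dump(data, f, default_flow_style=False)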

View File

@@ -1,8 +1,9 @@
version: v0.2.0
input:
root: RootObject
objects:
RootObject:
id: RootObject
id: input_item
properties:
kubeconfig:
display:
@@ -60,23 +61,27 @@ input:
steps:
kubeconfig:
plugin: quay.io/arcalot/arcaflow-plugin-kubeconfig:latest
plugin:
src: quay.io/arcalot/arcaflow-plugin-kubeconfig:0.2.0
deployment_type: image
input:
kubeconfig: !expr $.input.kubeconfig
stressng:
plugin: quay.io/arcalot/arcaflow-plugin-stressng:latest
plugin:
src: quay.io/arcalot/arcaflow-plugin-stressng:0.5.0
deployment_type: image
step: workload
input:
cleanup: "true"
StressNGParams:
timeout: !expr $.input.duration
cleanup: "true"
items:
stressors:
- stressor: cpu
cpu_count: !expr $.input.cpu_count
cpu_method: !expr $.input.cpu_method
cpu_load: !expr $.input.cpu_load_percentage
deploy:
type: kubernetes
deployer_name: kubernetes
connection: !expr $.steps.kubeconfig.outputs.success.connection
pod:
metadata:

View File

@@ -1,3 +1,4 @@
version: v0.2.0
input:
root: RootObject
objects:

View File

@@ -0,0 +1,11 @@
deployers:
image:
connection: {}
deployer_name: kubernetes
log:
level: debug
logged_outputs:
error:
level: error
success:
level: debug

View File

@@ -0,0 +1,13 @@
input_list:
- duration: 30s
io_block_size: 1m
io_workers: 1
io_write_bytes: 10m
kubeconfig: ''
namespace: default
node_selector: {}
target_pod_folder: /hog-data
target_pod_volume:
hostPath:
path: /tmp
name: node-volume

View File

@@ -0,0 +1,143 @@
version: v0.2.0
input:
root: RootObject
objects:
RootObject:
id: input_item
properties:
kubeconfig:
display:
description: The complete kubeconfig file as a string
name: Kubeconfig file contents
type:
type_id: string
required: true
namespace:
display:
description: The namespace where the container will be deployed
name: Namespace
type:
type_id: string
required: true
node_selector:
display:
description: kubernetes node name where the plugin must be deployed
type:
type_id: map
values:
type_id: string
keys:
type_id: string
required: true
duration:
display:
name: duration of the scenario expressed in seconds
description: stop stress test after T seconds. One can also specify the units of time in
seconds, minutes, hours, days or years with the suffix s, m, h, d or y
type:
type_id: string
required: true
io_workers:
display:
description: number of workers
name: start N workers continually writing, reading and removing temporary files
type:
type_id: integer
required: true
io_block_size:
display:
description: single write size
name: specify size of each write in bytes. Size can be from 1 byte to 4MB.
type:
type_id: string
required: true
io_write_bytes:
display:
description: Total number of bytes written
name: write N bytes for each hdd process, the default is 1 GB. One can specify the size
as % of free space on the file system or in units of Bytes, KBytes, MBytes and
GBytes using the suffix b, k, m or g
type:
type_id: string
required: true
target_pod_folder:
display:
description: Target Folder
name: Folder in the pod where the test will be executed and the test files will be written
type:
type_id: string
required: true
target_pod_volume:
display:
name: kubernetes volume definition
description: the volume that will be attached to the pod. In order to stress
the node storage, only hostPath mode is currently supported
type:
type_id: object
id: k8s_volume
properties:
name:
display:
description: name of the volume (must match the name in pod definition)
type:
type_id: string
required: true
hostPath:
display:
description: hostPath options expressed as string map (key-value)
type:
type_id: map
values:
type_id: string
keys:
type_id: string
required: true
required: true
steps:
kubeconfig:
plugin:
src: quay.io/arcalot/arcaflow-plugin-kubeconfig:0.2.0
deployment_type: image
input:
kubeconfig: !expr $.input.kubeconfig
stressng:
plugin:
src: quay.io/arcalot/arcaflow-plugin-stressng:0.5.0
deployment_type: image
step: workload
input:
cleanup: "true"
StressNGParams:
timeout: !expr $.input.duration
workdir: !expr $.input.target_pod_folder
stressors:
- stressor: hdd
hdd: !expr $.input.io_workers
hdd_bytes: !expr $.input.io_write_bytes
hdd_write_size: !expr $.input.io_block_size
deploy:
deployer_name: kubernetes
connection: !expr $.steps.kubeconfig.outputs.success.connection
pod:
metadata:
namespace: !expr $.input.namespace
labels:
arcaflow: stressng
spec:
nodeSelector: !expr $.input.node_selector
pluginContainer:
imagePullPolicy: Always
securityContext:
privileged: true
volumeMounts:
- mountPath: /hog-data
name: node-volume
volumes:
- !expr $.input.target_pod_volume
outputs:
success:
stressng: !expr $.steps.stressng.outputs.success
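For reference, the !expr bindings above simply forward the input fields into the stress-ng plugin's StressNGParams. The following sketch mirrors that mapping as plain Python data, using the example values from the io-hog input file (it illustrates the wiring only; it is not code krkn runs):

# Illustrative only: how the io-hog input fields feed the stressng step,
# mirroring the !expr bindings in sub-workflow.yaml above.
input_item = {
    "duration": "30s",
    "io_workers": 1,
    "io_block_size": "1m",
    "io_write_bytes": "10m",
    "target_pod_folder": "/hog-data",
}

stressng_input = {
    "cleanup": "true",
    "StressNGParams": {
        "timeout": input_item["duration"],            # $.input.duration
        "workdir": input_item["target_pod_folder"],   # $.input.target_pod_folder
        "stressors": [
            {
                "stressor": "hdd",
                "hdd": input_item["io_workers"],               # $.input.io_workers
                "hdd_bytes": input_item["io_write_bytes"],     # $.input.io_write_bytes
                "hdd_write_size": input_item["io_block_size"], # $.input.io_block_size
            }
        ],
    },
}

print(stressng_input)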

View File

@@ -0,0 +1,114 @@
version: v0.2.0
input:
root: RootObject
objects:
RootObject:
id: RootObject
properties:
input_list:
type:
type_id: list
items:
id: input_item
type_id: object
properties:
kubeconfig:
display:
description: The complete kubeconfig file as a string
name: Kubeconfig file contents
type:
type_id: string
required: true
namespace:
display:
description: The namespace where the container will be deployed
name: Namespace
type:
type_id: string
required: true
node_selector:
display:
description: kubernetes node name where the plugin must be deployed
type:
type_id: map
values:
type_id: string
keys:
type_id: string
required: true
duration:
display:
name: duration of the scenario expressed in seconds
description: stop stress test after T seconds. One can also specify the units of time in
seconds, minutes, hours, days or years with the suffix s, m, h, d or y
type:
type_id: string
required: true
io_workers:
display:
description: number of workers
name: start N workers continually writing, reading and removing temporary files
type:
type_id: integer
required: true
io_block_size:
display:
description: single write size
name: specify size of each write in bytes. Size can be from 1 byte to 4MB.
type:
type_id: string
required: true
io_write_bytes:
display:
description: Total number of bytes written
name: write N bytes for each hdd process, the default is 1 GB. One can specify the size
as % of free space on the file system or in units of Bytes, KBytes, MBytes and
GBytes using the suffix b, k, m or g
type:
type_id: string
required: true
target_pod_folder:
display:
description: Target Folder
name: Folder in the pod where the test will be executed and the test files will be written
type:
type_id: string
required: true
target_pod_volume:
display:
name: kubernetes volume definition
description: the volume that will be attached to the pod. In order to stress
the node storage, only hostPath mode is currently supported
type:
type_id: object
id: k8s_volume
properties:
name:
display:
description: name of the volume (must match the name in pod definition)
type:
type_id: string
required: true
hostPath:
display:
description: hostPath options expressed as string map (key-value)
type:
type_id: map
values:
type_id: string
keys:
type_id: string
required: true
required: true
steps:
workload_loop:
kind: foreach
items: !expr $.input.input_list
workflow: sub-workflow.yaml
parallelism: 1000
outputs:
success:
workloads: !expr $.steps.workload_loop.outputs.success.data
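The foreach step fans each input_list entry out to sub-workflow.yaml with a parallelism cap of 1000, and outputs.success.workloads collects the per-item results. Conceptually this behaves like mapping a worker over the list with a bounded pool, as in the following analogy (run_sub_workflow is a placeholder, not how the Arcaflow engine executes workflows):

# Conceptual analogy only: the foreach step roughly corresponds to mapping the
# sub-workflow over input_list with bounded parallelism.
from concurrent.futures import ThreadPoolExecutor

def run_sub_workflow(item: dict) -> dict:
    # In Arcaflow this would deploy the stress-ng plugin pod described in
    # sub-workflow.yaml; here we just echo the item back.
    return {"input": item, "outcome": "success"}

input_list = [
    {"namespace": "default", "io_workers": 1, "duration": "30s"},
    {"namespace": "default", "io_workers": 2, "duration": "30s"},
]

# parallelism: 1000 in the workflow ~ a generous worker bound here
with ThreadPoolExecutor(max_workers=min(1000, len(input_list))) as pool:
    workloads = list(pool.map(run_sub_workflow, input_list))

print(workloads)  # mirrors outputs.success.workloads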

View File

@@ -1,7 +1,8 @@
---
deployer:
connection: {}
type: kubernetes
deployers:
image:
connection: {}
deployer_name: kubernetes
log:
level: debug
logged_outputs:

View File

@@ -1,8 +1,9 @@
version: v0.2.0
input:
root: RootObject
objects:
RootObject:
id: RootObject
id: input_item
properties:
kubeconfig:
display:
@@ -52,22 +53,26 @@ input:
steps:
kubeconfig:
plugin: quay.io/arcalot/arcaflow-plugin-kubeconfig:latest
plugin:
src: quay.io/arcalot/arcaflow-plugin-kubeconfig:0.2.0
deployment_type: image
input:
kubeconfig: !expr $.input.kubeconfig
stressng:
plugin: quay.io/arcalot/arcaflow-plugin-stressng:latest
plugin:
src: quay.io/arcalot/arcaflow-plugin-stressng:0.5.0
deployment_type: image
step: workload
input:
cleanup: "true"
StressNGParams:
timeout: !expr $.input.duration
cleanup: "true"
items:
stressors:
- stressor: vm
vm: !expr $.input.vm_workers
vm_bytes: !expr $.input.vm_bytes
deploy:
type: kubernetes
deployer_name: kubernetes
connection: !expr $.steps.kubeconfig.outputs.success.connection
pod:
metadata:

View File

@@ -1,3 +1,4 @@
version: v0.2.0
input:
root: RootObject
objects:

Some files were not shown because too many files have changed in this diff.