Compare commits

...

150 Commits

Author SHA1 Message Date
Dustin Black
2c6b50bcdc bump arcaflow stressng plugin to 0.3.1 for bug fix 2023-08-24 12:50:28 -04:00
Naga Ravi Chaitanya Elluri
ed97c8df2b Bump release version to v1.4.3 2023-08-23 11:56:39 -04:00
Tullio Sebastiani
1baa68bcee engine bump to v0.6.1 2023-08-23 11:38:23 -04:00
Naga Ravi Chaitanya Elluri
ab84f09448 Use release tags vs latest for kubeconfig arca plugins (#473) 2023-08-23 09:59:33 -04:00
Dustin Black
6ace3c952b update to plugin release stressng:0.3.0 (#472) 2023-08-23 09:15:30 -04:00
Tullio Sebastiani
cee5259fd3 arcaflow scenarios removed from config.yaml 2023-08-23 08:50:19 -04:00
Tullio Sebastiani
f868000ebd Switched from krkn_lib_kubernetes to krkn_lib v1.0.0 (#469)
* changed all references from krkn_lib_kubernetes to the new krkn_lib


changed all the references

* added krkn-lib pointer in documentation
2023-08-22 12:41:40 -04:00
pratyusha
d2d80be241 Updated config.yaml file with more scenarios (#468) 2023-08-21 11:26:33 -04:00
Naga Ravi Chaitanya Elluri
da464859c4 Bump release version to v1.4.2 2023-08-21 09:06:28 -04:00
Naga Ravi Chaitanya Elluri
ef88005985 Use images tagged with a release for hog scenarios
This commit switches from using the latest images to a specific release,
allowing changes to be reviewed and configs updated before using the latest bits.
2023-08-18 01:47:17 -04:00
Sahil Shah
102bdfdc96 Bump the release version to v1.4.1 (#465) 2023-08-17 10:18:11 -04:00
Sahil Shah
b569e6a9d5 Fixing pvc scenario 2023-08-16 16:05:18 -04:00
Tullio Sebastiani
dba38668b7 Dockerfile version bump 2023-08-11 11:12:56 -04:00
Tullio Sebastiani
39c0152b7b Krkn telemetry integration (#435)
* adapted config.yaml to the new feature

* temporarily pointing requirements.txt to the lib feature branch

* run_kraken.py + arcaflow scenarios refactoring


typo

* plugin scenario

* node scenarios


return failed scenarios

* container scenarios


fix

* time scenarios

* cluster shutdown  scenarios

* namespace scenarios

* zone outage scenarios

* app outage scenarios

* pvc scenarios

* network chaos scenarios

* run_kraken.py adaptation to telemetry

* prometheus telemetry upload + config.yaml


some fixes


typos and logs


max retries in config


telemetry id with run_uuid


safe_logger

* catch send_telemetry exception

* scenario collection bug fixes

* telemetry enabled check

* telemetry run tag

* requirements pointing to main + archive_size

* requirements.txt and config.yaml update

* added telemetry config to common config

* fixed scenario array elements for telemetry
2023-08-10 14:42:53 -04:00
jtydlack
491dc17267 Slo via http (#459)
* Fix typo

* Enable loading SLO profile via URL (#438)
2023-08-10 11:02:33 -04:00
yogananth-subramanian
b2b5002f45 Pod egress network shaping Chaos scenario
The scenario introduces network latency, packet loss, and bandwidth restriction on the Pod's network interface.
The purpose of this scenario is to observe faults caused by random variations in the network.

The example config below applies egress traffic shaping to the OpenShift console.
````
- id: pod_egress_shaping
  config:
    namespace: openshift-console   # Required - Namespace of the pod to which the filter needs to be applied.
    label_selector: 'component=ui' # Applies traffic shaping to the OpenShift console pods.
    network_params:
        latency: 500ms             # Add 500ms latency to egress traffic from the pod.
````
2023-08-08 11:45:03 -04:00
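The commit message also mentions packet loss and bandwidth restriction; a combined config might look like the sketch below (the loss and bandwidth key names are assumptions, not taken from this commit):
````
- id: pod_egress_shaping
  config:
    namespace: openshift-console
    label_selector: 'component=ui'
    network_params:
        latency: 500ms             # Add 500ms latency to egress traffic.
        loss: '10%'                # Assumed key name: drop 10% of egress packets.
        bandwidth: 10mbit          # Assumed key name: cap egress bandwidth.
````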
Sahil Shah
fccd701dee Changed the image in volume_scenario.yml to a public one (#458) 2023-08-02 00:11:38 -04:00
José Castillo Lema
570631ebfc Widen except (#457)
Signed-off-by: José Castillo Lema <josecastillolema@gmail.com>
2023-07-26 18:53:52 +02:00
Naga Ravi Chaitanya Elluri
3ab9ca4319 Bump release version to v1.3.6 2023-07-24 14:06:37 -04:00
Naga Ravi Chaitanya Elluri
4084ffd9c6 Bake in virtualenv in krkn images
This is needed to tie the python version being used in case multiple
versions are installed.
2023-07-24 12:52:20 -04:00
Sahil Shah
19cc2c047f Fix for pvc scenario 2023-07-21 15:41:28 -04:00
Paige Rubendall
6197fc6722 separating build and test workflows (#448)
* separating build and test workflows

* only run build on pull request
2023-07-20 16:01:50 -04:00
Naga Ravi Chaitanya Elluri
2a8ac41ebf Bump release version to v1.3.5 2023-07-20 15:24:56 -04:00
Naga Ravi Chaitanya Elluri
b4d235d31c Bake in yq dependency in Kraken container images (#450)
This commit also updates ppc64le image to have the latest bits.
2023-07-20 13:17:52 -04:00
Naga Ravi Chaitanya Elluri
e4e4620d10 Bump release version to 1.3.4 (#447) 2023-06-28 16:30:28 -04:00
Naga Ravi Chaitanya Elluri
a2c24ab7ed Install latest version of krkn-lib-kubernetes (#446) 2023-06-28 15:21:19 -04:00
Naga Ravi Chaitanya Elluri
fe892fd9bf Switch from centos to redhat ubi base image
This replaces the base image for Kraken container images to use
redhat ubi image to be more secure and stable.
2023-06-22 12:10:51 -04:00
Naga Ravi Chaitanya Elluri
74613fdb4b Install oc and kubectl clients from stable releases
This makes sure the latest clients are installed and used:
- This avoids compatibility issues with the server
- Fixes security vulnerabilities and CVEs
2023-06-20 15:39:53 -04:00
Naga Ravi Chaitanya Elluri
28c37c9353 Bump release version to v1.3.3 2023-06-16 09:42:46 -04:00
Naga Ravi Chaitanya Elluri
de0567b067 Tweak the etcd alert severity 2023-06-16 09:19:17 -04:00
Naga Ravi Chaitanya Elluri
83486557f1 Bump release version to v1.3.2 (#439) 2023-06-15 12:12:42 -04:00
Naga Ravi Chaitanya Elluri
ce409ea6fb Update kube-burner dependency version to 1.7.0 2023-06-15 11:55:17 -04:00
Naga Ravi Chaitanya Elluri
0eb8d38596 Expand SLOs profile to cover monitoring for more alerts
This commit:
- Also sets appropriate severity to avoid false failures for the
  test cases, especially given that these are monitored during the chaos
  vs post chaos. Critical alerts are all monitored post chaos, with a few
  monitored during the chaos that represent the overall health and performance
  of the service.
- Renames Alerts to SLOs validation

Metrics reference: f09a492b13/cmd/kube-burner/ocp-config/alerts.yml
2023-06-14 16:58:36 -04:00
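For reference, entries in a kube-burner alert profile pair a Prometheus expression with a severity, roughly like this sketch (expression and wording are illustrative, not copied from the referenced profile):
````
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[5m:]) > 0.01
  description: 99th etcd fsync latency on {{$labels.pod}} higher than 10ms
  severity: critical               # Severity drives how a firing alert is treated.
````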
Tullio Sebastiani
68dc17bc44 krkn-lib-kubernetes refactoring proposal (#400)
* run_kraken.py updated + renamed kubernetes library folder


unstaged files


kubecli marker

* container scenarios updated

* node scenarios updated


typo


injected kubecli

* managed cluster scenarios updated

* time scenarios updated

* litmus scenarios updated

* cluster scenarios updated

* namespace scenarios updated

* pvc scenarios updated

* network chaos scenarios updated

* common_managed_cluster functions updated

* switched draft library to official one

* regression on rebase
2023-06-13 10:02:35 -04:00
Naga Ravi Chaitanya Elluri
572eeefaf4 Minor fixes
This commit fixes a few typos and duplicate logs
2023-06-12 21:05:27 -04:00
Naga Ravi Chaitanya Elluri
81376bad56 Bump release version to v1.3.1
This updates the Krkn container images to use the latest v1.3.1
minor release: https://github.com/redhat-chaos/krkn/releases.
2023-06-07 14:41:09 -04:00
Tullio Sebastiani
72b46f8393 temporarily removed io-hog scenario (#433)
* temporarily removed io-hog scenario

* removed litmus documentation & config
2023-06-05 11:03:44 -04:00
José Castillo Lema
a7938e58d2 Allow kraken to run with environment variables instead of kubeconfig file (#429)
* Include check for inside k8s scenario

* Include check for inside k8s scenario (2)

* Include check for inside k8s scenario (3)

* Include check for inside k8s scenario (4)
2023-06-01 14:43:01 -04:00
Naga Ravi Chaitanya Elluri
9858f96c78 Change the severity of the etcd leader election check to warning
This is the first step towards the goal of only having metrics that track
the overall health and performance of the component/cluster. For instance,
for etcd disruption scenarios leader elections are expected, so we should instead
track etcd leader availability and fsync latency under the critical category rather
than leader elections.
2023-05-31 11:50:20 -04:00
Tullio Sebastiani
c91e8db928 Added Tullio Sebastiani to the maintainers list 2023-05-25 06:18:33 -04:00
Naga Ravi Chaitanya Elluri
54ea98be9c Add enhancements being planned as part of the roadmap (#425) 2023-05-24 14:36:59 -04:00
Pradeep Surisetty
9748622e4f Add maintainers details 2023-05-24 10:38:53 -04:00
Pradeep Surisetty
47f93b39c2 Add Code of Conduct 2023-05-22 13:25:52 -04:00
Tullio Sebastiani
aa715bf566 bump Dockerfile to release v1.3.0 2023-05-15 12:50:44 -04:00
Tullio Sebastiani
b9c08a45db extracted the namespace as scenario input (#419)
fixed sub-workflow and input

Co-authored-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2023-05-15 18:24:23 +02:00
Naga Ravi Chaitanya Elluri
d9f4607aa6 Add blogs and update roadmap 2023-05-15 11:50:16 -04:00
yogananth-subramanian
8806781a4f Pod network outage Chaos scenario
The pod network outage chaos scenario blocks traffic at the pod level irrespective of the network policy used.
With the current network policies, it is not possible to explicitly block ports that are enabled
by an allow network policy rule. This chaos scenario addresses the issue by using OVS flow rules
to block ports related to the pod. It supports OpenShiftSDN and OVNKubernetes based networks.

The example config below blocks access to the OpenShift console.
````
- id: pod_network_outage
  config:
    namespace: openshift-console
    direction:
        - ingress
    ingress_ports:
        - 8443
    label_selector: 'component=ui'
````
2023-05-15 10:43:58 -04:00
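An egress variant of the same scenario would presumably be symmetric; a sketch (the egress keys are assumptions, not taken from this commit):
````
- id: pod_network_outage
  config:
    namespace: openshift-console
    direction:
        - egress                   # Assumed: block outgoing traffic instead.
    egress_ports:
        - 8443                     # Assumed key name, mirroring ingress_ports.
    label_selector: 'component=ui'
````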
Tullio Sebastiani
83b811bee4 Arcaflow stress-ng hogs with parallelism support (#418)
* kubeconfig management for arcaflow + hogs scenario refactoring  

  * kubeconfig authentication parsing refactored to support arcaflow kubernetes deployer  
  * reimplemented all the hog scenarios to allow multiple parallel containers of the same scenario
  (e.g. to stress two or more nodes in the same run simultaneously)
  * updated documentation 
* removed sysbench scenarios


* recovered cpu hogs


* updated requirements.txt


* updated config.yaml

* added gitleaks file for test fixtures

* imported sys and logging

* removed config_arcaflow.yaml

* updated readme

* refactored arcaflow documentation entrypoint
2023-05-15 09:45:16 -04:00
Paige Rubendall
16ea18c718 Ibm plugin node scenario (#417)
* Node scenarios for ibmcloud

* adding openshift check info
2023-05-09 12:07:38 -04:00
Naga Ravi Chaitanya Elluri
1ab94754e3 Add missing parameters supported by container scenarios (#415)
Also renames retry_wait to expected_recovery_time to make it clear that
Kraken will exit 1 if the container doesn't recover within the expected
time.
Fixes https://github.com/redhat-chaos/krkn/issues/414
2023-05-05 13:02:07 -04:00
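As a sketch of where the renamed parameter sits in a container scenario file (all fields other than expected_recovery_time are illustrative, not taken from this commit):
````
scenarios:
- name: kill etcd container
  namespace: openshift-etcd        # Illustrative target namespace.
  label_selector: 'app=etcd'       # Pods to pick the container from.
  container_name: etcd             # Container to kill inside the matched pods.
  count: 1                         # Number of containers to act on.
  expected_recovery_time: 60       # Renamed from retry_wait; exit 1 if not recovered within 60s.
````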
Tullio Sebastiani
278b2bafd7 Kraken is pointing to a buggy kill-pod plugin implementation (#416) 2023-05-04 18:19:54 +02:00
Naga Ravi Chaitanya Elluri
bc863fa01f Add support to check for critical alerts
This commit enables users to opt in to checking for critical alerts firing
in the cluster post chaos at the end of each scenario. A chaos scenario is
considered failed if the cluster is unhealthy, in which case the user can
start debugging to fix and harden the respective areas.

Fixes https://github.com/redhat-chaos/krkn/issues/410
2023-05-03 16:14:13 -04:00
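A minimal opt-in sketch, assuming the flag lives under the performance monitoring section of config.yaml (the exact key name is an assumption, not taken from this commit):
````
performance_monitoring:
    check_critical_alerts: True    # Assumed key name: fail the run if critical alerts fire post chaos.
````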
Naga Ravi Chaitanya Elluri
900ca74d80 Reorganize the content from https://github.com/startx-lab (#346)
Moving the content about installing Kraken using Helm to the
chaos-in-practice section of the guide to showcase how startx-lab
is deploying and leveraging Kraken.
2023-04-24 13:51:49 -04:00
Tullio Sebastiani
82b8df4e85 kill-pod plugin dependency pointing to specific commit
switched to redhat-chaos repo
2023-04-20 08:26:51 -04:00
Tullio Sebastiani
691be66b0a kubeconfig_path in new_client_from_config
added clients in the same context of the config
2023-04-19 14:12:46 -04:00
Tullio Sebastiani
019b036f9f renamed trigger word from /test to funtest (#401)
added quotes


renamed trigger to funtest

Co-authored-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2023-04-10 09:30:53 -04:00
Paige Rubendall
13fa711c9b adding privileged namespace (#399)
Co-authored-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2023-04-06 16:18:57 -04:00
Naga Ravi Chaitanya Elluri
17f61625e4 Exit on critical alert failures
This commit captures and exits on a non-zero return code, i.e. when
critical alerts are fired

Fixes https://github.com/redhat-chaos/krkn/issues/396
2023-03-27 12:43:57 -04:00
Tullio Sebastiani
3627b5ba88 cpu hog scenario + basic arcaflow documentation (#391)
typo


typo


updated documentation


fixed workflow map issue
2023-03-15 16:52:20 +01:00
Tullio Sebastiani
fee4f7d2bf arcaflow integration (#384)
arcaflow library version

Co-authored-by: Tullio Sebastiani <tsebasti@redhat.com>
2023-03-08 12:01:03 +01:00
Tullio Sebastiani
0534e03c48 removed useless step that was failing (#389)
removed only old namespace file cat

Co-authored-by: Tullio Sebastiani <tsebasti@redhat.com>
2023-02-23 16:28:09 +01:00
Tullio Sebastiani
bb9a19ab71 removed blocking event check 2023-02-22 09:41:52 -05:00
Tullio Sebastiani
c5b9554de5 check user's authorization before running functional tests
check users authorization before running functional tests


removed useless checkout


step rename


typo in trigger
2023-02-21 12:38:34 -05:00
dependabot[bot]
e5f97434d3 Bump werkzeug from 2.0.3 to 2.2.3 (#385)
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.0.3 to 2.2.3.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/2.0.3...2.2.3)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2023-02-20 14:34:31 -05:00
Tullio Sebastiani
8b18fa8a35 Github Action + functional tests (no *hog tests) (#382)
* Github Action + functional tests (no *hog tests)

* changed the trigger keyword to /test

* removed deprecated kill_pod scenario + added namespace to app_outage (new kill_pod)

* #365: renamed ingress_namespace scenario to network_diagnostics

* requested team filter added

---------

Co-authored-by: Tullio Sebastiani <tullio.sebastiani@x3solutions.it>
2023-02-16 09:42:33 +01:00
Paige Rubendall
93686ca736 new quay image reference 2023-01-31 17:21:45 -05:00
Naga Ravi Chaitanya Elluri
64f4c234e9 Add prom token creation step
This enables compatibility with all OpenShift versions.
Reference PR by Paige in Cerberus: https://github.com/redhat-chaos/cerberus/pull/190.
2023-01-31 12:36:09 -05:00
Naga Ravi Chaitanya Elluri
915cc5db94 Bump release version to v1.2.0 2023-01-19 12:03:46 -05:00
José Castillo Lema
493a8a245f Docker provider for node actions (#369)
* Docker provider for node actions

* Adjusted dependencies and imports

* Update config_kind.yaml

Signed-off-by: José Castillo Lema <josecastillolema@gmail.com>

Signed-off-by: José Castillo Lema <josecastillolema@gmail.com>
2023-01-10 14:36:18 -05:00
José Castillo Lema
d76ab31155 OCM/ACM integration (#370)
* OCM support for ManagedClusters

* Updated docs and general adjustments

* Improved docs

* Improved docs2

* Removed io packet import

Signed-off-by: José Castillo Lema <josecastillolema@gmail.com>

* Removed time from imports

Signed-off-by: José Castillo Lema <josecastillolema@gmail.com>

* Removed duplicate logging import

Signed-off-by: José Castillo Lema <josecastillolema@gmail.com>

* Removed sys import

Signed-off-by: José Castillo Lema <josecastillolema@gmail.com>

* Update run.py

Signed-off-by: José Castillo Lema <josecastillolema@gmail.com>

Signed-off-by: José Castillo Lema <josecastillolema@gmail.com>
2023-01-10 08:58:17 -05:00
dependabot[bot]
bed40b0c6a Bump setuptools from 63.4.1 to 65.5.1
Bumps [setuptools](https://github.com/pypa/setuptools) from 63.4.1 to 65.5.1.
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/CHANGES.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v63.4.1...v65.5.1)

---
updated-dependencies:
- dependency-name: setuptools
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-01-04 18:42:52 +05:30
Paige Rubendall
3c5c3c5665 Giving more details on configuration (#371)
* giving more details on configuration

* adding few changes
2022-12-08 11:18:42 +05:30
Tullio Sebastiani
cf7bc28a2d updated k8s/Openshift installation documentation (#359)
* Added some bits and pieces to the krkn k8s installation to make it easier

* updated k8s/Oc installation documentation

* gitignore

* doc reorg

* fixed numbering + removed italic

Co-authored-by: Tullio Sebastiani <tullio.sebastiani@x3solutions.it>
2022-11-30 23:02:17 +05:30
Paige Rubendall
4035f2724b Adding wait duration for pods (#368)
* adding wait duration for pods

* adding kube apiserver with plugin schema
2022-11-18 07:43:26 +05:30
Naga Ravi Chaitanya Elluri
6b17dbdbb3 Allow users to set the listening address
This commit provides an option for the user to set the listening address
for the signal. This also fixes a security vulnerability.

Fixes https://github.com/redhat-chaos/krkn/issues/307
2022-11-08 15:59:57 -05:00
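The functional-test setup later in this diff sets exactly these keys, so the relevant config fragment looks like:
````
kraken:
    port: 8081                     # Port the signal server listens on.
    signal_address: 0.0.0.0        # Listening address for the signal server.
````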
Naga Ravi Chaitanya Elluri
1c207538b6 Use run dir instead of tmp
This commit also logs a message to handle the exception during the
node checks.

Fixes https://github.com/redhat-chaos/krkn/issues/356, https://github.com/redhat-chaos/krkn/issues/357
2022-11-08 15:46:08 -05:00
Naga Ravi Chaitanya Elluri
6ccc16a0ab Use autoescape=True to mitigate XSS vulnerabilities
Fixes https://github.com/redhat-chaos/krkn/issues/354
2022-11-08 14:34:06 -05:00
Naga Ravi Chaitanya Elluri
b9d5a7af4d Use safe loader for Yaml
This fixes security vulnerabilities - for example, it raises an
exception when opening a yaml file that contains code.

Fixes https://github.com/redhat-chaos/krkn/issues/352
2022-11-08 13:35:06 -05:00
Sandro Bonazzola
1c4a51cbfa refactor: use arcaflow plugin
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-10-18 16:43:33 +02:00
Christophe LARUE
68c02135d3 Add helm and tekton examples 2022-10-18 09:41:24 -04:00
Naga Ravi Chaitanya Elluri
61700c0dc5 Bump release version to v1.1.1 2022-10-14 12:47:17 -04:00
Paige Rubendall
da749339f7 Adding scenarios sub folders to container creation (#337)
* adding scenarios sub folders to container creation
* adding req
* trying other package installations
* more specific versions
* removing vsphere
* adding wheel
* put vmware back

Fixes: #335 
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
Co-authored-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-10-14 09:51:04 +02:00
Sandro Bonazzola
66eb541bfb Docker: take main as 1.1.0 is now broken
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-10-14 08:51:04 +02:00
Sandro Bonazzola
6589e50743 require recent aliyun-python-sdk
reducing the time needed by pip to figure out the version to be
installed.

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-10-14 08:51:04 +02:00
Sandro Bonazzola
026fbd9987 test: check for control-plane label
previously the test was looking for the master label.
Recent Kubernetes uses the control-plane label instead.

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-10-14 08:51:04 +02:00
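The two node labels in question, both standard Kubernetes labels:
````
# Old label the test looked for:
node-role.kubernetes.io/master: ""
# Label applied by recent Kubernetes releases:
node-role.kubernetes.io/control-plane: ""
````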
Sandro Bonazzola
4357ce5386 adjust vmware requirement to latest tag
Require latest tag rather than main branch as main branch is broken.

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-10-14 08:51:04 +02:00
Sandro Bonazzola
d5e364ab62 CI: fail CI if a failure happens.
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
09069211c4 CI: drop namespace test
as it requires openshift

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
19e65f5e60 CI: drop cpu hog test
as it requires litmus on openshift

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
a3ffa1d0ff CI: drop mem hog test
as it requires litmus on openshift

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
b4d987461b CI: drop io hog
as it requires litmus on openshift

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
90b3fc9106 CI: drop container test
as it requires openshift

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
aecaaf286f CI: drop app outages as it requires openshift
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
19d24e3d46 CI: drop nodes as it requires AWS nodes
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
0731b32397 CI: drop pods test
As it says:
Pod scenarios have been removed, please use plugin_scenarios
with the kill-pods configuration instead.

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
cfef92e177 CI: drop shutdown test
shutdown test requires AWS nodes

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
134069a1fa CI: drop test time as it requires etcd
etcd pod is not available in KinD

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
34124c705d CI: drop zone test as it requires AWS
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-15 15:29:49 +02:00
Sandro Bonazzola
80829fcafe run_kraken.py: resolve ~ with kubeconfig
as we default to ~ for kubeconfig, we need to be able to read it.

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-13 12:01:16 +02:00
Sandro Bonazzola
0c36903fff config: really default to ~ instead of /root
Documentation says we default to ~ for looking up the kubernetes config,
but then we set /root everywhere. Fixed the config to really look in ~.

Should solve #327.

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-13 12:01:16 +02:00
Naga Ravi Chaitanya Elluri
c3db7f236f Bump release version
This release includes the changes needed for the customer as well as
a number of other fixes and enhancements:
- Support for VMware node scenarios
- Support for ingress traffic shaping
- Other changes can be found at https://github.com/redhat-chaos/krkn/releases/tag/v1.1.0
2022-09-13 08:14:39 +02:00
Sandro Bonazzola
0dbc58c146 automation: save CI logs
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-13 07:35:58 +02:00
Sandro Bonazzola
af58296984 automation: make unittest verbose
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-13 07:35:58 +02:00
Sandro Bonazzola
9bc8e6a4c9 automation: add CI test results to summary
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-13 07:35:58 +02:00
Sandro Bonazzola
51a2fbd77d automation: add coverage report
Add coverage report for performed tests.

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-13 07:35:58 +02:00
Paige Rubendall
9de6c7350e adding stringio for security reasons 2022-09-12 11:14:08 -04:00
Naga Ravi Chaitanya Elluri
9f23699cfa Document node scenario actions for VMware
This commit also updates the IDs for the VMware scenarios to align
with other cloud providers.
2022-09-07 11:34:14 -04:00
Sandro Bonazzola
fcc7145b98 post_action_regex: fix log message for list_namespace
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-07 16:48:58 +02:00
Sandro Bonazzola
bce5be9667 make post_action_regex importable
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-07 16:48:58 +02:00
Sandro Bonazzola
0031912000 post_action_regex: avoid redefining variables from outer scope
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-07 16:48:58 +02:00
Sandro Bonazzola
1a1a9c9bfe pycodestyle fixes: scenarios/openshift/post_action_regex.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-07 16:48:58 +02:00
Sandro Bonazzola
ec807e3b3a pycodestyle fixes: vmware_plugin.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-05 14:15:38 +02:00
Sandro Bonazzola
b444854cb2 pycodestyle fixes: kraken/pvc/pvc_scenario.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-05 13:36:16 +02:00
Sandro Bonazzola
1dc58d8721 pycodestyle fixes: ingress_shaping.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-05 13:20:23 +02:00
Sandro Bonazzola
6112ba63c3 plugins/run_python_plugin.py: remove unused import
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-05 13:20:23 +02:00
Sandro Bonazzola
155269fd9d pycodestyle fixes: run_kraken.py
Other than plain style changes, introduced the constants
`KUBE_BURNER_URL` and `KUBE_BURNER_VERSION`,
solving the problem of an overly long string and at the same time
making it easier to bump the requirement on Kube Burner.

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-05 10:25:59 +02:00
Sandro Bonazzola
79b92fc395 pycodestyle fixes: tests/test_ingress_network_plugin.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-05 08:47:55 +02:00
Sandro Bonazzola
ed1c486c85 pycodestyle fixes: tests/test_vmware_plugin.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-02 12:56:47 -04:00
Sandro Bonazzola
6ba1e1ad8b waive bandit report on insecure random usage
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-02 15:57:39 +02:00
Sandro Bonazzola
3b476b68f2 pycodestyle fixes: kraken/time_actions/common_time_functions.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-02 15:57:39 +02:00
Sandro Bonazzola
e17ebd0e7b pycodestyle fixes: kraken/shut_down/common_shut_down_func.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-02 15:44:42 +02:00
Sandro Bonazzola
d0d289fb7c update references to github organization
Updated references from chaos-kubox to redhat-chaos.

Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-02 14:38:25 +02:00
Sandro Bonazzola
66f88f5a78 pyflakes: fix imports for allowing analysis
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-02 14:23:11 +02:00
Sandro Bonazzola
abc635c699 server.py: change comment to pydoc
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-02 13:44:17 +02:00
Sandro Bonazzola
90b45538f2 pycodestyle fixes: kraken/cerberus/setup.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-02 06:32:46 -04:00
Sandro Bonazzola
c6469ef6cd pycodestyle: fix server.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-02 09:47:41 +02:00
Sandro Bonazzola
c94c2b22a9 pycodestyle fixes: kraken/zone_outage/actions.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-02 09:15:59 +02:00
Shreyas Anantha Ramaprasad
9421a0c2c2 Added support for ingress traffic shaping (#299)
* Added plugin for ingress network traffic shaping

* Documentation changes

* Minor changes

* Documentation and formatting fixes

* Added trap to sleep infinity command running in containers

* Removed shell injection threat for modprobe commands

* Added docstrings to cerberus functions

* Added checks to prevent shell injection

* Bug fix
2022-09-02 07:54:11 +02:00
Sandro Bonazzola
8a68e1cc9b pycodestyle fixes: kraken/kubernetes/client.py
Signed-off-by: Sandro Bonazzola <sbonazzo@redhat.com>
2022-09-01 12:32:52 -04:00
Shreyas Anantha Ramaprasad
d5615ac470 Fixing parts of issue #185 for PVC scenario (#290)
* Created new file for dataclasses and replaced kubectl pvc cli calls

* Added checks for existence of pod/pvc

* Modified command to get pvc capacity

Removed redundant function call
2022-09-01 15:44:37 +02:00
Naga Ravi Chaitanya Elluri
5ab16baafa Bump release version 2022-08-25 16:45:47 -04:00
Naga Ravi Chaitanya Elluri
412d718985 Fix code alignment 2022-08-25 11:32:19 -04:00
Naga Ravi Chaitanya Elluri
11f469cb8e Update install sources to use the latest release 2022-08-24 15:34:42 -04:00
Naga Ravi Chaitanya Elluri
6c75d3dddb Add option to skip litmus installation
This commit adds an option for the user to pick whether to install
litmus or not depending on their use case. One use case is disconnected
environments where litmus is pre-installed instead of reaching out to the
internet.
2022-08-23 14:09:10 -04:00
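A config sketch for this option alongside the existing litmus keys from config.yaml (litmus_version and litmus_uninstall appear later in this diff; the install flag name itself is hypothetical):
````
kraken:
    litmus_version: v1.13.6        # Litmus version to install.
    litmus_uninstall: False        # Whether to uninstall litmus on failure.
    litmus_install: False          # Hypothetical flag name: skip installation when litmus is pre-installed.
````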
Paige Rubendall
f7e27a215e Move plugin tests (#289)
* moving pytests

* adding tests folder not under CI
2022-08-19 09:23:37 -04:00
Naga Ravi Chaitanya Elluri
e680592762 Create prometheus token to use for OCP versions >=4.11
This commit adopts the code from https://github.com/redhat-chaos/cerberus/pull/176
to support using an existing token or creating the new token needed to query Prometheus, depending
on the OpenShift version in use.

Co-authored-by: Paige Rubendall <prubenda@redhat.com>
2022-08-16 08:07:28 -04:00
Shreyas Anantha Ramaprasad
08deae63dd Added VMware Node Scenarios (#285)
* Added VMware node scenarios

* Made vmware plugin independent of Krkn

* Revert changes made to node status watch

* Fixed minor documentation changes
2022-08-15 23:35:16 +02:00
Sam Doran
f4bc30d2a1 Update README (#284)
* Update link to documentation

* Update container status badge and link

Use the correct link to the status badge on Quay.
2022-08-07 02:20:32 -04:00
Robert O'Brien
bbde837360 Refactor node status function 2022-08-03 16:51:49 +02:00
Robert O'Brien
5d789e7d30 Refactor client watch 2022-08-03 16:51:49 +02:00
Robert O'Brien
69fc8e8d1b Add resource version to list node call 2022-08-03 16:51:49 +02:00
Robert O'Brien
77f53b3a23 Rework node status to use watches 2022-08-03 16:51:49 +02:00
Janos Bonic
ccd902565e Fixes #265: Replace Powerfulseal and introduce Wolkenwalze SDK for plugin system 2022-08-02 16:25:03 +01:00
Naga Ravi Chaitanya Elluri
da117ad9d9 Switch to python3.9 2022-07-22 16:56:47 -04:00
Janos Bonic
ca7bc3f67b Removing cryptography pinning
Signed-off-by: Janos Bonic <86970079+janosdebugs@users.noreply.github.com>
2022-07-20 13:31:56 -04:00
Shreyas Anantha Ramaprasad
b01d9895fb Continue fixing small parts of issue #185 (#277)
* Added dataclasses to store info retrieved from k8 client calls

* Replaced few invoke commands in common_litmus

* Minor Documentation Changes

* Removed unused import and redundant variable

Signed-off-by: Shreyas Anantha Ramaprasad <ars.shreyas@gmail.com>
2022-07-19 14:57:17 +02:00
Naga Ravi Chaitanya Elluri
bbb66aa322 Fix source to install azure-cli
This commit updates Krkn source Dockerfile to copy azure client binary
from the official azure-cli image instead of using package manager to
avoid dependency issues.
2022-07-18 16:21:29 -04:00
harshil-redhat
97d4f51f74 Fix installation docs with updated git repo (#270)
Signed-off-by: harshil-redhat <72143431+harshil-redhat@users.noreply.github.com>
2022-06-23 19:29:36 -04:00
Alejandro Gullón
4522ab77b1 Updating commands to get used PVC capacity and allocate file 2022-06-19 18:43:01 -04:00
STARTX
f4bfc08186 debug error message when network interface not found (#268)
Debug error occurred when giving a bad network interface list

Traceback (most recent call last):
  File "/root/kraken/run_kraken.py", line 318, in <module>
    main(options.cfg)
  File "/root/kraken/run_kraken.py", line 239, in main
    network_chaos.run(scenarios_list, config, wait_duration)
  File "/root/kraken/kraken/network_chaos/actions.py", line 39, in run
    test_interface = verify_interface(test_interface, nodelst, pod_template)
  File "/root/kraken/kraken/network_chaos/actions.py", line 111, in verify_interface
    "Interface %s not found in node %s interface list %s" % (interface, nodelst[pod_index]),
TypeError: not enough arguments for format string

Signed-off-by: STARTX <clarue@startx.fr>
2022-06-14 18:33:59 -04:00
162 changed files with 13180 additions and 1931 deletions


@@ -1,8 +1,5 @@
name: Build Krkn
on:
push:
branches:
- main
pull_request:
jobs:
@@ -11,28 +8,44 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Build the Docker images
run: docker build --no-cache -t quay.io/chaos-kubox/krkn containers/
- name: Create multi-node KinD cluster
uses: chaos-kubox/actions/kind@main
uses: redhat-chaos/actions/kind@main
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
architecture: 'x64'
- name: Install environment
run: |
sudo apt-get install build-essential python3-dev
pip install --upgrade pip
pip install -r requirements.txt
- name: Run unit tests
run: python -m coverage run -a -m unittest discover -s tests -v
- name: Run CI
run: ./CI/run.sh
- name: Login in quay
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}
- name: Push the Docker images
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker push quay.io/chaos-kubox/krkn
- name: Rebuild krkn-hub
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
uses: chaos-kubox/actions/krkn-hub@main
run: |
./CI/run.sh
cat ./CI/results.markdown >> $GITHUB_STEP_SUMMARY
echo >> $GITHUB_STEP_SUMMARY
- name: Upload CI logs
uses: actions/upload-artifact@v3
with:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}
name: ci-logs
path: CI/out
if-no-files-found: error
- name: Collect coverage report
run: |
python -m coverage html
- name: Publish coverage report to job summary
run: |
pip install html2text
html2text --ignore-images --ignore-links -b 0 htmlcov/index.html >> $GITHUB_STEP_SUMMARY
- name: Upload coverage data
uses: actions/upload-artifact@v3
with:
name: coverage
path: htmlcov
if-no-files-found: error
- name: Check CI results
run: grep Fail CI/results.markdown && false || true

.github/workflows/docker-image.yml vendored Normal file

@@ -0,0 +1,30 @@
name: Docker Image CI
on:
push:
branches:
- main
pull_request:
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Build the Docker images
run: docker build --no-cache -t quay.io/redhat-chaos/krkn containers/
- name: Login in quay
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}
- name: Push the Docker images
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: docker push quay.io/redhat-chaos/krkn
- name: Rebuild krkn-hub
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
uses: redhat-chaos/actions/krkn-hub@main
with:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}

.github/workflows/functional_tests.yaml vendored Normal file

@@ -0,0 +1,111 @@
on: issue_comment
jobs:
check_user:
# This job only runs for pull request comments
name: Check User Authorization
env:
USERS: ${{vars.USERS}}
if: contains(github.event.comment.body, '/funtest') && contains(github.event.comment.html_url, '/pull/')
runs-on: ubuntu-latest
steps:
- name: Check User
run: |
for name in `echo $USERS`
do
name="${name//$'\r'/}"
name="${name//$'\n'/}"
if [ $name == "${{github.event.sender.login}}" ]
then
echo "user ${{github.event.sender.login}} authorized, action started..."
exit 0
fi
done
echo "user ${{github.event.sender.login}} is not allowed to run functional tests Action"
exit 1
pr_commented:
# This job only runs for pull request comments containing /functional
name: Functional Tests
if: contains(github.event.comment.body, '/funtest') && contains(github.event.comment.html_url, '/pull/')
runs-on: ubuntu-latest
needs:
- check_user
steps:
- name: Check out Kraken
uses: actions/checkout@v3
- name: Checkout Pull Request
run: hub pr checkout ${{ github.event.issue.number }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Install OC CLI
uses: redhat-actions/oc-installer@v1
with:
oc_version: latest
- name: Install python 3.9
uses: actions/setup-python@v4
with:
python-version: '3.9'
- name: Setup kraken dependencies
run: pip install -r requirements.txt
- name: Create Workdir & export the path
run: |
mkdir workdir
echo "WORKDIR_PATH=`pwd`/workdir" >> $GITHUB_ENV
- name: Teardown CRC (Post Action)
uses: webiny/action-post-run@3.0.0
id: post-run-command
with:
# currently using image coming from tsebastiani quay.io repo
# waiting that a fix is merged in the upstream one
# post action run cannot (apparently) be properly indented
run: docker run -v "${{ env.WORKDIR_PATH }}:/workdir" -e WORKING_MODE=T -e AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }} -e AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }} -e AWS_DEFAULT_REGION=us-west-2 -e TEARDOWN_RUN_ID=crc quay.io/tsebastiani/crc-cloud
- name: Run CRC
# currently using image coming from tsebastiani quay.io repo
# waiting that a fix is merged in the upstream one
run: |
docker run -v "${{ env.WORKDIR_PATH }}:/workdir" \
-e WORKING_MODE=C \
-e PULL_SECRET="${{ secrets.PULL_SECRET }}" \
-e AWS_ACCESS_KEY_ID="${{ secrets.AWS_ACCESS_KEY_ID }}" \
-e AWS_SECRET_ACCESS_KEY="${{ secrets.AWS_SECRET_ACCESS_KEY }}" \
-e AWS_DEFAULT_REGION=us-west-2 \
-e CREATE_RUN_ID=crc \
-e PASS_KUBEADMIN="${{ secrets.KUBEADMIN_PWD }}" \
-e PASS_REDHAT="${{ secrets.REDHAT_PWD }}" \
-e PASS_DEVELOPER="${{ secrets.DEVELOPER_PWD }}" \
quay.io/tsebastiani/crc-cloud
- name: OpenShift login and example deployment, GitHub Action env init
env:
NAMESPACE: test-namespace
DEPLOYMENT_NAME: test-nginx
KUBEADMIN_PWD: '${{ secrets.KUBEADMIN_PWD }}'
run: ./CI/CRC/init_github_action.sh
- name: Setup test suite
run: |
yq -i '.kraken.port="8081"' CI/config/common_test_config.yaml
yq -i '.kraken.signal_address="0.0.0.0"' CI/config/common_test_config.yaml
echo "test_app_outages_gh" > ./CI/tests/my_tests
echo "test_container" >> ./CI/tests/my_tests
echo "test_namespace" >> ./CI/tests/my_tests
echo "test_net_chaos" >> ./CI/tests/my_tests
echo "test_time" >> ./CI/tests/my_tests
- name: Print affected config files
run: |
echo -e "## CI/config/common_test_config.yaml\n\n"
cat CI/config/common_test_config.yaml
- name: Running test suite
run: |
./CI/run.sh
- name: Print test output
run: cat CI/out/*
- name: Create coverage report
run: |
echo "# Test results" > $GITHUB_STEP_SUMMARY
cat CI/results.markdown >> $GITHUB_STEP_SUMMARY
echo "# Test coverage" >> $GITHUB_STEP_SUMMARY
python -m coverage report --format=markdown >> $GITHUB_STEP_SUMMARY

.gitignore vendored

@@ -23,6 +23,8 @@ kube_burner*
.pydevproject
.settings
.idea
.vscode
config/debug.yaml
tags
# Package files
@@ -61,3 +63,7 @@ CI/out/*
CI/ci_results
CI/scenarios/*node.yaml
CI/results.markdown
#env
chaos/*

.gitleaks.toml Normal file

@@ -0,0 +1,6 @@
[allowlist]
description = "Global Allowlist"
paths = [
'''kraken/arcaflow_plugin/fixtures/*'''
]

CI/CRC/deployment.yaml Normal file

@@ -0,0 +1,44 @@
apiVersion: v1
kind: Namespace
metadata:
name: $NAMESPACE
---
apiVersion: v1
kind: Service
metadata:
name: $DEPLOYMENT_NAME-service
namespace: $NAMESPACE
spec:
selector:
app: $DEPLOYMENT_NAME
ports:
- name: http
port: 80
targetPort: 8080
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: $NAMESPACE
name: $DEPLOYMENT_NAME-deployment
spec:
replicas: 3
selector:
matchLabels:
app: $DEPLOYMENT_NAME
template:
metadata:
labels:
app: $DEPLOYMENT_NAME
spec:
containers:
- name: $DEPLOYMENT_NAME
image: nginxinc/nginx-unprivileged:stable-alpine
ports:
- name: http
containerPort: 8080

CI/CRC/init_github_action.sh Executable file

@@ -0,0 +1,72 @@
#!/bin/bash
SCRIPT_PATH=./CI/CRC
DEPLOYMENT_PATH=$SCRIPT_PATH/deployment.yaml
CLUSTER_INFO=cluster_infos.json
[[ -z $WORKDIR_PATH ]] && echo "[ERROR] please set \$WORKDIR_PATH environment variable" && exit 1
CLUSTER_INFO_PATH=$WORKDIR_PATH/crc/$CLUSTER_INFO
[[ ! -f $DEPLOYMENT_PATH ]] && echo "[ERROR] please run $0 from GitHub action root directory" && exit 1
[[ -z $KUBEADMIN_PWD ]] && echo "[ERROR] kubeadmin password not set, please check the repository secrets" && exit 1
[[ -z $DEPLOYMENT_NAME ]] && echo "[ERROR] please set \$DEPLOYMENT_NAME environment variable" && exit 1
[[ -z $NAMESPACE ]] && echo "[ERROR] please set \$NAMESPACE environment variable" && exit 1
[[ ! -f $CLUSTER_INFO_PATH ]] && echo "[ERROR] cluster_info.json not found in $CLUSTER_INFO_PATH" && exit 1
OPENSSL=`which openssl 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: openssl missing, please install it and try again" && exit 1
OC=`which oc 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: oc missing, please install it and try again" && exit 1
SED=`which sed 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: sed missing, please install it and try again" && exit 1
JQ=`which jq 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: jq missing, please install it and try again" && exit 1
ENVSUBST=`which envsubst 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: envsubst missing, please install it and try again" && exit 1
API_ADDRESS="$($JQ -r '.api.address' $CLUSTER_INFO_PATH)"
API_PORT="$($JQ -r '.api.port' $CLUSTER_INFO_PATH)"
BASE_HOST=`$JQ -r '.api.address' $CLUSTER_INFO_PATH | sed -r 's#https:\/\/api\.(.+\.nip\.io)#\1#'`
FQN=$DEPLOYMENT_NAME.apps.$BASE_HOST
echo "[INF] logging on $API_ADDRESS:$API_PORT"
COUNTER=1
until `$OC login --insecure-skip-tls-verify -u kubeadmin -p $KUBEADMIN_PWD $API_ADDRESS:$API_PORT > /dev/null 2>&1`
do
echo "[INF] login attempt $COUNTER"
[[ $COUNTER == 20 ]] && echo "[ERR] maximum login attempts exceeded, failing" && exit 1
((COUNTER++))
sleep 10
done
echo "[INF] deploying example deployment: $DEPLOYMENT_NAME in namespace: $NAMESPACE"
$ENVSUBST < $DEPLOYMENT_PATH | $OC apply -f - > /dev/null 2>&1
echo "[INF] creating SSL self-signed certificates for route https://$FQN"
$OPENSSL genrsa -out servercakey.pem > /dev/null 2>&1
$OPENSSL req -new -x509 -key servercakey.pem -out serverca.crt -subj "/CN=$FQN/O=Red Hat Inc./C=US" > /dev/null 2>&1
$OPENSSL genrsa -out server.key > /dev/null 2>&1
$OPENSSL req -new -key server.key -out server_reqout.txt -subj "/CN=$FQN/O=Red Hat Inc./C=US" > /dev/null 2>&1
$OPENSSL x509 -req -in server_reqout.txt -days 3650 -sha256 -CAcreateserial -CA serverca.crt -CAkey servercakey.pem -out server.crt > /dev/null 2>&1
echo "[INF] creating deployment: $DEPLOYMENT_NAME public route: https://$FQN"
$OC create route --namespace $NAMESPACE edge --service=$DEPLOYMENT_NAME-service --cert=server.crt --key=server.key --ca-cert=serverca.crt --hostname="$FQN" > /dev/null 2>&1
echo "[INF] setting github action environment variables"
NODE_NAME="`$OC get nodes -o json | $JQ -r '.items[0].metadata.name'`"
COVERAGE_FILE="`pwd`/coverage.md"
echo "DEPLOYMENT_NAME=$DEPLOYMENT_NAME" >> $GITHUB_ENV
echo "DEPLOYMENT_FQN=$FQN" >> $GITHUB_ENV
echo "API_ADDRESS=$API_ADDRESS" >> $GITHUB_ENV
echo "API_PORT=$API_PORT" >> $GITHUB_ENV
echo "NODE_NAME=$NODE_NAME" >> $GITHUB_ENV
echo "NAMESPACE=$NAMESPACE" >> $GITHUB_ENV
echo "COVERAGE_FILE=$COVERAGE_FILE" >> $GITHUB_ENV
echo "[INF] deployment fully qualified name will be available in \${{ env.DEPLOYMENT_NAME }} with value $DEPLOYMENT_NAME"
echo "[INF] deployment name will be available in \${{ env.DEPLOYMENT_FQN }} with value $FQN"
echo "[INF] OCP API address will be available in \${{ env.API_ADDRESS }} with value $API_ADDRESS"
echo "[INF] OCP API port will be available in \${{ env.API_PORT }} with value $API_PORT"
echo "[INF] OCP node name will be available in \${{ env.NODE_NAME }} with value $NODE_NAME"
echo "[INF] coverage file will ve available in \${{ env.COVERAGE_FILE }} with value $COVERAGE_FILE"


@@ -1,6 +1,6 @@
kraken:
distribution: openshift # Distribution can be kubernetes or openshift.
kubeconfig_path: /root/.kube/config # Path to kubeconfig.
kubeconfig_path: ~/.kube/config # Path to kubeconfig.
exit_on_failure: False # Exit when a post action scenario fails.
litmus_version: v1.13.6 # Litmus version to install.
litmus_uninstall: False # If you want to uninstall litmus if failure.
@@ -29,3 +29,15 @@ tunings:
wait_duration: 6 # Duration to wait between each chaos scenario.
iterations: 1 # Number of times to execute the scenarios.
daemon_mode: False # Iterations are set to infinity which means that the kraken will cause chaos forever.
telemetry:
enabled: False # enables/disables the telemetry collection feature
api_url: https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production # telemetry service endpoint
username: username # telemetry service username
password: password # telemetry service password
prometheus_backup: True # enables/disables prometheus data collection
full_prometheus_backup: False # if set to False only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
archive_path: /tmp # local path where the archive files will be temporarily stored
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 10000 # the size of the prometheus data archive in KB. The lower the size of archive is


@@ -1,5 +1,22 @@
#!/bin/bash
set -x
MAX_RETRIES=60
OC=`which oc 2>/dev/null`
[[ $? != 0 ]] && echo "[ERROR]: oc missing, please install it and try again" && exit 1
wait_cluster_become_ready() {
COUNT=1
until `$OC get namespace > /dev/null 2>&1`
do
echo "[INF] waiting OpenShift to become ready, after $COUNT check"
sleep 3
[[ $COUNT == $MAX_RETRIES ]] && echo "[ERR] max retries exceeded, failing" && exit 1
((COUNT++))
done
}
ci_tests_loc="CI/tests/my_tests"
@@ -22,5 +39,7 @@ echo '-----------------------|--------|---------' >> $results
# Run each test
for test_name in `cat CI/tests/my_tests`
do
wait_cluster_become_ready
./CI/run_test.sh $test_name $results
wait_cluster_become_ready
done


@@ -1,31 +0,0 @@
---
kind: Pod
apiVersion: v1
metadata:
name: hello-pod
creationTimestamp:
labels:
name: hello-openshift
spec:
containers:
- name: hello-openshift
image: openshift/hello-openshift
ports:
- containerPort: 5050
protocol: TCP
resources: {}
volumeMounts:
- name: tmp
mountPath: "/tmp"
terminationMessagePath: "/dev/termination-log"
imagePullPolicy: IfNotPresent
securityContext:
capabilities: {}
privileged: false
volumes:
- name: tmp
emptyDir: {}
restartPolicy: Always
dnsPolicy: ClusterFirst
serviceAccount: ''
status: {}


@@ -1,31 +0,0 @@
config:
runStrategy:
runs: 1
maxSecondsBetweenRuns: 30
minSecondsBetweenRuns: 1
scenarios:
- name: "delete hello pods"
steps:
- podAction:
matches:
- labels:
namespace: "default"
selector: "hello-openshift"
filters:
- randomSample:
size: 1
actions:
- kill:
probability: 1
force: true
- podAction:
matches:
- labels:
namespace: "default"
selector: "hello-openshift"
retries:
retriesTimeout:
timeout: 180
actions:
- checkPodCount:
count: 1


@@ -1,7 +0,0 @@
scenarios:
- action: delete
namespace: "^.*ingress.*$"
label_selector:
runs: 1
sleep: 15
wait_time: 30


@@ -0,0 +1,7 @@
scenarios:
- action: delete
namespace: "^$openshift-network-diagnostics$"
label_selector:
runs: 1
sleep: 15
wait_time: 30


@@ -1,7 +1,20 @@
apiVersion: v1
kind: Namespace
metadata:
labels:
kubernetes.io/metadata.name: kraken
pod-security.kubernetes.io/audit: privileged
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/enforce-version: v1.24
pod-security.kubernetes.io/warn: privileged
security.openshift.io/scc.podSecurityLabelSync: "false"
name: kraken
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: kraken-test-pv
namespace: kraken
labels:
type: local
spec:
@@ -17,6 +30,7 @@ apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: kraken-test-pvc
namespace: kraken
spec:
storageClassName: manual
accessModes:
@@ -29,6 +43,7 @@ apiVersion: v1
kind: Pod
metadata:
name: kraken-test-pod
namespace: kraken
spec:
volumes:
- name: kraken-test-pv
@@ -36,7 +51,7 @@ spec:
claimName: kraken-test-pvc
containers:
- name: kraken-test-container
image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest'
image: 'quay.io/centos7/httpd-24-centos7:latest'
volumeMounts:
- mountPath: "/home/krake-dir/"
name: kraken-test-pv


@@ -1,12 +1 @@
test_pods
test_nodes
test_time
test_app_outages
test_container
test_zone
test_io_hog
test_mem_hog
test_cpu_hog
test_shut_down
test_net_chaos
test_namespace


@@ -12,7 +12,7 @@ function functional_test_app_outage {
export scenario_file="CI/scenarios/app_outage.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/app_outage.yaml
python3 run_kraken.py -c CI/config/app_outage.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/app_outage.yaml
echo "App outage scenario test: Success"
}

CI/tests/test_app_outages_gh.sh Executable file

@@ -0,0 +1,21 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_app_outage {
[ -z $DEPLOYMENT_NAME ] && echo "[ERR] DEPLOYMENT_NAME variable not set, failing." && exit 1
yq -i '.application_outage.pod_selector={"app":"'$DEPLOYMENT_NAME'"}' CI/scenarios/app_outage.yaml
yq -i '.application_outage.namespace="'$NAMESPACE'"' CI/scenarios/app_outage.yaml
export scenario_type="application_outages"
export scenario_file="CI/scenarios/app_outage.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/app_outage.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/app_outage.yaml
echo "App outage scenario test: Success"
}
functional_test_app_outage


@@ -14,7 +14,7 @@ function functional_test_container_crash {
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/container_config.yaml
python3 run_kraken.py -c CI/config/container_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/container_config.yaml
echo "Container scenario test: Success"
}


@@ -13,8 +13,8 @@ function functional_test_litmus_cpu {
export post_config="- CI/scenarios/node_cpu_hog_engine_node.yaml"
envsubst < CI/config/common_test_config.yaml > CI/config/litmus_config.yaml
envsubst < CI/scenarios/node_cpu_hog_engine.yaml > CI/scenarios/node_cpu_hog_engine_node.yaml
python3 run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario $1 test: Success"
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario test: Success"
}
functional_test_litmus_cpu

CI/tests/test_cpu_hog_gh.sh Executable file

@@ -0,0 +1,20 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_litmus_cpu {
[ -z $NODE_NAME ] && echo "[ERR] NODE_NAME variable not set, failing." && exit 1
yq -i ' .spec.experiments = [{"name": "node-cpu-hog", "spec":{"components":{"env":[{"name":"TOTAL_CHAOS_DURATION","value":"10"},{"name":"NODE_CPU_CORE","value":"1"},{"name":"NODES_AFFECTED_PERC","value":"30"},{"name":"TARGET_NODES","value":"'$NODE_NAME'"}]}}}]' CI/scenarios/node_cpu_hog_engine_node.yaml
cp CI/config/common_test_config.yaml CI/config/litmus_config.yaml
yq '.kraken.chaos_scenarios = [{"litmus_scenarios":[["scenarios/openshift/templates/litmus-rbac.yaml","CI/scenarios/node_cpu_hog_engine_node.yaml"]]}]' -i CI/config/litmus_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario test: Success"
}
functional_test_litmus_cpu


@@ -13,8 +13,8 @@ function functional_test_litmus_io {
export post_config="- CI/scenarios/node_io_engine_node.yaml"
envsubst < CI/config/common_test_config.yaml > CI/config/litmus_config.yaml
envsubst < CI/scenarios/node_io_engine.yaml > CI/scenarios/node_io_engine_node.yaml
python3 run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario $1 test: Success"
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario test: Success"
}
functional_test_litmus_io

CI/tests/test_io_hog_gh.sh Executable file

@@ -0,0 +1,19 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_litmus_io {
[ -z $NODE_NAME ] && echo "[ERR] NODE_NAME variable not set, failing." && exit 1
yq -i ' .spec.experiments = [{"name": "node-io-stress", "spec":{"components":{"env":[{"name":"TOTAL_CHAOS_DURATION","value":"10"},{"name":"FILESYSTEM_UTILIZATION_PERCENTAGE","value":"100"},{"name":"CPU","value":"1"},{"name":"NUMBER_OF_WORKERS","value":"3"},{"name":"TARGET_NODES","value":"'$NODE_NAME'"}]}}}]' CI/scenarios/node_io_engine_node.yaml
cp CI/config/common_test_config.yaml CI/config/litmus_config.yaml
yq '.kraken.chaos_scenarios = [{"litmus_scenarios":[["scenarios/openshift/templates/litmus-rbac.yaml","CI/scenarios/node_io_engine_node.yaml"]]}]' -i CI/config/litmus_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario test: Success"
}
functional_test_litmus_io


@@ -13,7 +13,7 @@ function functional_test_litmus_mem {
export post_config="- CI/scenarios/node_mem_engine_node.yaml"
envsubst < CI/config/common_test_config.yaml > CI/config/litmus_config.yaml
envsubst < CI/scenarios/node_mem_engine.yaml > CI/scenarios/node_mem_engine_node.yaml
python3 run_kraken.py -c CI/config/litmus_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario $1 test: Success"
}

CI/tests/test_mem_hog_gh.sh Executable file

@@ -0,0 +1,19 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_litmus_mem {
[ -z $NODE_NAME ] && echo "[ERR] NODE_NAME variable not set, failing." && exit 1
yq -i ' .spec.experiments = [{"name": "node-io-stress", "spec":{"components":{"env":[{"name":"TOTAL_CHAOS_DURATION","value":"10"},{"name":"CPU","value":"1"},{"name":"TARGET_NODES","value":"'$NODE_NAME'"}]}}}]' CI/scenarios/node_mem_engine_node.yaml
cp CI/config/common_test_config.yaml CI/config/litmus_config.yaml
yq '.kraken.chaos_scenarios = [{"litmus_scenarios":[["scenarios/openshift/templates/litmus-rbac.yaml","CI/scenarios/node_mem_engine_node.yaml"]]}]' -i CI/config/litmus_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/litmus_config.yaml
echo "Litmus scenario test: Success"
}
functional_test_litmus_mem


@@ -7,11 +7,11 @@ trap finish EXIT
function funtional_test_namespace_deletion {
export scenario_type="namespace_scenarios"
export scenario_file="- CI/scenarios/ingress_namespace.yaml"
export scenario_file="- CI/scenarios/network_diagnostics_namespace.yaml"
export post_config=""
yq '.scenarios.[0].namespace="^openshift-network-diagnostics$"' -i CI/scenarios/network_diagnostics_namespace.yaml
envsubst < CI/config/common_test_config.yaml > CI/config/namespace_config.yaml
python3 run_kraken.py -c CI/config/namespace_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/namespace_config.yaml
echo $?
echo "Namespace scenario test: Success"
}


@@ -12,7 +12,7 @@ function functional_test_network_chaos {
export scenario_file="CI/scenarios/network_chaos.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/network_chaos.yaml
python3 run_kraken.py -c CI/config/network_chaos.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/network_chaos.yaml
echo "Network Chaos test: Success"
}


@@ -13,7 +13,7 @@ function funtional_test_node_crash {
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/node_config.yaml
python3 run_kraken.py -c CI/config/node_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/node_config.yaml
echo "Node scenario test: Success"
}


@@ -1,19 +0,0 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function funtional_test_pod_deletion {
export scenario_type="pod_scenarios"
export scenario_file="- CI/scenarios/hello_pod_killing.yml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/pod_config.yaml
python3 run_kraken.py -c CI/config/pod_config.yaml
echo $?
echo "Pod scenario test: Success"
}
funtional_test_pod_deletion


@@ -12,7 +12,7 @@ function functional_test_shut_down {
export scenario_file="- CI/scenarios/cluster_shut_down_scenario.yml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/shut_down.yaml
python3 run_kraken.py -c CI/config/shut_down.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/shut_down.yaml
echo "Cluster shut down scenario test: Success"
}


@@ -12,7 +12,7 @@ function functional_test_time_scenario {
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/time_config.yaml
python3 run_kraken.py -c CI/config/time_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/time_config.yaml
echo "Time scenario test: Success"
}

View File

@@ -13,7 +13,7 @@ function functional_test_zone_crash {
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/zone3_config.yaml
envsubst < CI/scenarios/zone_outage.yaml > CI/scenarios/zone_outage_env.yaml
python3 run_kraken.py -c CI/config/zone3_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/zone3_config.yaml
echo "zone3 scenario test: Success"
}

127
CODE_OF_CONDUCT.md Normal file
View File

@@ -0,0 +1,127 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

12
MAINTAINERS.md Normal file
View File

@@ -0,0 +1,12 @@
## Overview
This document contains a list of maintainers in this repo.
## Current Maintainers
| Maintainer | GitHub ID | Email |
|---------------------| --------------------------------------------------------- | ----------------------- |
| Ravi Elluri | [chaitanyaenr](https://github.com/chaitanyaenr) | nelluri@redhat.com |
| Pradeep Surisetty | [psuriset](https://github.com/psuriset) | psuriset@redhat.com |
| Paige Rubendall | [paigerube14](https://github.com/paigerube14) | prubenda@redhat.com |
| Tullio Sebastiani | [tsebastiani](https://github.com/tsebastiani) | tsebasti@redhat.com |

View File

@@ -1,5 +1,5 @@
# Krkn aka Kraken
[![Docker Repository on Quay](https://quay.io/repository/chaos-kubox/krkn?tab=tags&tag=latest "Docker Repository on Quay")](https://quay.io/chaos-kubox/krkn)
[![Docker Repository on Quay](https://quay.io/repository/redhat-chaos/krkn/status "Docker Repository on Quay")](https://quay.io/repository/redhat-chaos/krkn?tab=tags&tag=latest)
![Krkn logo](media/logo.png)
@@ -23,7 +23,7 @@ Kraken injects deliberate failures into Kubernetes/OpenShift clusters to check i
- Test environment recommendations as to how and where to run chaos tests.
- Chaos testing in practice.
The guide is hosted at [https://chaos-kubox.github.io/krkn/](https://chaos-kubox.github.io/krkn/).
The guide is hosted at https://redhat-chaos.github.io/krkn.
### How to Get Started
@@ -35,7 +35,7 @@ After installation, refer back to the below sections for supported scenarios and
#### Running Kraken with minimal configuration tweaks
For cases where you want to run Kraken with minimal configuration changes, refer to [Kraken-hub](https://github.com/chaos-kubox/krkn-hub). One use case is CI integration where you do not want to carry around different configuration files for the scenarios.
For cases where you want to run Kraken with minimal configuration changes, refer to [Kraken-hub](https://github.com/redhat-chaos/krkn-hub). One use case is CI integration where you do not want to carry around different configuration files for the scenarios.
### Setting up infrastructure dependencies
Kraken indexes the metrics specified in the profile into Elasticsearch in addition to leveraging Cerberus for understanding the health of the Kubernetes/OpenShift cluster under test. More information on the features is documented below. The infrastructure pieces can be easily installed and uninstalled by running:
@@ -56,25 +56,27 @@ Instructions on how to setup the config and the options supported can be found a
### Kubernetes/OpenShift chaos scenarios supported
Scenario type | Kubernetes | OpenShift
--------------------------- | ------------- | -------------------- |
Scenario type | Kubernetes | OpenShift
--------------------------- | ------------- |--------------------|
[Pod Scenarios](docs/pod_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Pod Network Scenarios](docs/pod_network_scenarios.md) | :x: | :heavy_check_mark: |
[Container Scenarios](docs/container_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Node Scenarios](docs/node_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Time Scenarios](docs/time_scenarios.md) | :x: | :heavy_check_mark: |
[Litmus Scenarios](docs/litmus_scenarios.md) | :x: | :heavy_check_mark: |
[Hog Scenarios: CPU, Memory](docs/arcaflow_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Cluster Shut Down Scenarios](docs/cluster_shut_down_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Namespace Scenarios](docs/namespace_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
[Zone Outage Scenarios](docs/zone_outage.md) | :heavy_check_mark: | :heavy_check_mark: |
[Application_outages](docs/application_outages.md) | :heavy_check_mark: | :heavy_check_mark: |
[PVC scenario](docs/pvc_scenario.md) | :heavy_check_mark: | :heavy_check_mark: |
[Network_Chaos](docs/network_chaos.md) | :heavy_check_mark: | :heavy_check_mark: |
[ManagedCluster Scenarios](docs/managedcluster_scenarios.md) | :heavy_check_mark: | :question: |
### Kraken scenario pass/fail criteria and report
It is important to check whether the targeted component recovered from the chaos injection and whether the Kubernetes/OpenShift cluster is healthy, as failures in one component can have an adverse impact on other components. Kraken does this by:
- Having built-in checks for pod and node based scenarios to ensure the expected number of replicas and nodes are up. It also supports running custom scripts with the checks.
- Leveraging [Cerberus](https://github.com/openshift-scale/cerberus) to monitor the cluster under test and consuming the aggregated go/no-go signal to determine pass/fail post chaos. It is highly recommended to turn on the Cerberus health check feature available in Kraken. Instructions on installing and setting up Cerberus can be found [here](https://github.com/openshift-scale/cerberus#installation) or can be installed from Kraken using the [instructions](https://github.com/chaos-kubox/krkn#setting-up-infrastructure-dependencies). Once Cerberus is up and running, set cerberus_enabled to True and cerberus_url to the url where Cerberus publishes go/no-go signal in the Kraken config file. Cerberus can monitor [application routes](https://github.com/chaos-kubox/cerberus/blob/main/docs/config.md#watch-routes) during the chaos and fails the run if it encounters downtime as it is a potential downtime in a customers, or users environment as well. It is especially important during the control plane chaos scenarios including the API server, Etcd, Ingress etc. It can be enabled by setting `check_applicaton_routes: True` in the [Kraken config](https://github.com/chaos-kubox/krkn/blob/main/config/config.yaml) provided application routes are being monitored in the [cerberus config](https://github.com/chaos-kubox/krkn/blob/main/config/cerberus.yaml).
- Leveraging [Cerberus](https://github.com/openshift-scale/cerberus) to monitor the cluster under test and consuming the aggregated go/no-go signal to determine pass/fail post chaos. It is highly recommended to turn on the Cerberus health check feature available in Kraken. Instructions on installing and setting up Cerberus can be found [here](https://github.com/openshift-scale/cerberus#installation) or it can be installed from Kraken using the [instructions](https://github.com/redhat-chaos/krkn#setting-up-infrastructure-dependencies). Once Cerberus is up and running, set cerberus_enabled to True and cerberus_url to the url where Cerberus publishes the go/no-go signal in the Kraken config file. Cerberus can monitor [application routes](https://github.com/redhat-chaos/cerberus/blob/main/docs/config.md#watch-routes) during the chaos and fails the run if it encounters downtime, as that is potential downtime in a customer's or user's environment as well. This is especially important during control plane chaos scenarios including the API server, Etcd, Ingress, etc. It can be enabled by setting `check_applicaton_routes: True` in the [Kraken config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) provided application routes are being monitored in the [cerberus config](https://github.com/redhat-chaos/krkn/blob/main/config/cerberus.yaml).
- Leveraging [kube-burner](docs/alerts.md) alerting feature to fail the runs in case of critical alerts.
### Signaling
@@ -93,23 +95,26 @@ Monitoring the Kubernetes/OpenShift cluster to observe the impact of Kraken chao
Kraken supports capturing metrics for the duration of the scenarios defined in the config and indexes them into Elasticsearch to be able to store and evaluate the state of the runs long term. The indexed metrics can be visualized with the help of Grafana. It uses [Kube-burner](https://github.com/cloud-bulldozer/kube-burner) under the hood. The metrics to capture need to be defined in a metrics profile which Kraken consumes to query prometheus (installed by default in OpenShift) with the start and end timestamp of the run. Information on enabling and leveraging this feature can be found [here](docs/metrics.md).
### Alerts
In addition to checking the recovery and health of the cluster and components under test, Kraken takes in a profile with the Prometheus expressions to validate and alerts, exits with a non-zero return code depending on the severity set. This feature can be used to determine pass/fail or alert on abnormalities observed in the cluster based on the metrics. Information on enabling and leveraging this feature can be found [here](docs/alerts.md).
### SLOs validation during and post chaos
- In addition to checking the recovery and health of the cluster and components under test, Kraken takes in a profile with Prometheus expressions to validate, raises alerts, and exits with a non-zero return code depending on the severity set. This feature can be used to determine pass/fail or alert on abnormalities observed in the cluster based on the metrics.
- Kraken also provides the ability to check if any critical alerts are firing in the cluster post chaos and passes/fails the run accordingly.
Information on enabling and leveraging this feature can be found [here](docs/SLOs_validation.md).
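A minimal sketch of the relevant `performance_monitoring` settings (values are illustrative; the keys are described in [config/config.yaml](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml)):
```
performance_monitoring:
  enable_alerts: True            # run the queries in the alert profile; exits 1 when severity=error
  alert_profile: config/alerts   # path or URL to the alert profile with the prometheus queries
  check_critical_alerts: True    # check prometheus for critical alerts firing post chaos
```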
### OCM / ACM integration
Kraken supports injecting faults into [Open Cluster Management (OCM)](https://open-cluster-management.io/) and [Red Hat Advanced Cluster Management for Kubernetes (ACM)](https://www.redhat.com/en/technologies/management/advanced-cluster-management) managed clusters through [ManagedCluster Scenarios](docs/managedcluster_scenarios.md).
### Blogs and other useful resources
- Blog post on introduction to Kraken: https://www.openshift.com/blog/introduction-to-kraken-a-chaos-tool-for-openshift/kubernetes
- Discussion and demo on how Kraken can be leveraged to ensure OpenShift is reliable, performant and scalable: https://www.youtube.com/watch?v=s1PvupI5sD0&ab_channel=OpenShift
- Blog post emphasizing the importance of making Chaos part of Performance and Scale runs to mimic the production environments: https://www.openshift.com/blog/making-chaos-part-of-kubernetes/openshift-performance-and-scalability-tests
- Blog post on findings from Chaos test runs: https://cloud.redhat.com/blog/openshift/kubernetes-chaos-stories
### Roadmap
Following is a list of enhancements that we are planning to work on adding support in Kraken. Of course any help/contributions are greatly appreciated.
- [Ability to visualize the metrics that are being captured by Kraken and stored in Elasticsearch](https://github.com/chaos-kubox/krkn/issues/124)
- Ability to shape the ingress network similar to how Kraken supports [egress traffic shaping](https://github.com/chaos-kubox/krkn/blob/main/docs/network_chaos.md) today.
- Continue to improve [Chaos Testing Guide](https://cloud-bulldozer.github.io/kraken/) in terms of adding best practices, test environment recommendations and scenarios to make sure the OpenShift platform, as well the applications running on top it, are resilient and performant under chaotic conditions.
- Support for running Kraken on Kubernetes distribution - see https://github.com/chaos-kubox/krkn/issues/185, https://github.com/chaos-kubox/krkn/issues/186
- Sweet logo for Kraken - see https://github.com/chaos-kubox/krkn/issues/195
Enhancements being planned can be found in the [roadmap](ROADMAP.md).
### Contributions

11
ROADMAP.md Normal file
View File

@@ -0,0 +1,11 @@
## Krkn Roadmap
Following is a list of enhancements that we are planning to add support for in Krkn. Of course, any help/contributions are greatly appreciated.
- [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/redhat-chaos/krkn/issues/424)
- [Centralized storage for chaos experiments artifacts](https://github.com/redhat-chaos/krkn/issues/423)
- [Support for causing DNS outages](https://github.com/redhat-chaos/krkn/issues/394)
- [Support for pod level network traffic shaping](https://github.com/redhat-chaos/krkn/issues/393)
- [Ability to visualize the metrics that are being captured by Kraken and stored in Elasticsearch](https://github.com/redhat-chaos/krkn/issues/124)
- Support for running all the scenarios of Kraken on Kubernetes distributions - see https://github.com/redhat-chaos/krkn/issues/185, https://github.com/redhat-chaos/krkn/issues/186
- Continue to improve the [Chaos Testing Guide](https://redhat-chaos.github.io/krkn) in terms of adding best practices, test environment recommendations and scenarios to make sure the OpenShift platform, as well as the applications running on top of it, are resilient and performant under chaotic conditions.

View File

@@ -8,13 +8,13 @@ orchestration_user: "{{ lookup('env', 'ORCHESTRATION_USER')|default('root', true
###############################################################################
# kube config location
kubeconfig_path: "{{ lookup('env', 'KUBECONFIG_PATH')|default('/root/.kube/config', true) }}"
kubeconfig_path: "{{ lookup('env', 'KUBECONFIG_PATH')|default('~/.kube/config', true) }}"
# kraken dir location on jump host
kraken_dir: "{{ lookup('env', 'KRAKEN_DIR')|default('/root/kraken', true) }}"
kraken_dir: "{{ lookup('env', 'KRAKEN_DIR')|default('~/kraken', true) }}"
# kraken config path location
kraken_config: "{{ lookup('env', 'KRAKEN_CONFIG')|default('/root/kraken/config/config.yaml', true) }}"
kraken_config: "{{ lookup('env', 'KRAKEN_CONFIG')|default('~/kraken/config/config.yaml', true) }}"
# kraken repository location
kraken_repository: "{{ lookup('env', 'KRAKEN_REPOSITORY')|default('https://github.com/openshift-scale/kraken.git', true) }}"

View File

@@ -1,11 +1,65 @@
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[5m:]) > 0.01
description: 5 minutes avg. etcd fsync latency on {{$labels.pod}} higher than 10ms {{$value}}
# etcd
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[10m:]) > 0.01
description: 10 minutes avg. 99th etcd fsync latency on {{$labels.pod}} higher than 10ms. {{$value}}s
severity: warning
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[10m:]) > 1
description: 10 minutes avg. 99th etcd fsync latency on {{$labels.pod}} higher than 1s. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[5m]))[5m:]) > 0.1
description: 5 minutes avg. etcd network peer round trip time on {{$labels.pod}} higher than 100ms {{$value}}
severity: info
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket[2m]))[10m:]) > 0.03
description: 10 minutes avg. 99th etcd commit latency on {{$labels.pod}} higher than 30ms. {{$value}}s
severity: warning
- expr: increase(etcd_server_leader_changes_seen_total[2m]) > 0
- expr: rate(etcd_server_leader_changes_seen_total[2m]) > 0
description: etcd leader changes observed
severity: critical
severity: warning
# API server
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"POST|PUT|DELETE|PATCH", subresource!~"log|exec|portforward|attach|proxy"}[2m])) by (le, resource, verb))[10m:]) > 1
description: 10 minutes avg. 99th mutating API call latency for {{$labels.verb}}/{{$labels.resource}} higher than 1 second. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"LIST|GET", subresource!~"log|exec|portforward|attach|proxy", scope="resource"}[2m])) by (le, resource, verb, scope))[5m:]) > 1
description: 5 minutes avg. 99th read-only API call latency for {{$labels.verb}}/{{$labels.resource}} in scope {{$labels.scope}} higher than 1 second. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"LIST|GET", subresource!~"log|exec|portforward|attach|proxy", scope="namespace"}[2m])) by (le, resource, verb, scope))[5m:]) > 5
description: 5 minutes avg. 99th read-only API call latency for {{$labels.verb}}/{{$labels.resource}} in scope {{$labels.scope}} higher than 5 seconds. {{$value}}s
severity: error
- expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"LIST|GET", subresource!~"log|exec|portforward|attach|proxy", scope="cluster"}[2m])) by (le, resource, verb, scope))[5m:]) > 30
description: 5 minutes avg. 99th read-only API call latency for {{$labels.verb}}/{{$labels.resource}} in scope {{$labels.scope}} higher than 30 seconds. {{$value}}s
severity: error
# Control plane pods
- expr: up{apiserver=~"kube-apiserver|openshift-apiserver"} == 0
description: "{{$labels.apiserver}} {{$labels.instance}} down"
severity: warning
- expr: up{namespace=~"openshift-etcd"} == 0
description: "{{$labels.namespace}}/{{$labels.pod}} down"
severity: warning
- expr: up{namespace=~"openshift-.*(kube-controller-manager|scheduler|controller-manager|sdn|ovn-kubernetes|dns)"} == 0
description: "{{$labels.namespace}}/{{$labels.pod}} down"
severity: warning
- expr: up{job=~"crio|kubelet"} == 0
description: "{{$labels.node}}/{{$labels.job}} down"
severity: warning
- expr: up{job="ovnkube-node"} == 0
description: "{{$labels.instance}}/{{$labels.pod}} {{$labels.job}} down"
severity: warning
# Service sync latency
- expr: histogram_quantile(0.99, sum(rate(kubeproxy_network_programming_duration_seconds_bucket[2m])) by (le)) > 10
description: 99th Kubeproxy network programming latency higher than 10 seconds. {{$value}}s
severity: warning
# Prometheus alerts
- expr: ALERTS{severity="critical", alertstate="firing"} > 0
description: Critical prometheus alert. {{$labels.alertname}}
severity: warning

View File

@@ -1,6 +1,6 @@
cerberus:
distribution: openshift # Distribution can be kubernetes or openshift
kubeconfig_path: /root/.kube/config # Path to kubeconfig
kubeconfig_path: ~/.kube/config # Path to kubeconfig
port: 8080 # http server port where cerberus status is published
watch_nodes: True # Set to True for the cerberus to monitor the cluster nodes
watch_cluster_operators: True # Set to True for cerberus to monitor cluster operators

View File

@@ -1,48 +1,51 @@
kraken:
distribution: openshift # Distribution can be kubernetes or openshift
kubeconfig_path: /root/.kube/config # Path to kubeconfig
kubeconfig_path: ~/.kube/config # Path to kubeconfig
exit_on_failure: False # Exit when a post action scenario fails
port: 8081
publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
litmus_version: v1.13.6 # Litmus version to install
litmus_uninstall: False # If you want to uninstall litmus if failure
litmus_uninstall_before_run: True # If you want to uninstall litmus before a new run starts
chaos_scenarios: # List of policies/chaos scenarios to load
- container_scenarios: # List of chaos pod scenarios to load
signal_address: 0.0.0.0 # Signal listening address
port: 8081 # Signal port
chaos_scenarios:
# List of policies/chaos scenarios to load
- arcaflow_scenarios:
- scenarios/arcaflow/cpu-hog/input.yaml
- scenarios/arcaflow/memory-hog/input.yaml
- application_outages:
- scenarios/openshift/app_outage.yaml
- container_scenarios: # List of chaos pod scenarios to load
- - scenarios/openshift/container_etcd.yml
- pod_scenarios:
- - scenarios/openshift/etcd.yml
- - scenarios/openshift/regex_openshift_pod_kill.yml
- scenarios/openshift/post_action_regex.py
- node_scenarios: # List of chaos node scenarios to load
- plugin_scenarios:
- scenarios/openshift/etcd.yml
- scenarios/openshift/regex_openshift_pod_kill.yml
- scenarios/openshift/vmware_node_scenarios.yml
- scenarios/openshift/network_chaos_ingress.yml
- node_scenarios: # List of chaos node scenarios to load
- scenarios/openshift/node_scenarios_example.yml
- pod_scenarios:
- - scenarios/openshift/openshift-apiserver.yml
- - scenarios/openshift/openshift-kube-apiserver.yml
- time_scenarios: # List of chaos time scenarios to load
- plugin_scenarios:
- scenarios/openshift/openshift-apiserver.yml
- scenarios/openshift/openshift-kube-apiserver.yml
- time_scenarios: # List of chaos time scenarios to load
- scenarios/openshift/time_scenarios_example.yml
- litmus_scenarios: # List of litmus scenarios to load
- litmus_scenarios: # List of litmus scenarios to load
- - scenarios/openshift/templates/litmus-rbac.yaml
- scenarios/openshift/node_cpu_hog_engine.yaml
- - scenarios/openshift/templates/litmus-rbac.yaml
- scenarios/openshift/node_mem_engine.yaml
- - scenarios/openshift/templates/litmus-rbac.yaml
- scenarios/openshift/node_io_engine.yaml
- cluster_shut_down_scenarios:
- cluster_shut_down_scenarios:
- - scenarios/openshift/cluster_shut_down_scenario.yml
- scenarios/openshift/post_action_shut_down.py
- namespace_scenarios:
- namespace_scenarios:
- - scenarios/openshift/regex_namespace.yaml
- - scenarios/openshift/ingress_namespace.yaml
- scenarios/openshift/post_action_namespace.py
- zone_outages:
- zone_outages:
- scenarios/openshift/zone_outage.yaml
- application_outages:
- scenarios/openshift/app_outage.yaml
- pvc_scenarios:
- pvc_scenarios:
- scenarios/openshift/pvc_scenario.yaml
- network_chaos:
- network_chaos:
- scenarios/openshift/network_chaos.yaml
cerberus:
@@ -53,7 +56,7 @@ cerberus:
performance_monitoring:
deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
kube_burner_binary_url: "https://github.com/cloud-bulldozer/kube-burner/releases/download/v0.9.1/kube-burner-0.9.1-Linux-x86_64.tar.gz"
kube_burner_binary_url: "https://github.com/cloud-bulldozer/kube-burner/releases/download/v1.7.0/kube-burner-1.7.0-Linux-x86_64.tar.gz"
capture_metrics: False
config_path: config/kube_burner.yaml # Define the Elasticsearch url and index name in this config
metrics_profile_path: config/metrics-aggregated.yaml
@@ -61,9 +64,26 @@ performance_monitoring:
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set
enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
alert_profile: config/alerts # Path to alert profile with the prometheus queries
alert_profile: config/alerts # Path or URL to alert profile with the prometheus queries
check_critical_alerts: False # When enabled will check prometheus for critical alerts firing post chaos
tunings:
wait_duration: 60 # Duration to wait between each chaos scenario
iterations: 1 # Number of times to execute the scenarios
daemon_mode: False # Iterations are set to infinity which means that the kraken will cause chaos forever
telemetry:
enabled: False # enable/disables the telemetry collection feature
api_url: https://ulnmf9xv7j.execute-api.us-west-2.amazonaws.com/production #telemetry service endpoint
username: username # telemetry service username
password: password # telemetry service password
prometheus_backup: True # enables/disables prometheus data collection
full_prometheus_backup: False # if set to False only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
archive_path: /tmp # local path where the archive files will be temporarily stored
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 10000 # the size of each prometheus data archive file in KB. The lower the archive size,
# the higher the number of archive files that will be produced and uploaded (and processed by backup_threads
# simultaneously).
# For unstable/slow connections it is better to keep this value low and increase the number of
# backup_threads; this way, on upload failure, the retry will happen only on the
# failed chunk without affecting the whole upload.

40
config/config_kind.yaml Normal file
View File

@@ -0,0 +1,40 @@
kraken:
distribution: kubernetes # Distribution can be kubernetes or openshift
kubeconfig_path: ~/.kube/config # Path to kubeconfig
exit_on_failure: False # Exit when a post action scenario fails
port: 8081
publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
signal_address: 0.0.0.0 # Signal listening address
litmus_install: True # Installs specified version, set to False if it's already setup
litmus_version: v1.13.6 # Litmus version to install
litmus_uninstall: False # If you want to uninstall litmus if failure
litmus_uninstall_before_run: True # If you want to uninstall litmus before a new run starts
chaos_scenarios: # List of policies/chaos scenarios to load
- plugin_scenarios:
- scenarios/kind/scheduler.yml
- node_scenarios:
- scenarios/kind/node_scenarios_example.yml
cerberus:
cerberus_enabled: False # Enable it when cerberus is previously installed
cerberus_url: # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal
check_applicaton_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
performance_monitoring:
deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
kube_burner_binary_url: "https://github.com/cloud-bulldozer/kube-burner/releases/download/v0.9.1/kube-burner-0.9.1-Linux-x86_64.tar.gz"
capture_metrics: False
config_path: config/kube_burner.yaml # Define the Elasticsearch url and index name in this config
metrics_profile_path: config/metrics-aggregated.yaml
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set
enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
alert_profile: config/alerts # Path to alert profile with the prometheus queries
tunings:
wait_duration: 60 # Duration to wait between each chaos scenario
iterations: 1 # Number of times to execute the scenarios
daemon_mode: False # Iterations are set to infinity which means that the kraken will cause chaos forever

View File

@@ -1,18 +1,19 @@
kraken:
distribution: kubernetes # Distribution can be kubernetes or openshift
kubeconfig_path: /root/.kube/config # Path to kubeconfig
distribution: kubernetes # Distribution can be kubernetes or openshift
kubeconfig_path: ~/.kube/config # Path to kubeconfig
exit_on_failure: False # Exit when a post action scenario fails
port: 8081
publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
litmus_install: True # Installs specified version, set to False if it's already setup
litmus_version: v1.13.6 # Litmus version to install
litmus_uninstall: False # If you want to uninstall litmus if failure
litmus_uninstall_before_run: True # If you want to uninstall litmus before a new run starts
chaos_scenarios: # List of policies/chaos scenarios to load
- container_scenarios: # List of chaos pod scenarios to load
- - scenarios/kube/container_dns.yml
- pod_scenarios:
- - scenarios/kube/scheduler.yml
- plugin_scenarios:
- scenarios/kube/scheduler.yml
cerberus:
cerberus_enabled: False # Enable it when cerberus is previously installed
@@ -31,7 +32,7 @@ performance_monitoring:
uuid: # uuid for the run is generated by default if not set
enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
alert_profile: config/alerts # Path to alert profile with the prometheus queries
check_critical_alerts: False # When enabled will check prometheus for critical alerts firing post chaos after soak time for the cluster to settle down
tunings:
wait_duration: 60 # Duration to wait between each chaos scenario
iterations: 1 # Number of times to execute the scenarios

View File

@@ -1,23 +1,23 @@
kraken:
distribution: openshift # Distribution can be kubernetes or openshift
kubeconfig_path: /root/.kube/config # Path to kubeconfig
kubeconfig_path: ~/.kube/config # Path to kubeconfig
exit_on_failure: False # Exit when a post action scenario fails
port: 8081
publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
signal_address: 0.0.0.0 # Signal listening address
port: 8081 # Signal port
litmus_version: v1.13.6 # Litmus version to install
litmus_uninstall: False # If you want to uninstall litmus if failure
litmus_uninstall_before_run: True # If you want to uninstall litmus before a new run starts
chaos_scenarios: # List of policies/chaos scenarios to load
- pod_scenarios: # List of chaos pod scenarios to load
- - scenarios/openshift/etcd.yml
- - scenarios/openshift/regex_openshift_pod_kill.yml
- scenarios/openshift/post_action_regex.py
- plugin_scenarios: # List of chaos pod scenarios to load
- scenarios/openshift/etcd.yml
- scenarios/openshift/regex_openshift_pod_kill.yml
- node_scenarios: # List of chaos node scenarios to load
- scenarios/openshift/node_scenarios_example.yml
- pod_scenarios:
- - scenarios/openshift/openshift-apiserver.yml
- - scenarios/openshift/openshift-kube-apiserver.yml
- plugin_scenarios:
- scenarios/openshift/openshift-apiserver.yml
- scenarios/openshift/openshift-kube-apiserver.yml
- time_scenarios: # List of chaos time scenarios to load
- scenarios/openshift/time_scenarios_example.yml
- litmus_scenarios: # List of litmus scenarios to load

View File

@@ -1,28 +1,30 @@
# Dockerfile for kraken
FROM quay.io/openshift/origin-tests:latest as origintests
FROM mcr.microsoft.com/azure-cli:latest as azure-cli
FROM quay.io/centos/centos:stream9
FROM registry.access.redhat.com/ubi8/ubi:latest
LABEL org.opencontainers.image.authors="Red Hat OpenShift Chaos Engineering"
ENV KUBECONFIG /root/.kube/config
# Copy OpenShift CLI, Kubernetes CLI from origin-tests image
COPY --from=origintests /usr/bin/oc /usr/bin/oc
COPY --from=origintests /usr/bin/kubectl /usr/bin/kubectl
# Copy azure client binary from azure-cli image
COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
# Install dependencies
RUN yum install epel-release -y && \
yum install -y git python python3-pip jq gettext && \
python3 -m pip install -U pip && \
rpm --import https://packages.microsoft.com/keys/microsoft.asc && \
echo -e "[azure-cli]\nname=Azure CLI\nbaseurl=https://packages.microsoft.com/yumrepos/azure-cli\nenabled=1\ngpgcheck=1\ngpgkey=https://packages.microsoft.com/keys/microsoft.asc" > /etc/yum.repos.d/azure-cli.repo && yum install -y azure-cli && \
git clone https://github.com/openshift-scale/kraken /root/kraken && \
RUN yum install -y git python39 python3-pip jq gettext wget && \
python3.9 -m pip install -U pip && \
git clone https://github.com/redhat-chaos/krkn.git --branch v1.4.3 /root/kraken && \
mkdir -p /root/.kube && cd /root/kraken && \
pip3 install -r requirements.txt
pip3.9 install -r requirements.txt && \
pip3.9 install virtualenv && \
wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && chmod +x /usr/bin/yq
# Get Kubernetes and OpenShift clients from stable releases
WORKDIR /tmp
RUN wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz && tar -xvf openshift-client-linux.tar.gz && cp oc /usr/local/bin/oc && cp kubectl /usr/local/bin/kubectl
WORKDIR /root/kraken
ENTRYPOINT ["python3", "run_kraken.py"]
ENTRYPOINT ["python3.9", "run_kraken.py"]
CMD ["--config=config/config.yaml"]

View File

@@ -2,24 +2,28 @@
FROM ppc64le/centos:8
MAINTAINER Red Hat OpenShift Performance and Scale
FROM mcr.microsoft.com/azure-cli:latest as azure-cli
LABEL org.opencontainers.image.authors="Red Hat OpenShift Chaos Engineering"
ENV KUBECONFIG /root/.kube/config
RUN curl -L -o kubernetes-client-linux-ppc64le.tar.gz https://dl.k8s.io/v1.19.0/kubernetes-client-linux-ppc64le.tar.gz \
&& tar xf kubernetes-client-linux-ppc64le.tar.gz && mv kubernetes/client/bin/kubectl /usr/bin/ && rm -rf kubernetes-client-linux-ppc64le.tar.gz
RUN curl -L -o openshift-client-linux.tar.gz https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable/openshift-client-linux.tar.gz \
&& tar xf openshift-client-linux.tar.gz -C /usr/bin && rm -rf openshift-client-linux.tar.gz
# Copy azure client binary from azure-cli image
COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
# Install dependencies
RUN yum install epel-release -y && \
yum install -y git python36 python3-pip gcc libffi-devel python36-devel openssl-devel gcc-c++ make jq gettext && \
git clone https://github.com/cloud-bulldozer/kraken /root/kraken && \
mkdir -p /root/.kube && cd /root/kraken && \
pip3 install cryptography==3.3.2 && \
pip3 install -r requirements.txt setuptools==40.3.0 urllib3==1.25.4
RUN yum install -y git python39 python3-pip jq gettext wget && \
python3.9 -m pip install -U pip && \
git clone https://github.com/redhat-chaos/krkn.git --branch v1.4.3 /root/kraken && \
mkdir -p /root/.kube && cd /root/kraken && \
pip3.9 install -r requirements.txt && \
pip3.9 install virtualenv && \
wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && chmod +x /usr/bin/yq
# Get Kubernetes and OpenShift clients from stable releases
WORKDIR /tmp
RUN wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz && tar -xvf openshift-client-linux.tar.gz && cp oc /usr/local/bin/oc && cp kubectl /usr/local/bin/kubectl
WORKDIR /root/kraken
ENTRYPOINT python3 run_kraken.py --config=config/config.yaml
ENTRYPOINT python3.9 run_kraken.py --config=config/config.yaml

View File

@@ -1,28 +1,53 @@
### Kraken image
Container image gets automatically built by quay.io at [Kraken image](https://quay.io/chaos-kubox/krkn).
Container image gets automatically built by quay.io at [Kraken image](https://quay.io/redhat-chaos/krkn).
### Run containerized version
Refer [instructions](https://github.com/chaos-kubox/krkn/blob/main/docs/installation.md#run-containerized-version) for information on how to run the containerized version of kraken.
Refer [instructions](https://github.com/redhat-chaos/krkn/blob/main/docs/installation.md#run-containerized-version) for information on how to run the containerized version of kraken.
### Run Custom Kraken Image
Refer to [instructions](https://github.com/chaos-kubox/krkn/blob/main/containers/build_own_image-README.md) for information on how to run a custom containerized version of kraken using podman.
Refer to [instructions](https://github.com/redhat-chaos/krkn/blob/main/containers/build_own_image-README.md) for information on how to run a custom containerized version of kraken using podman.
### Kraken as a KubeApp
#### GENERAL NOTES:
- It is not generally recommended to run Kraken internal to the cluster, as the pod running Kraken might get disrupted; the suggested use case for running kraken from inside k8s/OpenShift is to target **another** cluster (e.g. to bypass network restrictions or to leverage the cluster's computational resources)
- your kubeconfig might contain several cluster contexts and credentials, so be sure, before creating the ConfigMap, to keep **only** the credentials related to the destination cluster. Please refer to the [Kubernetes documentation](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) for more details
- to add privileges to the service account you must be logged in to the cluster with a highly privileged account (ideally kubeadmin)
To run containerized Kraken as a Kubernetes/OpenShift Deployment, follow these steps:
1. Configure the [config.yaml](https://github.com/chaos-kubox/krkn/blob/main/config/config.yaml) file according to your requirements.
1. Configure the [config.yaml](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) file according to your requirements.
**NOTE**: both scenario ConfigMaps are needed regardless of whether you're running kraken on Kubernetes or OpenShift
2. Create a namespace under which you want to run the kraken pod using `kubectl create ns <namespace>`.
3. Switch to `<namespace>` namespace:
- In Kubernetes, use `kubectl config set-context --current --namespace=<namespace>`
- In OpenShift, use `oc project <namespace>`
4. Create a ConfigMap named kube-config using `kubectl create configmap kube-config --from-file=<path_to_kubeconfig>`
5. Create a ConfigMap named kraken-config using `kubectl create configmap kraken-config --from-file=<path_to_kraken_config>`
6. Create a ConfigMap named scenarios-config using `kubectl create configmap scenarios-config --from-file=<path_to_scenarios_folder>`
7. Create a service account to run the kraken pod `kubectl create serviceaccount useroot`.
8. In Openshift, add privileges to service account and execute `oc adm policy add-scc-to-user privileged -z useroot`.
9. Create a Job using `kubectl apply -f kraken.yml` and monitor the status using `oc get jobs` and `oc get pods`.
NOTE: It is not recommended to run Kraken internal to the cluster as the pod which is running Kraken might get disrupted.
- In Kubernetes, use `kubectl config set-context --current --namespace=<namespace>`
- In OpenShift, use `oc project <namespace>`
4. Create a ConfigMap named kube-config using `kubectl create configmap kube-config --from-file=<path_to_kubeconfig>` *(eg. ~/.kube/config)*
5. Create a ConfigMap named kraken-config using `kubectl create configmap kraken-config --from-file=<path_to_kraken>/config`
6. Create a ConfigMap named scenarios-config using `kubectl create configmap scenarios-config --from-file=<path_to_kraken>/scenarios`
7. Create a ConfigMap named scenarios-openshift-config using `kubectl create configmap scenarios-openshift-config --from-file=<path_to_kraken>/scenarios/openshift`
8. Create a ConfigMap named scenarios-kube-config using `kubectl create configmap scenarios-kube-config --from-file=<path_to_kraken>/scenarios/kube`
9. Create a service account to run the kraken pod `kubectl create serviceaccount useroot`.
10. In Openshift, add privileges to service account and execute `oc adm policy add-scc-to-user privileged -z useroot`.
11. Create a Job using `kubectl apply -f <path_to_kraken>/containers/kraken.yml` and monitor the status using `oc get jobs` and `oc get pods`.

View File

@@ -16,9 +16,9 @@ spec:
- name: kraken
securityContext:
privileged: true
image: quay.io/chaos-kubox/krkn
image: quay.io/redhat-chaos/krkn
command: ["/bin/sh", "-c"]
args: ["python3 run_kraken.py -c config/config.yaml"]
args: ["python3.9 run_kraken.py -c config/config.yaml"]
volumeMounts:
- mountPath: "/root/.kube"
name: config
@@ -26,6 +26,10 @@ spec:
name: kraken-config
- mountPath: "/root/kraken/scenarios"
name: scenarios-config
- mountPath: "/root/kraken/scenarios/openshift"
name: scenarios-openshift-config
- mountPath: "/root/kraken/scenarios/kube"
name: scenarios-kube-config
restartPolicy: Never
volumes:
- name: config
@@ -37,3 +41,9 @@ spec:
- name: scenarios-config
configMap:
name: scenarios-config
- name: scenarios-openshift-config
configMap:
name: scenarios-openshift-config
- name: scenarios-kube-config
configMap:
name: scenarios-kube-config

View File

@@ -1,6 +1,17 @@
## Alerts
## SLOs validation
Pass/fail based on metrics captured from the cluster is important in addition to checking the health status and recovery. Kraken supports alerting based on the queries defined by the user and modifies the return code of the run to determine pass/fail. It's especially useful in case of automated runs in CI where user won't be able to monitor the system. It uses [Kube-burner](https://kube-burner.readthedocs.io/en/latest/) under the hood. This feature can be enabled in the [config](https://github.com/chaos-kubox/krkn/blob/main/config/config.yaml) by setting the following:
Pass/fail based on metrics captured from the cluster is important in addition to checking the health status and recovery. Kraken supports:
### Checking for critical alerts post chaos
If enabled, the check runs at the end of each scenario (post chaos) and Kraken exits in case critical alerts are firing, to allow the user to debug. You can enable it in the config:
```
performance_monitoring:
check_critical_alerts: False # When enabled will check prometheus for critical alerts firing post chaos
```
### Validation and alerting based on the queries defined by the user during chaos
Takes PromQL queries as input and modifies the return code of the run to determine pass/fail. It's especially useful in case of automated runs in CI where the user won't be able to monitor the system. It uses [Kube-burner](https://kube-burner.readthedocs.io/en/latest/) under the hood. This feature can be enabled in the [config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) by setting the following:
```
performance_monitoring:
@@ -11,8 +22,8 @@ performance_monitoring:
alert_profile: config/alerts # Path to alert profile with the prometheus queries.
```
### Alert profile
A couple of [alert profiles](https://github.com/chaos-kubox/krkn/tree/main/config) [alerts](https://github.com/chaos-kubox/krkn/blob/main/config/alerts) are shipped by default and can be tweaked to add more queries to alert on. The following are a few alerts examples:
#### Alert profile
A couple of [alert profiles](https://github.com/redhat-chaos/krkn/tree/main/config), such as [alerts](https://github.com/redhat-chaos/krkn/blob/main/config/alerts), are shipped by default and can be tweaked to add more queries to alert on. The user can provide a URL or a path to the file in the [config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml). The following are a few alerts examples:
```
- expr: avg_over_time(histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[2m]))[5m:]) > 0.01

View File

@@ -0,0 +1,67 @@
## Arcaflow Scenarios
Arcaflow is a workflow engine in development which provides the ability to execute workflow steps in sequence, in parallel, repeatedly, etc. The main difference from competitors such as Netflix Conductor is the ability to run ad-hoc workflows without requiring an infrastructure setup.
The engine uses containers to execute plugins and runs them either locally in Docker/Podman or remotely on a Kubernetes cluster. The workflow system is strongly typed and allows for generating JSON schema and OpenAPI documents for all data formats involved.
### Available Scenarios
#### Hog scenarios:
- [CPU Hog](arcaflow_scenarios/cpu_hog.md)
- [Memory Hog](arcaflow_scenarios/memory_hog.md)
### Prerequisites
Arcaflow supports three deployment technologies:
- Docker
- Podman
- Kubernetes
#### Docker
In order to run Arcaflow Scenarios with the Docker deployer, be sure that:
- Docker is correctly installed in your Operating System (to find instructions on how to install docker please refer to [Docker Documentation](https://www.docker.com/))
- The Docker daemon is running
#### Podman
The podman deployer is built around the podman CLI and doesn't necessarily need to run alongside the podman daemon.
To run Arcaflow Scenarios in your operating system be sure that:
- podman is correctly installed in your Operating System (to find instructions on how to install podman refer to [Podman Documentation](https://podman.io/))
- the podman CLI is in your shell PATH
#### Kubernetes
The kubernetes deployer integrates directly with the Kubernetes API client and only needs a valid kubeconfig file and a reachable Kubernetes/OpenShift cluster.
### Usage
To enable arcaflow scenarios, edit the kraken config file, go to the `kraken -> chaos_scenarios` section of the yaml structure, add a new element to the list named `arcaflow_scenarios`, and then add the desired scenario pointing to its `input.yaml` file.
```
kraken:
...
chaos_scenarios:
- arcaflow_scenarios:
- scenarios/arcaflow/cpu-hog/input.yaml
```
#### input.yaml
The implemented scenarios can be found in the *scenarios/arcaflow/<scenario_name>* folder.
The entrypoint of each scenario is the *input.yaml* file.
This file contains all the options to set up the scenario according to the desired target.
### config.yaml
The arcaflow config file. Here you can set the arcaflow deployer and the arcaflow log level.
The supported deployers are:
- Docker
- Podman (podman daemon not needed, suggested option)
- Kubernetes
The supported log levels are:
- debug
- info
- warning
- error
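As a rough sketch, a Podman-based arcaflow `config.yaml` could look like the following. Note that the key names here are an assumption and may vary between Arcaflow releases; verify them against the [Arcaflow Documentation](https://arcalot.io/arcaflow/):
```
# hypothetical sketch -- verify the exact schema for your Arcaflow version
deployer:
  type: podman   # one of: docker, podman, kubernetes
log:
  level: info    # one of: debug, info, warning, error
```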
### workflow.yaml
This file contains the steps that will be executed to perform the scenario against the target.
Each step is represented by a container that will be executed by the deployer, along with its options.
Note that we provide the scenarios as templates, but they can be manipulated to define more complex workflows.
For more details regarding the arcaflow workflow architecture and syntax, refer to the [Arcaflow Documentation](https://arcalot.io/arcaflow/).

View File

@@ -0,0 +1,19 @@
# CPU Hog
This scenario is based on the arcaflow [arcaflow-plugin-stressng](https://github.com/arcalot/arcaflow-plugin-stressng) plugin.
The purpose of this scenario is to create cpu pressure on a particular node of the Kubernetes/OpenShift cluster for a time span.
To enable this plugin add the pointer to the scenario input file `scenarios/arcaflow/cpu-hog/input.yaml` as described in the
Usage section.
This scenario takes a list of objects named `input_list` with the following properties:
- **kubeconfig :** *string* the kubeconfig needed by the deployer to deploy the stress-ng plugin in the target cluster
- **namespace :** *string* the namespace where the scenario container will be deployed
**Note:** this parameter will be automatically filled by kraken if the `kubeconfig_path` property is correctly set
- **node_selector :** *key-value map* the node label that will be used as `nodeSelector` by the pod to target a specific cluster node
- **duration :** *string* stop stress test after N seconds. One can also specify the units of time in seconds, minutes, hours, days or years with the suffix s, m, h, d or y.
- **cpu_count :** *int* the number of CPU cores to be used (0 means all)
- **cpu_method :** *string* a fine-grained control of which cpu stressors to use (ackermann, cfloat etc. see [manpage](https://manpages.org/sysbench) for all the cpu_method options)
- **cpu_load_percentage :** *int* the CPU load by percentage
To perform several load tests in the same run simultaneously (e.g. stress two or more nodes in the same run), add another item
to the `input_list` with the same properties (and possibly different values, e.g. different node_selectors
to schedule the pods on different nodes). To reduce (or increase) the parallelism, change the `parallelism` value in the `workflow.yaml` file.
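Putting the properties above together, a hypothetical `input.yaml` for this scenario might look like this (values are illustrative; `kubeconfig` is omitted since kraken fills it automatically when `kubeconfig_path` is set):
```
input_list:
  - namespace: default
    node_selector:
      kubernetes.io/hostname: worker-0   # illustrative node label
    duration: 120s
    cpu_count: 0              # 0 means all cores
    cpu_method: ackermann     # one of the cpu stressors listed above
    cpu_load_percentage: 80
```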

View File

@@ -0,0 +1,18 @@
# Memory Hog
This scenario is based on the arcaflow [arcaflow-plugin-stressng](https://github.com/arcalot/arcaflow-plugin-stressng) plugin.
The purpose of this scenario is to create Virtual Memory pressure on a particular node of the Kubernetes/OpenShift cluster for a time span.
To enable this plugin add the pointer to the scenario input file `scenarios/arcaflow/memory-hog/input.yaml` as described in the
Usage section.
This scenario takes a list of objects named `input_list` with the following properties:
- **kubeconfig :** *string* the kubeconfig needed by the deployer to deploy the stress-ng plugin in the target cluster
- **namespace :** *string* the namespace where the scenario container will be deployed
**Note:** this parameter will be automatically filled by kraken if the `kubeconfig_path` property is correctly set
- **node_selector :** *key-value map* the node label that will be used as `nodeSelector` by the pod to target a specific cluster node
- **duration :** *string* stop stress test after N seconds. One can also specify the units of time in seconds, minutes, hours, days or years with the suffix s, m, h, d or y.
- **vm_bytes :** *string* N bytes per vm process or percentage of memory used (using the % symbol). The size can be expressed in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g.
- **vm_workers :** *int* Number of VM stressors to be run (0 means 1 stressor per CPU)
To perform several load tests in the same run simultaneously (e.g. stress two or more nodes in the same run), add another item
to the `input_list` with the same properties (and possibly different values, e.g. different node_selectors
to schedule the pods on different nodes). To reduce (or increase) the parallelism, change the `parallelism` value in the `workflow.yaml` file.
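Analogously to the CPU hog, a hypothetical `input.yaml` entry might look like this (values are illustrative):
```
input_list:
  - namespace: default
    node_selector:
      kubernetes.io/hostname: worker-0   # illustrative node label
    duration: 120s
    vm_bytes: 80%     # use 80% of the available memory
    vm_workers: 2     # number of VM stressors
```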

View File

@@ -1,10 +1,12 @@
Supported Cloud Providers:
* [AWS](#aws)
* [GCP](#gcp)
* [Openstack](#openstack)
* [Azure](#azure)
* [Alibaba](#alibaba)
- [AWS](#aws)
- [GCP](#gcp)
- [Openstack](#openstack)
- [Azure](#azure)
- [Alibaba](#alibaba)
- [VMware](#vmware)
- [IBMCloud](#ibmcloud)
## AWS
@@ -53,3 +55,35 @@ See the [Installation guide](https://www.alibabacloud.com/help/en/alibaba-cloud-
Refer to [region and zone page](https://www.alibabacloud.com/help/en/elastic-compute-service/latest/regions-and-zones#concept-2459516) to get the region id for the region you are running on.
Set cloud_type to either alibaba or alicloud in your node scenario yaml file.
## VMware
Set the following environment variables
1. ```export VSPHERE_IP=<vSphere_client_IP_address>```
2. ```export VSPHERE_USERNAME=<vSphere_client_username>```
3. ```export VSPHERE_PASSWORD=<vSphere_client_password>```
These are the credentials that you would normally use to access the vSphere client.
## IBMCloud
If no API key is set up with proper VPC resource permissions, use the following to create:
* Access group
* Service id with the following access
* With policy **VPC Infrastructure Services**
* Resources = All
* Roles:
* Editor
* Administrator
* Operator
* Viewer
* API Key
Set the following environment variables
1. ```export IBMC_URL=https://<region>.iaas.cloud.ibm.com/v1```
2. ```export IBMC_APIKEY=<ibmcloud_api_key>```

View File

@@ -1,5 +1,5 @@
#### Kubernetes/OpenShift cluster shut down scenario
Scenario to shut down all the nodes including the masters and restart them after specified duration. Cluster shut down scenario can be injected by placing the shut_down config file under cluster_shut_down_scenario option in the kraken config. Refer to [cluster_shut_down_scenario](https://github.com/chaos-kubox/krkn/blob/main/scenarios/cluster_shut_down_scenario.yml) config file.
Scenario to shut down all the nodes including the masters and restart them after specified duration. Cluster shut down scenario can be injected by placing the shut_down config file under cluster_shut_down_scenario option in the kraken config. Refer to [cluster_shut_down_scenario](https://github.com/redhat-chaos/krkn/blob/main/scenarios/cluster_shut_down_scenario.yml) config file.
Refer to [cloud setup](cloud_setup.md) to configure your cli properly for the cloud provider of the cluster you want to shut down.

View File

@@ -1,4 +1,65 @@
### Config
Set the scenarios to inject and the tunings like duration to wait between each scenario in the config file located at [config/config.yaml](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml).
**NOTE**: [config](https://github.com/redhat-chaos/krkn/blob/main/config/config_performance.yaml) can be used if leveraging the [automated way](https://github.com/redhat-chaos/krkn#setting-up-infrastructure-dependencies) to install the infrastructure pieces.
Config components:
* [Kraken](#kraken)
* [Cerberus](#cerberus)
* [Performance Monitoring](#performance-monitoring)
* [Tunings](#tunings)
# Kraken
This section defines the scenarios and the data specific to the chaos run
## Distribution
Either **openshift** or **kubernetes** depending on the type of cluster you want to run chaos on.
The prometheus url/route and bearer token are automatically obtained in the case of OpenShift; please set them when the distribution is Kubernetes.
## Exit on failure
**exit_on_failure**: Exit when a post action check or cerberus run fails
## Publish kraken status
**publish_kraken_status**: Can be accessed at http://0.0.0.0:8081 (or whatever signal_address and port you set in the signal address section)
**signal_state**: State you want kraken to start at. When set to PAUSE, kraken will wait for the RUN signal before running a chaos iteration; refer to [signal.md](signal.md) for more details
## Signal Address
**signal_address**: Address to listen/post the signal state to
**port**: port to listen/post the signal state to
## Chaos Scenarios
**chaos_scenarios**: List of different types of chaos scenarios you want to run with paths to their specific yaml file configurations
If a scenario has a post action check script, it will be run before and after each scenario to validate that the component under test starts and ends in the same state
Currently the scenarios are run one after another (in sequence), and the run will exit if one of the scenarios fails, without moving on to the next one
Chaos scenario types:
- container_scenarios
- plugin_scenarios
- node_scenarios
- time_scenarios
- cluster_shut_down_scenarios
- namespace_scenarios
- zone_outages
- application_outages
- pvc_scenarios
- network_chaos
# Cerberus
Parameters to set for enabling cerberus checks at the end of each executed scenario. The given url will be pinged after the scenario and post action check have been completed for each scenario and iteration.
**cerberus_enabled**: Enable it when cerberus is already installed
**cerberus_url**: When cerberus_enabled is set to True, provide the url where cerberus publishes the go/no-go signal
**check_applicaton_routes**: When enabled, kraken will look for application unavailability using the routes specified in the cerberus config and fail the run
# Performance Monitoring
There are 2 main sections defined in this part of the config, [metrics](metrics.md) and [alerts](alerts.md); read more about each of these configurations in their respective docs
# Tunings
**wait_duration**: Duration to wait between each chaos scenario
**iterations**: Number of times to execute the scenarios
**daemon_mode**: True or False; if True, iterations are set to infinity, meaning kraken will cause chaos forever and the number of iterations is ignored
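Putting the sections above together, a minimal config sketch could look as follows; the values are illustrative, not defaults:
```yaml
kraken:
  distribution: openshift                # openshift or kubernetes
  exit_on_failure: False                 # exit when a post action check or cerberus run fails
  publish_kraken_status: True            # serve the kraken status signal
  signal_state: RUN                      # or PAUSE to wait for the RUN signal
  signal_address: 0.0.0.0
  port: 8081
  chaos_scenarios:                       # run in sequence
    - plugin_scenarios:
        - scenarios/openshift/etcd.yml
cerberus:
  cerberus_enabled: False                # set to True when cerberus is installed
  cerberus_url:                          # url where cerberus publishes the go/no-go signal
  check_applicaton_routes: False
performance_monitoring:
  capture_metrics: False                 # assumed key; see metrics.md and alerts.md for the full section
tunings:
  wait_duration: 60                      # seconds to wait between each scenario
  iterations: 1
  daemon_mode: False
```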

View File

@@ -14,7 +14,9 @@ scenarios:
container_name: "<specific container name>" # This is optional, can take out and will kill all containers in all pods found under namespace and label
pod_names: # This is optional, can take out and will select all pods with given namespace and label
- <pod_name>
count: <number of containers to disrupt, default=1>
action: <Action to run. For example kill 1 (SIGHUP) or kill 9 (SIGKILL). Default is set to kill 1>
expected_recovery_time: <number of seconds to wait for the container to be running again> (defaults to 120 seconds)
```
#### Post Action
@@ -23,7 +25,7 @@ In all scenarios we do a post chaos check to wait and verify the specific compon
Here there are two options:
1. Pass a custom script in the main config scenario list that will run before the chaos and verify the output matches post chaos scenario.
See [scenarios/post_action_etcd_container.py](https://github.com/redhat-chaos/krkn/blob/main/scenarios/post_action_etcd_container.py) for an example.
```
- container_scenarios: # List of chaos pod scenarios to load.
    - scenarios/container_etcd.yml
@@ -34,5 +36,5 @@ See [scenarios/post_action_etcd_container.py](https://github.com/chaos-kubox/krk
containers that were killed as well as the namespaces and pods to verify all containers that were affected recover properly.
```
expected_recovery_time: <seconds to wait for container to recover>
```
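As a sketch of option 1, a hypothetical post action script could simply print the state that kraken should compare before and after the chaos; the namespace here is a placeholder:
```bash
#!/bin/bash
# hypothetical post action check: list the running etcd pods so the output
# captured before the chaos can be compared with the output captured after it
kubectl get pods -n openshift-etcd -o name
```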

View File

@@ -1,52 +1,26 @@
## Getting Started Running Chaos Scenarios
#### Adding New Scenarios
Adding a new scenario is as simple as adding a new config file under [scenarios directory](https://github.com/redhat-chaos/krkn/tree/main/scenarios) and defining it in the main kraken [config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml#L8).
You can either copy an existing yaml file and make it your own, or fill in one of the templates below to suit your needs.
### Templates
#### Pod Scenario Yaml Template
For example, for adding a pod level scenario for a new application, refer to the sample scenario below to know what fields are necessary and what to add in each location:
```
# yaml-language-server: $schema=../plugin.schema.json
- id: kill-pods
config:
namespace_pattern: ^<namespace>$
label_selector: <pod label>
kill: <number of pods to kill>
- id: wait-for-pods
config:
namespace_pattern: ^<namespace>$
label_selector: <pod label>
count: <expected number of pods that match namespace and label>
```
#### Node Scenario Yaml Template
```

View File

@@ -10,7 +10,9 @@
* [Cluster recovery checks, metrics evaluation and pass/fail criteria](#cluster-recovery-checks-metrics-evaluation-and-passfail-criteria)
* [Scenarios](#scenarios)
* [Test Environment Recommendations - how and where to run chaos tests](#test-environment-recommendations---how-and-where-to-run-chaos-tests)
* [Chaos testing in Practice](#chaos-testing-in-practice)
* [OpenShift organization](#openshift-organization)
* [startx-lab](#startx-lab)
### Introduction
@@ -90,18 +92,18 @@ We want to look at this in terms of CPU, Memory, Disk, Throughput, Network etc.
### Tooling
Now that we have looked at the best practices, in this section we will go through how [Kraken](https://github.com/redhat-chaos/krkn) - a chaos testing framework - can help test the resilience of OpenShift and make sure the applications and services follow the best practices.
#### Workflow
Let us start by understanding the workflow of kraken: the user starts by running kraken, pointing it at a specific OpenShift cluster using a kubeconfig so it can talk to the platform on top of which the OpenShift cluster is hosted. This can be done through either the oc/kubectl API or the cloud API. Based on its configuration, kraken injects specific chaos scenarios as shown below, talks to [Cerberus](https://github.com/redhat-chaos/cerberus) to get the go/no-go signal representing the overall health of the cluster ( optional - can be turned off ), scrapes metrics from in-cluster prometheus given a metrics profile with the promql queries and stores them long term in the configured Elasticsearch ( optional - can be turned off ), evaluates the promql expressions specified in the alerts profile ( optional - can be turned off ) and aggregates everything to set the pass/fail, i.e. exits 0 or 1. More about the metrics collection, cerberus and metrics evaluation can be found in the next section.
![Kraken workflow](../media/kraken-workflow.png)
#### Cluster recovery checks, metrics evaluation and pass/fail criteria
- Most of the scenarios have built-in checks to verify that the targeted component recovered from the failure after the specified duration of time, but there might be cases where other components are impacted by a certain failure, and it's extremely important to make sure that the system/application is healthy as a whole post chaos. This is exactly where [Cerberus](https://github.com/redhat-chaos/cerberus) comes to the rescue.
If the monitoring tool cerberus is enabled, it will consume the signal and decide whether to continue running chaos based on that signal.
- Apart from checking the recovery and cluster health status, it's equally important to evaluate performance metrics like latency, resource usage spikes, throughput, and etcd health (disk fsync, leader elections etc.). To help with this, Kraken has a way to evaluate promql expressions from the in-cluster prometheus and set the exit status to 0 or 1 based on the severity set for each of the queries. Details on how to use this feature can be found [here](https://github.com/redhat-chaos/krkn#alerts).
- The overall pass or fail of kraken is based on the recovery of the specific component (within a certain amount of time), the cerberus health signal which tracks the health of the entire cluster and metrics evaluation from incluster prometheus.
@@ -112,17 +114,17 @@ If the monitoring tool, cerberus is enabled it will consume the signal and conti
Let us take a look at how to run the chaos scenarios on your OpenShift clusters using Kraken-hub - a lightweight wrapper around Kraken to ease the runs by providing the ability to run them by just running container images using podman with parameters set as environment variables. This eliminates the need to carry around and edit configuration files and makes it easy for any CI framework integration. Here are the scenarios supported:
- Pod Scenarios ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/pod-scenarios.md))
- Disrupts OpenShift/Kubernetes and applications deployed as pods:
- Helps understand the availability of the application, the initialization timing and recovery status.
- [Demo](https://asciinema.org/a/452351?speed=3&theme=solarized-dark)
- Container Scenarios ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/container-scenarios.md))
- Disrupts OpenShift/Kubernetes and applications deployed as containers running as part of a pod(s) using a specified kill signal to mimic failures:
- Helps understand the impact and recovery timing when the program/process running in the containers are disrupted - hangs, paused, killed etc., using various kill signals, i.e. SIGHUP, SIGTERM, SIGKILL etc.
- [Demo](https://asciinema.org/a/BXqs9JSGDSEKcydTIJ5LpPZBM?speed=3&theme=solarized-dark)
- Node Scenarios ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/node-scenarios.md))
- Disrupts nodes as part of the cluster infrastructure by talking to the cloud API. AWS, Azure, GCP, OpenStack and Baremetal are the supported platforms as of now. Possible disruptions include:
- Terminate nodes
- Fork bomb inside the node
@@ -131,18 +133,18 @@ Let us take a look at how to run the chaos scenarios on your OpenShift clusters
- etc.
- [Demo](https://asciinema.org/a/ANZY7HhPdWTNaWt4xMFanF6Q5)
- Zone Outages ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/zone-outages.md))
- Creates an outage of availability zone(s) in a targeted region of the public cloud where the OpenShift cluster is running by tweaking the network acl of the zone to simulate the failure; that in turn stops both ingress and egress traffic from all nodes in the particular zone for the specified duration, after which it is reverted back to the previous state.
- Helps understand the impact on both Kubernetes/OpenShift control plane as well as applications and services running on the worker nodes in that zone.
- Currently, only set up for AWS cloud platform: 1 VPC and multiples subnets within the VPC can be specified.
- [Demo](https://asciinema.org/a/452672?speed=3&theme=solarized-dark)
- Application Outages ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/application-outages.md))
- Scenario to block the traffic ( Ingress/Egress ) of an application matching the labels for the specified duration of time to understand the behavior of the service/other services which depend on it during the downtime.
- Helps understand how the dependent services react to the unavailability.
- [Demo](https://asciinema.org/a/452403?speed=3&theme=solarized-dark)
- Power Outages ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/power-outages.md))
- This scenario imitates a power outage by shutting down the entire cluster for a specified duration of time, then restarts all the nodes after the specified time and checks the health of the cluster.
- There are various use cases in customer environments. For example, some clusters are shut down when the applications are not needed to run during a particular time/season in order to save costs.
- The nodes are stopped in parallel to mimic a power outage i.e., pulling off the plug
@@ -151,24 +153,23 @@ Let us take a look at how to run the chaos scenarios on your OpenShift clusters
- Resource Hog
- Hogs CPU, Memory and IO on the targeted nodes
- Helps understand if the application/system components have reserved resources to not get disrupted because of rogue applications, or get performance throttled.
- CPU Hog ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/node-cpu-hog.md), [Demo](https://asciinema.org/a/452762))
- Memory Hog ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/node-memory-hog.md), [Demo](https://asciinema.org/a/452742?speed=3&theme=solarized-dark))
- Time Skewing ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/time-scenarios.md))
- Manipulate the system time and/or date of specific pods/nodes.
- Verify scheduling of objects so they continue to work.
- Verify time gets reset properly.
- Namespace Failures ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/namespace-scenarios.md))
- Delete namespaces for the specified duration.
- Helps understand the impact on other components and tests/improves recovery time of the components in the targeted namespace.
- Persistent Volume Fill ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/pvc-scenarios.md))
- Fills up the persistent volumes, up to a given percentage, used by the pod for the specified duration.
- Helps understand how an application deals with no longer being able to write data to the disk. For example, Kafka's behavior when it is not able to commit data to the disk.
- Network Chaos ([Documentation](https://github.com/redhat-chaos/krkn-hub/blob/main/docs/network-chaos.md))
- Supported scenarios include:
- Network latency
- Packet loss
@@ -207,8 +208,9 @@ Let us take a look at few recommendations on how and where to run the chaos test
- You might have existing test cases, be it related to Performance, Scalability or QE. Run the chaos in the background during the test runs to observe the impact. Signaling feature in Kraken can help with coordinating the chaos runs i.e., start, stop, pause the scenarios based on the state of the other test jobs.
#### Chaos testing in Practice
##### OpenShift organization
Within the OpenShift organization we use kraken to perform chaos testing throughout a release before the code is available to customers.
1. We execute kraken during our regression test suite.
@@ -226,3 +228,83 @@ Within the OpenShift organization we use kraken to perform chaos testing through
iii. This test can be seen here: https://github.com/openshift/svt/tree/master/reliability-v2
3. We are starting to add in test cases that perform chaos testing during an upgrade (not many iterations of this have been completed).
##### startx-lab
**NOTE**: Requests for enhancements and any issues need to be filed at the links mentioned below, given that these integrations are not natively supported in Kraken.
The following content covers the implementation details around how Startx is leveraging Kraken:
* Using kraken as part of a tekton pipeline
You can find on [artifacthub.io](https://artifacthub.io/packages/search?kind=7&ts_query_web=kraken) the
[kraken-scenario](https://artifacthub.io/packages/tekton-task/startx-tekton-catalog/kraken-scenario) `tekton-task`
which can be used to start a kraken chaos scenario as part of a chaos pipeline.
To use this task, you must have:
- Openshift pipeline enabled (or tekton CRD loaded for Kubernetes clusters)
- 1 Secret named `kraken-aws-creds` for scenarios using aws
- 1 ConfigMap named `kraken-kubeconfig` with credentials to the targeted cluster
- 1 ConfigMap named `kraken-config-example` with the kraken configuration file (config.yaml)
- 1 ConfigMap named `kraken-common-example` with all kraken related files
- The `pipeline` SA will be authorized to run with the privileged SCC
You can create these resources using the following sequence:
```bash
oc project default
oc adm policy add-scc-to-user privileged -z pipeline
oc apply -f https://github.com/startxfr/tekton-catalog/raw/stable/task/kraken-scenario/0.1/samples/common.yaml
```
Then you must change the content of the `kraken-aws-creds` secret, and of the `kraken-kubeconfig` and `kraken-config-example` configMaps,
to reflect your cluster configuration. Refer to the [kraken configuration](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml)
and [configuration examples](https://github.com/startxfr/tekton-catalog/blob/stable/task/kraken-scenario/0.1/samples/)
for details on how to configure these resources.
* Start as a single taskrun
```bash
oc apply -f https://github.com/startxfr/tekton-catalog/raw/stable/task/kraken-scenario/0.1/samples/taskrun.yaml
```
* Start as a pipelinerun
```bash
oc apply -f https://github.com/startxfr/tekton-catalog/raw/stable/task/kraken-scenario/0.1/samples/pipelinerun.yaml
```
* Deploying kraken using a helm-chart
You can find on [artifacthub.io](https://artifacthub.io/packages/search?kind=0&ts_query_web=kraken) the
[chaos-kraken](https://artifacthub.io/packages/helm/startx/chaos-kraken) `helm-chart`
which can be used to deploy kraken chaos scenarios.
The default configuration creates the following resources:
- 1 project named **chaos-kraken**
- 1 scc with privileged context for kraken deployment
- 1 configmap with 21 generic kraken scenarios, various scripts and configuration
- 1 configmap with kubeconfig of the targeted cluster
- 1 job named kraken-test-xxx
- 1 service to the kraken pods
- 1 route to the kraken service
```bash
# Install the startx helm repository
helm repo add startx https://startxfr.github.io/helm-repository/packages/
# Install the kraken project
helm install --set project.enabled=true chaos-kraken-project startx/chaos-kraken
# Deploy the kraken instance
helm install \
--set kraken.enabled=true \
--set kraken.aws.credentials.region="eu-west-3" \
--set kraken.aws.credentials.key_id="AKIAXXXXXXXXXXXXXXXX" \
--set kraken.aws.credentials.secret="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
--set kraken.kubeconfig.token.server="https://api.mycluster:6443" \
--set kraken.kubeconfig.token.token="sha256~XXXXXXXXXX_PUT_YOUR_TOKEN_HERE_XXXXXXXXXXXX" \
-n chaos-kraken \
chaos-kraken-instance startx/chaos-kraken
```

View File

@@ -9,49 +9,60 @@ The following ways are supported to run Kraken:
**NOTE**: It is recommended to run Kraken external to the cluster ( Standalone or Containerized ), hitting the Kubernetes/OpenShift API, as running it internal to the cluster might be disruptive to itself and also might not report back the results if the chaos leads to instability of the cluster's API server.
**NOTE**: To run Kraken on Power (ppc64le) architecture, build and run a containerized version by following the
instructions given [here](https://github.com/redhat-chaos/krkn/blob/main/containers/build_own_image-README.md).
**NOTE**: Helper functions for interactions in Krkn are part of [krkn-lib](https://github.com/redhat-chaos/krkn-lib).
Please feel free to reuse and expand them as you see fit when adding a new scenario or expanding
the capabilities of the current supported scenarios.
### Git
#### Clone the repository
Pick the latest stable release to install [here](https://github.com/redhat-chaos/krkn/releases).
```
$ git clone https://github.com/redhat-chaos/krkn.git --branch <release version>
$ cd krkn
```
#### Install the dependencies
```
$ python3.9 -m venv chaos
$ source chaos/bin/activate
$ pip3.9 install -r requirements.txt
```
**NOTE**: Make sure python3-devel and latest pip versions are installed on the system. The dependencies install has been tested with pip >= 21.1.3 versions.
#### Run
```
$ python3.9 run_kraken.py --config <config_file_location>
```
### Run containerized version
Assuming that the latest docker (17.05 or greater with multi-build support) is installed on the host, run:
```
$ docker pull quay.io/redhat-chaos/krkn:latest
$ docker run --name=kraken --net=host -v <path_to_kubeconfig>:/root/.kube/config:Z -v <path_to_kraken_config>:/root/kraken/config/config.yaml:Z -d quay.io/redhat-chaos/krkn:latest
$ docker run --name=kraken --net=host -v <path_to_kubeconfig>:/root/.kube/config:Z -v <path_to_kraken_config>:/root/kraken/config/config.yaml:Z -v <path_to_scenarios_directory>:/root/kraken/scenarios:Z -d quay.io/redhat-chaos/krkn:latest #custom or tweaked scenario configs
$ docker logs -f kraken
```
Similarly, podman can be used to achieve the same:
```
$ podman pull quay.io/redhat-chaos/krkn
$ podman run --name=kraken --net=host -v <path_to_kubeconfig>:/root/.kube/config:Z -v <path_to_kraken_config>:/root/kraken/config/config.yaml:Z -d quay.io/redhat-chaos/krkn:latest
$ podman run --name=kraken --net=host -v <path_to_kubeconfig>:/root/.kube/config:Z -v <path_to_kraken_config>:/root/kraken/config/config.yaml:Z -v <path_to_scenarios_directory>:/root/kraken/scenarios:Z -d quay.io/redhat-chaos/krkn:latest #custom or tweaked scenario configs
$ podman logs -f kraken
```
If you want to build your own kraken image see [here](https://github.com/redhat-chaos/krkn/blob/main/containers/build_own_image-README.md)
### Run Kraken as a Kubernetes deployment
Refer to the [instructions](https://github.com/redhat-chaos/krkn/blob/main/containers/README.md) on how to deploy and run Kraken as a Kubernetes/OpenShift deployment.
Refer to the [chaos-kraken chart manpage](https://artifacthub.io/packages/helm/startx/chaos-kraken)
and especially the [kraken configuration values](https://artifacthub.io/packages/helm/startx/chaos-kraken#chaos-kraken-values-dictionary)
for details on how to configure this chart.

View File

@@ -1,41 +0,0 @@
### Litmus Scenarios
Kraken consumes [Litmus](https://github.com/litmuschaos/litmus) under the hood for some scenarios
Official Litmus documentation and specifics of Litmus resources can be found [here](https://docs.litmuschaos.io/docs/next/getstarted/)
#### Litmus Chaos Custom Resources
There are 3 custom resources that are created during each Litmus scenario. Below is a description of the resources:
* ChaosEngine: A resource to link a Kubernetes application or Kubernetes node to a ChaosExperiment. ChaosEngine is watched by Litmus' Chaos-Operator which then invokes Chaos-Experiments.
* ChaosExperiment: A resource to group the configuration parameters of a chaos experiment. ChaosExperiment CRs are created by the operator when experiments are invoked by ChaosEngine.
* ChaosResult : A resource to hold the results of a chaos-experiment. The Chaos-exporter reads the results and exports the metrics into a configured Prometheus server.
### Understanding Litmus Scenarios
To run Litmus scenarios we need to apply 3 different resources/yaml files to our cluster.
1. **Chaos experiments** contain the actual chaos details of a scenario.
i. This is installed automatically by Kraken (does not need to be specified in kraken scenario configuration).
2. **Service Account**: should be created to allow chaosengine to run experiments in your application namespace. Usually it sets just enough permissions to a specific namespace to be able to run the experiment properly.
i. This can be defined using either a link to a yaml file or a downloaded file in the scenarios' folder.
3. **Chaos Engine** connects the application instance to a Chaos Experiment. This is where you define the specifics of your scenario; i.e.: the node or pod name you want to cause chaos within.
i. This is a downloaded yaml file in the scenarios' folder. A full list of scenarios can be found [here](https://hub.litmuschaos.io/)
**NOTE**: By default, all chaos experiments will be installed based on the version you give in the config file.
Adding a new Litmus based scenario is as simple as adding references to 2 new yaml files (the Service Account and Chaos engine files for your scenario ) in the Kraken config.
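For orientation, a minimal Chaos Engine could look like the sketch below, following the generic Litmus v1alpha1 schema; the names, namespace and experiment are placeholders rather than files shipped with Kraken:
```yaml
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: node-cpu-hog-engine              # placeholder name
  namespace: default
spec:
  chaosServiceAccount: node-cpu-hog-sa   # the Service Account from step 2
  experiments:
    - name: node-cpu-hog                 # must match an installed ChaosExperiment
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION # experiment duration in seconds
              value: "60"
```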
### Supported scenarios
The following are the scenarios for which a chaos scenario config exists today.
Scenario | Description | Working
------------------------ |-----------------------------------------------------------------------------------------| ------------------------- |
[Node CPU Hog](https://github.com/chaos-kubox/krkn/blob/main/scenarios/node_cpu_hog_engine.yaml) | Chaos scenario that hogs up the CPU on a defined node for a specific amount of time. | :heavy_check_mark: |
[Node Memory Hog](https://github.com/chaos-kubox/krkn/blob/main/scenarios/node_mem_engine.yaml) | Chaos scenario that hogs up the memory on a defined node for a specific amount of time. | :heavy_check_mark: |
[Node IO Hog](https://github.com/chaos-kubox/krkn/blob/main/scenarios/node_io_engine.yaml) | Chaos scenario that hogs up the IO on a defined node for a specific amount of time. | :heavy_check_mark: |

View File

@@ -0,0 +1,36 @@
### ManagedCluster Scenarios
[ManagedCluster](https://open-cluster-management.io/concepts/managedcluster/) scenarios provide a way to integrate kraken with [Open Cluster Management (OCM)](https://open-cluster-management.io/) and [Red Hat Advanced Cluster Management for Kubernetes (ACM)](https://www.redhat.com/en/technologies/management/advanced-cluster-management).
ManagedCluster scenarios leverage [ManifestWorks](https://open-cluster-management.io/concepts/manifestwork/) to inject faults into the ManagedClusters.
The following ManagedCluster chaos scenarios are supported:
1. **managedcluster_start_scenario**: Scenario to start the ManagedCluster instance.
2. **managedcluster_stop_scenario**: Scenario to stop the ManagedCluster instance.
3. **managedcluster_stop_start_scenario**: Scenario to stop and then start the ManagedCluster instance.
4. **start_klusterlet_scenario**: Scenario to start the klusterlet of the ManagedCluster instance.
5. **stop_klusterlet_scenario**: Scenario to stop the klusterlet of the ManagedCluster instance.
6. **stop_start_klusterlet_scenario**: Scenario to stop and start the klusterlet of the ManagedCluster instance.
ManagedCluster scenarios can be injected by placing the ManagedCluster scenarios config files under `managedcluster_scenarios` option in the Kraken config. Refer to [managedcluster_scenarios_example](https://github.com/redhat-chaos/krkn/blob/main/scenarios/kube/managedcluster_scenarios_example.yml) config file.
```
managedcluster_scenarios:
- actions: # ManagedCluster chaos scenarios to be injected
- managedcluster_stop_start_scenario
managedcluster_name: cluster1 # ManagedCluster on which scenario has to be injected; can set multiple names separated by comma
# label_selector: # When managedcluster_name is not specified, a ManagedCluster with matching label_selector is selected for ManagedCluster chaos scenario injection
instance_count: 1 # Number of managedcluster to perform action/select that match the label selector
runs: 1 # Number of times to inject each scenario under actions (will perform on same ManagedCluster each time)
timeout: 420 # Duration to wait for completion of ManagedCluster scenario injection
# For OCM to detect a ManagedCluster as unavailable, have to wait 5*leaseDurationSeconds
# (default leaseDurationSeconds = 60 sec)
- actions:
- stop_start_klusterlet_scenario
managedcluster_name: cluster1
# label_selector:
instance_count: 1
runs: 1
timeout: 60
```

View File

@@ -2,7 +2,7 @@
There are cases where the state of the cluster and metrics on the cluster during the chaos test run need to be stored long term to review after the cluster is terminated, for example CI and automation test runs. To help with this, Kraken supports capturing metrics for the duration of the scenarios defined in the config and indexes them into Elasticsearch. The indexed metrics can be visualized with the help of Grafana.
It uses [Kube-burner](https://github.com/cloud-bulldozer/kube-burner) under the hood. The metrics to capture need to be defined in a metrics profile which Kraken consumes to query prometheus ( installed by default in OpenShift ) with the start and end timestamp of the run. Each run has a unique identifier ( uuid ) and all the metrics/documents in Elasticsearch will be associated with it. The uuid is generated automatically if not set in the config. This feature can be enabled in the [config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml) by setting the following:
```
performance_monitoring:
@@ -16,7 +16,7 @@ performance_monitoring:
```
### Metrics profile
A couple of [metric profiles](https://github.com/redhat-chaos/krkn/tree/main/config), [metrics.yaml](https://github.com/redhat-chaos/krkn/blob/main/config/metrics.yaml), and [metrics-aggregated.yaml](https://github.com/redhat-chaos/krkn/blob/main/config/metrics-aggregated.yaml) are shipped by default and can be tweaked to add more metrics to capture during the run. The following are the API server metrics for example:
```
metrics:

View File

@@ -16,7 +16,7 @@ Set to '^.*$' and label_selector to "" to randomly select any namespace in your
**sleep:** Number of seconds to wait between each iteration/count of killing namespaces. Defaults to 10 seconds if not set
Refer to [namespace_scenarios_example](https://github.com/redhat-chaos/krkn/blob/main/scenarios/regex_namespace.yaml) config file.
```
scenarios:

View File

@@ -1,7 +1,7 @@
### Network chaos
Scenario to introduce network latency, packet loss, and bandwidth restriction in the Node's host network interface. The purpose of this scenario is to observe faults caused by random variations in the network.
##### Sample scenario config for egress traffic shaping
```
network_chaos: # Scenario to create an outage by simulating random variations in the network.
duration: 300 # In seconds - duration network chaos will be applied.
@@ -17,6 +17,29 @@ network_chaos: # Scenario to create an outage
bandwidth: 100mbit
```
##### Sample scenario config for ingress traffic shaping (using a plugin)
```
- id: network_chaos
config:
node_interface_name: # Dictionary with key as node name(s) and value as a list of its interfaces to test
ip-10-0-128-153.us-west-2.compute.internal:
- ens5
- genev_sys_6081
label_selector: node-role.kubernetes.io/master # When node_interface_name is not specified, nodes with a matching label_selector are selected for the node chaos scenario injection
instance_count: 1 # Number of nodes to perform action/select that match the label selector
kubeconfig_path: ~/.kube/config # Path to kubernetes config file. If not specified, it defaults to ~/.kube/config
execution_type: parallel # Execute each of the ingress options as a single scenario (parallel) or as separate scenarios (serial).
network_params:
latency: 50ms
loss: '0.02'
bandwidth: 100mbit
wait_duration: 120
test_duration: 60
```
Note: For ingress traffic shaping, ensure that your node doesn't have any [IFB](https://wiki.linuxfoundation.org/networking/ifb) interfaces already present. The scenario relies on creating IFBs to do the shaping, and they are deleted at the end of the scenario.
##### Steps
- Pick the nodes to introduce the network anomaly either from node_name or label_selector.
- Verify the interface list on one of the nodes, or use the interface with a default route as the test interface if no interface is specified by the user.

View File

@@ -4,7 +4,7 @@ The following node chaos scenarios are supported:
1. **node_start_scenario**: Scenario to start the node instance.
2. **node_stop_scenario**: Scenario to stop the node instance.
3. **node_stop_start_scenario**: Scenario to stop and then start the node instance. Not supported on VMware.
4. **node_termination_scenario**: Scenario to terminate the node instance.
5. **node_reboot_scenario**: Scenario to reboot the node instance.
6. **stop_kubelet_scenario**: Scenario to stop the kubelet of the node instance.
@@ -12,13 +12,14 @@ The following node chaos scenarios are supported:
8. **node_crash_scenario**: Scenario to crash the node instance.
9. **stop_start_helper_node_scenario**: Scenario to stop and start the helper node and check service status.
**NOTE**: If the node does not recover from the node_crash_scenario injection, reboot the node to get it back to Ready state.
**NOTE**: node_start_scenario, node_stop_scenario, node_stop_start_scenario, node_termination_scenario, node_reboot_scenario and stop_start_kubelet_scenario are supported only on AWS, Azure, OpenStack, BareMetal, GCP, VMware and Alibaba as of now.
**NOTE**: Node scenarios are supported only when running the standalone version of Kraken until https://github.com/redhat-chaos/krkn/issues/106 gets fixed.
#### AWS
@@ -37,6 +38,14 @@ See the example node scenario or the example below.
**NOTE**: Baremetal machines are fragile. Some node actions can occasionally corrupt the filesystem if it does not shut down properly, and sometimes the kubelet does not start properly.
#### Docker
The Docker provider can be used to run node scenarios against kind clusters.
[kind](https://kind.sigs.k8s.io/) is a tool for running local Kubernetes clusters using Docker container "nodes".
kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.
#### GCP
How to set up GCP cli to run node scenarios is defined [here](cloud_setup.md#gcp).
@@ -64,13 +73,53 @@ How to set up Alibaba cli to run node scenarios is defined [here](cloud_setup.md
Releasing a node is 2 steps: stopping the node and then releasing it.
#### VMware
How to set up VMware vSphere to run node scenarios is defined [here](cloud_setup.md#vmware)
This cloud type uses a different configuration style, see actions below and [example config file](../scenarios/openshift/vmware_node_scenarios.yml)
*vmware-node-terminate, vmware-node-reboot, vmware-node-stop, vmware-node-start*
#### IBMCloud
How to set up IBMCloud to run node scenarios is defined [here](cloud_setup.md#ibmcloud)
This cloud type uses a different configuration style, see actions below and [example config file](../scenarios/openshift/ibmcloud_node_scenarios.yml)
*ibmcloud-node-terminate, ibmcloud-node-reboot, ibmcloud-node-stop, ibmcloud-node-start*
#### IBMCloud and VMware example
```
- id: ibmcloud-node-stop
config:
name: "<node_name>"
label_selector: "node-role.kubernetes.io/worker" # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection
runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time)
instance_count: 1 # Number of nodes to perform action/select that match the label selector
timeout: 30 # Duration to wait for completion of node scenario injection
skip_openshift_checks: False # Set to True if you don't want to wait for the status of the nodes to change on OpenShift before passing the scenario
- id: ibmcloud-node-start
config:
name: "<node_name>" #Same name as before
label_selector: "node-role.kubernetes.io/worker" # When node_name is not specified, a node with matching label_selector is selected for node chaos scenario injection
runs: 1 # Number of times to inject each scenario under actions (will perform on same node each time)
instance_count: 1 # Number of nodes to perform action/select that match the label selector
timeout: 30 # Duration to wait for completion of node scenario injection
skip_openshift_checks: False # Set to True if you don't want to wait for the status of the nodes to change on OpenShift before passing the scenario
```
#### General
**NOTE**: The `node_crash_scenario` and `stop_kubelet_scenario` scenarios are supported independent of the cloud platform.
Use 'generic' or do not add the 'cloud_type' key to your scenario if your cluster is not set up using one of the current supported cloud types.
Node scenarios can be injected by placing the node scenarios config files under node_scenarios option in the kraken config. Refer to [node_scenarios_example](https://github.com/redhat-chaos/krkn/blob/main/scenarios/node_scenarios_example.yml) config file.
```

View File

@@ -0,0 +1,37 @@
## Pod network Scenarios
### Pod outage
Scenario to block the traffic ( Ingress/Egress ) of a pod matching the labels for the specified duration of time to understand the behavior of the service/other services which depend on it during downtime. This helps with planning the requirements accordingly, be it improving the timeouts or tweaking the alerts etc.
With the current network policies, it is not possible to explicitly block ports which are enabled by an allowed network policy rule. This chaos scenario addresses this issue by using OVS flow rules to block ports related to the pod. It supports OpenShiftSDN and OVNKubernetes based networks.
##### Sample scenario config (using a plugin)
```
- id: pod_network_outage
config:
namespace: openshift-console # Required - Namespace of the pod to which filter need to be applied
direction: # Optional - List of directions to apply filters
- ingress # Blocks ingress traffic, Default both egress and ingress
ingress_ports: # Optional - List of ports to block traffic on
- 8443 # Blocks 8443, Default [], i.e. all ports.
label_selector: 'component=ui' # Blocks access to openshift console
```
### Pod Network shaping
Scenario to introduce network latency, packet loss, and bandwidth restriction in the Pod's network interface. The purpose of this scenario is to observe faults caused by random variations in the network.
##### Sample scenario config for egress traffic shaping (using plugin)
```
- id: pod_egress_shaping
config:
namespace: openshift-console # Required - Namespace of the pod to which filter need to be applied.
label_selector: 'component=ui' # Applies traffic shaping to access openshift console.
network_params:
latency: 500ms # Add 500ms latency to egress traffic from the pod.
```
##### Steps
- Pick the pods to introduce the network anomaly either from label_selector or pod_name.
- Identify the pod interface name on the node.
- Set traffic shaping config on pod's interface using tc and netem (see the sketch after this list).
- Wait for the duration time.
- Remove traffic shaping config on pod's interface.
- Remove the job that spawned the pod.
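For a sense of what these steps amount to under the hood, the traffic shaping boils down to tc/netem commands of roughly this shape; this is a sketch, and the interface name is a placeholder that the scenario resolves on the node:
```bash
# add 500ms latency to egress traffic on the pod's interface (placeholder name)
tc qdisc add dev eth0 root netem delay 500ms
# ... wait for the test duration ...
# remove the shaping again
tc qdisc del dev eth0 root netem
```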

View File

@@ -1,14 +1,40 @@
### Pod Scenarios
Krkn recently replaced PowerfulSeal with its own internal pod scenarios using a plugin system. You can run pod scenarios by adding the following config to Krkn:
```yaml
kraken:
chaos_scenarios:
- plugin_scenarios:
- path/to/scenario.yaml
```
You can then create the scenario file with the following contents:
```yaml
# yaml-language-server: $schema=../plugin.schema.json
- id: kill-pods
config:
namespace_pattern: ^kube-system$
label_selector: k8s-app=kube-scheduler
- id: wait-for-pods
config:
namespace_pattern: ^kube-system$
label_selector: k8s-app=kube-scheduler
count: 3
```
Please adjust the schema reference to point to the [schema file](../scenarios/plugin.schema.json). This file will give you code completion and documentation for the available options in your IDE.
#### Pod Chaos Scenarios
The following are the components of Kubernetes/OpenShift for which a basic chaos scenario config exists today.
| Component | Description | Working |
| ------------------------ |-------------| -------- |
| [Basic pod scenario](../scenarios/kube/pod.yml) | Kill a pod. | :heavy_check_mark: |
| [Etcd](../scenarios/openshift/etcd.yml) | Kills a single/multiple etcd replicas. | :heavy_check_mark: |
| [Kube ApiServer](../scenarios/openshift/openshift-kube-apiserver.yml)| Kills a single/multiple kube-apiserver replicas. | :heavy_check_mark: |
| [ApiServer](../scenarios/openshift/openshift-apiserver.yml) | Kills a single/multiple apiserver replicas. | :heavy_check_mark: |
| [Prometheus](../scenarios/openshift/prometheus.yml) | Kills a single/multiple prometheus replicas. | :heavy_check_mark: |
| [OpenShift System Pods](../scenarios/openshift/regex_openshift_pod_kill.yml) | Kills random pods running in the OpenShift system namespaces. | :heavy_check_mark: |

View File

@@ -16,7 +16,7 @@ Configuration Options:
**object_name:** List of the names of pods or nodes you want to skew.
Refer to [time_scenarios_example](https://github.com/redhat-chaos/krkn/blob/main/scenarios/time_scenarios_example.yml) config file.
```
time_scenarios:

View File

@@ -1,5 +1,5 @@
### Zone outage scenario
Scenario to create an outage in a targeted zone in the public cloud to understand the impact on both the Kubernetes/OpenShift control plane and the applications running on the worker nodes in that zone. It tweaks the network acl of the zone to simulate the failure, which in turn stops both ingress and egress traffic from all the nodes in that zone for the specified duration, and then reverts it back to the previous state. Zone outage can be injected by placing the zone_outage config file under the zone_outages option in the [kraken config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml). Refer to the [zone_outage_scenario](https://github.com/redhat-chaos/krkn/blob/main/scenarios/zone_outage.yaml) config file for the parameters that need to be defined.
Refer to [cloud setup](cloud_setup.md) to configure your cli properly for the cloud provider of the cluster you want to shut down.
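As a sketch of the referenced scenario file, the parameters take roughly this shape; the keys are assumed from the example linked above and the ids are placeholders:
```yaml
zone_outage:                             # scenario to create an outage in a targeted zone
  cloud_type: aws                        # currently only AWS is supported
  duration: 600                          # seconds the zone stays blocked
  vpc_id: vpc-0123456789abcdef0          # placeholder VPC id
  subnet_id: [subnet-0123456789abcdef0]  # placeholder subnet(s) within the VPC
```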

View File

@@ -4,25 +4,33 @@ import time
import kraken.cerberus.setup as cerberus
from jinja2 import Template
import kraken.invoke.command as runcommand
from krkn_lib.telemetry import KrknTelemetry
from krkn_lib.models.telemetry import ScenarioTelemetry
# Reads the scenario config, applies and deletes a network policy to
# block the traffic for the specified duration
def run(scenarios_list, config, wait_duration, telemetry: KrknTelemetry) -> (list[str], list[ScenarioTelemetry]):
    failed_post_scenarios = ""
    scenario_telemetries: list[ScenarioTelemetry] = []
    failed_scenarios = []
    for app_outage_config in scenarios_list:
        scenario_telemetry = ScenarioTelemetry()
        scenario_telemetry.scenario = app_outage_config
        scenario_telemetry.startTimeStamp = time.time()
        telemetry.set_parameters_base64(scenario_telemetry, app_outage_config)
        if len(app_outage_config) > 1:
            try:
                # read the scenario parameters from the scenario config file
                with open(app_outage_config, "r") as f:
                    app_outage_config_yaml = yaml.full_load(f)
                    scenario_config = app_outage_config_yaml["application_outage"]
                    pod_selector = scenario_config.get("pod_selector", "{}")
                    traffic_type = scenario_config.get("block", "[Ingress, Egress]")
                    namespace = scenario_config.get("namespace", "")
                    duration = scenario_config.get("duration", 60)

                    start_time = int(time.time())

                    network_policy_template = """---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
@@ -31,28 +39,38 @@ spec:
podSelector:
matchLabels: {{ pod_selector }}
policyTypes: {{ traffic_type }}
"""
t = Template(network_policy_template)
rendered_spec = t.render(pod_selector=pod_selector, traffic_type=traffic_type)
# Write the rendered template to a file
with open("kraken_network_policy.yaml", "w") as f:
f.write(rendered_spec)
# Block the traffic by creating network policy
logging.info("Creating the network policy")
runcommand.invoke(
"kubectl create -f %s -n %s --validate=false" % ("kraken_network_policy.yaml", namespace)
)
"""
t = Template(network_policy_template)
rendered_spec = t.render(pod_selector=pod_selector, traffic_type=traffic_type)
# Write the rendered template to a file
with open("kraken_network_policy.yaml", "w") as f:
f.write(rendered_spec)
# Block the traffic by creating network policy
logging.info("Creating the network policy")
runcommand.invoke(
"kubectl create -f %s -n %s --validate=false" % ("kraken_network_policy.yaml", namespace)
)
# wait for the specified duration
logging.info("Waiting for the specified duration in the config: %s" % (duration))
time.sleep(duration)
# wait for the specified duration
logging.info("Waiting for the specified duration in the config: %s" % (duration))
time.sleep(duration)
# unblock the traffic by deleting the network policy
logging.info("Deleting the network policy")
runcommand.invoke("kubectl delete -f %s -n %s" % ("kraken_network_policy.yaml", namespace))
# unblock the traffic by deleting the network policy
logging.info("Deleting the network policy")
runcommand.invoke("kubectl delete -f %s -n %s" % ("kraken_network_policy.yaml", namespace))
logging.info("End of scenario. Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
logging.info("End of scenario. Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
except Exception as e :
scenario_telemetry.exitStatus = 1
failed_scenarios.append(app_outage_config)
telemetry.log_exception(app_outage_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)

View File

@@ -0,0 +1,2 @@
from .arcaflow_plugin import *
from .context_auth import ContextAuth

View File

@@ -0,0 +1,178 @@
import time
import arcaflow
import os
import yaml
import logging
from pathlib import Path
from typing import List
from .context_auth import ContextAuth
from krkn_lib.telemetry import KrknTelemetry
from krkn_lib.models.telemetry import ScenarioTelemetry
def run(scenarios_list: List[str], kubeconfig_path: str, telemetry: KrknTelemetry) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
failed_post_scenarios = []
for scenario in scenarios_list:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry,scenario)
engine_args = build_args(scenario)
status_code = run_workflow(engine_args, kubeconfig_path)
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetry.exitStatus = status_code
scenario_telemetries.append(scenario_telemetry)
if status_code != 0:
failed_post_scenarios.append(scenario)
return failed_post_scenarios, scenario_telemetries
def run_workflow(engine_args: arcaflow.EngineArgs, kubeconfig_path: str) -> int:
set_arca_kubeconfig(engine_args, kubeconfig_path)
exit_status = arcaflow.run(engine_args)
return exit_status
def build_args(input_file: str) -> arcaflow.EngineArgs:
"""sets the kubeconfig parsed by setArcaKubeConfig as an input to the arcaflow workflow"""
context = Path(input_file).parent
workflow = "{}/workflow.yaml".format(context)
config = "{}/config.yaml".format(context)
if not os.path.exists(context):
raise Exception(
"context folder for arcaflow workflow not found: {}".format(
context)
)
if not os.path.exists(input_file):
raise Exception(
"input file for arcaflow workflow not found: {}".format(input_file))
if not os.path.exists(workflow):
raise Exception(
"workflow file for arcaflow workflow not found: {}".format(
workflow)
)
if not os.path.exists(config):
raise Exception(
"configuration file for arcaflow workflow not found: {}".format(
config)
)
engine_args = arcaflow.EngineArgs()
engine_args.context = context
engine_args.config = config
engine_args.input = input_file
return engine_args
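
Since build_args derives everything from the position of the input file, each scenario is expected to be a self-contained folder; a sketch of the assumed layout (the folder name is illustrative):

```python
# Assumed on-disk layout for an arcaflow scenario, derived from build_args above:
#
#   scenarios/arcaflow/cpu-hog/
#   |-- input.yaml      # the file passed to run()/build_args()
#   |-- workflow.yaml   # arcaflow workflow definition
#   `-- config.yaml     # arcaflow engine/deployer configuration
#
# build_args raises if any of the three files is missing.
engine_args = build_args("scenarios/arcaflow/cpu-hog/input.yaml")
```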
def set_arca_kubeconfig(engine_args: arcaflow.EngineArgs, kubeconfig_path: str):
context_auth = ContextAuth()
if not os.path.exists(kubeconfig_path):
raise Exception("kubeconfig not found in {}".format(kubeconfig_path))
with open(kubeconfig_path, "r") as stream:
try:
kubeconfig = yaml.safe_load(stream)
context_auth.fetch_auth_data(kubeconfig)
except Exception as e:
logging.error("impossible to read kubeconfig file in: {}".format(
kubeconfig_path))
raise e
kubeconfig_str = set_kubeconfig_auth(kubeconfig, context_auth)
with open(engine_args.input, "r") as stream:
input_file = yaml.safe_load(stream)
if "input_list" in input_file and isinstance(input_file["input_list"],list):
for index, _ in enumerate(input_file["input_list"]):
if isinstance(input_file["input_list"][index], dict):
input_file["input_list"][index]["kubeconfig"] = kubeconfig_str
else:
input_file["kubeconfig"] = kubeconfig_str
stream.close()
with open(engine_args.input, "w") as stream:
yaml.safe_dump(input_file, stream)
with open(engine_args.config, "r") as stream:
config_file = yaml.safe_load(stream)
if config_file["deployer"]["type"] == "kubernetes":
kube_connection = set_kubernetes_deployer_auth(config_file["deployer"]["connection"], context_auth)
config_file["deployer"]["connection"]=kube_connection
with open(engine_args.config, "w") as stream:
yaml.safe_dump(config_file, stream,explicit_start=True, width=4096)
def set_kubernetes_deployer_auth(deployer: any, context_auth: ContextAuth) -> any:
if context_auth.clusterHost is not None :
deployer["host"] = context_auth.clusterHost
if context_auth.clientCertificateData is not None :
deployer["cert"] = context_auth.clientCertificateData
if context_auth.clientKeyData is not None:
deployer["key"] = context_auth.clientKeyData
if context_auth.clusterCertificateData is not None:
deployer["cacert"] = context_auth.clusterCertificateData
if context_auth.username is not None:
deployer["username"] = context_auth.username
if context_auth.password is not None:
deployer["password"] = context_auth.password
if context_auth.bearerToken is not None:
deployer["bearerToken"] = context_auth.bearerToken
return deployer
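
As a quick illustration of the mapping above, a token-based context would populate the deployer connection like this (a hypothetical usage sketch; all values are placeholders):

```python
# Hypothetical usage of set_kubernetes_deployer_auth; values are placeholders.
ctx = ContextAuth()
ctx.clusterHost = "https://127.0.0.1:6443"
ctx.bearerToken = "sha256~example-token"

connection = set_kubernetes_deployer_auth({}, ctx)
# connection == {"host": "https://127.0.0.1:6443", "bearerToken": "sha256~example-token"}
```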
def set_kubeconfig_auth(kubeconfig: any, context_auth: ContextAuth) -> str:
"""
Builds an arcaflow-compatible kubeconfig representation and returns it as a string.
In order to run arcaflow plugins in kubernetes/openshift, the kubeconfig must contain the client certificate/key
and the server certificate base64-encoded within the kubeconfig file itself in *-data fields. That is not always the
case; in fact, a kubeconfig may contain filesystem paths to those files. This function builds an arcaflow-compatible
kubeconfig file and returns it as a string that can be safely included in input.yaml
"""
if "current-context" not in kubeconfig.keys():
raise Exception(
"invalid kubeconfig file, impossible to determine current-context"
)
user_id = None
cluster_id = None
user_name = None
cluster_name = None
current_context = kubeconfig["current-context"]
for context in kubeconfig["contexts"]:
if context["name"] == current_context:
user_name = context["context"]["user"]
cluster_name = context["context"]["cluster"]
if user_name is None:
raise Exception(
"user not set for context {} in kubeconfig file".format(current_context)
)
if cluster_name is None:
raise Exception(
"cluster not set for context {} in kubeconfig file".format(current_context)
)
for index, user in enumerate(kubeconfig["users"]):
if user["name"] == user_name:
user_id = index
for index, cluster in enumerate(kubeconfig["clusters"]):
if cluster["name"] == cluster_name:
cluster_id = index
if cluster_id is None:
raise Exception(
"no cluster {} found in kubeconfig users".format(cluster_name)
)
if "client-certificate" in kubeconfig["users"][user_id]["user"]:
kubeconfig["users"][user_id]["user"]["client-certificate-data"] = context_auth.clientCertificateDataBase64
del kubeconfig["users"][user_id]["user"]["client-certificate"]
if "client-key" in kubeconfig["users"][user_id]["user"]:
kubeconfig["users"][user_id]["user"]["client-key-data"] = context_auth.clientKeyDataBase64
del kubeconfig["users"][user_id]["user"]["client-key"]
if "certificate-authority" in kubeconfig["clusters"][cluster_id]["cluster"]:
kubeconfig["clusters"][cluster_id]["cluster"]["certificate-authority-data"] = context_auth.clusterCertificateDataBase64
del kubeconfig["clusters"][cluster_id]["cluster"]["certificate-authority"]
kubeconfig_str = yaml.dump(kubeconfig)
return kubeconfig_str
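
The transformation set_kubeconfig_auth performs boils down to replacing path-based fields with their base64-encoded *-data equivalents; a minimal sketch, assuming a user entry that references a certificate by filesystem path:

```python
import base64

# Assumed input: a kubeconfig user entry referencing a certificate by path.
user = {"client-certificate": "/tmp/client.crt"}

with open(user["client-certificate"], "rb") as f:
    cert = f.read()

# Inline the content as base64 and drop the path-based field,
# mirroring what set_kubeconfig_auth does via ContextAuth.
user["client-certificate-data"] = base64.b64encode(cert).decode("ascii")
del user["client-certificate"]
```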

View File

@@ -0,0 +1,142 @@
import yaml
import os
import base64
class ContextAuth:
clusterCertificate: str = None
clusterCertificateData: str = None
clusterHost: str = None
clientCertificate: str = None
clientCertificateData: str = None
clientKey: str = None
clientKeyData: str = None
clusterName: str = None
username: str = None
password: str = None
bearerToken: str = None
# TODO: integrate in krkn-lib-kubernetes in the next iteration
@property
def clusterCertificateDataBase64(self):
if self.clusterCertificateData is not None:
return base64.b64encode(bytes(self.clusterCertificateData,'utf8')).decode("ascii")
return
@property
def clientCertificateDataBase64(self):
if self.clientCertificateData is not None:
return base64.b64encode(bytes(self.clientCertificateData,'utf8')).decode("ascii")
return
@property
def clientKeyDataBase64(self):
if self.clientKeyData is not None:
return base64.b64encode(bytes(self.clientKeyData,"utf-8")).decode("ascii")
return
def fetch_auth_data(self, kubeconfig: any):
context_username = None
current_context = kubeconfig["current-context"]
if current_context is None:
raise Exception("no current-context found in kubeconfig")
for context in kubeconfig["contexts"]:
if context["name"] == current_context:
context_username = context["context"]["user"]
self.clusterName = context["context"]["cluster"]
if context_username is None:
raise Exception("user not found for context {0}".format(current_context))
if self.clusterName is None:
raise Exception("cluster not found for context {0}".format(current_context))
cluster_id = None
user_id = None
for index, user in enumerate(kubeconfig["users"]):
if user["name"] == context_username:
user_id = index
if user_id is None:
raise Exception("user {0} not found in kubeconfig users".format(context_username))
for index, cluster in enumerate(kubeconfig["clusters"]):
if cluster["name"] == self.clusterName:
cluster_id = index
if cluster_id is None:
raise Exception(
"no cluster {} found in kubeconfig users".format(self.clusterName)
)
user = kubeconfig["users"][user_id]["user"]
cluster = kubeconfig["clusters"][cluster_id]["cluster"]
# sets cluster api URL
self.clusterHost = cluster["server"]
# client certificates
if "client-key" in user:
try:
self.clientKey = user["client-key"]
self.clientKeyData = self.read_file(user["client-key"])
except Exception as e:
raise e
if "client-key-data" in user:
try:
self.clientKeyData = base64.b64decode(user["client-key-data"]).decode('utf-8')
except Exception as e:
raise Exception("impossible to decode client-key-data")
if "client-certificate" in user:
try:
self.clientCertificate = user["client-certificate"]
self.clientCertificateData = self.read_file(user["client-certificate"])
except Exception as e:
raise e
if "client-certificate-data" in user:
try:
self.clientCertificateData = base64.b64decode(user["client-certificate-data"]).decode('utf-8')
except Exception as e:
raise Exception("impossible to decode client-certificate-data")
# cluster certificate authority
if "certificate-authority" in cluster:
try:
self.clusterCertificate = cluster["certificate-authority"]
self.clusterCertificateData = self.read_file(cluster["certificate-authority"])
except Exception as e:
raise e
if "certificate-authority-data" in cluster:
try:
self.clusterCertificateData = base64.b64decode(cluster["certificate-authority-data"]).decode('utf-8')
except Exception as e:
raise Exception("impossible to decode certificate-authority-data")
if "username" in user:
self.username = user["username"]
if "password" in user:
self.password = user["password"]
if "token" in user:
self.bearerToken = user["token"]
def read_file(self, filename:str) -> str:
if not os.path.exists(filename):
raise Exception("file not found {0} ".format(filename))
with open(filename, "rb") as file_stream:
return file_stream.read().decode('utf-8')
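
A minimal usage sketch (the kubeconfig path is a placeholder); note that fetch_auth_data expects the parsed dictionary, not the raw YAML string:

```python
import yaml

# Placeholder path; point this at a real kubeconfig.
with open("/path/to/kubeconfig") as f:
    kubeconfig = yaml.safe_load(f)

auth = ContextAuth()
auth.fetch_auth_data(kubeconfig)
print(auth.clusterName, auth.clusterHost)
```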

View File

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDBjCCAe6gAwIBAgIBATANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwptaW5p
a3ViZUNBMB4XDTIzMDMxMzE1NDAxM1oXDTMzMDMxMTE1NDAxM1owFTETMBEGA1UE
AxMKbWluaWt1YmVDQTCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMnz
U/gIbJBRGOgNYVKX2fV03ANOwnM4VjquR28QMAdxURqgOFZ6IxYNysHEyxxE9I+I
DAm9hi4vQPbOX7FlxUezuzw+ExEfa6RRJ+n+AGJOV1lezCVph6OaJxB1+L1UqaDZ
eM3B4cUf/iCc5Y4bs927+CBG3MJL/jmCVPCO+MiSn/l73PXSFNJAYMvRj42zkXqD
CVG9CwY2vWgZnnzl01l7jNGtie871AmV2uqKakJrQ2ILhD+8fZk4jE5JBDTCZnqQ
pXIc+vERNKLUS8cvjO6Ux8dMv/Z7+xonpXOU59LlpUdHWP9jgCvMTwiOriwqGjJ+
pQJWpX9Dm+oxJiVOJzsCAwEAAaNhMF8wDgYDVR0PAQH/BAQDAgKkMB0GA1UdJQQW
MBQGCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQW
BBQU9pDMtbayJdNM6bp0IG8dcs15qTANBgkqhkiG9w0BAQsFAAOCAQEAtl9TVKPA
hTnPODqv0AGTqreS9kLg4WUUjZRaPUkPWmtCoTh2Yf55nRWdHOHeZnCWDSg24x42
lpt+13IdqKew1RKTpKCTkicMFi090A01bYu/w39Cm6nOAA5h8zkgSkV5czvQotuV
SoN2vB+nbuY28ah5PkdqjMHEZbNwa59cgEke8wB1R1DWFQ/pqflrH2v9ACAuY+5Q
i673tA6CXrb1YfaCQnVBzcfvjGS1MqShPKpOLMF+/GccPczNimaBxMnKvYLvf3pN
qEUrJC00mAcein8HmxR2Xz8wredbMUUyrQxW29pZJwfGE5GU0olnlsA0lZLbTwio
xoolo5y+fsK/dA==
-----END CERTIFICATE-----

View File

@@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDITCCAgmgAwIBAgIBAjANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwptaW5p
a3ViZUNBMB4XDTIzMDUwMTA4NTc0N1oXDTI2MDUwMTA4NTc0N1owMTEXMBUGA1UE
ChMOc3lzdGVtOm1hc3RlcnMxFjAUBgNVBAMTDW1pbmlrdWJlLXVzZXIwggEiMA0G
CSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC0b7uy9nQYrh7uC5NODve7dFNLAgo5
pWRS6Kx13ULA55gOpieZiI5/1jwUBjOz0Hhl5QAdHC1HDNu5wf4MmwIEheuq3kMA
mfuvNxW2BnWSDuXyUMlBfqlwg5o6W8ndEWaK33D7wd2WQsSsAnhQPJSjnzWKvWKq
+Kbcygc4hdss/ZWN+SXLTahNpHBw0sw8AcJqddNeXs2WI5GdZmbXL4QZI36EaNUm
m4xKmKRKYIP9wYkmXOV/D2h1meM44y4lul5v2qvo6I+umJ84q4W1/W1vVmAzyVfL
v1TQCUx8cpKMHzw3ma6CTBCtU3Oq9HKHBnf8GyHZicmV7ESzf/phJu4ZAgMBAAGj
YDBeMA4GA1UdDwEB/wQEAwIFoDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUH
AwIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBQU9pDMtbayJdNM6bp0IG8dcs15
qTANBgkqhkiG9w0BAQsFAAOCAQEABNzEQQMYUcLsBASHladEjr46avKn7gREfaDl
Y5PBvgCPP42q/sW/9iCNY3UpT9TJZWM6s01+0p6I96jYbRQER1NX7O4OgQYHmFw2
PF6UOG2vMo54w11OvL7sbr4d+nkE6ItdM9fLDIJ3fEOYJZkSoxhOL/U3jSjIl7Wu
KCIlpM/M/gcZ4w2IvcLrWtvswbFNUd+dwQfBGcQTmSQDOLE7MqSvzYAkeNv73GLB
ieba7gs/PmoTFsf9nW60iXymDDF4MtODn15kqT/y1uD6coujmiEiIomBfxqAkUCU
0ciP/KF5oOEMmMedm7/peQxaRTMdRSk4yu7vbj/BxnTcj039Qg==
-----END CERTIFICATE-----

View File

@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAtG+7svZ0GK4e7guTTg73u3RTSwIKOaVkUuisdd1CwOeYDqYn
mYiOf9Y8FAYzs9B4ZeUAHRwtRwzbucH+DJsCBIXrqt5DAJn7rzcVtgZ1kg7l8lDJ
QX6pcIOaOlvJ3RFmit9w+8HdlkLErAJ4UDyUo581ir1iqvim3MoHOIXbLP2Vjfkl
y02oTaRwcNLMPAHCanXTXl7NliORnWZm1y+EGSN+hGjVJpuMSpikSmCD/cGJJlzl
fw9odZnjOOMuJbpeb9qr6OiPrpifOKuFtf1tb1ZgM8lXy79U0AlMfHKSjB88N5mu
gkwQrVNzqvRyhwZ3/Bsh2YnJlexEs3/6YSbuGQIDAQABAoIBAQCdJxPb8zt6o2zc
98f8nJy378D7+3LccmjGrVBH98ZELXIKkDy9RGqYfQcmiaBOZKv4U1OeBwSIdXKK
f6O9ZuSC/AEeeSbyRysmmFuYhlewNrmgKyyelqsNDBIv8fIHUTh2i9Xj8B4G2XBi
QGR5vcnYGLqRdBGTx63Nb0iKuksDCwPAuPA/e0ySz9HdWL1j4bqpVSYsOIXsqTDr
CVnxUeSIL0fFQnRm3IASXQD7zdq9eEFX7vESeleZoz8qNcKb4Na/C3N6crScjgH7
qyNZ2zNLfy1LT84k8uc1TMX2KcEVEmfdDv5cCnUH2ic12CwXMZ0vgId5LJTaHx4x
ytIQIe5hAoGBANB+TsRXP4KzcjZlUUfiAp/pWUM4kVktbsfZa1R2NEuIGJUxPk3P
7WS0WX5W75QKRg+UWTubg5kfd0f9fklLgofmliBnY/HrpgdyugJmUZBgzIxmy0k+
aCe0biD1gULfyyrKtfe8k5wRFstzhfGszlOf2ebR87sSVNBuF2lEwPTvAoGBAN2M
0/XrsodGU4B9Mj86Go2gb2k2WU2izI0cO+tm2S5U5DvKmVEnmjXfPRaOFj2UUQjo
cljnDAinbN+O0+Inc35qsEeYdAIepNAPglzcpfTHagja9mhx2idLYTXGhbZLL+Ei
TRzMyP27NF+GVVfYU/cA86ns6NboG6spohmnqh13AoGAKPc4aNGv0/GIVnHP56zb
0SnbdR7PSFNp+fCZay4Slmi2U9IqKMXbIjdhgjZ4uoDORU9jvReQYuzQ1h9TyfkB
O8yt4M4P0D/6DmqXa9NI4XJznn6wIMMXWf3UybsTW913IQBVgsjVxAuDjBQ11Eec
/sdg3D6SgkZWzeFjzjZJJ5cCgYBSYVg7fE3hERxhjawOaJuRCBQFSklAngVzfwkk
yhR9ruFC/l2uGIy19XFwnprUgP700gIa3qbR3PeV1TUiRcsjOaacqKqSUzSzjODL
iNxIvZHHAyxWv+b/b38REOWNWD3QeAG2cMtX1bFux7OaO31VPkxcZhRaPOp05cE5
yudtlwKBgDBbR7RLYn03OPm3NDBLLjTybhD8Iu8Oj7UeNCiEWAdZpqIKYnwSxMzQ
kdo4aTENA/seEwq+XDV7TwbUIFFJg5gDXIhkcK2c9kiO2bObCAmKpBlQCcrp0a5X
NSBk1N/ZG/Qhqns7z8k01KN4LNcdpRoNiYYPgY+p3xbY8+nWhv+q
-----END RSA PRIVATE KEY-----

View File

@@ -0,0 +1,100 @@
import os
import unittest
import yaml
from context_auth import ContextAuth
class TestCurrentContext(unittest.TestCase):
def get_kubeconfig_with_data(self) -> str:
"""
This function returns a test kubeconfig file as a string.
:return: a test kubeconfig file in string format (for unit testing purposes)
""" # NOQA
return """apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lVV01PTVBNMVUrRi9uNXN6TSthYzlMcGZISHB3d0RRWUpLb1pJaHZjTkFRRUwKQlFBd0hqRWNNQm9HQTFVRUF3d1RhM1ZpZFc1MGRTNXNiMk5oYkdSdmJXRnBiakFlRncweU1URXlNRFl4T0RBdwpNRFJhRncwek1URXlNRFF4T0RBd01EUmFNQjR4SERBYUJnTlZCQU1NRTJ0MVluVnVkSFV1Ykc5allXeGtiMjFoCmFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDNExhcG00SDB0T1NuYTNXVisKdzI4a0tOWWRwaHhYOUtvNjUwVGlOK2c5ZFNQU3VZK0V6T1JVOWVONlgyWUZkMEJmVFNodno4Y25rclAvNysxegpETEoxQ3MwRi9haEV3ZDQxQXN5UGFjbnRiVE80dGRLWm9POUdyODR3YVdBN1hSZmtEc2ZxRGN1YW5UTmVmT1hpCkdGbmdDVzU5Q285M056alB1eEFrakJxdVF6eE5GQkgwRlJPbXJtVFJ4cnVLZXo0aFFuUW1OWEFUNnp0M21udzMKWUtWTzU4b2xlcUxUcjVHNlRtVFQyYTZpVGdtdWY2N0cvaVZlalJGbkw3YkNHWmgzSjlCSTNMcVpqRzE4dWxvbgpaVDdQcGQrQTlnaTJOTm9UZlI2TVB5SndxU1BCL0xZQU5ZNGRoZDVJYlVydDZzbmViTlRZSHV2T0tZTDdNTWRMCmVMSzFBZ01CQUFHakxUQXJNQWtHQTFVZEV3UUNNQUF3SGdZRFZSMFJCQmN3RllJVGEzVmlkVzUwZFM1c2IyTmgKYkdSdmJXRnBiakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBQTVqUHVpZVlnMExySE1PSkxYY0N4d3EvVzBDNApZeFpncVd3VHF5VHNCZjVKdDlhYTk0SkZTc2dHQWdzUTN3NnA2SlBtL0MyR05MY3U4ZWxjV0E4UXViQWxueXRRCnF1cEh5WnYrZ08wMG83TXdrejZrTUxqQVZ0QllkRzJnZ21FRjViTEk5czBKSEhjUGpHUkl1VHV0Z0tHV1dPWHgKSEg4T0RzaG9wZHRXMktrR2c2aThKaEpYaWVIbzkzTHptM00xRUNGcXAvMEdtNkN1RFphVVA2SGpJMWRrYllLdgpsSHNVZ1U1SmZjSWhNYmJLdUllTzRkc1YvT3FHcm9iNW5vcmRjaExBQmRDTnc1cmU5T1NXZGZ1VVhSK0ViZVhrCjVFM0tFYzA1RGNjcGV2a1NTdlJ4SVQrQzNMOTltWGcxL3B5NEw3VUhvNFFLTXlqWXJXTWlLRlVKV1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://127.0.0.1:6443
name: default
contexts:
- context:
cluster: default
namespace: default
user: testuser
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: testuser
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM5ekNDQWQrZ0F3SUJBZ0lVV01PTVBNMVUrRi9uNXN6TSthYzlMcGZISHB3d0RRWUpLb1pJaHZjTkFRRUwKQlFBd0hqRWNNQm9HQTFVRUF3d1RhM1ZpZFc1MGRTNXNiMk5oYkdSdmJXRnBiakFlRncweU1URXlNRFl4T0RBdwpNRFJhRncwek1URXlNRFF4T0RBd01EUmFNQjR4SERBYUJnTlZCQU1NRTJ0MVluVnVkSFV1Ykc5allXeGtiMjFoCmFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDNExhcG00SDB0T1NuYTNXVisKdzI4a0tOWWRwaHhYOUtvNjUwVGlOK2c5ZFNQU3VZK0V6T1JVOWVONlgyWUZkMEJmVFNodno4Y25rclAvNysxegpETEoxQ3MwRi9haEV3ZDQxQXN5UGFjbnRiVE80dGRLWm9POUdyODR3YVdBN1hSZmtEc2ZxRGN1YW5UTmVmT1hpCkdGbmdDVzU5Q285M056alB1eEFrakJxdVF6eE5GQkgwRlJPbXJtVFJ4cnVLZXo0aFFuUW1OWEFUNnp0M21udzMKWUtWTzU4b2xlcUxUcjVHNlRtVFQyYTZpVGdtdWY2N0cvaVZlalJGbkw3YkNHWmgzSjlCSTNMcVpqRzE4dWxvbgpaVDdQcGQrQTlnaTJOTm9UZlI2TVB5SndxU1BCL0xZQU5ZNGRoZDVJYlVydDZzbmViTlRZSHV2T0tZTDdNTWRMCmVMSzFBZ01CQUFHakxUQXJNQWtHQTFVZEV3UUNNQUF3SGdZRFZSMFJCQmN3RllJVGEzVmlkVzUwZFM1c2IyTmgKYkdSdmJXRnBiakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBQTVqUHVpZVlnMExySE1PSkxYY0N4d3EvVzBDNApZeFpncVd3VHF5VHNCZjVKdDlhYTk0SkZTc2dHQWdzUTN3NnA2SlBtL0MyR05MY3U4ZWxjV0E4UXViQWxueXRRCnF1cEh5WnYrZ08wMG83TXdrejZrTUxqQVZ0QllkRzJnZ21FRjViTEk5czBKSEhjUGpHUkl1VHV0Z0tHV1dPWHgKSEg4T0RzaG9wZHRXMktrR2c2aThKaEpYaWVIbzkzTHptM00xRUNGcXAvMEdtNkN1RFphVVA2SGpJMWRrYllLdgpsSHNVZ1U1SmZjSWhNYmJLdUllTzRkc1YvT3FHcm9iNW5vcmRjaExBQmRDTnc1cmU5T1NXZGZ1VVhSK0ViZVhrCjVFM0tFYzA1RGNjcGV2a1NTdlJ4SVQrQzNMOTltWGcxL3B5NEw3VUhvNFFLTXlqWXJXTWlLRlVKV1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2QUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktZd2dnU2lBZ0VBQW9JQkFRQzRMYXBtNEgwdE9TbmEKM1dWK3cyOGtLTllkcGh4WDlLbzY1MFRpTitnOWRTUFN1WStFek9SVTllTjZYMllGZDBCZlRTaHZ6OGNua3JQLwo3KzF6RExKMUNzMEYvYWhFd2Q0MUFzeVBhY250YlRPNHRkS1pvTzlHcjg0d2FXQTdYUmZrRHNmcURjdWFuVE5lCmZPWGlHRm5nQ1c1OUNvOTNOempQdXhBa2pCcXVRenhORkJIMEZST21ybVRSeHJ1S2V6NGhRblFtTlhBVDZ6dDMKbW53M1lLVk81OG9sZXFMVHI1RzZUbVRUMmE2aVRnbXVmNjdHL2lWZWpSRm5MN2JDR1poM0o5QkkzTHFaakcxOAp1bG9uWlQ3UHBkK0E5Z2kyTk5vVGZSNk1QeUp3cVNQQi9MWUFOWTRkaGQ1SWJVcnQ2c25lYk5UWUh1dk9LWUw3Ck1NZExlTEsxQWdNQkFBRUNnZ0VBQ28rank4NW5ueVk5L2l6ZjJ3cjkzb2J3OERaTVBjYnIxQURhOUZYY1hWblEKT2c4bDZhbU9Ga2tiU0RNY09JZ0VDdkx6dEtXbmQ5OXpydU5sTEVtNEdmb0trNk5kK01OZEtKRUdoZHE5RjM1Qgpqdi91R1owZTIyRE5ZLzFHNVdDTE5DcWMwQkVHY2RFOTF0YzJuMlppRVBTNWZ6WVJ6L1k4cmJ5K1NqbzJkWE9RCmRHYWRlUFplbi9UbmlHTFlqZWhrbXZNQjJvU0FDbVMycTd2OUNrcmdmR1RZbWJzeGVjSU1QK0JONG9KS3BOZ28KOUpnRWJ5SUxkR1pZS2pQb2lLaHNjMVhmSy8zZStXSmxuYjJBaEE5Y1JMUzhMcDdtcEYySWp4SjNSNE93QTg3WQpNeGZvZWFGdnNuVUFHWUdFWFo4Z3BkWmhQMEoxNWRGdERjajIrcngrQVFLQmdRRDFoSE9nVGdFbERrVEc5bm5TCjE1eXYxRzUxYnJMQU1UaWpzNklEMU1qelhzck0xY2ZvazVaaUlxNVJsQ3dReTlYNDdtV1RhY0lZRGR4TGJEcXEKY0IydjR5Wm1YK1VleGJ3cDU1OWY0V05HdzF5YzQrQjdaNFF5aTRFelN4WmFjbldjMnBzcHJMUFVoOUFXRXVNcApOaW1vcXNiVGNnNGs5QWRxeUIrbWhIWmJRUUtCZ1FEQUNzU09qNXZMU1VtaVpxYWcrOVMySUxZOVNOdDZzS1VyCkprcjdCZEVpN3N2YmU5cldRR2RBb0xkQXNzcU94aENydmtPNkpSSHB1YjlRRjlYdlF4Riszc2ZpZm4yYkQ0ZloKMlVsclA1emF3RlNrNDNLbjdMZzRscURpaVUxVGlqTkJBL3dUcFlmbTB4dW5WeFRWNDZpNVViQW1XRk12TWV0bQozWUZYQmJkK2RRS0JnRGl6Q1B6cFpzeEcrazAwbUxlL2dYajl4ekNwaXZCbHJaM29teTdsVWk4YUloMmg5VlBaCjJhMzZNbVcyb1dLVG9HdW5xcCtibWU1eUxRRGlFcjVQdkJ0bGl2V3ppYmRNbFFMY2Nlcnpveml4WDA4QU5WUnEKZUpZdnIzdklDSGFFM25LRjdiVjNJK1NlSk1ra1BYL0QrV1R4WTQ5clZLYm1FRnh4c1JXRW04ekJBb0dBWEZ3UgpZanJoQTZqUW1DRmtYQ0loa0NJMVkwNEorSHpDUXZsY3NGT0EzSnNhUWduVUdwekl5OFUvdlFiLzhpQ0IzZ2RZCmpVck16YXErdnVkbnhYVnRFYVpWWGJIVitPQkVSdHFBdStyUkprZS9yYm1SNS84cUxsVUxOVWd4ZjA4RkRXeTgKTERxOUhKOUZPbnJnRTJvMU9FTjRRMGpSWU81U041dXFXODd0REEwQ2dZQXpXbk1KSFgrbmlyMjhRRXFyVnJKRAo4ZUEwOHIwWTJRMDhMRlcvMjNIVWQ4WU12VnhTUTdwcUwzaE41RXVJQ2dCbEpGVFI3TndBREo3eDY2M002akFMCm1DNlI4dWxSZStwa08xN2Y0UUs3MnVRanJGZEhESnlXQmdDL0RKSkV6d1dwY0Q4VVNPK3A5bVVIbllLTUJTOEsKTVB1ejYrZ3h0VEtsRU5pZUVacXhxZz09Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
username: testuser
password: testpassword
token: sha256~fFyEqjf1xxFMO0tbEyGRvWeNOd7QByuEgS4hyEq_A9o
""" # NOQA
def get_kubeconfig_with_paths(self) -> str:
"""
This function returns a test kubeconfig file as a string.
:return: a test kubeconfig file in string format (for unit testing purposes)
""" # NOQA
return """apiVersion: v1
clusters:
- cluster:
certificate-authority: fixtures/ca.crt
server: https://127.0.0.1:6443
name: default
contexts:
- context:
cluster: default
namespace: default
user: testuser
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: testuser
user:
client-certificate: fixtures/client.crt
client-key: fixtures/client.key
username: testuser
password: testpassword
token: sha256~fFyEqjf1xxFMO0tbEyGRvWeNOd7QByuEgS4hyEq_A9o
""" # NOQA
def test_current_context(self):
cwd = os.getcwd()
current_context_data = ContextAuth()
# fetch_auth_data expects a parsed kubeconfig (dict), so parse the YAML string first
current_context_data.fetch_auth_data(yaml.safe_load(self.get_kubeconfig_with_data()))
self.assertIsNotNone(current_context_data.clusterCertificateData)
self.assertIsNotNone(current_context_data.clientCertificateData)
self.assertIsNotNone(current_context_data.clientKeyData)
self.assertIsNotNone(current_context_data.username)
self.assertIsNotNone(current_context_data.password)
self.assertIsNotNone(current_context_data.bearerToken)
self.assertIsNotNone(current_context_data.clusterHost)
current_context_no_data = ContextAuth()
current_context_no_data.fetch_auth_data(yaml.safe_load(self.get_kubeconfig_with_paths()))
self.assertIsNotNone(current_context_no_data.clusterCertificate)
self.assertIsNotNone(current_context_no_data.clusterCertificateData)
self.assertIsNotNone(current_context_no_data.clientCertificate)
self.assertIsNotNone(current_context_no_data.clientCertificateData)
self.assertIsNotNone(current_context_no_data.clientKey)
self.assertIsNotNone(current_context_no_data.clientKeyData)
self.assertIsNotNone(current_context_no_data.username)
self.assertIsNotNone(current_context_no_data.password)
self.assertIsNotNone(current_context_no_data.bearerToken)
self.assertIsNotNone(current_context_no_data.clusterHost)

View File

@@ -4,30 +4,45 @@ import sys
import json
# Get cerberus status
def get_status(config, start_time, end_time):
"""
Get cerberus status
"""
cerberus_status = True
check_application_routes = False
application_routes_status = True
if config["cerberus"]["cerberus_enabled"]:
cerberus_url = config["cerberus"]["cerberus_url"]
check_application_routes = \
config["cerberus"]["check_applicaton_routes"]
if not cerberus_url:
logging.error("url where Cerberus publishes True/False signal is not provided.")
logging.error(
"url where Cerberus publishes True/False signal "
"is not provided."
)
sys.exit(1)
cerberus_status = requests.get(cerberus_url, timeout=60).content
cerberus_status = True if cerberus_status == b"True" else False
# Fail if the application routes monitored by cerberus
# experience downtime during the chaos
if check_application_routes:
application_routes_status, unavailable_routes = application_status(
cerberus_url,
start_time,
end_time
)
if not application_routes_status:
logging.error(
"Application routes: %s monitored by cerberus encountered downtime during the run, failing"
"Application routes: %s monitored by cerberus "
"encountered downtime during the run, failing"
% unavailable_routes
)
else:
logging.info("Application routes being monitored didn't encounter any downtime during the run!")
logging.info(
"Application routes being monitored "
"didn't encounter any downtime during the run!"
)
if not cerberus_status:
logging.error(
@@ -39,42 +54,65 @@ def get_status(config, start_time, end_time):
if not application_routes_status or not cerberus_status:
sys.exit(1)
else:
logging.info("Received a go signal from Ceberus, the cluster is healthy. " "Test passed.")
logging.info(
"Received a go signal from Ceberus, the cluster is healthy. "
"Test passed."
)
return cerberus_status
# Function to publish kraken status to cerberus
def publish_kraken_status(config, failed_post_scenarios, start_time, end_time):
"""
Publish kraken status to cerberus
"""
cerberus_status = get_status(config, start_time, end_time)
if not cerberus_status:
if failed_post_scenarios:
if config["kraken"]["exit_on_failure"]:
logging.info(
"Cerberus status is not healthy and post action scenarios " "are still failing, exiting kraken run"
"Cerberus status is not healthy and post action scenarios "
"are still failing, exiting kraken run"
)
sys.exit(1)
else:
logging.info("Cerberus status is not healthy and post action scenarios " "are still failing")
logging.info(
"Cerberus status is not healthy and post action scenarios "
"are still failing"
)
else:
if failed_post_scenarios:
if config["kraken"]["exit_on_failure"]:
logging.info(
"Cerberus status is healthy but post action scenarios " "are still failing, exiting kraken run"
"Cerberus status is healthy but post action scenarios "
"are still failing, exiting kraken run"
)
sys.exit(1)
else:
logging.info("Cerberus status is healthy but post action scenarios " "are still failing")
logging.info(
"Cerberus status is healthy but post action scenarios "
"are still failing"
)
# Check application availability
def application_status(cerberus_url, start_time, end_time):
"""
Check application availability
"""
if not cerberus_url:
logging.error("url where Cerberus publishes True/False signal is not provided.")
logging.error(
"url where Cerberus publishes True/False signal is not provided."
)
sys.exit(1)
else:
duration = (end_time - start_time) / 60
url = cerberus_url + "/" + "history" + "?" + "loopback=" + str(duration)
logging.info("Scraping the metrics for the test duration from cerberus url: %s" % url)
url = "{baseurl}/history?loopback={duration}".format(
baseurl=cerberus_url,
duration=str(duration)
)
logging.info(
"Scraping the metrics for the test "
"duration from cerberus url: %s" % url
)
try:
failed_routes = []
status = True
@@ -88,6 +126,11 @@ def application_status(cerberus_url, start_time, end_time):
else:
continue
except Exception as e:
logging.error("Failed to scrape metrics from cerberus API at %s: %s" % (url, e))
logging.error(
"Failed to scrape metrics from cerberus API at %s: %s" % (
url,
e
)
)
sys.exit(1)
return status, set(failed_routes)
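
The go/no-go handshake above reduces to a single HTTP GET against the cerberus endpoint; a minimal sketch, assuming a cerberus instance at an illustrative local address:

```python
import requests

# Assumed cerberus address; adjust to your deployment.
cerberus_url = "http://0.0.0.0:8080"

# Cerberus publishes a literal True/False body; anything else means unhealthy.
signal = requests.get(cerberus_url, timeout=60).content
cluster_healthy = signal == b"True"
```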

View File

@@ -3,7 +3,10 @@ import logging
import urllib.request
import shutil
import sys
import requests
import tempfile
import kraken.prometheus.client as prometheus
from urllib.parse import urlparse
def setup(url):
@@ -40,7 +43,7 @@ def scrape_metrics(
distribution, prometheus_url, prometheus_bearer_token
)
else:
logging.error("Looks like proemtheus url is not defined, exiting")
logging.error("Looks like prometheus url is not defined, exiting")
sys.exit(1)
command = (
"./kube-burner index --uuid "
@@ -72,6 +75,14 @@ def alerts(distribution, prometheus_url, prometheus_bearer_token, start_time, en
Scrapes metrics defined in the profile from Prometheus and alerts based on the severity defined
"""
is_url = urlparse(alert_profile)
if is_url.scheme and is_url.netloc:
response = requests.get(alert_profile)
temp_alerts = tempfile.NamedTemporaryFile()
temp_alerts.write(response.content)
temp_alerts.flush()
alert_profile = temp_alerts.name
if not prometheus_url:
if distribution == "openshift":
logging.info("Looks like prometheus_url is not defined, trying to use the default instance on the cluster")
@@ -79,7 +90,7 @@ def alerts(distribution, prometheus_url, prometheus_bearer_token, start_time, en
distribution, prometheus_url, prometheus_bearer_token
)
else:
logging.error("Looks like proemtheus url is not defined, exiting")
logging.error("Looks like prometheus url is not defined, exiting")
sys.exit(1)
command = (
"./kube-burner check-alerts "
@@ -96,7 +107,10 @@ def alerts(distribution, prometheus_url, prometheus_bearer_token, start_time, en
)
try:
logging.info("Running kube-burner to capture the metrics: %s" % command)
output = subprocess.run(command, shell=True, universal_newlines=True)
if output.returncode != 0:
logging.error("command exited with a non-zero rc, please check the logs for errors or critical alerts")
sys.exit(output.returncode)
except Exception as e:
logging.error("Failed to run kube-burner, error: %s" % (e))
sys.exit(1)
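
With this change the alert profile passed to check-alerts can be a URL as well as a local path; it is fetched and written to a temporary file before being handed to kube-burner. For reference, kube-burner alert profiles are a list of expr/description/severity entries; an illustrative sketch (the expression and severity are examples only, not taken from this repository):

```yaml
# Illustrative kube-burner alert profile entry.
- expr: up{job="apiserver"} == 0
  description: kube-apiserver target down
  severity: critical
```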

View File

@@ -1,430 +0,0 @@
from kubernetes import client, config
from kubernetes.stream import stream
from kubernetes.client.rest import ApiException
import logging
import kraken.invoke.command as runcommand
import sys
import re
import time
kraken_node_name = ""
# Load kubeconfig and initialize kubernetes python client
def initialize_clients(kubeconfig_path):
global cli
global batch_cli
try:
config.load_kube_config(kubeconfig_path)
cli = client.CoreV1Api()
batch_cli = client.BatchV1Api()
except ApiException as e:
logging.error("Failed to initialize kubernetes client: %s\n" % e)
sys.exit(1)
def get_host() -> str:
"""Returns the Kubernetes server URL"""
return client.configuration.Configuration.get_default_copy().host
def get_clusterversion_string() -> str:
"""Returns clusterversion status text on OpenShift, empty string on other distributions"""
try:
custom_objects_api = client.CustomObjectsApi()
cvs = custom_objects_api.list_cluster_custom_object(
"config.openshift.io",
"v1",
"clusterversions",
)
for cv in cvs["items"]:
for condition in cv["status"]["conditions"]:
if condition["type"] == "Progressing":
return condition["message"]
return ""
except client.exceptions.ApiException as e:
if e.status == 404:
return ""
else:
raise
# List all namespaces
def list_namespaces(label_selector=None):
namespaces = []
try:
if label_selector:
ret = cli.list_namespace(pretty=True, label_selector=label_selector)
else:
ret = cli.list_namespace(pretty=True)
except ApiException as e:
logging.error("Exception when calling CoreV1Api->list_namespaced_pod: %s\n" % e)
raise e
for namespace in ret.items:
namespaces.append(namespace.metadata.name)
return namespaces
def get_namespace_status(namespace_name):
"""Get status of a given namespace"""
ret = ""
try:
ret = cli.read_namespace_status(namespace_name)
except ApiException as e:
logging.error("Exception when calling CoreV1Api->read_namespace_status: %s\n" % e)
return ret.status.phase
def delete_namespace(namespace):
"""Deletes a given namespace using kubernetes python client"""
try:
api_response = cli.delete_namespace(namespace)
logging.debug("Namespace deleted. status='%s'" % str(api_response.status))
return api_response
except Exception as e:
logging.error(
"Exception when calling \
CoreV1Api->delete_namespace: %s\n"
% e
)
def check_namespaces(namespaces, label_selectors=None):
"""Check if all the watch_namespaces are valid"""
try:
valid_namespaces = list_namespaces(label_selectors)
regex_namespaces = set(namespaces) - set(valid_namespaces)
final_namespaces = set(namespaces) - set(regex_namespaces)
valid_regex = set()
if regex_namespaces:
for namespace in valid_namespaces:
for regex_namespace in regex_namespaces:
if re.search(regex_namespace, namespace):
final_namespaces.add(namespace)
valid_regex.add(regex_namespace)
break
invalid_namespaces = regex_namespaces - valid_regex
if invalid_namespaces:
raise Exception("There exists no namespaces matching: %s" % (invalid_namespaces))
return list(final_namespaces)
except Exception as e:
logging.info("%s" % (e))
sys.exit(1)
# List nodes in the cluster
def list_nodes(label_selector=None):
nodes = []
try:
if label_selector:
ret = cli.list_node(pretty=True, label_selector=label_selector)
else:
ret = cli.list_node(pretty=True)
except ApiException as e:
logging.error("Exception when calling CoreV1Api->list_node: %s\n" % e)
raise e
for node in ret.items:
nodes.append(node.metadata.name)
return nodes
# List nodes in the cluster that can be killed
def list_killable_nodes(label_selector=None):
nodes = []
try:
if label_selector:
ret = cli.list_node(pretty=True, label_selector=label_selector)
else:
ret = cli.list_node(pretty=True)
except ApiException as e:
logging.error("Exception when calling CoreV1Api->list_node: %s\n" % e)
raise e
for node in ret.items:
if kraken_node_name != node.metadata.name:
for cond in node.status.conditions:
if str(cond.type) == "Ready" and str(cond.status) == "True":
nodes.append(node.metadata.name)
return nodes
# List pods in the given namespace
def list_pods(namespace, label_selector=None):
pods = []
try:
if label_selector:
ret = cli.list_namespaced_pod(namespace, pretty=True, label_selector=label_selector)
else:
ret = cli.list_namespaced_pod(namespace, pretty=True)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->list_namespaced_pod: %s\n"
% e
)
raise e
for pod in ret.items:
pods.append(pod.metadata.name)
return pods
def get_all_pods(label_selector=None):
pods = []
if label_selector:
ret = cli.list_pod_for_all_namespaces(pretty=True, label_selector=label_selector)
else:
ret = cli.list_pod_for_all_namespaces(pretty=True)
for pod in ret.items:
pods.append([pod.metadata.name, pod.metadata.namespace])
return pods
# Execute command in pod
def exec_cmd_in_pod(command, pod_name, namespace, container=None, base_command="bash"):
exec_command = [base_command, "-c", command]
try:
if container:
ret = stream(
cli.connect_get_namespaced_pod_exec,
pod_name,
namespace,
container=container,
command=exec_command,
stderr=True,
stdin=False,
stdout=True,
tty=False,
)
else:
ret = stream(
cli.connect_get_namespaced_pod_exec,
pod_name,
namespace,
command=exec_command,
stderr=True,
stdin=False,
stdout=True,
tty=False,
)
except Exception:
return False
return ret
def delete_pod(name, namespace):
try:
cli.delete_namespaced_pod(name=name, namespace=namespace)
while cli.read_namespaced_pod(name=name, namespace=namespace):
time.sleep(1)
except ApiException as e:
if e.status == 404:
logging.info("Pod already deleted")
else:
logging.error("Failed to delete pod %s" % e)
raise e
def create_pod(body, namespace, timeout=120):
try:
pod_stat = None
pod_stat = cli.create_namespaced_pod(body=body, namespace=namespace)
end_time = time.time() + timeout
while True:
pod_stat = cli.read_namespaced_pod(name=body["metadata"]["name"], namespace=namespace)
if pod_stat.status.phase == "Running":
break
if time.time() > end_time:
raise Exception("Starting pod failed")
time.sleep(1)
except Exception as e:
logging.error("Pod creation failed %s" % e)
if pod_stat:
logging.error(pod_stat.status.container_statuses)
delete_pod(body["metadata"]["name"], namespace)
sys.exit(1)
def read_pod(name, namespace="default"):
return cli.read_namespaced_pod(name=name, namespace=namespace)
def get_pod_log(name, namespace="default"):
return cli.read_namespaced_pod_log(
name=name, namespace=namespace, _return_http_data_only=True, _preload_content=False
)
def get_containers_in_pod(pod_name, namespace):
pod_info = cli.read_namespaced_pod(pod_name, namespace)
container_names = []
for cont in pod_info.spec.containers:
container_names.append(cont.name)
return container_names
def delete_job(name, namespace="default"):
try:
api_response = batch_cli.delete_namespaced_job(
name=name,
namespace=namespace,
body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=0),
)
logging.debug("Job deleted. status='%s'" % str(api_response.status))
return api_response
except ApiException as api:
logging.warn(
"Exception when calling \
BatchV1Api->create_namespaced_job: %s"
% api
)
logging.warn("Job already deleted\n")
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->delete_namespaced_job: %s\n"
% e
)
sys.exit(1)
def create_job(body, namespace="default"):
try:
api_response = batch_cli.create_namespaced_job(body=body, namespace=namespace)
return api_response
except ApiException as api:
logging.warn(
"Exception when calling \
BatchV1Api->create_job: %s"
% api
)
if api.status == 409:
logging.warn("Job already present")
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->create_namespaced_job: %s"
% e
)
raise
def get_job_status(name, namespace="default"):
try:
return batch_cli.read_namespaced_job_status(name=name, namespace=namespace)
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->read_namespaced_job_status: %s"
% e
)
raise
# Obtain node status
def get_node_status(node, timeout=60):
try:
node_info = cli.read_node_status(node, pretty=True, _request_timeout=timeout)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->read_node_status: %s\n"
% e
)
return None
for condition in node_info.status.conditions:
if condition.type == "Ready":
return condition.status
# Monitor the status of the cluster nodes and set the status to true or false
def monitor_nodes():
nodes = list_nodes()
notready_nodes = []
node_kerneldeadlock_status = "False"
for node in nodes:
try:
node_info = cli.read_node_status(node, pretty=True)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->read_node_status: %s\n"
% e
)
raise e
for condition in node_info.status.conditions:
if condition.type == "KernelDeadlock":
node_kerneldeadlock_status = condition.status
elif condition.type == "Ready":
node_ready_status = condition.status
else:
continue
if node_kerneldeadlock_status != "False" or node_ready_status != "True": # noqa # noqa
notready_nodes.append(node)
if len(notready_nodes) != 0:
status = False
else:
status = True
return status, notready_nodes
# Monitor the status of the pods in the specified namespace
# and set the status to true or false
def monitor_namespace(namespace):
pods = list_pods(namespace)
notready_pods = []
for pod in pods:
try:
pod_info = cli.read_namespaced_pod_status(pod, namespace, pretty=True)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->read_namespaced_pod_status: %s\n"
% e
)
raise e
pod_status = pod_info.status.phase
if pod_status != "Running" and pod_status != "Completed" and pod_status != "Succeeded":
notready_pods.append(pod)
if len(notready_pods) != 0:
status = False
else:
status = True
return status, notready_pods
# Monitor component namespace
def monitor_component(iteration, component_namespace):
watch_component_status, failed_component_pods = monitor_namespace(component_namespace)
logging.info("Iteration %s: %s: %s" % (iteration, component_namespace, watch_component_status))
return watch_component_status, failed_component_pods
# Find the node kraken is deployed on
# Set global kraken node to not delete
def find_kraken_node():
pods = get_all_pods()
kraken_pod_name = None
for pod in pods:
if "kraken-deployment" in pod[0]:
kraken_pod_name = pod[0]
kraken_project = pod[1]
break
# have to switch to proper project
if kraken_pod_name:
# get kraken-deployment pod, find node name
try:
node_name = runcommand.invoke(
"kubectl get pods/"
+ str(kraken_pod_name)
+ ' -o jsonpath="{.spec.nodeName}"'
+ " -n"
+ str(kraken_project)
)
global kraken_node_name
kraken_node_name = node_name
except Exception as e:
logging.info("%s" % (e))
sys.exit(1)

View File

@@ -0,0 +1,892 @@
import logging
import re
import sys
import time
from kubernetes import client, config, utils, watch
from kubernetes.client.rest import ApiException
from kubernetes.dynamic.client import DynamicClient
from kubernetes.stream import stream
from ..kubernetes.resources import (PVC, ChaosEngine, ChaosResult, Container,
LitmusChaosObject, Pod, Volume,
VolumeMount)
kraken_node_name = ""
# Load kubeconfig and initialize kubernetes python client
def initialize_clients(kubeconfig_path):
global cli
global batch_cli
global watch_resource
global api_client
global dyn_client
global custom_object_client
try:
if kubeconfig_path:
config.load_kube_config(kubeconfig_path)
else:
config.load_incluster_config()
api_client = client.ApiClient()
cli = client.CoreV1Api(api_client)
batch_cli = client.BatchV1Api(api_client)
custom_object_client = client.CustomObjectsApi(api_client)
dyn_client = DynamicClient(api_client)
watch_resource = watch.Watch()
except ApiException as e:
logging.error("Failed to initialize kubernetes client: %s\n" % e)
sys.exit(1)
def get_host() -> str:
"""Returns the Kubernetes server URL"""
return client.configuration.Configuration.get_default_copy().host
def get_clusterversion_string() -> str:
"""
Returns clusterversion status text on OpenShift, empty string
on other distributions
"""
try:
cvs = custom_object_client.list_cluster_custom_object(
"config.openshift.io",
"v1",
"clusterversions",
)
for cv in cvs["items"]:
for condition in cv["status"]["conditions"]:
if condition["type"] == "Progressing":
return condition["message"]
return ""
except client.exceptions.ApiException as e:
if e.status == 404:
return ""
else:
raise
# List all namespaces
def list_namespaces(label_selector=None):
namespaces = []
try:
if label_selector:
ret = cli.list_namespace(
pretty=True,
label_selector=label_selector
)
else:
ret = cli.list_namespace(pretty=True)
except ApiException as e:
logging.error(
"Exception when calling CoreV1Api->list_namespaced_pod: %s\n" % e
)
raise e
for namespace in ret.items:
namespaces.append(namespace.metadata.name)
return namespaces
def get_namespace_status(namespace_name):
"""Get status of a given namespace"""
ret = ""
try:
ret = cli.read_namespace_status(namespace_name)
except ApiException as e:
logging.error(
"Exception when calling CoreV1Api->read_namespace_status: %s\n" % e
)
return ret.status.phase
def delete_namespace(namespace):
"""Deletes a given namespace using kubernetes python client"""
try:
api_response = cli.delete_namespace(namespace)
logging.debug(
"Namespace deleted. status='%s'" % str(api_response.status)
)
return api_response
except Exception as e:
logging.error(
"Exception when calling \
CoreV1Api->delete_namespace: %s\n"
% e
)
def check_namespaces(namespaces, label_selectors=None):
"""Check if all the watch_namespaces are valid"""
try:
valid_namespaces = list_namespaces(label_selectors)
regex_namespaces = set(namespaces) - set(valid_namespaces)
final_namespaces = set(namespaces) - set(regex_namespaces)
valid_regex = set()
if regex_namespaces:
for namespace in valid_namespaces:
for regex_namespace in regex_namespaces:
if re.search(regex_namespace, namespace):
final_namespaces.add(namespace)
valid_regex.add(regex_namespace)
break
invalid_namespaces = regex_namespaces - valid_regex
if invalid_namespaces:
raise Exception(
"There exist no namespaces matching: %s" %
(invalid_namespaces)
)
)
return list(final_namespaces)
except Exception as e:
logging.info("%s" % (e))
sys.exit(1)
# List nodes in the cluster
def list_nodes(label_selector=None):
nodes = []
try:
if label_selector:
ret = cli.list_node(pretty=True, label_selector=label_selector)
else:
ret = cli.list_node(pretty=True)
except ApiException as e:
logging.error("Exception when calling CoreV1Api->list_node: %s\n" % e)
raise e
for node in ret.items:
nodes.append(node.metadata.name)
return nodes
# List nodes in the cluster that can be killed
def list_killable_nodes(label_selector=None):
nodes = []
try:
if label_selector:
ret = cli.list_node(pretty=True, label_selector=label_selector)
else:
ret = cli.list_node(pretty=True)
except ApiException as e:
logging.error("Exception when calling CoreV1Api->list_node: %s\n" % e)
raise e
for node in ret.items:
if kraken_node_name != node.metadata.name:
for cond in node.status.conditions:
if str(cond.type) == "Ready" and str(cond.status) == "True":
nodes.append(node.metadata.name)
return nodes
# List managedclusters attached to the hub that can be killed
def list_killable_managedclusters(label_selector=None):
managedclusters = []
try:
ret = custom_object_client.list_cluster_custom_object(
group="cluster.open-cluster-management.io",
version="v1",
plural="managedclusters",
label_selector=label_selector
)
except ApiException as e:
logging.error("Exception when calling CustomObjectsApi->list_cluster_custom_object: %s\n" % e)
raise e
for managedcluster in ret['items']:
conditions = managedcluster['status']['conditions']
available = list(filter(lambda condition: condition['reason'] == 'ManagedClusterAvailable', conditions))
if available and available[0]['status'] == 'True':
managedclusters.append(managedcluster['metadata']['name'])
return managedclusters
# List pods in the given namespace
def list_pods(namespace, label_selector=None):
pods = []
try:
if label_selector:
ret = cli.list_namespaced_pod(
namespace,
pretty=True,
label_selector=label_selector
)
else:
ret = cli.list_namespaced_pod(namespace, pretty=True)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->list_namespaced_pod: %s\n"
% e
)
raise e
for pod in ret.items:
pods.append(pod.metadata.name)
return pods
def get_all_pods(label_selector=None):
pods = []
if label_selector:
ret = cli.list_pod_for_all_namespaces(
pretty=True,
label_selector=label_selector
)
else:
ret = cli.list_pod_for_all_namespaces(pretty=True)
for pod in ret.items:
pods.append([pod.metadata.name, pod.metadata.namespace])
return pods
# Execute command in pod
def exec_cmd_in_pod(
command,
pod_name,
namespace,
container=None,
base_command="bash"
):
exec_command = [base_command, "-c", command]
try:
if container:
ret = stream(
cli.connect_get_namespaced_pod_exec,
pod_name,
namespace,
container=container,
command=exec_command,
stderr=True,
stdin=False,
stdout=True,
tty=False,
)
else:
ret = stream(
cli.connect_get_namespaced_pod_exec,
pod_name,
namespace,
command=exec_command,
stderr=True,
stdin=False,
stdout=True,
tty=False,
)
except Exception:
return False
return ret
def delete_pod(name, namespace):
try:
cli.delete_namespaced_pod(name=name, namespace=namespace)
while cli.read_namespaced_pod(name=name, namespace=namespace):
time.sleep(1)
except ApiException as e:
if e.status == 404:
logging.info("Pod already deleted")
else:
logging.error("Failed to delete pod %s" % e)
raise e
def create_pod(body, namespace, timeout=120):
try:
pod_stat = None
pod_stat = cli.create_namespaced_pod(body=body, namespace=namespace)
end_time = time.time() + timeout
while True:
pod_stat = cli.read_namespaced_pod(
name=body["metadata"]["name"],
namespace=namespace
)
if pod_stat.status.phase == "Running":
break
if time.time() > end_time:
raise Exception("Starting pod failed")
time.sleep(1)
except Exception as e:
logging.error("Pod creation failed %s" % e)
if pod_stat:
logging.error(pod_stat.status.container_statuses)
delete_pod(body["metadata"]["name"], namespace)
sys.exit(1)
def read_pod(name, namespace="default"):
return cli.read_namespaced_pod(name=name, namespace=namespace)
def get_pod_log(name, namespace="default"):
return cli.read_namespaced_pod_log(
name=name,
namespace=namespace,
_return_http_data_only=True,
_preload_content=False
)
def get_containers_in_pod(pod_name, namespace):
pod_info = cli.read_namespaced_pod(pod_name, namespace)
container_names = []
for cont in pod_info.spec.containers:
container_names.append(cont.name)
return container_names
def delete_job(name, namespace="default"):
try:
api_response = batch_cli.delete_namespaced_job(
name=name,
namespace=namespace,
body=client.V1DeleteOptions(
propagation_policy="Foreground",
grace_period_seconds=0
),
)
logging.debug("Job deleted. status='%s'" % str(api_response.status))
return api_response
except ApiException as api:
logging.warn(
"Exception when calling \
BatchV1Api->create_namespaced_job: %s"
% api
)
logging.warn("Job already deleted\n")
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->delete_namespaced_job: %s\n"
% e
)
sys.exit(1)
def create_job(body, namespace="default"):
try:
api_response = batch_cli.create_namespaced_job(
body=body,
namespace=namespace
)
return api_response
except ApiException as api:
logging.warn(
"Exception when calling \
BatchV1Api->create_job: %s"
% api
)
if api.status == 409:
logging.warn("Job already present")
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->create_namespaced_job: %s"
% e
)
raise
def create_manifestwork(body, namespace):
try:
api_response = custom_object_client.create_namespaced_custom_object(
group="work.open-cluster-management.io",
version="v1",
plural="manifestworks",
body=body,
namespace=namespace
)
return api_response
except ApiException as e:
print("Exception when calling CustomObjectsApi->create_namespaced_custom_object: %s\n" % e)
def delete_manifestwork(namespace):
try:
api_response = custom_object_client.delete_namespaced_custom_object(
group="work.open-cluster-management.io",
version="v1",
plural="manifestworks",
name="managedcluster-scenarios-template",
namespace=namespace
)
return api_response
except ApiException as e:
print("Exception when calling CustomObjectsApi->delete_namespaced_custom_object: %s\n" % e)
def get_job_status(name, namespace="default"):
try:
return batch_cli.read_namespaced_job_status(
name=name,
namespace=namespace
)
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->read_namespaced_job_status: %s"
% e
)
raise
# Monitor the status of the cluster nodes and set the status to true or false
def monitor_nodes():
nodes = list_nodes()
notready_nodes = []
node_kerneldeadlock_status = "False"
for node in nodes:
try:
node_info = cli.read_node_status(node, pretty=True)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->read_node_status: %s\n"
% e
)
raise e
for condition in node_info.status.conditions:
if condition.type == "KernelDeadlock":
node_kerneldeadlock_status = condition.status
elif condition.type == "Ready":
node_ready_status = condition.status
else:
continue
if node_kerneldeadlock_status != "False" or node_ready_status != "True": # noqa
notready_nodes.append(node)
if len(notready_nodes) != 0:
status = False
else:
status = True
return status, notready_nodes
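
A typical caller treats the returned tuple as a health verdict plus the offending nodes; a minimal sketch:

```python
# Hypothetical health check built on monitor_nodes above.
status, notready_nodes = monitor_nodes()
if not status:
    logging.error("Cluster unhealthy, NotReady nodes: %s" % notready_nodes)
```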
# Monitor the status of the pods in the specified namespace
# and set the status to true or false
def monitor_namespace(namespace):
pods = list_pods(namespace)
notready_pods = []
for pod in pods:
try:
pod_info = cli.read_namespaced_pod_status(
pod,
namespace,
pretty=True
)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->read_namespaced_pod_status: %s\n"
% e
)
raise e
pod_status = pod_info.status.phase
if (
pod_status != "Running" and
pod_status != "Completed" and
pod_status != "Succeeded"
):
notready_pods.append(pod)
if len(notready_pods) != 0:
status = False
else:
status = True
return status, notready_pods
# Monitor component namespace
def monitor_component(iteration, component_namespace):
watch_component_status, failed_component_pods = \
monitor_namespace(component_namespace)
logging.info(
"Iteration %s: %s: %s" % (
iteration,
component_namespace,
watch_component_status
)
)
return watch_component_status, failed_component_pods
def apply_yaml(path, namespace='default'):
"""
Apply yaml config to create Kubernetes resources
Args:
path (string)
- Path to the YAML file
namespace (string)
- Namespace to create the resource
Returns:
The object created
"""
return utils.create_from_yaml(
api_client,
yaml_file=path,
namespace=namespace
)
def get_pod_info(name: str, namespace: str = 'default') -> Pod:
"""
Function to retrieve information about a specific pod
in a given namespace. The kubectl command is given by:
kubectl get pods <name> -n <namespace>
Args:
name (string)
- Name of the pod
namespace (string)
- Namespace to look for the pod
Returns:
- Data class object of type Pod with the output of the above
kubectl command in the given format if the pod exists
- Returns None if the pod doesn't exist
"""
pod_exists = check_if_pod_exists(name=name, namespace=namespace)
if pod_exists:
response = cli.read_namespaced_pod(
name=name,
namespace=namespace,
pretty='true'
)
container_list = []
# Create a list of containers present in the pod
for container in response.spec.containers:
volume_mount_list = []
for volume_mount in container.volume_mounts:
volume_mount_list.append(
VolumeMount(
name=volume_mount.name,
mountPath=volume_mount.mount_path
)
)
container_list.append(
Container(
name=container.name,
image=container.image,
volumeMounts=volume_mount_list
)
)
for i, container in enumerate(response.status.container_statuses):
container_list[i].ready = container.ready
# Create a list of volumes associated with the pod
volume_list = []
for volume in response.spec.volumes:
volume_name = volume.name
pvc_name = (
volume.persistent_volume_claim.claim_name
if volume.persistent_volume_claim is not None
else None
)
volume_list.append(Volume(name=volume_name, pvcName=pvc_name))
# Create the Pod data class object
pod_info = Pod(
name=response.metadata.name,
podIP=response.status.pod_ip,
namespace=response.metadata.namespace,
containers=container_list,
nodeName=response.spec.node_name,
volumes=volume_list
)
return pod_info
else:
logging.error(
"Pod '%s' doesn't exist in namespace '%s'" % (
str(name),
str(namespace)
)
)
return None
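
A short usage sketch (pod and namespace names are placeholders); the returned Pod data class carries the node, IP, containers and volumes:

```python
# Hypothetical lookup; names are placeholders.
pod = get_pod_info(name="example-pod", namespace="default")
if pod is not None:
    print(pod.nodeName, pod.podIP, [c.name for c in pod.containers])
```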
def get_litmus_chaos_object(
kind: str,
name: str,
namespace: str
) -> LitmusChaosObject:
"""
Function that returns an object of a custom resource type of
the litmus project. Currently, only ChaosEngine and ChaosResult
objects are supported.
Args:
kind (string)
- The custom resource type
namespace (string)
- Namespace where the custom object is present
Returns:
Data class object of a subclass of LitmusChaosObject
"""
group = 'litmuschaos.io'
version = 'v1alpha1'
if kind.lower() == 'chaosengine':
plural = 'chaosengines'
response = custom_object_client.get_namespaced_custom_object(
group=group,
plural=plural,
version=version,
namespace=namespace,
name=name
)
try:
engine_status = response['status']['engineStatus']
exp_status = response['status']['experiments'][0]['status']
except Exception:
engine_status = 'Not Initialized'
exp_status = 'Not Initialized'
custom_object = ChaosEngine(
kind='ChaosEngine',
group=group,
namespace=namespace,
name=name,
plural=plural,
version=version,
engineStatus=engine_status,
expStatus=exp_status
)
elif kind.lower() == 'chaosresult':
plural = 'chaosresults'
response = custom_object_client.get_namespaced_custom_object(
group=group,
plural=plural,
version=version,
namespace=namespace,
name=name
)
try:
verdict = response['status']['experimentStatus']['verdict']
fail_step = response['status']['experimentStatus']['failStep']
except Exception:
verdict = 'N/A'
fail_step = 'N/A'
custom_object = ChaosResult(
kind='ChaosResult',
group=group,
namespace=namespace,
name=name,
plural=plural,
version=version,
verdict=verdict,
failStep=fail_step
)
else:
logging.error("Invalid litmus chaos custom resource name")
custom_object = None
return custom_object
def check_if_namespace_exists(name: str) -> bool:
"""
Function that checks if a namespace exists by parsing through
the list of projects.
Args:
name (string)
- Namespace name
Returns:
Boolean value indicating whether the namespace exists or not
"""
v1_projects = dyn_client.resources.get(
api_version='project.openshift.io/v1',
kind='Project'
)
project_list = v1_projects.get()
return True if name in str(project_list) else False
def check_if_pod_exists(name: str, namespace: str) -> bool:
"""
Function that checks if a pod exists in the given namespace
Args:
name (string)
- Pod name
namespace (string)
- Namespace name
Returns:
Boolean value indicating whether the pod exists or not
"""
namespace_exists = check_if_namespace_exists(namespace)
if namespace_exists:
pod_list = list_pods(namespace=namespace)
if name in pod_list:
return True
else:
logging.error("Namespace '%s' doesn't exist" % str(namespace))
return False
def check_if_pvc_exists(name: str, namespace: str) -> bool:
"""
Function that checks if a Persistent Volume Claim exists
in the given namespace
Args:
name (string)
- PVC name
namespace (string)
- Namespace name
Returns:
Boolean value indicating whether the Persistent Volume Claim
exists or not.
"""
namespace_exists = check_if_namespace_exists(namespace)
if namespace_exists:
response = cli.list_namespaced_persistent_volume_claim(
namespace=namespace
)
pvc_list = [pvc.metadata.name for pvc in response.items]
if name in pvc_list:
return True
else:
logging.error("Namespace '%s' doesn't exist" % str(namespace))
return False
def get_pvc_info(name: str, namespace: str) -> PVC:
"""
Function to retrieve information about a Persistent Volume Claim in a
given namespace
Args:
name (string)
- Name of the persistent volume claim
namespace (string)
- Namespace where the persistent volume claim is present
Returns:
- A PVC data class containing the name, capacity, volume name,
namespace and associated pod names of the PVC if the PVC exists
- Returns None if the PVC doesn't exist
"""
pvc_exists = check_if_pvc_exists(name=name, namespace=namespace)
if pvc_exists:
pvc_info_response = cli.read_namespaced_persistent_volume_claim(
name=name,
namespace=namespace,
pretty=True
)
pod_list_response = cli.list_namespaced_pod(namespace=namespace)
capacity = pvc_info_response.status.capacity['storage']
volume_name = pvc_info_response.spec.volume_name
# Loop through all pods in the namespace to find associated PVCs
pvc_pod_list = []
for pod in pod_list_response.items:
for volume in pod.spec.volumes:
if (
volume.persistent_volume_claim is not None
and volume.persistent_volume_claim.claim_name == name
):
pvc_pod_list.append(pod.metadata.name)
pvc_info = PVC(
name=name,
capacity=capacity,
volumeName=volume_name,
podNames=pvc_pod_list,
namespace=namespace
)
return pvc_info
else:
logging.error(
"PVC '%s' doesn't exist in namespace '%s'" % (
str(name),
str(namespace)
)
)
return None
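A short usage sketch (the PVC name and namespace are illustrative):

````
# Hypothetical usage: log the pods that mount a given claim.
pvc = get_pvc_info(name="my-claim", namespace="default")
if pvc is not None:
    logging.info("PVC %s (%s) is mounted by: %s" % (pvc.name, pvc.capacity, ", ".join(pvc.podNames)))
````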
# Find the node kraken is deployed on
# Set global kraken node to not delete
def find_kraken_node():
pods = get_all_pods()
kraken_pod_name = None
for pod in pods:
if "kraken-deployment" in pod[0]:
kraken_pod_name = pod[0]
kraken_project = pod[1]
break
# have to switch to proper project
if kraken_pod_name:
# get kraken-deployment pod, find node name
try:
node_name = get_pod_info(kraken_pod_name, kraken_project).nodeName
global kraken_node_name
kraken_node_name = node_name
except Exception as e:
logging.info("%s" % (e))
sys.exit(1)
# Watch for a specific node status
def watch_node_status(node, status, timeout, resource_version):
count = timeout
for event in watch_resource.stream(
cli.list_node,
field_selector=f"metadata.name={node}",
timeout_seconds=timeout,
resource_version=f"{resource_version}"
):
conditions = [
status
for status in event["object"].status.conditions
if status.type == "Ready"
]
if conditions[0].status == status:
watch_resource.stop()
break
else:
count -= 1
logging.info(
"Status of node " + node + ": " + str(conditions[0].status)
)
if not count:
watch_resource.stop()
# Watch for a specific managedcluster status
# TODO: Implement this with a watcher instead of polling
def watch_managedcluster_status(managedcluster, status, timeout):
elapsed_time = 0
while True:
conditions = custom_object_client.get_cluster_custom_object_status(
"cluster.open-cluster-management.io", "v1", "managedclusters", managedcluster
)['status']['conditions']
available = list(filter(lambda condition: condition['reason'] == 'ManagedClusterAvailable', conditions))
if status == "True":
if available and available[0]['status'] == "True":
logging.info("Status of managedcluster " + managedcluster + ": Available")
return True
else:
if not available:
logging.info("Status of managedcluster " + managedcluster + ": Unavailable")
return True
time.sleep(2)
elapsed_time += 2
if elapsed_time >= timeout:
logging.info("Timeout waiting for managedcluster " + managedcluster + " to become: " + status)
return False
# Get the resource version for the specified node
def get_node_resource_version(node):
return cli.read_node(name=node).metadata.resource_version
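The watcher expects the node's current resourceVersion as its starting point, so these two functions are meant to be used together; a sketch with an example node name:

````
# Hypothetical usage: watch a node until its Ready condition turns "False".
rv = get_node_resource_version("worker-0")
watch_node_status("worker-0", "False", 300, rv)
````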

View File

@@ -0,0 +1,74 @@
from dataclasses import dataclass
from typing import List
@dataclass(frozen=True, order=False)
class Volume:
"""Data class to hold information regarding volumes in a pod"""
name: str
pvcName: str
@dataclass(order=False)
class VolumeMount:
"""Data class to hold information regarding volume mounts"""
name: str
mountPath: str
@dataclass(frozen=True, order=False)
class PVC:
"""Data class to hold information regarding persistent volume claims"""
name: str
capacity: str
volumeName: str
podNames: List[str]
namespace: str
@dataclass(order=False)
class Container:
"""Data class to hold information regarding containers in a pod"""
image: str
name: str
volumeMounts: List[VolumeMount]
ready: bool = False
@dataclass(frozen=True, order=False)
class Pod:
"""Data class to hold information regarding a pod"""
name: str
podIP: str
namespace: str
containers: List[Container]
nodeName: str
volumes: List[Volume]
@dataclass(frozen=True, order=False)
class LitmusChaosObject:
"""Data class to hold information regarding a custom object of litmus project"""
kind: str
group: str
namespace: str
name: str
plural: str
version: str
@dataclass(frozen=True, order=False)
class ChaosEngine(LitmusChaosObject):
"""Data class to hold information regarding a ChaosEngine object"""
engineStatus: str
expStatus: str
@dataclass(frozen=True, order=False)
class ChaosResult(LitmusChaosObject):
"""Data class to hold information regarding a ChaosResult object"""
verdict: str
failStep: str
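These are plain containers with no behavior; for illustration, a Pod object could be assembled by hand like this (every field value is made up):

````
# Illustrative only -- all values below are invented.
pod = Pod(
    name="web-0",
    podIP="10.128.0.12",
    namespace="default",
    containers=[
        Container(
            name="web",
            image="nginx:1.25",
            volumeMounts=[VolumeMount(name="data", mountPath="/data")],
        )
    ],
    nodeName="worker-0",
    volumes=[Volume(name="data", pvcName="web-data")],
)
````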

View File

@@ -5,10 +5,18 @@ import sys
import requests
import yaml
import kraken.cerberus.setup as cerberus
from krkn_lib.k8s import KrknKubernetes
# krkn_lib
# Inject litmus scenarios defined in the config
def run(scenarios_list, config, litmus_uninstall, wait_duration, litmus_namespace):
def run(
scenarios_list,
config,
litmus_uninstall,
wait_duration,
litmus_namespace,
kubecli: KrknKubernetes
):
# Loop to run the scenarios starts here
for l_scenario in scenarios_list:
start_time = int(time.time())
@@ -34,16 +42,16 @@ def run(scenarios_list, config, litmus_uninstall, wait_duration, litmus_namespac
sys.exit(1)
for expr in experiment_names:
expr_name = expr["name"]
experiment_result = check_experiment(engine_name, expr_name, litmus_namespace)
experiment_result = check_experiment(engine_name, expr_name, litmus_namespace, kubecli)
if experiment_result:
logging.info("Scenario: %s has been successfully injected!" % item)
else:
logging.info("Scenario: %s was not successfully injected, please check" % item)
if litmus_uninstall:
delete_chaos(litmus_namespace)
delete_chaos(litmus_namespace, kubecli)
sys.exit(1)
if litmus_uninstall:
delete_chaos(litmus_namespace)
delete_chaos(litmus_namespace, kubecli)
logging.info("Waiting for the specified duration: %s" % wait_duration)
time.sleep(wait_duration)
end_time = int(time.time())
@@ -85,19 +93,19 @@ def deploy_all_experiments(version_string, namespace):
)
def wait_for_initialized(engine_name, experiment_name, namespace):
chaos_engine = runcommand.invoke(
"kubectl get chaosengines/%s -n %s -o jsonpath='{.status.engineStatus}'" % (engine_name, namespace)
)
# krkn_lib
def wait_for_initialized(engine_name, experiment_name, namespace, kubecli: KrknKubernetes):
chaos_engine = kubecli.get_litmus_chaos_object(kind='chaosengine', name=engine_name,
namespace=namespace).engineStatus
engine_status = chaos_engine.strip()
max_tries = 30
engine_counter = 0
while engine_status.lower() != "initialized":
time.sleep(10)
logging.info("Waiting for " + experiment_name + " to be initialized")
chaos_engine = runcommand.invoke(
"kubectl get chaosengines/%s -n %s -o jsonpath='{.status.engineStatus}'" % (engine_name, namespace)
)
chaos_engine = kubecli.get_litmus_chaos_object(kind='chaosengine', name=engine_name,
namespace=namespace).engineStatus
engine_status = chaos_engine.strip()
if engine_counter >= max_tries:
logging.error("Chaos engine " + experiment_name + " took longer than 5 minutes to be initialized")
@@ -110,25 +118,30 @@ def wait_for_initialized(engine_name, experiment_name, namespace):
return True
def wait_for_status(engine_name, expected_status, experiment_name, namespace):
# krkn_lib
def wait_for_status(
engine_name,
expected_status,
experiment_name,
namespace,
kubecli: KrknKubernetes
):
if expected_status == "running":
response = wait_for_initialized(engine_name, experiment_name, namespace)
response = wait_for_initialized(engine_name, experiment_name, namespace, kubecli)
if not response:
logging.info("Chaos engine never initialized, exiting")
return False
chaos_engine = runcommand.invoke(
"kubectl get chaosengines/%s -n %s -o jsonpath='{.status.experiments[0].status}'" % (engine_name, namespace)
)
chaos_engine = kubecli.get_litmus_chaos_object(kind='chaosengine', name=engine_name,
namespace=namespace).expStatus
engine_status = chaos_engine.strip()
max_tries = 30
engine_counter = 0
while engine_status.lower() != expected_status:
time.sleep(10)
logging.info("Waiting for " + experiment_name + " to be " + expected_status)
chaos_engine = runcommand.invoke(
"kubectl get chaosengines/%s -n %s -o jsonpath='{.status.experiments[0].status}'" % (engine_name, namespace)
)
chaos_engine = kubecli.get_litmus_chaos_object(kind='chaosengine', name=engine_name,
namespace=namespace).expStatus
engine_status = chaos_engine.strip()
if engine_counter >= max_tries:
logging.error("Chaos engine " + experiment_name + " took longer than 5 minutes to be " + expected_status)
@@ -142,29 +155,24 @@ def wait_for_status(engine_name, expected_status, experiment_name, namespace):
# Check status of experiment
def check_experiment(engine_name, experiment_name, namespace):
# krkn_lib
def check_experiment(engine_name, experiment_name, namespace, kubecli: KrknKubernetes):
wait_response = wait_for_status(engine_name, "running", experiment_name, namespace)
wait_response = wait_for_status(engine_name, "running", experiment_name, namespace, kubecli)
if wait_response:
wait_for_status(engine_name, "completed", experiment_name, namespace)
wait_for_status(engine_name, "completed", experiment_name, namespace, kubecli)
else:
sys.exit(1)
chaos_result = runcommand.invoke(
"kubectl get chaosresult %s"
"-%s -n %s -o "
"jsonpath='{.status.experimentStatus.verdict}'" % (engine_name, experiment_name, namespace)
)
chaos_result = kubecli.get_litmus_chaos_object(kind='chaosresult', name=engine_name+'-'+experiment_name,
namespace=namespace).verdict
if chaos_result == "Pass":
logging.info("Engine " + str(engine_name) + " finished with status " + str(chaos_result))
return True
else:
chaos_result = runcommand.invoke(
"kubectl get chaosresult %s"
"-%s -n %s -o jsonpath="
"'{.status.experimentStatus.failStep}'" % (engine_name, experiment_name, namespace)
)
chaos_result = kubecli.get_litmus_chaos_object(kind='chaosresult', name=engine_name+'-'+experiment_name,
namespace=namespace).failStep
logging.info("Chaos scenario:" + engine_name + " failed with error: " + str(chaos_result))
logging.info(
"See 'kubectl get chaosresult %s"
@@ -174,10 +182,10 @@ def check_experiment(engine_name, experiment_name, namespace):
# Delete all chaos engines in a given namespace
def delete_chaos_experiments(namespace):
# krkn_lib
def delete_chaos_experiments(namespace, kubecli: KrknKubernetes):
namespace_exists = runcommand.invoke("oc get project -o name | grep -c " + namespace + " | xargs")
if namespace_exists.strip() != "0":
if kubecli.check_if_namespace_exists(namespace):
chaos_exp_exists = runcommand.invoke_no_exit("kubectl get chaosexperiment")
if "returned non-zero exit status 1" not in chaos_exp_exists:
logging.info("Deleting all litmus experiments")
@@ -185,10 +193,10 @@ def delete_chaos_experiments(namespace):
# Delete all chaos engines in a given namespace
def delete_chaos(namespace):
# krkn_lib
def delete_chaos(namespace, kubecli: KrknKubernetes):
namespace_exists = runcommand.invoke("oc get project -o name | grep -c " + namespace + " | xargs")
if namespace_exists.strip() != "0":
if kubecli.check_if_namespace_exists(namespace):
logging.info("Deleting all litmus run objects")
chaos_engine_exists = runcommand.invoke_no_exit("kubectl get chaosengine")
if "returned non-zero exit status 1" not in chaos_engine_exists:
@@ -200,9 +208,10 @@ def delete_chaos(namespace):
logging.info(namespace + " namespace doesn't exist")
def uninstall_litmus(version, litmus_namespace):
namespace_exists = runcommand.invoke("oc get project -o name | grep -c " + litmus_namespace + " | xargs")
if namespace_exists.strip() != "0":
# krkn_lib
def uninstall_litmus(version, litmus_namespace, kubecli: KrknKubernetes):
if kubecli.check_if_namespace_exists(litmus_namespace):
logging.info("Uninstalling Litmus operator")
runcommand.invoke_no_exit(
"kubectl delete -n %s -f "

View File

@@ -0,0 +1,41 @@
import random
import logging
from krkn_lib.k8s import KrknKubernetes
# krkn_lib
# Pick a random managedcluster with specified label selector
def get_managedcluster(
managedcluster_name,
label_selector,
instance_kill_count,
kubecli: KrknKubernetes):
if managedcluster_name in kubecli.list_killable_managedclusters():
return [managedcluster_name]
elif managedcluster_name:
logging.info("managedcluster with provided managedcluster_name does not exist or the managedcluster might " "be in unavailable state.")
managedclusters = kubecli.list_killable_managedclusters(label_selector)
if not managedclusters:
raise Exception("Available managedclusters with the provided label selector do not exist")
logging.info("Available managedclusters with the label selector %s: %s" % (label_selector, managedclusters))
number_of_managedclusters = len(managedclusters)
if instance_kill_count == number_of_managedclusters:
return managedclusters
managedclusters_to_return = []
for i in range(instance_kill_count):
managedcluster_to_add = managedclusters[random.randint(0, len(managedclusters) - 1)]
managedclusters_to_return.append(managedcluster_to_add)
managedclusters.remove(managedcluster_to_add)
return managedclusters_to_return
# Wait until the managedcluster status becomes Available
# krkn_lib
def wait_for_available_status(managedcluster, timeout, kubecli: KrknKubernetes):
kubecli.watch_managedcluster_status(managedcluster, "True", timeout)
# Wait until the managedcluster status becomes Not Available
# krkn_lib
def wait_for_unavailable_status(managedcluster, timeout, kubecli: KrknKubernetes):
kubecli.watch_managedcluster_status(managedcluster, "Unknown", timeout)

View File

@@ -0,0 +1,140 @@
from jinja2 import Environment, FileSystemLoader
import os
import time
import logging
import sys
import yaml
import kraken.managedcluster_scenarios.common_managedcluster_functions as common_managedcluster_functions
from krkn_lib.k8s import KrknKubernetes
class GENERAL:
def __init__(self):
pass
# krkn_lib
class managedcluster_scenarios():
kubecli: KrknKubernetes
def __init__(self, kubecli: KrknKubernetes):
self.kubecli = kubecli
self.general = GENERAL()
# managedcluster scenario to start the managedcluster
def managedcluster_start_scenario(self, instance_kill_count, managedcluster, timeout):
for _ in range(instance_kill_count):
try:
logging.info("Starting managedcluster_start_scenario injection")
file_loader = FileSystemLoader(os.path.abspath(os.path.dirname(__file__)))
env = Environment(loader=file_loader, autoescape=False)
template = env.get_template("manifestwork.j2")
body = yaml.safe_load(
template.render(managedcluster_name=managedcluster,
args="""kubectl scale deployment.apps/klusterlet --replicas 3 &
kubectl scale deployment.apps/klusterlet-registration-agent --replicas 1 -n open-cluster-management-agent""")
)
self.kubecli.create_manifestwork(body, managedcluster)
logging.info("managedcluster_start_scenario has been successfully injected!")
logging.info("Waiting for the specified timeout: %s" % timeout)
common_managedcluster_functions.wait_for_available_status(managedcluster, timeout, self.kubecli)
except Exception as e:
logging.error("managedcluster scenario exiting due to Exception %s" % e)
sys.exit(1)
finally:
logging.info("Deleting manifestworks")
self.kubecli.delete_manifestwork(managedcluster)
# managedcluster scenario to stop the managedcluster
def managedcluster_stop_scenario(self, instance_kill_count, managedcluster, timeout):
for _ in range(instance_kill_count):
try:
logging.info("Starting managedcluster_stop_scenario injection")
file_loader = FileSystemLoader(os.path.abspath(os.path.dirname(__file__)), encoding='utf-8')
env = Environment(loader=file_loader, autoescape=False)
template = env.get_template("manifestwork.j2")
body = yaml.safe_load(
template.render(managedcluster_name=managedcluster,
args="""kubectl scale deployment.apps/klusterlet --replicas 0 &&
kubectl scale deployment.apps/klusterlet-registration-agent --replicas 0 -n open-cluster-management-agent""")
)
self.kubecli.create_manifestwork(body, managedcluster)
logging.info("managedcluster_stop_scenario has been successfully injected!")
logging.info("Waiting for the specified timeout: %s" % timeout)
common_managedcluster_functions.wait_for_unavailable_status(managedcluster, timeout, self.kubecli)
except Exception as e:
logging.error("managedcluster scenario exiting due to Exception %s" % e)
sys.exit(1)
finally:
logging.info("Deleting manifestworks")
self.kubecli.delete_manifestwork(managedcluster)
# managedcluster scenario to stop and then start the managedcluster
def managedcluster_stop_start_scenario(self, instance_kill_count, managedcluster, timeout):
logging.info("Starting managedcluster_stop_start_scenario injection")
self.managedcluster_stop_scenario(instance_kill_count, managedcluster, timeout)
time.sleep(10)
self.managedcluster_start_scenario(instance_kill_count, managedcluster, timeout)
logging.info("managedcluster_stop_start_scenario has been successfully injected!")
# managedcluster scenario to terminate the managedcluster
def managedcluster_termination_scenario(self, instance_kill_count, managedcluster, timeout):
logging.info("managedcluster termination is not implemented, " "no action is going to be taken")
# managedcluster scenario to reboot the managedcluster
def managedcluster_reboot_scenario(self, instance_kill_count, managedcluster, timeout):
logging.info("managedcluster reboot is not implemented," " no action is going to be taken")
# managedcluster scenario to start the klusterlet
def start_klusterlet_scenario(self, instance_kill_count, managedcluster, timeout):
for _ in range(instance_kill_count):
try:
logging.info("Starting start_klusterlet_scenario injection")
file_loader = FileSystemLoader(os.path.abspath(os.path.dirname(__file__)))
env = Environment(loader=file_loader, autoescape=False)
template = env.get_template("manifestwork.j2")
body = yaml.safe_load(
template.render(managedcluster_name=managedcluster,
args="""kubectl scale deployment.apps/klusterlet --replicas 3""")
)
self.kubecli.create_manifestwork(body, managedcluster)
logging.info("start_klusterlet_scenario has been successfully injected!")
time.sleep(30) # until https://github.com/open-cluster-management-io/OCM/issues/118 gets solved
except Exception as e:
logging.error("managedcluster scenario exiting due to Exception %s" % e)
sys.exit(1)
finally:
logging.info("Deleting manifestworks")
self.kubecli.delete_manifestwork(managedcluster)
# managedcluster scenario to stop the klusterlet
def stop_klusterlet_scenario(self, instance_kill_count, managedcluster, timeout):
for _ in range(instance_kill_count):
try:
logging.info("Starting stop_klusterlet_scenario injection")
file_loader = FileSystemLoader(os.path.abspath(os.path.dirname(__file__)))
env = Environment(loader=file_loader, autoescape=False)
template = env.get_template("manifestwork.j2")
body = yaml.safe_load(
template.render(managedcluster_name=managedcluster,
args="""kubectl scale deployment.apps/klusterlet --replicas 0""")
)
self.kubecli.create_manifestwork(body, managedcluster)
logging.info("stop_klusterlet_scenario has been successfully injected!")
time.sleep(30) # until https://github.com/open-cluster-management-io/OCM/issues/118 gets solved
except Exception as e:
logging.error("managedcluster scenario exiting due to Exception %s" % e)
sys.exit(1)
finally:
logging.info("Deleting manifestworks")
self.kubecli.delete_manifestwork(managedcluster)
# managedcluster scenario to stop and start the klusterlet
def stop_start_klusterlet_scenario(self, instance_kill_count, managedcluster, timeout):
logging.info("Starting stop_start_klusterlet_scenario injection")
self.stop_klusterlet_scenario(instance_kill_count, managedcluster, timeout)
time.sleep(10)
self.start_klusterlet_scenario(instance_kill_count, managedcluster, timeout)
logging.info("stop_start_klusterlet_scenario has been successfully injected!")
# managedcluster scenario to crash the managedcluster
def managedcluster_crash_scenario(self, instance_kill_count, managedcluster, timeout):
logging.info("managedcluster crash scenario is not implemented, " "no action is going to be taken")

View File

@@ -0,0 +1,68 @@
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
namespace: {{managedcluster_name}}
name: managedcluster-scenarios-template
spec:
workload:
manifests:
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: scale-deploy
namespace: open-cluster-management
rules:
- apiGroups: ["apps"]
resources: ["deployments/scale"]
verbs: ["patch"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get"]
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scale-deploy-to-sa
namespace: open-cluster-management
subjects:
- kind: ServiceAccount
name: internal-kubectl
namespace: open-cluster-management
roleRef:
kind: ClusterRole
name: scale-deploy
apiGroup: rbac.authorization.k8s.io
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scale-deploy-to-sa
namespace: open-cluster-management-agent
subjects:
- kind: ServiceAccount
name: internal-kubectl
namespace: open-cluster-management
roleRef:
kind: ClusterRole
name: scale-deploy
apiGroup: rbac.authorization.k8s.io
- apiVersion: v1
kind: ServiceAccount
metadata:
name: internal-kubectl
namespace: open-cluster-management
- apiVersion: batch/v1
kind: Job
metadata:
name: managedcluster-scenarios-template
namespace: open-cluster-management
spec:
template:
spec:
serviceAccountName: internal-kubectl
containers:
- name: kubectl
image: quay.io/sighup/kubectl-kustomize:1.21.6_3.9.1
command: ["/bin/sh", "-c"]
args:
- {{args}}
restartPolicy: Never
backoffLimit: 0
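For reference, the scenario classes above render this template roughly as follows; a self-contained sketch (the args value is an example):

````
# Sketch of the rendering pattern used by managedcluster_scenarios.
from jinja2 import Environment, FileSystemLoader
import yaml

env = Environment(loader=FileSystemLoader("."), autoescape=False)
template = env.get_template("manifestwork.j2")
body = yaml.safe_load(template.render(
    managedcluster_name="cluster1",  # example value
    args="kubectl scale deployment.apps/klusterlet --replicas 0",
))
# body can then be passed to kubecli.create_manifestwork(body, "cluster1")
````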

View File

@@ -0,0 +1,69 @@
import yaml
import logging
import time
from kraken.managedcluster_scenarios.managedcluster_scenarios import managedcluster_scenarios
import kraken.managedcluster_scenarios.common_managedcluster_functions as common_managedcluster_functions
import kraken.cerberus.setup as cerberus
from krkn_lib.k8s import KrknKubernetes
# Get the managedcluster scenarios object of specified cloud type
# krkn_lib
def get_managedcluster_scenario_object(managedcluster_scenario, kubecli: KrknKubernetes):
return managedcluster_scenarios(kubecli)
# Run defined scenarios
# krkn_lib
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes):
for managedcluster_scenario_config in scenarios_list:
with open(managedcluster_scenario_config, "r") as f:
managedcluster_scenario_config = yaml.full_load(f)
for managedcluster_scenario in managedcluster_scenario_config["managedcluster_scenarios"]:
managedcluster_scenario_object = get_managedcluster_scenario_object(managedcluster_scenario, kubecli)
if managedcluster_scenario["actions"]:
for action in managedcluster_scenario["actions"]:
start_time = int(time.time())
inject_managedcluster_scenario(action, managedcluster_scenario, managedcluster_scenario_object, kubecli)
logging.info("Waiting for the specified duration: %s" % (wait_duration))
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.get_status(config, start_time, end_time)
logging.info("")
# Inject the specified managedcluster scenario
# krkn_lib
def inject_managedcluster_scenario(action, managedcluster_scenario, managedcluster_scenario_object, kubecli: KrknKubernetes):
# Get the managedcluster scenario configurations
run_kill_count = managedcluster_scenario.get("runs", 1)
instance_kill_count = managedcluster_scenario.get("instance_count", 1)
managedcluster_name = managedcluster_scenario.get("managedcluster_name", "")
label_selector = managedcluster_scenario.get("label_selector", "")
timeout = managedcluster_scenario.get("timeout", 120)
# Get the managedcluster to apply the scenario
if managedcluster_name:
managedcluster_name_list = managedcluster_name.split(",")
else:
managedcluster_name_list = [managedcluster_name]
for single_managedcluster_name in managedcluster_name_list:
managedclusters = common_managedcluster_functions.get_managedcluster(single_managedcluster_name, label_selector, instance_kill_count, kubecli)
for single_managedcluster in managedclusters:
if action == "managedcluster_start_scenario":
managedcluster_scenario_object.managedcluster_start_scenario(run_kill_count, single_managedcluster, timeout)
elif action == "managedcluster_stop_scenario":
managedcluster_scenario_object.managedcluster_stop_scenario(run_kill_count, single_managedcluster, timeout)
elif action == "managedcluster_stop_start_scenario":
managedcluster_scenario_object.managedcluster_stop_start_scenario(run_kill_count, single_managedcluster, timeout)
elif action == "managedcluster_termination_scenario":
managedcluster_scenario_object.managedcluster_termination_scenario(run_kill_count, single_managedcluster, timeout)
elif action == "managedcluster_reboot_scenario":
managedcluster_scenario_object.managedcluster_reboot_scenario(run_kill_count, single_managedcluster, timeout)
elif action == "stop_start_klusterlet_scenario":
managedcluster_scenario_object.stop_start_klusterlet_scenario(run_kill_count, single_managedcluster, timeout)
elif action == "start_klusterlet_scenario":
managedcluster_scenario_object.start_klusterlet_scenario(run_kill_count, single_managedcluster, timeout)
elif action == "stop_klusterlet_scenario":
managedcluster_scenario_object.stop_klusterlet_scenario(run_kill_count, single_managedcluster, timeout)
elif action == "managedcluster_crash_scenario":
managedcluster_scenario_object.managedcluster_crash_scenario(run_kill_count, single_managedcluster, timeout)
else:
logging.info("There is no managedcluster action that matches %s, skipping scenario" % action)

View File

@@ -1,80 +1,114 @@
import time
import random
import logging
import kraken.kubernetes.client as kubecli
import kraken.cerberus.setup as cerberus
import kraken.post_actions.actions as post_actions
import yaml
import sys
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry import KrknTelemetry
from krkn_lib.models.telemetry import ScenarioTelemetry
def run(scenarios_list, config, wait_duration, failed_post_scenarios, kubeconfig_path):
# krkn_lib
def run(
scenarios_list,
config,
wait_duration,
failed_post_scenarios,
kubeconfig_path,
kubecli: KrknKubernetes,
telemetry: KrknTelemetry
) -> (list[str], list[ScenarioTelemetry]):
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
for scenario_config in scenarios_list:
if len(scenario_config) > 1:
pre_action_output = post_actions.run(kubeconfig_path, scenario_config[1])
else:
pre_action_output = ""
with open(scenario_config[0], "r") as f:
scenario_config_yaml = yaml.full_load(f)
for scenario in scenario_config_yaml["scenarios"]:
scenario_namespace = scenario.get("namespace", "")
scenario_label = scenario.get("label_selector", "")
if scenario_namespace is not None and scenario_namespace.strip() != "":
if scenario_label is not None and scenario_label.strip() != "":
logging.error("You can only have namespace or label set in your namespace scenario")
logging.error(
"Current scenario config has namespace '%s' and label selector '%s'"
% (scenario_namespace, scenario_label)
)
logging.error(
"Please set either namespace to blank ('') or label_selector to blank ('') to continue"
)
sys.exit(1)
delete_count = scenario.get("delete_count", 1)
run_count = scenario.get("runs", 1)
run_sleep = scenario.get("sleep", 10)
wait_time = scenario.get("wait_time", 30)
killed_namespaces = []
start_time = int(time.time())
for i in range(run_count):
namespaces = kubecli.check_namespaces([scenario_namespace], scenario_label)
for j in range(delete_count):
if len(namespaces) == 0:
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario_config[0]
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario_config[0])
try:
if len(scenario_config) > 1:
pre_action_output = post_actions.run(kubeconfig_path, scenario_config[1])
else:
pre_action_output = ""
with open(scenario_config[0], "r") as f:
scenario_config_yaml = yaml.full_load(f)
for scenario in scenario_config_yaml["scenarios"]:
scenario_namespace = scenario.get("namespace", "")
scenario_label = scenario.get("label_selector", "")
if scenario_namespace is not None and scenario_namespace.strip() != "":
if scenario_label is not None and scenario_label.strip() != "":
logging.error("You can only have namespace or label set in your namespace scenario")
logging.error(
"Couldn't delete %s namespaces, not enough namespaces matching %s with label %s"
% (str(run_count), scenario_namespace, str(scenario_label))
"Current scenario config has namespace '%s' and label selector '%s'"
% (scenario_namespace, scenario_label)
)
sys.exit(1)
selected_namespace = namespaces[random.randint(0, len(namespaces) - 1)]
killed_namespaces.append(selected_namespace)
try:
kubecli.delete_namespace(selected_namespace)
logging.info("Delete on namespace %s was successful" % str(selected_namespace))
except Exception as e:
logging.info("Delete on namespace %s was unsuccessful" % str(selected_namespace))
logging.info("Namespace action error: " + str(e))
sys.exit(1)
namespaces.remove(selected_namespace)
logging.info("Waiting %s seconds between namespace deletions" % str(run_sleep))
time.sleep(run_sleep)
logging.info("Waiting for the specified duration: %s" % wait_duration)
time.sleep(wait_duration)
if len(scenario_config) > 1:
try:
failed_post_scenarios = post_actions.check_recovery(
kubeconfig_path, scenario_config, failed_post_scenarios, pre_action_output
logging.error(
"Please set either namespace to blank ('') or label_selector to blank ('') to continue"
)
# removed_exit
# sys.exit(1)
raise RuntimeError()
delete_count = scenario.get("delete_count", 1)
run_count = scenario.get("runs", 1)
run_sleep = scenario.get("sleep", 10)
wait_time = scenario.get("wait_time", 30)
killed_namespaces = []
start_time = int(time.time())
for i in range(run_count):
namespaces = kubecli.check_namespaces([scenario_namespace], scenario_label)
for j in range(delete_count):
if len(namespaces) == 0:
logging.error(
"Couldn't delete %s namespaces, not enough namespaces matching %s with label %s"
% (str(run_count), scenario_namespace, str(scenario_label))
)
# removed_exit
# sys.exit(1)
raise RuntimeError()
selected_namespace = namespaces[random.randint(0, len(namespaces) - 1)]
killed_namespaces.append(selected_namespace)
try:
kubecli.delete_namespace(selected_namespace)
logging.info("Delete on namespace %s was successful" % str(selected_namespace))
except Exception as e:
logging.error("Failed to run post action checks: %s" % e)
sys.exit(1)
else:
failed_post_scenarios = check_active_namespace(killed_namespaces, wait_time)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
logging.info("Delete on namespace %s was unsuccessful" % str(selected_namespace))
logging.info("Namespace action error: " + str(e))
# removed_exit
# sys.exit(1)
raise RuntimeError()
namespaces.remove(selected_namespace)
logging.info("Waiting %s seconds between namespace deletions" % str(run_sleep))
time.sleep(run_sleep)
logging.info("Waiting for the specified duration: %s" % wait_duration)
time.sleep(wait_duration)
if len(scenario_config) > 1:
try:
failed_post_scenarios = post_actions.check_recovery(
kubeconfig_path, scenario_config, failed_post_scenarios, pre_action_output
)
except Exception as e:
logging.error("Failed to run post action checks: %s" % e)
# removed_exit
# sys.exit(1)
raise RuntimeError()
else:
failed_post_scenarios = check_active_namespace(killed_namespaces, wait_time, kubecli)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
except (Exception, RuntimeError):
scenario_telemetry.exitStatus = 1
failed_scenarios.append(scenario_config[0])
telemetry.log_exception(scenario_config[0])
else:
scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
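The telemetry wrapper introduced here is the same pattern applied across the scenario modules; distilled below (run_one stands in for the scenario body and is not a real function in the repo):

````
# Distilled sketch of the per-scenario telemetry pattern above.
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = scenario_file
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, scenario_file)
try:
    run_one(scenario_file)          # the actual chaos injection
except Exception:
    scenario_telemetry.exitStatus = 1
    failed_scenarios.append(scenario_file)
    telemetry.log_exception(scenario_file)
else:
    scenario_telemetry.exitStatus = 0
scenario_telemetry.endTimeStamp = time.time()
scenario_telemetries.append(scenario_telemetry)
````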
def check_active_namespace(killed_namespaces, wait_time):
# krkn_lib
def check_active_namespace(killed_namespaces, wait_time, kubecli: KrknKubernetes):
active_namespace = []
timer = 0
while timer < wait_time and killed_namespaces:

View File

@@ -1,97 +1,116 @@
import yaml
import logging
import time
import sys
import os
import random
from jinja2 import Environment, FileSystemLoader
import kraken.cerberus.setup as cerberus
import kraken.kubernetes.client as kubecli
import kraken.node_actions.common_node_functions as common_node_functions
from jinja2 import Environment, FileSystemLoader
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry import KrknTelemetry
from krkn_lib.models.telemetry import ScenarioTelemetry
# krkn_lib
# Reads the scenario config and introduces traffic variations in Node's host network interface.
def run(scenarios_list, config, wait_duration):
def run(scenarios_list, config, wait_duration, kubecli: KrknKubernetes, telemetry: KrknTelemetry) -> (list[str], list[ScenarioTelemetry]):
failed_post_scenarios = ""
logging.info("Runing the Network Chaos tests")
failed_post_scenarios = ""
scenario_telemetries: list[ScenarioTelemetry] = []
failed_scenarios = []
for net_config in scenarios_list:
with open(net_config, "r") as file:
param_lst = ["latency", "loss", "bandwidth"]
test_config = yaml.safe_load(file)
test_dict = test_config["network_chaos"]
test_duration = int(test_dict.get("duration", 300))
test_interface = test_dict.get("interfaces", [])
test_node = test_dict.get("node_name", "")
test_node_label = test_dict.get("label_selector", "node-role.kubernetes.io/master")
test_execution = test_dict.get("execution", "serial")
test_instance_count = test_dict.get("instance_count", 1)
test_egress = test_dict.get("egress", {"bandwidth": "100mbit"})
if test_node:
node_name_list = test_node.split(",")
else:
node_name_list = [test_node]
nodelst = []
for single_node_name in node_name_list:
nodelst.extend(common_node_functions.get_node(single_node_name, test_node_label, test_instance_count))
file_loader = FileSystemLoader(os.path.abspath(os.path.dirname(__file__)))
env = Environment(loader=file_loader)
pod_template = env.get_template("pod.j2")
test_interface = verify_interface(test_interface, nodelst, pod_template)
joblst = []
egress_lst = [i for i in param_lst if i in test_egress]
chaos_config = {
"network_chaos": {
"duration": test_duration,
"interfaces": test_interface,
"node_name": ",".join(nodelst),
"execution": test_execution,
"instance_count": test_instance_count,
"egress": test_egress,
scenario_telemetry = ScenarioTelemetry()
scenario_telemetry.scenario = net_config
scenario_telemetry.startTimeStamp = time.time()
telemetry.set_parameters_base64(scenario_telemetry, net_config)
try:
with open(net_config, "r") as file:
param_lst = ["latency", "loss", "bandwidth"]
test_config = yaml.safe_load(file)
test_dict = test_config["network_chaos"]
test_duration = int(test_dict.get("duration", 300))
test_interface = test_dict.get("interfaces", [])
test_node = test_dict.get("node_name", "")
test_node_label = test_dict.get("label_selector", "node-role.kubernetes.io/master")
test_execution = test_dict.get("execution", "serial")
test_instance_count = test_dict.get("instance_count", 1)
test_egress = test_dict.get("egress", {"bandwidth": "100mbit"})
if test_node:
node_name_list = test_node.split(",")
else:
node_name_list = [test_node]
nodelst = []
for single_node_name in node_name_list:
nodelst.extend(common_node_functions.get_node(single_node_name, test_node_label, test_instance_count, kubecli))
file_loader = FileSystemLoader(os.path.abspath(os.path.dirname(__file__)))
env = Environment(loader=file_loader, autoescape=True)
pod_template = env.get_template("pod.j2")
test_interface = verify_interface(test_interface, nodelst, pod_template, kubecli)
joblst = []
egress_lst = [i for i in param_lst if i in test_egress]
chaos_config = {
"network_chaos": {
"duration": test_duration,
"interfaces": test_interface,
"node_name": ",".join(nodelst),
"execution": test_execution,
"instance_count": test_instance_count,
"egress": test_egress,
}
}
}
logging.info("Executing network chaos with config \n %s" % yaml.dump(chaos_config))
job_template = env.get_template("job.j2")
try:
for i in egress_lst:
for node in nodelst:
exec_cmd = get_egress_cmd(
test_execution, test_interface, i, test_dict["egress"], duration=test_duration
)
logging.info("Executing %s on node %s" % (exec_cmd, node))
job_body = yaml.safe_load(
job_template.render(jobname=i + str(hash(node))[:5], nodename=node, cmd=exec_cmd)
)
joblst.append(job_body["metadata"]["name"])
api_response = kubecli.create_job(job_body)
if api_response is None:
raise Exception("Error creating job")
if test_execution == "serial":
logging.info("Waiting for serial job to finish")
logging.info("Executing network chaos with config \n %s" % yaml.dump(chaos_config))
job_template = env.get_template("job.j2")
try:
for i in egress_lst:
for node in nodelst:
exec_cmd = get_egress_cmd(
test_execution, test_interface, i, test_dict["egress"], duration=test_duration
)
logging.info("Executing %s on node %s" % (exec_cmd, node))
job_body = yaml.safe_load(
job_template.render(jobname=i + str(hash(node))[:5], nodename=node, cmd=exec_cmd)
)
joblst.append(job_body["metadata"]["name"])
api_response = kubecli.create_job(job_body)
if api_response is None:
raise Exception("Error creating job")
if test_execution == "serial":
logging.info("Waiting for serial job to finish")
start_time = int(time.time())
wait_for_job(joblst[:], kubecli, test_duration + 300)
logging.info("Waiting for wait_duration %s" % wait_duration)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
if test_execution == "parallel":
break
if test_execution == "parallel":
logging.info("Waiting for parallel job to finish")
start_time = int(time.time())
wait_for_job(joblst[:], test_duration + 300)
wait_for_job(joblst[:], kubecli, test_duration + 300)
logging.info("Waiting for wait_duration %s" % wait_duration)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
if test_execution == "parallel":
break
if test_execution == "parallel":
logging.info("Waiting for parallel job to finish")
start_time = int(time.time())
wait_for_job(joblst[:], test_duration + 300)
logging.info("Waiting for wait_duration %s" % wait_duration)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
except Exception as e:
logging.error("Network Chaos exiting due to Exception %s" % e)
sys.exit(1)
finally:
logging.info("Deleting jobs")
delete_job(joblst[:])
except Exception as e:
logging.error("Network Chaos exiting due to Exception %s" % e)
raise RuntimeError()
finally:
logging.info("Deleting jobs")
delete_job(joblst[:], kubecli)
except (RuntimeError, Exception):
scenario_telemetry.exitStatus = 1
failed_scenarios.append(net_config)
telemetry.log_exception(net_config)
else:
scenario_telemetry.exitStatus = 0
scenario_telemetries.append(scenario_telemetry)
return failed_scenarios, scenario_telemetries
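Matching the keys read above, a node network chaos config would look roughly like this (values are illustrative; the interface name is an example):

````
network_chaos:
  duration: 300
  node_name: ""                     # or a comma-separated list of nodes
  label_selector: node-role.kubernetes.io/master
  execution: serial                 # or parallel
  instance_count: 1
  interfaces:
    - ens5                          # example interface
  egress:
    latency: 50ms
    bandwidth: 100mbit
````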
def verify_interface(test_interface, nodelst, template):
# krkn_lib
def verify_interface(test_interface, nodelst, template, kubecli: KrknKubernetes):
pod_index = random.randint(0, len(nodelst) - 1)
pod_body = yaml.safe_load(template.render(nodename=nodelst[pod_index]))
logging.info("Creating pod to query interface on node %s" % nodelst[pod_index])
@@ -107,25 +126,25 @@ def verify_interface(test_interface, nodelst, template):
interface_lst = output[:-1].split(",")
for interface in test_interface:
if interface not in interface_lst:
logging.error(
"Interface %s not found in node %s interface list %s" % (interface, nodelst[pod_index]),
interface_lst,
)
sys.exit(1)
logging.error("Interface %s not found in node %s interface list %s" % (interface, nodelst[pod_index], interface_lst))
#sys.exit(1)
raise RuntimeError()
return test_interface
finally:
logging.info("Deleteing pod to query interface on node")
kubecli.delete_pod("fedtools", "default")
def get_job_pods(api_response):
# krkn_lib
def get_job_pods(api_response, kubecli: KrknKubernetes):
controllerUid = api_response.metadata.labels["controller-uid"]
pod_label_selector = "controller-uid=" + controllerUid
pods_list = kubecli.list_pods(label_selector=pod_label_selector, namespace="default")
return pods_list[0]
def wait_for_job(joblst, timeout=300):
# krkn_lib
def wait_for_job(joblst, kubecli: KrknKubernetes, timeout=300):
waittime = time.time() + timeout
count = 0
joblen = len(joblst)
@@ -137,26 +156,27 @@ def wait_for_job(joblst, timeout=300):
count += 1
joblst.remove(jobname)
except Exception:
logging.warn("Exception in getting job status")
logging.warning("Exception in getting job status")
if time.time() > waittime:
raise Exception("Starting pod failed")
time.sleep(5)
def delete_job(joblst):
# krkn_lib
def delete_job(joblst, kubecli: KrknKubernetes):
for jobname in joblst:
try:
api_response = kubecli.get_job_status(jobname, namespace="default")
if api_response.status.failed is not None:
pod_name = get_job_pods(api_response)
pod_name = get_job_pods(api_response, kubecli)
pod_stat = kubecli.read_pod(name=pod_name, namespace="default")
logging.error(pod_stat.status.container_statuses)
pod_log_response = kubecli.get_pod_log(name=pod_name, namespace="default")
pod_log = pod_log_response.data.decode("utf-8")
logging.error(pod_log)
except Exception:
logging.warn("Exception in getting job status")
api_response = kubecli.delete_job(name=jobname, namespace="default")
logging.warning("Exception in getting job status")
kubecli.delete_job(name=jobname, namespace="default")
def get_egress_cmd(execution, test_interface, mod, vallst, duration=30):

View File

@@ -2,10 +2,13 @@ import sys
import logging
import kraken.invoke.command as runcommand
import kraken.node_actions.common_node_functions as nodeaction
from krkn_lib.k8s import KrknKubernetes
# krkn_lib
class abstract_node_scenarios:
kubecli: KrknKubernetes
def __init__(self, kubecli: KrknKubernetes):
self.kubecli = kubecli
# Node scenario to start the node
def node_start_scenario(self, instance_kill_count, node, timeout):
pass
@@ -42,7 +45,7 @@ class abstract_node_scenarios:
logging.info("Starting stop_kubelet_scenario injection")
logging.info("Stopping the kubelet of the node %s" % (node))
runcommand.run("oc debug node/" + node + " -- chroot /host systemctl stop kubelet")
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
logging.info("The kubelet of the node %s has been stopped" % (node))
logging.info("stop_kubelet_scenario has been successfuly injected!")
except Exception as e:

View File

@@ -1,13 +1,14 @@
import sys
import time
import logging
import kraken.node_actions.common_node_functions as nodeaction
import os
import json
from aliyunsdkcore.client import AcsClient
from aliyunsdkecs.request.v20140526 import DescribeInstancesRequest, DeleteInstanceRequest
from aliyunsdkecs.request.v20140526 import StopInstanceRequest, StartInstanceRequest, RebootInstanceRequest
import logging
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
import os
import json
from krkn_lib.k8s import KrknKubernetes
class Alibaba:
@@ -179,9 +180,9 @@ class Alibaba:
logging.info("ECS %s is released" % instance_id)
return True
# krkn_lib
class alibaba_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.alibaba = Alibaba()
# Node scenario to start the node
@@ -193,7 +194,7 @@ class alibaba_node_scenarios(abstract_node_scenarios):
logging.info("Starting the node %s with instance ID: %s " % (node, vm_id))
self.alibaba.start_instances(vm_id)
self.alibaba.wait_until_running(vm_id, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s is in running state" % node)
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -213,7 +214,7 @@ class alibaba_node_scenarios(abstract_node_scenarios):
self.alibaba.stop_instances(vm_id)
self.alibaba.wait_until_stopped(vm_id, timeout)
logging.info("Node with instance ID: %s is in stopped state" % vm_id)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % e)
logging.error("node_stop_scenario injection failed!")
@@ -248,8 +249,8 @@ class alibaba_node_scenarios(abstract_node_scenarios):
instance_id = self.alibaba.get_instance_id(node)
logging.info("Rebooting the node with instance ID: %s " % (instance_id))
self.alibaba.reboot_instances(instance_id)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s has been rebooted" % (instance_id))
logging.info("node_reboot_scenario has been successfully injected!")
except Exception as e:

View File

@@ -2,10 +2,9 @@ import sys
import time
import boto3
import logging
import kraken.kubernetes.client as kubecli
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from krkn_lib.k8s import KrknKubernetes
class AWS:
def __init__(self):
@@ -27,7 +26,9 @@ class AWS:
logging.error(
"Failed to start node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Stop the node instance
def stop_instances(self, instance_id):
@@ -36,7 +37,9 @@ class AWS:
logging.info("EC2 instance: " + str(instance_id) + " stopped")
except Exception as e:
logging.error("Failed to stop node instance %s. Encountered following " "exception: %s." % (instance_id, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Terminate the node instance
def terminate_instances(self, instance_id):
@@ -47,7 +50,9 @@ class AWS:
logging.error(
"Failed to terminate node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Reboot the node instance
def reboot_instances(self, instance_id):
@@ -58,7 +63,9 @@ class AWS:
logging.error(
"Failed to reboot node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Below functions poll EC2.Client.describe_instances() every 15 seconds
# until a successful state is reached. An error is returned after 40 failed checks
@@ -102,7 +109,9 @@ class AWS:
"Failed to create the default network_acl: %s"
"Make sure you have aws cli configured on the host and set for the region of your vpc/subnet" % (e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
return acl_id
# Replace network acl association
@@ -114,7 +123,9 @@ class AWS:
new_association_id = status["NewAssociationId"]
except Exception as e:
logging.error("Failed to replace network acl association: %s" % (e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
return new_association_id
# Describe network acl
@@ -131,7 +142,9 @@ class AWS:
"Failed to describe network acl: %s."
"Make sure you have aws cli configured on the host and set for the region of your vpc/subnet" % (e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
associations = response["NetworkAcls"][0]["Associations"]
# grab the current network_acl in use
original_acl_id = response["NetworkAcls"][0]["Associations"][0]["NetworkAclId"]
@@ -148,11 +161,14 @@ class AWS:
"Make sure you have aws cli configured on the host and set for the region of your vpc/subnet"
% (acl_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# krkn_lib
class aws_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.aws = AWS()
# Node scenario to start the node
@@ -164,7 +180,7 @@ class aws_node_scenarios(abstract_node_scenarios):
logging.info("Starting the node %s with instance ID: %s " % (node, instance_id))
self.aws.start_instances(instance_id)
self.aws.wait_until_running(instance_id)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s is in running state" % (instance_id))
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -172,7 +188,9 @@ class aws_node_scenarios(abstract_node_scenarios):
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("node_start_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to stop the node
def node_stop_scenario(self, instance_kill_count, node, timeout):
@@ -184,11 +202,13 @@ class aws_node_scenarios(abstract_node_scenarios):
self.aws.stop_instances(instance_id)
self.aws.wait_until_stopped(instance_id)
logging.info("Node with instance ID: %s is in stopped state" % (instance_id))
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % (e))
logging.error("node_stop_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to terminate the node
def node_termination_scenario(self, instance_kill_count, node, timeout):
@@ -200,10 +220,10 @@ class aws_node_scenarios(abstract_node_scenarios):
self.aws.terminate_instances(instance_id)
self.aws.wait_until_terminated(instance_id)
for _ in range(timeout):
if node not in kubecli.list_nodes():
if node not in self.kubecli.list_nodes():
break
time.sleep(1)
if node in kubecli.list_nodes():
if node in self.kubecli.list_nodes():
raise Exception("Node could not be terminated")
logging.info("Node with instance ID: %s has been terminated" % (instance_id))
logging.info("node_termination_scenario has been successfuly injected!")
@@ -212,7 +232,9 @@ class aws_node_scenarios(abstract_node_scenarios):
"Failed to terminate node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_termination_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
@@ -222,8 +244,8 @@ class aws_node_scenarios(abstract_node_scenarios):
instance_id = self.aws.get_instance_id(node)
logging.info("Rebooting the node %s with instance ID: %s " % (node, instance_id))
self.aws.reboot_instances(instance_id)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s has been rebooted" % (instance_id))
logging.info("node_reboot_scenario has been successfuly injected!")
except Exception as e:
@@ -231,4 +253,6 @@ class aws_node_scenarios(abstract_node_scenarios):
"Failed to reboot node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_reboot_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()

View File

@@ -1,13 +1,14 @@
import sys
import time
from azure.mgmt.compute import ComputeManagementClient
from azure.identity import DefaultAzureCredential
import yaml
import kraken.invoke.command as runcommand
import logging
import kraken.kubernetes.client as kubecli
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
import kraken.invoke.command as runcommand
import yaml
from azure.mgmt.compute import ComputeManagementClient
from azure.identity import DefaultAzureCredential
from krkn_lib.k8s import KrknKubernetes
class Azure:
@@ -17,7 +18,7 @@ class Azure:
credentials = DefaultAzureCredential()
logging.info("credential " + str(credentials))
az_account = runcommand.invoke("az account list -o yaml")
az_account_yaml = yaml.load(az_account, Loader=yaml.FullLoader)
az_account_yaml = yaml.safe_load(az_account)
subscription_id = az_account_yaml[0]["id"]
self.compute_client = ComputeManagementClient(credentials, subscription_id)
@@ -39,7 +40,9 @@ class Azure:
logging.info("vm name " + str(vm_name) + " started")
except Exception as e:
logging.error("Failed to start node instance %s. Encountered following " "exception: %s." % (vm_name, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Stop the node instance
def stop_instances(self, group_name, vm_name):
@@ -48,7 +51,9 @@ class Azure:
logging.info("vm name " + str(vm_name) + " stopped")
except Exception as e:
logging.error("Failed to stop node instance %s. Encountered following " "exception: %s." % (vm_name, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Terminate the node instance
def terminate_instances(self, group_name, vm_name):
@@ -59,7 +64,9 @@ class Azure:
logging.error(
"Failed to terminate node instance %s. Encountered following " "exception: %s." % (vm_name, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Reboot the node instance
def reboot_instances(self, group_name, vm_name):
@@ -68,7 +75,9 @@ class Azure:
logging.info("vm name " + str(vm_name) + " rebooted")
except Exception as e:
logging.error("Failed to reboot node instance %s. Encountered following " "exception: %s." % (vm_name, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
def get_vm_status(self, resource_group, vm_name):
statuses = self.compute_client.virtual_machines.instance_view(resource_group, vm_name).statuses
@@ -121,9 +130,10 @@ class Azure:
logging.info("Vm %s is terminated" % vm_name)
return True
# krkn_lib
class azure_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
logging.info("init in azure")
self.azure = Azure()
@@ -136,7 +146,7 @@ class azure_node_scenarios(abstract_node_scenarios):
logging.info("Starting the node %s with instance ID: %s " % (vm_name, resource_group))
self.azure.start_instances(resource_group, vm_name)
self.azure.wait_until_running(resource_group, vm_name, timeout)
nodeaction.wait_for_ready_status(vm_name, timeout)
nodeaction.wait_for_ready_status(vm_name, timeout, self.kubecli)
logging.info("Node with instance ID: %s is in running state" % node)
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -144,7 +154,9 @@ class azure_node_scenarios(abstract_node_scenarios):
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("node_start_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to stop the node
def node_stop_scenario(self, instance_kill_count, node, timeout):
@@ -156,11 +168,13 @@ class azure_node_scenarios(abstract_node_scenarios):
self.azure.stop_instances(resource_group, vm_name)
self.azure.wait_until_stopped(resource_group, vm_name, timeout)
logging.info("Node with instance ID: %s is in stopped state" % vm_name)
nodeaction.wait_for_unknown_status(vm_name, timeout)
nodeaction.wait_for_unknown_status(vm_name, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % e)
logging.error("node_stop_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to terminate the node
def node_termination_scenario(self, instance_kill_count, node, timeout):
@@ -172,10 +186,10 @@ class azure_node_scenarios(abstract_node_scenarios):
self.azure.terminate_instances(resource_group, vm_name)
self.azure.wait_until_terminated(resource_group, vm_name, timeout)
for _ in range(timeout):
if vm_name not in kubecli.list_nodes():
if vm_name not in self.kubecli.list_nodes():
break
time.sleep(1)
if vm_name in kubecli.list_nodes():
if vm_name in self.kubecli.list_nodes():
raise Exception("Node could not be terminated")
logging.info("Node with instance ID: %s has been terminated" % node)
logging.info("node_termination_scenario has been successfully injected!")
@@ -184,7 +198,9 @@ class azure_node_scenarios(abstract_node_scenarios):
"Failed to terminate node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_termination_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
@@ -194,8 +210,8 @@ class azure_node_scenarios(abstract_node_scenarios):
vm_name, resource_group = self.azure.get_instance_id(node)
logging.info("Rebooting the node %s with instance ID: %s " % (vm_name, resource_group))
self.azure.reboot_instances(resource_group, vm_name)
nodeaction.wait_for_unknown_status(vm_name, timeout)
nodeaction.wait_for_ready_status(vm_name, timeout)
nodeaction.wait_for_unknown_status(vm_name, timeout, self.kubecli)
nodeaction.wait_for_ready_status(vm_name, timeout, self.kubecli)
logging.info("Node with instance ID: %s has been rebooted" % (vm_name))
logging.info("node_reboot_scenario has been successfully injected!")
except Exception as e:
@@ -203,4 +219,6 @@ class azure_node_scenarios(abstract_node_scenarios):
"Failed to reboot node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_reboot_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
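Taken together, the Azure changes show the two patterns repeated throughout this diff: the krkn-lib client is injected into the scenario class instead of being imported as a module-level kubecli, and hard sys.exit(1) calls are replaced with raised exceptions so the caller decides what a failed injection means for the run. A minimal sketch of the resulting call site, assuming KrknKubernetes accepts a kubeconfig path, with the module path, node name, and timeout chosen for illustration:

```python
import logging

from krkn_lib.k8s import KrknKubernetes
# Module path assumed from the repo layout:
from kraken.node_actions.az_node_scenarios import azure_node_scenarios

kubecli = KrknKubernetes(kubeconfig_path="~/.kube/config")  # assumed constructor
scenarios = azure_node_scenarios(kubecli)

try:
    scenarios.node_reboot_scenario(instance_kill_count=1, node="worker-0", timeout=300)
except RuntimeError:
    # With sys.exit(1) removed, a failed injection no longer kills the whole
    # process; the runner can record the failure and move on.
    logging.error("node_reboot_scenario failed, continuing with remaining scenarios")
```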

View File

@@ -7,7 +7,7 @@ import pyipmi.interfaces
import sys
import time
import traceback
from krkn_lib.k8s import KrknKubernetes
class BM:
def __init__(self, bm_info, user, passwd):
@@ -104,9 +104,10 @@ class BM:
while self.get_ipmi_connection(bmc_addr, node_name).get_chassis_status().power_on:
time.sleep(1)
# krkn_lib
class bm_node_scenarios(abstract_node_scenarios):
def __init__(self, bm_info, user, passwd):
def __init__(self, bm_info, user, passwd, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.bm = BM(bm_info, user, passwd)
# Node scenario to start the node
@@ -118,7 +119,7 @@ class bm_node_scenarios(abstract_node_scenarios):
logging.info("Starting the node %s with bmc address: %s " % (node, bmc_addr))
self.bm.start_instances(bmc_addr, node)
self.bm.wait_until_running(bmc_addr, node)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with bmc address: %s is in running state" % (bmc_addr))
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -140,7 +141,7 @@ class bm_node_scenarios(abstract_node_scenarios):
self.bm.stop_instances(bmc_addr, node)
self.bm.wait_until_stopped(bmc_addr, node)
logging.info("Node with bmc address: %s is in stopped state" % (bmc_addr))
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
except Exception as e:
logging.error(
"Failed to stop node instance. Encountered following exception: %s. "
@@ -163,8 +164,8 @@ class bm_node_scenarios(abstract_node_scenarios):
logging.info("BMC Addr: %s" % (bmc_addr))
logging.info("Rebooting the node %s with bmc address: %s " % (node, bmc_addr))
self.bm.reboot_instances(bmc_addr, node)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with bmc address: %s has been rebooted" % (bmc_addr))
logging.info("node_reboot_scenario has been successfuly injected!")
except Exception as e:

View File

@@ -2,15 +2,13 @@ import time
import random
import logging
import paramiko
import kraken.kubernetes.client as kubecli
import kraken.invoke.command as runcommand
from krkn_lib.k8s import KrknKubernetes
node_general = False
# Pick a random node with specified label selector
def get_node(node_name, label_selector, instance_kill_count):
def get_node(node_name, label_selector, instance_kill_count, kubecli: KrknKubernetes):
if node_name in kubecli.list_killable_nodes():
return [node_name]
elif node_name:
@@ -30,30 +28,23 @@ def get_node(node_name, label_selector, instance_kill_count):
return nodes_to_return
# Wait till node status becomes Ready
def wait_for_ready_status(node, timeout):
for _ in range(timeout):
if kubecli.get_node_status(node) == "Ready":
break
time.sleep(3)
if kubecli.get_node_status(node) != "Ready":
raise Exception("Node condition status isn't Ready")
# krkn_lib
# Wait until the node status becomes Ready
def wait_for_ready_status(node, timeout, kubecli: KrknKubernetes):
resource_version = kubecli.get_node_resource_version(node)
kubecli.watch_node_status(node, "True", timeout, resource_version)
# krkn_lib
# Wait until the node status becomes Not Ready
def wait_for_not_ready_status(node, timeout, kubecli: KrknKubernetes):
resource_version = kubecli.get_node_resource_version(node)
kubecli.watch_node_status(node, "False", timeout, resource_version)
# Wait till node status becomes NotReady
def wait_for_unknown_status(node, timeout):
for _ in range(timeout):
try:
node_status = kubecli.get_node_status(node, timeout)
if node_status is None or node_status == "Unknown":
break
except Exception:
logging.error("Encountered error while getting node status, waiting 3 seconds and retrying")
time.sleep(3)
node_status = kubecli.get_node_status(node, timeout)
logging.info("node status " + str(node_status))
if node_status is not None and node_status != "Unknown":
raise Exception("Node condition status isn't Unknown after %s seconds" % str(timeout))
# krkn_lib
# Wait until the node status becomes Unknown
def wait_for_unknown_status(node, timeout, kubecli: KrknKubernetes):
resource_version = kubecli.get_node_resource_version(node)
kubecli.watch_node_status(node, "Unknown", timeout, resource_version)
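The rewrite above swaps the poll-and-sleep loops for krkn-lib's watch API: each helper records the node's current resourceVersion, then watch_node_status blocks until the Ready condition reaches the requested value ("True", "False", or "Unknown") or the timeout budget is spent. A short usage sketch of the helpers defined above, with the client construction assumed:

```python
from krkn_lib.k8s import KrknKubernetes
# Module path assumed from the repo layout:
from kraken.node_actions.common_node_functions import (
    wait_for_ready_status,
    wait_for_unknown_status,
)

kubecli = KrknKubernetes(kubeconfig_path="~/.kube/config")  # assumed constructor

# Old style (removed above): sample get_node_status() every 3 seconds until
# the wanted status appears. New style: a single watch from a known
# resourceVersion, so status transitions arrive as events instead of samples.
wait_for_unknown_status("worker-0", 300, kubecli)  # node dropped to Unknown
wait_for_ready_status("worker-0", 300, kubecli)    # node recovered to Ready
```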
# Get the ip of the cluster node
@@ -74,12 +65,17 @@ def check_service_status(node, service, ssh_private_key, timeout):
i += sleeper
logging.info("Trying to ssh to instance: %s" % (node))
connection = ssh.connect(
node, username="root", key_filename=ssh_private_key, timeout=800, banner_timeout=400
node,
username="root",
key_filename=ssh_private_key,
timeout=800,
banner_timeout=400,
)
if connection is None:
break
except Exception:
pass
except Exception as e:
logging.error("Failed to ssh to instance: %s within the timeout duration of %s: %s" % (node, timeout, e))
for service_name in service:
logging.info("Checking status of Service: %s" % (service_name))
stdin, stdout, stderr = ssh.exec_command(

View File

@@ -0,0 +1,110 @@
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
import logging
import sys
import docker
from krkn_lib.k8s import KrknKubernetes
class Docker:
def __init__(self):
self.client = docker.from_env()
def get_container_id(self, node_name):
container = self.client.containers.get(node_name)
return container.id
# Start the node instance
def start_instances(self, node_name):
container = self.client.containers.get(node_name)
container.start()
# Stop the node instance
def stop_instances(self, node_name):
container = self.client.containers.get(node_name)
container.stop()
# Reboot the node instance
def reboot_instances(self, node_name):
container = self.client.containers.get(node_name)
container.restart()
# Terminate the node instance
def terminate_instances(self, node_name):
container = self.client.containers.get(node_name)
container.stop()
container.remove()
class docker_node_scenarios(abstract_node_scenarios):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.docker = Docker()
# Node scenario to start the node
def node_start_scenario(self, instance_kill_count, node, timeout):
for _ in range(instance_kill_count):
try:
logging.info("Starting node_start_scenario injection")
container_id = self.docker.get_container_id(node)
logging.info("Starting the node %s with container ID: %s " % (node, container_id))
self.docker.start_instances(node)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with container ID: %s is in running state" % (container_id))
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
logging.error(
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("node_start_scenario injection failed!")
sys.exit(1)
# Node scenario to stop the node
def node_stop_scenario(self, instance_kill_count, node, timeout):
for _ in range(instance_kill_count):
try:
logging.info("Starting node_stop_scenario injection")
container_id = self.docker.get_container_id(node)
logging.info("Stopping the node %s with container ID: %s " % (node, container_id))
self.docker.stop_instances(node)
logging.info("Node with container ID: %s is in stopped state" % (container_id))
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % (e))
logging.error("node_stop_scenario injection failed!")
sys.exit(1)
# Node scenario to terminate the node
def node_termination_scenario(self, instance_kill_count, node, timeout):
for _ in range(instance_kill_count):
try:
logging.info("Starting node_termination_scenario injection")
container_id = self.docker.get_container_id(node)
logging.info("Terminating the node %s with container ID: %s " % (node, container_id))
self.docker.terminate_instances(node)
logging.info("Node with container ID: %s has been terminated" % (container_id))
logging.info("node_termination_scenario has been successfuly injected!")
except Exception as e:
logging.error(
"Failed to terminate node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_termination_scenario injection failed!")
sys.exit(1)
# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
for _ in range(instance_kill_count):
try:
logging.info("Starting node_reboot_scenario injection")
container_id = self.docker.get_container_id(node)
logging.info("Rebooting the node %s with container ID: %s " % (node, container_id))
self.docker.reboot_instances(node)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with container ID: %s has been rebooted" % (container_id))
logging.info("node_reboot_scenario has been successfuly injected!")
except Exception as e:
logging.error(
"Failed to reboot node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_reboot_scenario injection failed!")
sys.exit(1)
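Since this file is new there is no before/after to compare; a short usage sketch instead, assuming a kind-style cluster whose nodes are containers and whose container names match the Kubernetes node names (module path, names, and timeout are illustrative):

```python
from krkn_lib.k8s import KrknKubernetes
# Module path assumed from the repo layout:
from kraken.node_actions.docker_node_scenarios import docker_node_scenarios

kubecli = KrknKubernetes(kubeconfig_path="~/.kube/config")  # assumed constructor
scenarios = docker_node_scenarios(kubecli)

# Restarts the container backing the node, then waits until Kubernetes first
# reports it Unknown and finally Ready again via the common helpers.
scenarios.node_reboot_scenario(instance_kill_count=1, node="kind-worker", timeout=180)
```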

View File

@@ -1,13 +1,12 @@
import sys
import time
import logging
import kraken.kubernetes.client as kubecli
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import kraken.invoke.command as runcommand
from krkn_lib.k8s import KrknKubernetes
class GCP:
def __init__(self):
@@ -45,7 +44,9 @@ class GCP:
logging.error(
"Failed to start node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Stop the node instance
def stop_instances(self, zone, instance_id):
@@ -54,7 +55,9 @@ class GCP:
logging.info("vm name " + str(instance_id) + " stopped")
except Exception as e:
logging.error("Failed to stop node instance %s. Encountered following " "exception: %s." % (instance_id, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Suspend the node instance
def suspend_instances(self, zone, instance_id):
@@ -65,7 +68,9 @@ class GCP:
logging.error(
"Failed to suspend node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Terminate the node instance
def terminate_instances(self, zone, instance_id):
@@ -76,7 +81,9 @@ class GCP:
logging.error(
"Failed to start node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Reboot the node instance
def reboot_instances(self, zone, instance_id):
@@ -87,7 +94,9 @@ class GCP:
logging.error(
"Failed to start node instance %s. Encountered following " "exception: %s." % (instance_id, e)
)
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Get instance status
def get_instance_status(self, zone, instance_id, expected_status, timeout):
@@ -133,8 +142,10 @@ class GCP:
return True
# krkn_lib
class gcp_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.gcp = GCP()
# Node scenario to start the node
@@ -146,7 +157,7 @@ class gcp_node_scenarios(abstract_node_scenarios):
logging.info("Starting the node %s with instance ID: %s " % (node, instance_id))
self.gcp.start_instances(zone, instance_id)
self.gcp.wait_until_running(zone, instance_id, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s is in running state" % instance_id)
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -154,7 +165,9 @@ class gcp_node_scenarios(abstract_node_scenarios):
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("node_start_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to stop the node
def node_stop_scenario(self, instance_kill_count, node, timeout):
@@ -167,11 +180,13 @@ class gcp_node_scenarios(abstract_node_scenarios):
self.gcp.stop_instances(zone, instance_id)
self.gcp.wait_until_stopped(zone, instance_id, timeout)
logging.info("Node with instance ID: %s is in stopped state" % instance_id)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % (e))
logging.error("node_stop_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to terminate the node
def node_termination_scenario(self, instance_kill_count, node, timeout):
@@ -183,10 +198,10 @@ class gcp_node_scenarios(abstract_node_scenarios):
self.gcp.terminate_instances(zone, instance_id)
self.gcp.wait_until_terminated(zone, instance_id, timeout)
for _ in range(timeout):
if node not in kubecli.list_nodes():
if node not in self.kubecli.list_nodes():
break
time.sleep(1)
if node in kubecli.list_nodes():
if node in self.kubecli.list_nodes():
raise Exception("Node could not be terminated")
logging.info("Node with instance ID: %s has been terminated" % instance_id)
logging.info("node_termination_scenario has been successfuly injected!")
@@ -195,7 +210,9 @@ class gcp_node_scenarios(abstract_node_scenarios):
"Failed to terminate node instance. Encountered following exception:" " %s. Test Failed" % e
)
logging.error("node_termination_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
@@ -205,7 +222,7 @@ class gcp_node_scenarios(abstract_node_scenarios):
instance_id, zone = self.gcp.get_instance_id(node)
logging.info("Rebooting the node %s with instance ID: %s " % (node, instance_id))
self.gcp.reboot_instances(zone, instance_id)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s has been rebooted" % instance_id)
logging.info("node_reboot_scenario has been successfuly injected!")
except Exception as e:
@@ -213,4 +230,6 @@ class gcp_node_scenarios(abstract_node_scenarios):
"Failed to reboot node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_reboot_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
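One plain polling loop survives the watch refactor: node_termination_scenario asks the API server once per second whether the node object is gone, for at most timeout iterations, and raises if it is still listed afterwards. The same bounded-poll idea as a tiny standalone helper (the function name is illustrative; list_nodes is the krkn-lib call used above):

```python
import time

def wait_for_node_removal(kubecli, node: str, timeout: int) -> None:
    """Poll list_nodes() once per second until `node` disappears.

    Mirrors the loop in node_termination_scenario: at most `timeout`
    iterations, then an error if the node never went away.
    """
    for _ in range(timeout):
        if node not in kubecli.list_nodes():
            return
        time.sleep(1)
    raise RuntimeError("Node could not be terminated")
```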

View File

@@ -1,14 +1,15 @@
import logging
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from krkn_lib.k8s import KrknKubernetes
class GENERAL:
def __init__(self):
pass
# krkn_lib
class general_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.general = GENERAL()
# Node scenario to start the node

View File

@@ -4,7 +4,7 @@ import logging
import kraken.invoke.command as runcommand
import kraken.node_actions.common_node_functions as nodeaction
from kraken.node_actions.abstract_node_scenarios import abstract_node_scenarios
from krkn_lib.k8s import KrknKubernetes
class OPENSTACKCLOUD:
def __init__(self):
@@ -23,7 +23,9 @@ class OPENSTACKCLOUD:
logging.info("Instance: " + str(node) + " started")
except Exception as e:
logging.error("Failed to start node instance %s. Encountered following " "exception: %s." % (node, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Stop the node instance
def stop_instances(self, node):
@@ -32,7 +34,9 @@ class OPENSTACKCLOUD:
logging.info("Instance: " + str(node) + " stopped")
except Exception as e:
logging.error("Failed to stop node instance %s. Encountered following " "exception: %s." % (node, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Reboot the node instance
def reboot_instances(self, node):
@@ -41,7 +45,9 @@ class OPENSTACKCLOUD:
logging.info("Instance: " + str(node) + " rebooted")
except Exception as e:
logging.error("Failed to reboot node instance %s. Encountered following " "exception: %s." % (node, e))
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Wait until the node instance is running
def wait_until_running(self, node, timeout):
@@ -86,9 +92,9 @@ class OPENSTACKCLOUD:
return node_name
counter += 1
# krkn_lib
class openstack_node_scenarios(abstract_node_scenarios):
def __init__(self):
def __init__(self, kubecli: KrknKubernetes):
super().__init__(kubecli)
self.openstackcloud = OPENSTACKCLOUD()
# Node scenario to start the node
@@ -100,7 +106,7 @@ class openstack_node_scenarios(abstract_node_scenarios):
openstack_node_name = self.openstackcloud.get_instance_id(node)
self.openstackcloud.start_instances(openstack_node_name)
self.openstackcloud.wait_until_running(openstack_node_name, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance ID: %s is in running state" % (node))
logging.info("node_start_scenario has been successfully injected!")
except Exception as e:
@@ -108,7 +114,9 @@ class openstack_node_scenarios(abstract_node_scenarios):
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("node_start_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to stop the node
def node_stop_scenario(self, instance_kill_count, node, timeout):
@@ -120,11 +128,13 @@ class openstack_node_scenarios(abstract_node_scenarios):
self.openstackcloud.stop_instances(openstack_node_name)
self.openstackcloud.wait_until_stopped(openstack_node_name, timeout)
logging.info("Node with instance name: %s is in stopped state" % (node))
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % (e))
logging.error("node_stop_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
@@ -134,8 +144,8 @@ class openstack_node_scenarios(abstract_node_scenarios):
logging.info("Rebooting the node %s" % (node))
openstack_node_name = self.openstackcloud.get_instance_id(node)
self.openstackcloud.reboot_instances(openstack_node_name)
nodeaction.wait_for_unknown_status(node, timeout)
nodeaction.wait_for_ready_status(node, timeout)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli)
logging.info("Node with instance name: %s has been rebooted" % (node))
logging.info("node_reboot_scenario has been successfuly injected!")
except Exception as e:
@@ -143,7 +153,9 @@ class openstack_node_scenarios(abstract_node_scenarios):
"Failed to reboot node instance. Encountered following exception:" " %s. Test Failed" % (e)
)
logging.error("node_reboot_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to start the node
def helper_node_start_scenario(self, instance_kill_count, node_ip, timeout):
@@ -161,7 +173,9 @@ class openstack_node_scenarios(abstract_node_scenarios):
"Failed to start node instance. Encountered following " "exception: %s. Test Failed" % (e)
)
logging.error("helper_node_start_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
# Node scenario to stop the node
def helper_node_stop_scenario(self, instance_kill_count, node_ip, timeout):
@@ -176,7 +190,9 @@ class openstack_node_scenarios(abstract_node_scenarios):
except Exception as e:
logging.error("Failed to stop node instance. Encountered following exception: %s. " "Test Failed" % (e))
logging.error("helper_node_stop_scenario injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
def helper_node_service_status(self, node_ip, service, ssh_private_key, timeout):
try:
@@ -187,4 +203,6 @@ class openstack_node_scenarios(abstract_node_scenarios):
except Exception as e:
logging.error("Failed to check service status. Encountered following exception:" " %s. Test Failed" % (e))
logging.error("helper_node_service_status injection failed!")
sys.exit(1)
# removed_exit
# sys.exit(1)
raise RuntimeError()
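All of the providers in this diff now funnel the client through the same base-class constructor. abstract_node_scenarios itself is not part of this diff, but its contract is visible from the call sites; a minimal sketch of the assumed shape:

```python
from krkn_lib.k8s import KrknKubernetes

# Assumed shape, reconstructed from the super().__init__(kubecli) calls above;
# the real definition lives in kraken/node_actions/abstract_node_scenarios.py.
class abstract_node_scenarios:
    def __init__(self, kubecli: KrknKubernetes):
        # Stored once here so every provider's scenario methods reach the
        # cluster through self.kubecli instead of a module-level client.
        self.kubecli = kubecli
```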

Some files were not shown because too many files have changed in this diff.