Compare commits

...

88 Commits

Author SHA1 Message Date
Sahil Shah
dfc3a1d716 Adding http load scenario (#1160)
Signed-off-by: Sahil Shah <sahshah@redhat.com>
2026-04-09 10:47:50 -04:00
Paige Patton
0777ef924f changing pod recovery to vmi recovery
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-04-08 16:37:25 -04:00
Arpit Raj
1623dbac53 fix: resolve Python version mismatch in container entrypoint (#1222)
Signed-off-by: 1PoPTRoN <vrxn.arp1traj@gmail.com>
2026-04-08 09:00:37 -04:00
Arpit Raj
daa6dc4df9 fix: replace hardcoded /tmp paths with secure tempfile.mkdtemp() (#1223)
Signed-off-by: 1PoPTRoN <vrxn.arp1traj@gmail.com>
2026-04-07 10:55:46 -04:00
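The hardcoded-/tmp fix above relies on Python's `tempfile` module; a minimal sketch of the idea (the helper name and prefix are illustrative, not krkn's actual code):

```python
import os
import tempfile

def make_scenario_workdir(prefix: str = "krkn-") -> str:
    """Create a private scratch directory for a scenario run.

    tempfile.mkdtemp() creates the directory with mode 0o700 and an
    unpredictable suffix, so concurrent runs cannot collide and other
    local users cannot pre-create or symlink the path, unlike a
    hardcoded /tmp location.
    """
    return tempfile.mkdtemp(prefix=prefix)

workdir = make_scenario_workdir()
```

The caller owns cleanup (e.g. `shutil.rmtree(workdir)` once the scenario finishes).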
NITESH SINGH
9c064d888a fix(scenarios): fix network_chaos_ng variable shadowing and instance_count condition (#1219)
Signed-off-by: NETIZEN-11 <niteshkumar121411@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-04-01 08:54:29 -04:00
NITESH SINGH
b3e9ea1c3b fix(utils): fix HealthChecker bool comparisons and add missing return value (#1216)
- Replace '!= False' with 'is not False' and '== True' with 'is True'
  for idiomatic Python bool identity checks
- Add missing 'return self.ret_value' so callers receive the exit code
  instead of always getting None
- Add Apache 2.0 license header

Signed-off-by: NETIZEN-11 <niteshkumar121411@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-04-01 08:53:21 -04:00
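The `!= False` pitfall fixed above is easy to reproduce: `0 == False` in Python, so an equality check misclassifies a zero exit code, while identity only matches the actual `False` singleton. A standalone sketch (these helpers are illustrative, not the real HealthChecker):

```python
def health_ok_equality(ret_value):
    # Old style: '!= False' uses equality, and 0 == False in Python,
    # so an exit code of 0 is wrongly treated as a failure.
    return ret_value != False  # noqa: E712

def health_ok_identity(ret_value):
    # Fixed style: identity only matches the False singleton, so 0,
    # None, and True all pass through as "not failed".
    return ret_value is not False
```

The same commit also adds the missing `return self.ret_value`; without it the method implicitly returned `None` to every caller.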
Arpit Raj
9f417d8f1a fix: log exception details in delete_job to surface get_job_pods errors (#1220)
Signed-off-by: 1PoPTRoN <vrxn.arp1traj@gmail.com>
2026-04-01 08:52:51 -04:00
Paige Patton
ef50aa8c83 adding license to files (#1215)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-31 15:19:19 -04:00
Paige Patton
357889196a Adding node interface down/up scenario (#1192)
* Adding node interface down/up scenario

Signed-off-by: Paige Patton <prubenda@redhat.com>

* Trigger CI

---------

Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-31 13:59:41 -04:00
Paige Patton
35ee9d7bae adding changes to properly pass/fail a scenario if errors occur (#1065)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-31 12:31:25 -04:00
Paige Patton
626e203d33 removing kubernetes functions (#1205)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-31 08:46:40 -04:00
NITESH SINGH
8c57b0956b fix(rollback): use == instead of is for boolean check in execute_roll… (#1207)
* fix(rollback): use == instead of is for boolean check in execute_rollback_version_files

Signed-off-by: Nitesh <nitesh@example.com>

* fix(rollback): fix indentation in execute_rollback_version_files and add license header

Signed-off-by: Nitesh <nitesh@example.com>

---------

Signed-off-by: Nitesh <nitesh@example.com>
Co-authored-by: Nitesh <nitesh@example.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-03-30 10:56:36 -04:00
Paige Patton
d55695f7c4 adding pre commit hook (#1206)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-30 09:49:16 -04:00
Paige Patton
71bd34b020 adding better logging for when scenario file can't be found (#1203)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-27 13:47:49 -04:00
Paige Patton
6da7c9dec6 adding governance template from cncf (#926)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-27 09:33:00 -04:00
Tullio Sebastiani
4d5aea146d Run method fixes (#1202)
* kubevirt plugin fixes

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* managed_cluster plugin fixes

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* unit tests fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2026-03-27 14:31:19 +01:00
Yashasvi Yadav
62f500fb2e feat: add GCP zone outage rollback support (#1200)
Add rollback functionality for GCP zone outage scenarios following the
established rollback pattern (Service Hijacking, PVC, Syn Flood).

- Add @set_rollback_context_decorator to run()
- Set rollback callable before stopping nodes with base64/JSON encoded data
- Add rollback_gcp_zone_outage() static method with per-node error handling
- Fix missing poll_interval argument in starmap calls
- Add unit tests for rollback and run methods

Closes #915

Signed-off-by: YASHASVIYADAV30 <yashasviydv30@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-03-26 14:42:45 -04:00
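The base64/JSON-encoded rollback data mentioned above can be sketched as a pair of helpers (names and fields are illustrative assumptions, not krkn's actual API):

```python
import base64
import json

def encode_rollback_data(project: str, zone: str, node_ids: list) -> str:
    # Pack everything a rollback needs into one opaque string that can
    # be stored on a generic string-valued identifier field.
    payload = json.dumps({"project": project, "zone": zone, "nodes": node_ids})
    return base64.b64encode(payload.encode("utf-8")).decode("ascii")

def decode_rollback_data(token: str) -> dict:
    # Invert the encoding at rollback time to recover the parameters.
    return json.loads(base64.b64decode(token.encode("ascii")).decode("utf-8"))

token = encode_rollback_data("demo-project", "us-east1-b", ["node-1", "node-2"])
restored = decode_rollback_data(token)
```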
Arpit Raj
ec241d35d6 fix: improve logging reliability and code quality (#1199)
- Fix typo 'wating' -> 'waiting' in scenario wait log message
- Replace print() with logging.debug() for pod metrics in prometheus client
- Replace star import with explicit imports in utils/__init__.py
- Remove unnecessary global declaration in main()
- Log VM status exceptions at ERROR level with exception details

Include unit tests in tests/test_logging_and_code_quality.py covering all fixes.

Signed-off-by: 1PoPTRoN <vrxn.arp1traj@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-03-26 13:08:56 -04:00
Arpit Raj
59e10d5a99 fix: bind exception variable in except handlers to prevent NameError (#1198)
Signed-off-by: 1PoPTRoN <vrxn.arp1traj@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-03-26 09:43:37 -04:00
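The unbound-exception bug fixed above is a one-keyword mistake: without `as e`, the handler body references a name that was never bound. A minimal reproduction (illustrative functions, not the plugin code):

```python
import logging

def risky():
    raise RuntimeError("connection refused")

def handle_buggy():
    try:
        risky()
    except RuntimeError:      # missing 'as e'
        return repr(e)        # NameError: name 'e' is not defined

def handle_fixed():
    try:
        risky()
    except RuntimeError as e:
        logging.error("operation failed: %s", e)
        return str(e)
```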
Paige Patton
c8aa959df2 controller -> detailed (#1201)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-26 08:47:06 -04:00
Paige Patton
3db5e1abbe no rebuild image (#1197)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-20 12:54:45 -04:00
Paige Patton
1e699c6cc9 different quay users (#1196)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-20 17:30:42 +01:00
Paige Patton
0ebda3e101 test multi platform (#1194)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-20 11:09:33 -04:00
Tullio Sebastiani
8a5be0dd2f Resiliency Score krknctl compatibility fixes (#1195)
* added console log of the resiliency score when mode is "detailed"

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* base image krknctl input

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

resiliency score flag

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* removed json print in run_krkn.py

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* unit test fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2026-03-20 11:09:07 -04:00
Tullio Sebastiani
62dadfe25c Resiliency Score krknctl compatibility fixes (#1195)
* added console log of the resiliency score when mode is "detailed"

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* base image krknctl input

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

resiliency score flag

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* removed json print in run_krkn.py

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* unit test fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2026-03-20 11:08:56 -04:00
Paige Patton
cb368a2f5c adding tests coverage for resiliency scoring (#1161)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-19 14:16:51 -04:00
Paige Patton
bb636cd3a9 Custom weight to resiliency (#1173)
* feat(resiliency): implement comprehensive resiliency scoring system

- Added resiliency scoring engine
- Implemented scenario-wise scoring with telemetry
- Added configurable SLOs and detailed reporting

Signed-off-by: Abhinav Sharma <abhinavs1920bpl@gmail.com>
Signed-off-by: Paige Patton <prubenda@redhat.com>

* fix: check prometheus url after openshift prometheus check

Signed-off-by: Abhinav Sharma <abhinavs1920bpl@gmail.com>
Signed-off-by: Paige Patton <prubenda@redhat.com>

* custom weight

Signed-off-by: Paige Patton <prubenda@redhat.com>

---------

Signed-off-by: Abhinav Sharma <abhinavs1920bpl@gmail.com>
Signed-off-by: Paige Patton <prubenda@redhat.com>
Co-authored-by: Abhinav Sharma <abhinavs1920bpl@gmail.com>
2026-03-19 13:14:08 -04:00
Arpit Raj
f241b2b62f fix: prevent script injection in require-docs workflow (#1187)
- replace shell interpolation of PR body with jq + $GITHUB_EVENT_PATH
- replace shell interpolation of branch name with actions/github-script
- remove unused actions/checkout step
- add 27 unit tests covering checkbox detection, docs PR search, and
  security regression checks to prevent re-introduction of the bug

Signed-off-by: Arpit Raj <vrxn.arp1traj@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-03-17 09:37:35 -04:00
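The injection fix above replaces shell interpolation of untrusted PR text with reading the webhook payload file as data. The same idea sketched in Python (the jq approach in the workflow is equivalent; the helper name and fake payload here are illustrative):

```python
import json
import os
import tempfile

def pr_body_from_event(event_path: str) -> str:
    # Reading the payload file as JSON means attacker-controlled text is
    # never evaluated by a shell, unlike interpolating
    # ${{ github.event.pull_request.body }} directly into a run: script.
    with open(event_path) as f:
        event = json.load(f)
    return event.get("pull_request", {}).get("body") or ""

# Simulate a malicious PR body arriving via $GITHUB_EVENT_PATH.
event_file = os.path.join(tempfile.mkdtemp(), "event.json")
with open(event_file, "w") as f:
    json.dump({"pull_request": {"body": "- [x] docs $(touch pwned)"}}, f)

body = pr_body_from_event(event_file)  # returned as inert text, not executed
```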
Paige Patton
2a60a519cd adding run tag (#1179)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-16 16:18:50 -04:00
Naga Ravi Chaitanya Elluri
31756e6d9b Add Beta features governance policy (#1185)
Introduce documentation defining Beta feature expectations, lifecycle,
user guidance, and promotion criteria to GA. This helps users understand
that Beta features are experimental and intended for early feedback.

Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2026-03-12 23:39:14 -04:00
Paige Patton
8c9bce6987 sed change (#1186)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-03-11 12:34:12 -04:00
Arpit Raj
5608482f1b fix: use sorted() instead of .sort() for key validation (#1182) (#1184)
Signed-off-by: Arpit Raj <vrxn.arp1traj@gmail.com>
2026-03-10 10:58:12 -04:00
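The `.sort()` bug referenced above is a classic: `list.sort()` sorts in place and returns `None`, so any comparison against its result silently always fails, while `sorted()` returns a new list. A minimal before/after (function names are illustrative):

```python
def keys_valid_buggy(config: dict, expected: list) -> bool:
    keys = list(config.keys()).sort()  # .sort() returns None
    return keys == sorted(expected)    # always compares None to a list

def keys_valid_fixed(config: dict, expected: list) -> bool:
    # sorted() returns the sorted copy, so the comparison is meaningful.
    return sorted(config.keys()) == sorted(expected)
```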
Darshan Jain
a14d3955a6 feat(ci): add pytest-based CI test framework v2 with ephemeral namespace isolation (#1172) (#1171)
* feat: add pytest-based CI test framework v2 with ephemeral namespace isolation

Signed-off-by: ddjain <darjain@redhat.com>

* feat(ci): add tests_v2 pytest functional test framework

Signed-off-by: ddjain <darjain@redhat.com>
Co-authored-by: Cursor <cursoragent@cursor.com>

* feat: improve naming convention

Signed-off-by: ddjain <darjain@redhat.com>

* improve local setup script.

Signed-off-by: ddjain <darjain@redhat.com>

* added CI job for v2 test

Signed-off-by: ddjain <darjain@redhat.com>

* disabled broken test

Signed-off-by: ddjain <darjain@redhat.com>

* improved CI pipeline execution time

Signed-off-by: ddjain <darjain@redhat.com>

* chore: remove unwanted/generated files from PR

Signed-off-by: ddjain <darjain@redhat.com>

* clean up gitignore file

Signed-off-by: ddjain <darjain@redhat.com>

* fix copilot comments

Signed-off-by: ddjain <darjain@redhat.com>

* fixed copilot suggestion

Signed-off-by: ddjain <darjain@redhat.com>

* uncommented out test upload stage

Signed-off-by: ddjain <darjain@redhat.com>

* exclude CI/tests_v2 from test coverage reporting

Signed-off-by: ddjain <darjain@redhat.com>

* uploading style.css to fix broken report artifacts

Signed-off-by: ddjain <darjain@redhat.com>

* added openshift supported labels in namespace creation api

Signed-off-by: ddjain <darjain@redhat.com>

---------

Signed-off-by: ddjain <darjain@redhat.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-03-06 08:44:07 -05:00
Arpit Raj
f655ec1a73 fix: accumulate failed scenarios across all scenario types instead of overwriting (#1178)
Signed-off-by: Arpit Raj <vrxn.arp1traj@gmail.com>
2026-03-05 14:06:56 -05:00
Paige Patton
dfc350ac03 adding set run tag (#1174)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-02-27 15:05:05 -05:00
Paige Patton
c474b810b2 updating to use krkn-lib virt functions (#989)
Assisted By: Claude Code

Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-02-27 14:45:31 -05:00
Paige Patton
072e8d0e87 changing pod (#1175)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-02-27 14:40:49 -05:00
Nesar Kavri
aee61061ac Fix: make entrypoint fail fast if setup-ssh.sh fails (#1170)
Signed-off-by: Nesar976 <kavrinesar@gmail.com>
2026-02-27 14:18:01 -05:00
Paige Patton
544cac8bbb merge (#710)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-02-27 14:10:08 -05:00
SurbhiAgarwal
49b1affdb8 Improve error message clarity for setuptools version requirement (#1162)
Fixes #1143 - Updated error message to clearly state that version 38.3 or newer is required

Signed-off-by: Surbhi <agarwalsurbhi1807@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-02-24 10:59:22 -05:00
Darshan Jain
c1dd43fe87 DevConf Pune 2026 feedback (#1169)
Signed-off-by: ddjain <darjain@redhat.com>
2026-02-23 19:54:06 +05:30
Ashish Mahajan
8dad2a3996 fix: use per-URL status_code in HealthChecker telemetry (#1091)
Signed-off-by: AR21SM <mahajanashishar21sm@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-02-19 09:25:03 -05:00
Tullio Sebastiani
cebc60f5a8 Network chaos NG porting - pod network chaos node network chaos (#991)
* fix ibm

Signed-off-by: Paige Patton <prubenda@redhat.com>

* type hint fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* pod network chaos plugin structure + utils method refactoring

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* Pod network chaos plugin

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* Node network chaos plugin

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* default config files

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* config.yaml

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* all field optional

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* minor fixes

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* minor nit on config

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* utils unit tests

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* PodNetworkChaos unit tests

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* NodeNetworkChaos unit test

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* PodNetworkChaos functional test

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* NodeNetworkChaos functional test

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* added functional tests to the gh action

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* unit test fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* changed test order + resource rename

* functional tests fix

smallchange

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix requirements

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* changed pod test target

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* added kind port mapping and removed portforwarding

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

fix

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

test fixes

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

test fixes

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Paige Patton <prubenda@redhat.com>
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
Co-authored-by: Paige Patton <prubenda@redhat.com>
2026-02-18 16:20:16 +01:00
Darshan Jain
2065443622 collect ERROR and CRITICAL logs and send to elastic search (#1147) (#1150)
* collect ERROR and CRITICAL logs and send to elastic search

Signed-off-by: ddjain <darjain@redhat.com>

* bump up krkn-lib to 6.0.3

Signed-off-by: ddjain <darjain@redhat.com>

---------

Signed-off-by: ddjain <darjain@redhat.com>
2026-02-18 18:26:14 +05:30
Ashish Mahajan
b6ef7fa052 fix: use list comprehension to avoid skipping nodes during exclusion (#1059)
Fixes #1058

Signed-off-by: AR21SM <mahajanashishar21sm@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-02-17 15:20:10 -05:00
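The node-skipping bug fixed above comes from removing items from a list while iterating over it; each removal shifts later elements into the slot the iterator just passed. A reproduction with illustrative helpers:

```python
def exclude_buggy(nodes: list, excluded: set) -> list:
    for node in nodes:          # mutating 'nodes' during iteration
        if node in excluded:
            nodes.remove(node)  # shifts the next element past the cursor
    return nodes

def exclude_fixed(nodes: list, excluded: set) -> list:
    # The list comprehension builds a new list, so nothing is skipped.
    return [node for node in nodes if node not in excluded]
```

With `["a", "b", "c"]` and `{"a", "b"}` excluded, the buggy version removes "a", skips "b", and wrongly keeps it.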
Paige Patton
4f305e78aa remove chaos ai
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-02-11 13:44:13 -05:00
dependabot[bot]
b17e933134 Bump pillow from 10.3.0 to 12.1.1 in /utils/chaos_ai (#1157)
Bumps [pillow](https://github.com/python-pillow/Pillow) from 10.3.0 to 12.1.1.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](https://github.com/python-pillow/Pillow/compare/10.3.0...12.1.1)

---
updated-dependencies:
- dependency-name: pillow
  dependency-version: 12.1.1
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-11 10:08:42 -05:00
Paige Patton
beea484597 adding VMware tests (#1133)
Signed-off-by: Paige Patton <paigepatton@Paiges-MacBook-Air.local>
Signed-off-by: Paige Patton <prubenda@redhat.com>
Co-authored-by: Paige Patton <paigepatton@Paiges-MacBook-Air.local>
2026-02-10 16:24:26 -05:00
Paige Patton
0222b0f161 fix ibm (#1155)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-02-10 10:09:28 -05:00
Ashish Mahajan
f7e674d5ad docs: fix typos in logs, comments, and documentation (#1079)
Signed-off-by: AR21SM <mahajanashishar21sm@gmail.com>
2026-02-09 09:48:51 -05:00
Ashish Mahajan
7aea12ce6c fix(VirtChecker): handle empty VMI interfaces list (#1072)
Signed-off-by: AR21SM <mahajanashishar21sm@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-02-09 08:29:48 -05:00
Darshan Jain
625e1e90cf feat: add color-coded console logging (#1122) (#1146)
Signed-off-by: ddjain <darjain@redhat.com>
2026-02-05 14:27:52 +05:30
dependabot[bot]
a9f1ce8f1b Bump pillow from 10.2.0 to 10.3.0 in /utils/chaos_ai (#1149)
Bumps [pillow](https://github.com/python-pillow/Pillow) from 10.2.0 to 10.3.0.
- [Release notes](https://github.com/python-pillow/Pillow/releases)
- [Changelog](https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst)
- [Commits](https://github.com/python-pillow/Pillow/compare/10.2.0...10.3.0)

---
updated-dependencies:
- dependency-name: pillow
  dependency-version: 10.3.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-02 13:47:47 -05:00
Paige Patton
66e364e293 wheel updates (#1148)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-02-02 13:46:22 -05:00
Paige Patton
898ce76648 adding python3.11 updates (#1012)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-02-02 12:00:33 -05:00
Chaudary Farhan Saleem
4a0f4e7cab fix: correct spelling typos in log messages (#1145)
- Fix 'wating' -> 'waiting' (2 occurrences)
- Fix 'successfuly' -> 'successfully' (12 occurrences)
- Fix 'orginal' -> 'original' (1 occurrence)

Improves professionalism of log output and code comments.

Signed-off-by: farhann_saleem <chaudaryfarhann@gmail.com>
2026-02-02 09:23:44 -05:00
Darshan Jain
819191866d Add CLAUDE.md for AI-assisted development (#1141)
Signed-off-by: ddjain <darjain@redhat.com>
2026-01-31 23:41:49 +05:30
Paige Patton
37ca4bbce7 removing unneeded requirement (#1066)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-01-20 13:33:28 -05:00
Ashish Mahajan
b9dd4e40d3 fix(hogs): correct off-by-one error in random node selection (#1112)
Signed-off-by: AR21SM <mahajanashishar21sm@gmail.com>
2026-01-20 11:00:50 -05:00
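The off-by-one fixed above stems from `random.randint` being inclusive on both ends, so `randint(0, len(nodes))` can yield an index one past the end of the list. A reproduction with illustrative helpers:

```python
import random

def pick_buggy(nodes: list):
    # randint's upper bound is inclusive, so len(nodes) is a possible
    # index and raises IndexError roughly 1/(len(nodes)+1) of the time.
    return nodes[random.randint(0, len(nodes))]

def pick_fixed(nodes: list):
    # Exclusive upper bound; random.choice(nodes) is equivalent and clearer.
    return nodes[random.randint(0, len(nodes) - 1)]
```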
AR21SM
3fd249bb88 Add stale PR management to workflow
Signed-off-by: AR21SM <mahajanashishar21sm@gmail.com>
2026-01-19 15:10:49 -05:00
Naga Ravi Chaitanya Elluri
773107245c Add contribution guidelines reference to the PR template (#1108)
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2026-01-19 14:30:04 -05:00
Paige Patton
05bc201528 adding chaos_ai deprecation (#1106)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-01-19 13:14:04 -05:00
Ashish Mahajan
9a316550e1 fix: add missing 'as e' to capture exception in TimeActionsScenarioPlugin (#1057)
Signed-off-by: AR21SM <mahajanashishar21sm@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-01-19 09:37:30 -05:00
Ashish Mahajan
9c261e2599 feat(ci): add stale issues automation workflow (#1055)
Signed-off-by: AR21SM <mahajanashishar21sm@gmail.com>
2026-01-17 10:13:49 -05:00
Paige Patton
0cc82dc65d add service hijacking to add to file not overwrite (#1067)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-01-16 14:24:03 -05:00
Paige Patton
269e21e9eb adding telemetry (#1064)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-01-16 13:53:48 -05:00
Paige Patton
d0dbe3354a adding always run tests if pr or main (#1061)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-01-16 13:24:07 -05:00
Paige Patton
4a0686daf3 adding openstack tests (#1060)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-01-16 13:23:49 -05:00
Paige Patton
822bebac0c removing arca utils (#1053)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-01-15 10:50:17 -05:00
Paige Patton
a13150b0f5 changing telemetry test to pod scenarios (#1052)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-01-13 10:16:26 -05:00
Sai Sanjay
0443637fe1 Add unit tests to pvc_scenario_plugin.py (#1014)
* Add PVC outage scenario plugin to manage PVC annotations during outages

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* Remove PvcOutageScenarioPlugin as it is no longer needed; refactor PvcScenarioPlugin to include rollback functionality for temporary file cleanup during PVC scenarios.

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* Refactor rollback_data handling in PvcScenarioPlugin to use str() instead of json.dumps() for resource_identifier.

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* Import json module in PvcScenarioPlugin for decoding rollback data from resource_identifier.

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* feat: Encode rollback data in base64 format for resource_identifier in PvcScenarioPlugin to enhance data handling and security.

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* feat: refactor: Update logging level from debug to info for temp file operations in PvcScenarioPlugin to improve visibility of command execution.

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Add unit tests for PvcScenarioPlugin methods and enhance test coverage

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Add missed lines test cov

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor tests in test_pvc_scenario_plugin.py to use unittest framework and enhance test coverage for to_kbytes method

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Enhance rollback_temp_file test to verify logging of errors for invalid data

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor tests in TestPvcScenarioPluginRun to clarify pod_name behavior and enhance logging verification in rollback_temp_file tests

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactored imports

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor assertions in test cases to use assertEqual for consistency

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

---------

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-01-13 09:47:12 -05:00
Sai Sanjay
36585630f2 Add tests to service_hijacking_scenario.py (#1015)
* Add rollback functionality to ServiceHijackingScenarioPlugin

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* Refactor rollback data handling in ServiceHijackingScenarioPlugin as json string

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* Update rollback data handling in ServiceHijackingScenarioPlugin to decode directly from resource_identifier

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* Add import statement for JSON handling in ServiceHijackingScenarioPlugin

This change introduces an import statement for the JSON module to facilitate the decoding of rollback data from the resource identifier.

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* feat: Enhance rollback data handling in ServiceHijackingScenarioPlugin by encoding and decoding as base64 strings.

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Add rollback tests for ServiceHijackingScenarioPlugin

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor rollback tests for ServiceHijackingScenarioPlugin to improve error logging and remove temporary path dependency

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Remove redundant import of yaml in test_service_hijacking_scenario_plugin.py

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor rollback tests for ServiceHijackingScenarioPlugin to enhance readability and consistency

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

---------

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-01-13 09:26:22 -05:00
dependabot[bot]
1401724312 Bump werkzeug from 3.1.4 to 3.1.5 in /utils/chaos_ai/docker
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 3.1.4 to 3.1.5.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/3.1.4...3.1.5)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-version: 3.1.5
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-08 20:35:19 -05:00
Paige Patton
fa204a515c testing changes link (#1047)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-01-08 09:19:33 -05:00
LEEITING
b3a5fc2d53 Fix the typo in krkn/cerberus/setup.py (#1043)
* Fix typo in key name for application routes in setup.py

Signed-off-by: iting0321 <iting0321@MacBook-11111111.local>

* Fix typo in 'check_applicaton_routes' to 'check_application_routes' in configuration files and cerberus scripts

Signed-off-by: iting0321 <iting0321@MacBook-11111111.local>

---------

Signed-off-by: iting0321 <iting0321@MacBook-11111111.local>
Co-authored-by: iting0321 <iting0321@MacBook-11111111.local>
2026-01-03 23:29:02 -05:00
Paige Patton
05600b62b3 moving tests out from folders (#1042)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2026-01-02 11:07:29 -05:00
Sai Sanjay
126599e02c Add unit tests for ingress shaping functionality at test_ingress_network_plugin.py (#1036)
* Add unit tests for ingress shaping functionality at test_ingress_network_plugin.py

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Add mocks for Environment and FileSystemLoader in network chaos tests

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

---------

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>
Co-authored-by: Tullio Sebastiani <tsebastiani@users.noreply.github.com>
2026-01-02 14:49:00 +01:00
Sai Sanjay
b3d6a19d24 Add unit tests for logging functions in NetworkChaosNgUtils (#1037)
* Add unit tests for logging functions in NetworkChaosNgUtils

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Add pytest configuration to enable module imports in tests

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Add tests for logging functions handling missing node names in parallel mode

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

---------

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
Co-authored-by: Tullio Sebastiani <tsebastiani@users.noreply.github.com>
2026-01-02 14:48:19 +01:00
Sai Sanjay
65100f26a7 Add unit tests for native plugins.py (#1038)
* Add unit tests for native plugins.py

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Remove redundant yaml import statements in test cases

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Add validation for registered plugin IDs and ensure no legacy aliases exist

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

---------

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>
Co-authored-by: Tullio Sebastiani <tsebastiani@users.noreply.github.com>
2026-01-02 14:47:50 +01:00
Sai Sanjay
10b6e4663e Kubevirt VM outage tests with improved mocking and validation scenarios at test_kubevirt_vm_outage.py (#1041)
* Kubevirt VM outage tests with improved mocking and validation scenarios at test_kubevirt_vm_outage.py

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor Kubevirt VM outage tests to improve time mocking and response handling

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Remove unused subproject reference for pvc_outage

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor Kubevirt VM outage tests to enhance time mocking and improve response handling

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Enhance VMI deletion test by mocking unchanged creationTimestamp to exercise timeout path

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor Kubevirt VM outage tests to use dynamic timestamps and improve mock handling

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

---------

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>
Co-authored-by: Tullio Sebastiani <tsebastiani@users.noreply.github.com>
2026-01-02 14:47:13 +01:00
Sai Sanjay
ce52183a26 Add unit tests for common_functions in ManagedClusterScenarioPlugin, common_function.py (#1039)
* Add unit tests for common_functions in ManagedClusterScenarioPlugin, common_function.py

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor unit tests for common_functions: improve mock behavior and assertions

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Add unit tests for get_managedcluster: handle zero count and random selection

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

---------

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-01-02 08:23:57 -05:00
Sai Sanjay
e9ab3b47b3 Add unit tests for ShutDownScenarioPlugin with AWS, GCP, Azure, and IBM cloud types at shut_down_scenario_plugin.py (#1040)
* Add unit tests for ShutDownScenarioPlugin with AWS, GCP, Azure, and IBM cloud types at shut_down_scenario_plugin.py

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor logging assertions in ShutDownScenarioPlugin tests for clarity and accuracy

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

---------

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>
Co-authored-by: Paige Patton <64206430+paigerube14@users.noreply.github.com>
2026-01-02 08:22:49 -05:00
Sai Sanjay
3e14fe07b7 Add unit tests for Azure class methods in (#1035)
Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>
2026-01-02 08:20:34 -05:00
Paige Patton
d9271a4bcc adding ibm cloud node tests (#1018)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2025-12-23 12:59:22 -05:00
dependabot[bot]
850930631e Bump werkzeug from 3.0.6 to 3.1.4 in /utils/chaos_ai/docker (#1003)
Bumps [werkzeug](https://github.com/pallets/werkzeug) from 3.0.6 to 3.1.4.
- [Release notes](https://github.com/pallets/werkzeug/releases)
- [Changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/werkzeug/compare/3.0.6...3.1.4)

---
updated-dependencies:
- dependency-name: werkzeug
  dependency-version: 3.1.4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Tullio Sebastiani <tsebastiani@users.noreply.github.com>
Co-authored-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2025-12-23 08:23:06 -05:00
Sai Sanjay
15eee80c55 Add unit tests for syn_flood_scenario_plugin.py (#1016)
* Add rollback functionality to SynFloodScenarioPlugin

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* Refactor rollback pod handling in SynFloodScenarioPlugin to handle podnames as string

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* Update resource identifier handling in SynFloodScenarioPlugin to use list format for rollback functionality

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* Refactor chaos scenario configurations in config.yaml to comment out existing scenarios for clarity. Update rollback method in SynFloodScenarioPlugin to improve pod cleanup handling. Modify pvc_scenario.yaml with specific test values for better usability.

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>

* Enhance rollback functionality in SynFloodScenarioPlugin by encoding pod names in base64 format for improved data handling.

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Add unit tests for SynFloodScenarioPlugin methods and rollback functionality

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor TestSynFloodRun and TestRollbackSynFloodPods to inherit from unittest.TestCase

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

* Refactor SynFloodRun tests to use tempfile for scenario file creation and improve error logging in rollback functionality

Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>

---------

Signed-off-by: sanjay7178 <saisanjay7660@gmail.com>
Signed-off-by: Sai Sanjay <saisanjay7660@gmail.com>
2025-12-22 15:01:50 -05:00
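One commit in the SynFlood change set above base64-encodes pod names before handing them to the rollback step. A minimal sketch of that round-trip, assuming a JSON-then-base64 encoding (the function names here are illustrative, not the plugin's actual API):

```python
import base64
import json

def encode_pod_names(pod_names):
    # Serialize the list to JSON, then base64-encode it so the value
    # survives transport through env vars or CLI arguments unmangled.
    return base64.b64encode(json.dumps(pod_names).encode("utf-8")).decode("ascii")

def decode_pod_names(encoded):
    # Reverse the encoding to recover the original list for cleanup.
    return json.loads(base64.b64decode(encoded).decode("utf-8"))
```

Encoding sidesteps quoting and whitespace issues when pod names are passed as a single opaque string.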
Paige Patton
ff3c4f5313 increasing node action coverage (#1010)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2025-12-22 11:36:10 -05:00
Paige Patton
4c74df301f adding alibaba and az tests (#1011)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2025-12-19 15:31:55 -05:00
259 changed files with 23978 additions and 4544 deletions


@@ -2,3 +2,4 @@
omit =
tests/*
krkn/tests/**
CI/tests_v2/*


@@ -1,27 +1,47 @@
## Type of change
# Type of change
- [ ] Refactor
- [ ] New feature
- [ ] Bug fix
- [ ] Optimization
## Description
<!-- Provide a brief description of the changes made in this PR. -->
# Description
<!-- Provide a brief description of the changes made in this PR. -->
## Related Tickets & Documents
If there is no related issue, please create one and start the conversation about the proposed change.
- Related Issue #
- Closes #
- Related Issue #:
- Closes #:
## Documentation
# Documentation
- [ ] **Is documentation needed for this update?**
If checked, a documentation PR must be created and merged in the [website repository](https://github.com/krkn-chaos/website/).
## Related Documentation PR (if applicable)
<!-- Add the link to the corresponding documentation PR in the website repository -->
<!-- Add the link to the corresponding documentation PR in the website repository -->
## Checklist before requesting a review
# Checklist before requesting a review
- [ ] Ensure the changes and proposed solution have been discussed in the relevant issue and have received acknowledgment from the community or maintainers. See [contributing guidelines](https://krkn-chaos.dev/docs/contribution-guidelines/)
See [testing your changes](https://krkn-chaos.dev/docs/developers-guide/testing-changes/) and run on any Kubernetes or OpenShift cluster to validate your changes
- [ ] I have performed a self-review of my code by running krkn and specific scenario
- [ ] If it is a core feature, I have added thorough unit tests with above 80% coverage
- [ ] I have performed a self-review of my code.
- [ ] If it is a core feature, I have added thorough tests.
*REQUIRED*:
Description of combination of tests performed and output of run
```bash
python run_kraken.py
...
<---insert test results output--->
```
OR
```bash
python -m coverage run -a -m unittest discover -s tests -v
...
<---insert test results output--->
```


@@ -6,48 +6,117 @@ on:
jobs:
build:
runs-on: ubuntu-latest
runs-on: ${{ matrix.runner }}
strategy:
matrix:
include:
- platform: amd64
runner: ubuntu-latest
- platform: arm64
runner: ubuntu-24.04-arm
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Build the Docker images
if: startsWith(github.ref, 'refs/tags')
run: |
./containers/compile_dockerfile.sh
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/ --build-arg TAG=${GITHUB_REF#refs/tags/}
docker tag quay.io/krkn-chaos/krkn quay.io/redhat-chaos/krkn
docker tag quay.io/krkn-chaos/krkn quay.io/krkn-chaos/krkn:${GITHUB_REF#refs/tags/}
docker tag quay.io/krkn-chaos/krkn quay.io/redhat-chaos/krkn:${GITHUB_REF#refs/tags/}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Test Build the Docker images
if: ${{ github.event_name == 'pull_request' }}
if: github.event_name == 'pull_request'
run: |
./containers/compile_dockerfile.sh
docker build --no-cache -t quay.io/krkn-chaos/krkn containers/ --build-arg PR_NUMBER=${{ github.event.pull_request.number }}
- name: Login in quay
docker buildx build --no-cache \
--platform linux/${{ matrix.platform }} \
-t quay.io/krkn-chaos/krkn \
-t quay.io/redhat-chaos/krkn \
containers/ \
--build-arg PR_NUMBER=${{ github.event.pull_request.number }}
- name: Login to krkn-chaos quay
if: startsWith(github.ref, 'refs/tags')
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USERNAME }}
QUAY_TOKEN: ${{ secrets.QUAY_PASSWORD }}
- name: Push the KrknChaos Docker images
uses: docker/login-action@v3
with:
registry: quay.io
username: ${{ secrets.QUAY_USERNAME }}
password: ${{ secrets.QUAY_PASSWORD }}
- name: Build and push krkn-chaos images
if: startsWith(github.ref, 'refs/tags')
run: |
docker push quay.io/krkn-chaos/krkn
docker push quay.io/krkn-chaos/krkn:${GITHUB_REF#refs/tags/}
- name: Login in to redhat-chaos quay
if: startsWith(github.ref, 'refs/tags/v')
run: docker login quay.io -u ${QUAY_USER} -p ${QUAY_TOKEN}
env:
QUAY_USER: ${{ secrets.QUAY_USER_1 }}
QUAY_TOKEN: ${{ secrets.QUAY_TOKEN_1 }}
- name: Push the RedHat Chaos Docker images
./containers/compile_dockerfile.sh
TAG=${GITHUB_REF#refs/tags/}
docker buildx build --no-cache \
--platform linux/${{ matrix.platform }} \
--provenance=false \
-t quay.io/krkn-chaos/krkn:latest-${{ matrix.platform }} \
-t quay.io/krkn-chaos/krkn:${TAG}-${{ matrix.platform }} \
containers/ \
--build-arg TAG=${TAG} \
--push --load
- name: Login to redhat-chaos quay
if: startsWith(github.ref, 'refs/tags')
run: |
docker push quay.io/redhat-chaos/krkn
docker push quay.io/redhat-chaos/krkn:${GITHUB_REF#refs/tags/}
uses: docker/login-action@v3
with:
registry: quay.io
username: ${{ secrets.QUAY_USER_1 }}
password: ${{ secrets.QUAY_TOKEN_1 }}
- name: Push redhat-chaos images
if: startsWith(github.ref, 'refs/tags')
run: |
TAG=${GITHUB_REF#refs/tags/}
docker tag quay.io/krkn-chaos/krkn:${TAG}-${{ matrix.platform }} quay.io/redhat-chaos/krkn:${TAG}-${{ matrix.platform }}
docker tag quay.io/krkn-chaos/krkn:${TAG}-${{ matrix.platform }} quay.io/redhat-chaos/krkn:latest-${{ matrix.platform }}
docker push quay.io/redhat-chaos/krkn:${TAG}-${{ matrix.platform }}
docker push quay.io/redhat-chaos/krkn:latest-${{ matrix.platform }}
manifest:
runs-on: ubuntu-latest
needs: build
if: startsWith(github.ref, 'refs/tags')
steps:
- name: Login to krkn-chaos quay
uses: docker/login-action@v3
with:
registry: quay.io
username: ${{ secrets.QUAY_USERNAME }}
password: ${{ secrets.QUAY_PASSWORD }}
- name: Create and push KrknChaos manifests
run: |
TAG=${GITHUB_REF#refs/tags/}
docker manifest create quay.io/krkn-chaos/krkn:${TAG} \
quay.io/krkn-chaos/krkn:${TAG}-amd64 \
quay.io/krkn-chaos/krkn:${TAG}-arm64
docker manifest push quay.io/krkn-chaos/krkn:${TAG}
docker manifest create quay.io/krkn-chaos/krkn:latest \
quay.io/krkn-chaos/krkn:latest-amd64 \
quay.io/krkn-chaos/krkn:latest-arm64
docker manifest push quay.io/krkn-chaos/krkn:latest
- name: Login to redhat-chaos quay
uses: docker/login-action@v3
with:
registry: quay.io
username: ${{ secrets.QUAY_USER_1 }}
password: ${{ secrets.QUAY_TOKEN_1 }}
- name: Create and push RedHat Chaos manifests
run: |
TAG=${GITHUB_REF#refs/tags/}
docker manifest create quay.io/redhat-chaos/krkn:${TAG} \
quay.io/redhat-chaos/krkn:${TAG}-amd64 \
quay.io/redhat-chaos/krkn:${TAG}-arm64
docker manifest push quay.io/redhat-chaos/krkn:${TAG}
docker manifest create quay.io/redhat-chaos/krkn:latest \
quay.io/redhat-chaos/krkn:latest-amd64 \
quay.io/redhat-chaos/krkn:latest-arm64
docker manifest push quay.io/redhat-chaos/krkn:latest
- name: Rebuild krkn-hub
if: startsWith(github.ref, 'refs/tags')
uses: redhat-chaos/actions/krkn-hub@main
with:
QUAY_USER: ${{ secrets.QUAY_USERNAME }}


@@ -9,37 +9,47 @@ jobs:
name: Check Documentation Update
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Check if Documentation is Required
id: check_docs
run: |
echo "Checking PR body for documentation checkbox..."
# Read the PR body from the GitHub event payload
if echo "${{ github.event.pull_request.body }}" | grep -qi '\[x\].*documentation needed'; then
# Read PR body from the event JSON file — never from shell interpolation.
# jq handles all escaping; the shell never sees the user-controlled string.
if jq -r '.pull_request.body // ""' "$GITHUB_EVENT_PATH" | \
grep -qi '\[x\].*documentation needed'; then
echo "Documentation required detected."
echo "docs_required=true" >> $GITHUB_OUTPUT
echo "docs_required=true" >> "$GITHUB_OUTPUT"
else
echo "Documentation not required."
echo "docs_required=false" >> $GITHUB_OUTPUT
echo "docs_required=false" >> "$GITHUB_OUTPUT"
fi
- name: Enforce Documentation Update (if required)
if: steps.check_docs.outputs.docs_required == 'true'
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
# Retrieve feature branch and repository owner from the GitHub context
FEATURE_BRANCH="${{ github.head_ref }}"
REPO_OWNER="${{ github.repository_owner }}"
WEBSITE_REPO="website"
echo "Searching for a merged documentation PR for feature branch: $FEATURE_BRANCH in $REPO_OWNER/$WEBSITE_REPO..."
MERGED_PR=$(gh pr list --repo "$REPO_OWNER/$WEBSITE_REPO" --state merged --json headRefName,title,url | jq -r \
--arg FEATURE_BRANCH "$FEATURE_BRANCH" '.[] | select(.title | contains($FEATURE_BRANCH)) | .url')
if [[ -z "$MERGED_PR" ]]; then
echo ":x: Documentation PR for branch '$FEATURE_BRANCH' is required and has not been merged."
exit 1
else
echo ":white_check_mark: Found merged documentation PR: $MERGED_PR"
fi
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
const featureBranch = context.payload.pull_request.head.ref;
const repoOwner = context.repo.owner;
const websiteRepo = 'website';
core.info(`Searching for a merged documentation PR for feature branch: ${featureBranch} in ${repoOwner}/${websiteRepo}...`);
const { data: pulls } = await github.rest.pulls.list({
owner: repoOwner,
repo: websiteRepo,
state: 'closed',
per_page: 100,
});
const mergedPr = pulls.find(
(pr) => pr.merged_at && pr.title.includes(featureBranch)
);
if (!mergedPr) {
core.setFailed(
`❌ Documentation PR for branch '${featureBranch}' is required and has not been merged.`
);
} else {
core.info(`✅ Found merged documentation PR: ${mergedPr.html_url}`);
}

.github/workflows/stale.yml

@@ -0,0 +1,52 @@
name: Manage Stale Issues and Pull Requests
on:
schedule:
# Run daily at 1:00 AM UTC
- cron: '0 1 * * *'
workflow_dispatch:
permissions:
issues: write
pull-requests: write
jobs:
stale:
name: Mark and Close Stale Issues and PRs
runs-on: ubuntu-latest
steps:
- name: Mark and close stale issues and PRs
uses: actions/stale@v9
with:
days-before-issue-stale: 60
days-before-issue-close: 14
stale-issue-label: 'stale'
stale-issue-message: |
This issue has been automatically marked as stale because it has not had any activity in the last 60 days.
It will be closed in 14 days if no further activity occurs.
If this issue is still relevant, please leave a comment or remove the stale label.
Thank you for your contributions to krkn!
close-issue-message: |
This issue has been automatically closed due to inactivity.
If you believe this issue is still relevant, please feel free to reopen it or create a new issue with updated information.
Thank you for your understanding!
close-issue-reason: 'not_planned'
days-before-pr-stale: 90
days-before-pr-close: 14
stale-pr-label: 'stale'
stale-pr-message: |
This pull request has been automatically marked as stale because it has not had any activity in the last 90 days.
It will be closed in 14 days if no further activity occurs.
If this PR is still relevant, please rebase it, address any pending reviews, or leave a comment.
Thank you for your contributions to krkn!
close-pr-message: |
This pull request has been automatically closed due to inactivity.
If you believe this PR is still relevant, please feel free to reopen it or create a new pull request with updated changes.
Thank you for your understanding!
# Exempt labels
exempt-issue-labels: 'bug,enhancement,good first issue'
exempt-pr-labels: 'pending discussions,hold'
remove-stale-when-updated: true


@@ -32,21 +32,22 @@ jobs:
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.9'
python-version: '3.11'
architecture: 'x64'
- name: Install environment
run: |
sudo apt-get install build-essential python3-dev
pip install --upgrade pip
pip install -r requirements.txt
pip install coverage
- name: Deploy test workloads
run: |
es_pod_name=$(kubectl get pods -l "app=elasticsearch-master" -o name)
echo "POD_NAME: $es_pod_name"
kubectl --namespace default port-forward $es_pod_name 9200 &
prom_name=$(kubectl get pods -n monitoring -l "app.kubernetes.io/name=prometheus" -o name)
kubectl --namespace monitoring port-forward $prom_name 9090 &
# es_pod_name=$(kubectl get pods -l "app=elasticsearch-master" -o name)
# echo "POD_NAME: $es_pod_name"
# kubectl --namespace default port-forward $es_pod_name 9200 &
# prom_name=$(kubectl get pods -n monitoring -l "app.kubernetes.io/name=prometheus" -o name)
# kubectl --namespace monitoring port-forward $prom_name 9090 &
# Wait for Elasticsearch to be ready
echo "Waiting for Elasticsearch to be ready..."
@@ -76,9 +77,7 @@ jobs:
- name: Run unit tests
run: python -m coverage run -a -m unittest discover -s tests -v
- name: Setup Pull Request Functional Tests
if: |
github.event_name == 'pull_request'
- name: Setup Functional Tests
run: |
yq -i '.kraken.performance_monitoring="localhost:9090"' CI/config/common_test_config.yaml
yq -i '.elastic.elastic_port=9200' CI/config/common_test_config.yaml
@@ -86,22 +85,26 @@ jobs:
yq -i '.elastic.enable_elastic=False' CI/config/common_test_config.yaml
yq -i '.elastic.password="${{env.ELASTIC_PASSWORD}}"' CI/config/common_test_config.yaml
yq -i '.performance_monitoring.prometheus_url="http://localhost:9090"' CI/config/common_test_config.yaml
echo "test_service_hijacking" > ./CI/tests/functional_tests
echo "test_app_outages" >> ./CI/tests/functional_tests
echo "test_container" >> ./CI/tests/functional_tests
echo "test_pod" >> ./CI/tests/functional_tests
echo "test_pod_error" >> ./CI/tests/functional_tests
echo "test_customapp_pod" >> ./CI/tests/functional_tests
echo "test_namespace" >> ./CI/tests/functional_tests
echo "test_net_chaos" >> ./CI/tests/functional_tests
echo "test_time" >> ./CI/tests/functional_tests
echo "test_app_outages" > ./CI/tests/functional_tests
echo "test_container" >> ./CI/tests/functional_tests
echo "test_cpu_hog" >> ./CI/tests/functional_tests
echo "test_memory_hog" >> ./CI/tests/functional_tests
echo "test_customapp_pod" >> ./CI/tests/functional_tests
echo "test_io_hog" >> ./CI/tests/functional_tests
echo "test_memory_hog" >> ./CI/tests/functional_tests
echo "test_namespace" >> ./CI/tests/functional_tests
echo "test_net_chaos" >> ./CI/tests/functional_tests
echo "test_node" >> ./CI/tests/functional_tests
echo "test_service_hijacking" >> ./CI/tests/functional_tests
echo "test_pod_network_filter" >> ./CI/tests/functional_tests
echo "test_pod_server" >> ./CI/tests/functional_tests
echo "test_node" >> ./CI/tests/functional_tests
echo "test_time" >> ./CI/tests/functional_tests
echo "test_node_network_chaos" >> ./CI/tests/functional_tests
echo "test_pod_network_chaos" >> ./CI/tests/functional_tests
echo "test_cerberus_unhealthy" >> ./CI/tests/functional_tests
echo "test_pod_error" >> ./CI/tests/functional_tests
echo "test_pod" >> ./CI/tests/functional_tests
# echo "test_pvc" >> ./CI/tests/functional_tests
# Push on main only steps + all other functional to collect coverage
# for the badge
@@ -115,31 +118,9 @@ jobs:
- name: Setup Post Merge Request Functional Tests
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
run: |
yq -i '.kraken.performance_monitoring="localhost:9090"' CI/config/common_test_config.yaml
yq -i '.elastic.enable_elastic=False' CI/config/common_test_config.yaml
yq -i '.elastic.password="${{env.ELASTIC_PASSWORD}}"' CI/config/common_test_config.yaml
yq -i '.elastic.elastic_port=9200' CI/config/common_test_config.yaml
yq -i '.elastic.elastic_url="https://localhost"' CI/config/common_test_config.yaml
yq -i '.performance_monitoring.prometheus_url="http://localhost:9090"' CI/config/common_test_config.yaml
yq -i '.telemetry.username="${{secrets.TELEMETRY_USERNAME}}"' CI/config/common_test_config.yaml
yq -i '.telemetry.password="${{secrets.TELEMETRY_PASSWORD}}"' CI/config/common_test_config.yaml
echo "test_service_hijacking" >> ./CI/tests/functional_tests
echo "test_app_outages" >> ./CI/tests/functional_tests
echo "test_container" >> ./CI/tests/functional_tests
echo "test_pod" >> ./CI/tests/functional_tests
echo "test_telemetry" > ./CI/tests/functional_tests
echo "test_pod_error" >> ./CI/tests/functional_tests
echo "test_customapp_pod" >> ./CI/tests/functional_tests
echo "test_namespace" >> ./CI/tests/functional_tests
echo "test_net_chaos" >> ./CI/tests/functional_tests
echo "test_time" >> ./CI/tests/functional_tests
echo "test_cpu_hog" >> ./CI/tests/functional_tests
echo "test_memory_hog" >> ./CI/tests/functional_tests
echo "test_io_hog" >> ./CI/tests/functional_tests
echo "test_pod_network_filter" >> ./CI/tests/functional_tests
echo "test_pod_server" >> ./CI/tests/functional_tests
echo "test_node" >> ./CI/tests/functional_tests
# echo "test_pvc" >> ./CI/tests/functional_tests
echo "test_telemetry" >> ./CI/tests/functional_tests
# Final common steps
- name: Run Functional tests
env:
@@ -205,7 +186,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.9
python-version: '3.11'
- name: Copy badge on GitHub Page Repo
env:
COLOR: yellow

.github/workflows/tests_v2.yml

@@ -0,0 +1,53 @@
name: Tests v2 (pytest functional)
on:
pull_request:
push:
branches:
- main
jobs:
tests-v2:
name: Tests v2 (pytest functional)
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v3
- name: Create KinD cluster
uses: redhat-chaos/actions/kind@main
- name: Pre-load test images into KinD
run: |
docker pull nginx:alpine
kind load docker-image nginx:alpine
docker pull quay.io/krkn-chaos/krkn:tools
kind load docker-image quay.io/krkn-chaos/krkn:tools
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.11'
architecture: 'x64'
cache: 'pip'
- name: Install dependencies
run: |
sudo apt-get install -y build-essential python3-dev
pip install --upgrade pip
pip install -r requirements.txt
pip install -r CI/tests_v2/requirements.txt
- name: Run tests_v2
run: |
KRKN_TEST_COVERAGE=1 python -m pytest CI/tests_v2/ -v --timeout=300 --reruns=1 --reruns-delay=5 \
--html=CI/tests_v2/report.html -n auto --junitxml=CI/tests_v2/results.xml
- name: Upload tests_v2 artifacts
if: always()
uses: actions/upload-artifact@v4
with:
name: tests-v2-results
path: |
CI/tests_v2/report.html
CI/tests_v2/results.xml
CI/tests_v2/assets/
if-no-files-found: ignore

.gitignore

@@ -17,6 +17,7 @@ __pycache__/*
kube-burner*
kube_burner*
recommender_*.json
resiliency*.json
# Project files
.ropeproject
@@ -54,7 +55,7 @@ MANIFEST
# Per-project virtualenvs
.venv*/
venv*/
kraken.report
*.report
collected-metrics/*
inspect.local.*
@@ -64,6 +65,10 @@ CI/out/*
CI/ci_results
CI/legacy/*node.yaml
CI/results.markdown
# CI tests_v2 (pytest-html / pytest outputs)
CI/tests_v2/results.xml
CI/tests_v2/report.html
CI/tests_v2/assets/
#env
chaos/*

.pre-commit-config.yaml

@@ -0,0 +1,9 @@
repos:
- repo: local
hooks:
- id: check-license-header
name: Check Apache 2.0 license header
language: python
entry: python scripts/check_license.py
types: [python]
exclude: ^tests/|/test_|^CI/
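The hook entry above invokes `scripts/check_license.py`, which is not shown in this diff. A minimal sketch of what such a checker might look like — the marker string and scan depth are assumptions, not the repo's actual implementation:

```python
# Hypothetical sketch of a pre-commit license-header check; the real
# scripts/check_license.py may differ.
HEADER_MARKER = "Licensed under the Apache License, Version 2.0"

def has_license_header(path, lines_to_scan=15):
    # License headers sit at the top of a file, so scanning the first
    # few lines is enough.
    with open(path, encoding="utf-8") as f:
        head = [f.readline() for _ in range(lines_to_scan)]
    return any(HEADER_MARKER in line for line in head)

def check_files(paths):
    # Return the paths missing a header; pre-commit fails the hook
    # when this list is non-empty.
    return [p for p in paths if not has_license_header(p)]
```

The `exclude` pattern in the hook config keeps tests and CI scripts out of this check.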

BETA_FEATURE_POLICY.md

@@ -0,0 +1,141 @@
# Beta Features Policy
## Overview
Beta features provide users early access to new capabilities before they reach full stability and general availability (GA). These features allow maintainers to gather feedback, validate usability, and improve functionality based on real-world usage.
Beta features are intended for experimentation and evaluation. While they are functional, they may not yet meet the stability, performance, or backward compatibility guarantees expected from generally available features.
---
## What is a Beta Feature
A **Beta feature** is a feature that is released for user evaluation but is still under active development and refinement.
Beta features may have the following characteristics:
- Functionally usable but still evolving
- APIs or behavior may change between releases
- Performance optimizations may still be in progress
- Documentation may be limited or evolving
- Edge cases may not be fully validated
Beta features should be considered **experimental and optional**.
---
## User Expectations
Users trying Beta features should understand the following:
- Stability is not guaranteed
- APIs and functionality may change without notice
- Backward compatibility is not guaranteed
- The feature may evolve significantly before GA
- Production use should be evaluated carefully
We strongly encourage users to provide feedback to help improve the feature before it becomes generally available.
---
## Beta Feature Identification
All Beta features are clearly identified to ensure transparency.
### In Release Notes
Beta features will be marked with a **[BETA]** tag.
Example: [BETA] Krkn Resilience Score
### In Documentation
Beta features will include a notice similar to:
> **Beta Feature**
> This feature is currently in Beta and is intended for early user feedback. Behavior, APIs, and stability may change in future releases.
---
## Feature Lifecycle
Features typically progress through the following lifecycle stages.
### 1. Development
The feature is under active development and may not yet be visible to users.
### 2. Beta
The feature is released for early adoption and feedback.
Characteristics:
- Feature is usable
- Feedback is encouraged
- Stability improvements are ongoing
### 3. Stabilization
Based on user feedback and testing, the feature is improved to meet stability and usability expectations.
### 4. General Availability (GA)
The feature is considered stable and production-ready.
GA features provide:
- Stable APIs
- Backward compatibility guarantees
- Complete documentation
- Full CI test coverage
---
## Promotion to General Availability
A Beta feature may be promoted to GA once the following criteria are met:
- Critical bugs are resolved
- Feature stability has improved through testing
- APIs and behavior are stable
- Documentation is complete
- Community feedback has been incorporated
The promotion will be announced in the release notes.
Example: Feature promoted from Beta to GA
---
## Deprecation of Beta Features
In some cases, a Beta feature may be redesigned or discontinued.
If this happens:
- The feature will be marked as **Deprecated**
- A removal timeline will be provided
- Alternative approaches will be documented when possible
Example: [DEPRECATED] This feature will be removed in a future release.
---
## Contributing Feedback
User feedback plays a critical role in improving Beta features.
Users are encouraged to report:
- Bugs
- Usability issues
- Performance concerns
- Feature suggestions
Feedback can be submitted through:
- Krkn GitHub Issues
- Krkn GitHub Discussions
- Krkn Community channels
Please include **Beta feature context** when reporting issues.
Your feedback helps guide the roadmap and ensures features are production-ready before GA.


@@ -7,7 +7,7 @@ kraken:
signal_address: 0.0.0.0 # Signal listening address
port: 8081 # Signal port
auto_rollback: True # Enable auto rollback for scenarios.
rollback_versions_directory: /tmp/kraken-rollback # Directory to store rollback version files.
rollback_versions_directory: # Directory to store rollback version files. If empty, a secure temp directory is created automatically.
chaos_scenarios: # List of policies/chaos scenarios to load.
- $scenario_type: # List of chaos pod scenarios to load.
- $scenario_file
@@ -42,7 +42,7 @@ telemetry:
prometheus_backup: True # enables/disables prometheus data collection
full_prometheus_backup: False # if is set to False only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
archive_path: /tmp # local path where the archive files will be temporarly stored
archive_path: # local path where the archive files will be temporarily stored. If empty, a secure temp directory is created automatically.
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 10000 # the size of the prometheus data archive size in KB. The lower the size of archive is
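Both defaults above were emptied as part of replacing hardcoded `/tmp` paths with `tempfile.mkdtemp()` (PR #1223 in this range). A minimal sketch of the fallback such an empty default implies — the function name is illustrative, not krkn's actual code:

```python
import tempfile

def resolve_work_dir(configured_path=None):
    # Honor an explicitly configured directory; otherwise create a
    # fresh private directory (mode 0700) instead of reusing a
    # predictable, world-readable path like /tmp/kraken-rollback.
    if configured_path:
        return configured_path
    return tempfile.mkdtemp(prefix="krkn-")
```

`mkdtemp()` guarantees a unique directory owned by the calling user, which closes the symlink and pre-creation attacks that fixed `/tmp` paths invite.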


@@ -0,0 +1,79 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: mock-cerberus-server
namespace: default
data:
server.py: |
#!/usr/bin/env python3
from http.server import HTTPServer, BaseHTTPRequestHandler
import json
class MockCerberusHandler(BaseHTTPRequestHandler):
def do_GET(self):
if self.path == '/':
# Return True to indicate cluster is healthy
self.send_response(200)
self.send_header('Content-type', 'text/plain')
self.end_headers()
self.wfile.write(b'True')
elif self.path.startswith('/history'):
# Return empty history (no failures)
self.send_response(200)
self.send_header('Content-type', 'application/json')
self.end_headers()
response = {
"history": {
"failures": []
}
}
self.wfile.write(json.dumps(response).encode())
else:
self.send_response(404)
self.end_headers()
def log_message(self, format, *args):
print(f"[MockCerberus] {format % args}")
if __name__ == '__main__':
server = HTTPServer(('0.0.0.0', 8080), MockCerberusHandler)
print("[MockCerberus] Starting mock cerberus server on port 8080...")
server.serve_forever()
---
apiVersion: v1
kind: Pod
metadata:
name: mock-cerberus
namespace: default
labels:
app: mock-cerberus
spec:
containers:
- name: mock-cerberus
image: python:3.9-slim
command: ["python3", "/app/server.py"]
ports:
- containerPort: 8080
name: http
volumeMounts:
- name: server-script
mountPath: /app
volumes:
- name: server-script
configMap:
name: mock-cerberus-server
defaultMode: 0755
---
apiVersion: v1
kind: Service
metadata:
name: mock-cerberus
namespace: default
spec:
selector:
app: mock-cerberus
ports:
- protocol: TCP
port: 8080
targetPort: 8080
type: ClusterIP


@@ -0,0 +1,85 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: mock-cerberus-unhealthy-server
namespace: default
data:
server.py: |
#!/usr/bin/env python3
from http.server import HTTPServer, BaseHTTPRequestHandler
import json
class MockCerberusUnhealthyHandler(BaseHTTPRequestHandler):
def do_GET(self):
if self.path == '/':
# Return False to indicate cluster is unhealthy
self.send_response(200)
self.send_header('Content-type', 'text/plain')
self.end_headers()
self.wfile.write(b'False')
elif self.path.startswith('/history'):
# Return history with failures
self.send_response(200)
self.send_header('Content-type', 'application/json')
self.end_headers()
response = {
"history": {
"failures": [
{
"component": "node",
"name": "test-node",
"timestamp": "2024-01-01T00:00:00Z"
}
]
}
}
self.wfile.write(json.dumps(response).encode())
else:
self.send_response(404)
self.end_headers()
def log_message(self, format, *args):
print(f"[MockCerberusUnhealthy] {format % args}")
if __name__ == '__main__':
server = HTTPServer(('0.0.0.0', 8080), MockCerberusUnhealthyHandler)
print("[MockCerberusUnhealthy] Starting mock cerberus unhealthy server on port 8080...")
server.serve_forever()
---
apiVersion: v1
kind: Pod
metadata:
name: mock-cerberus-unhealthy
namespace: default
labels:
app: mock-cerberus-unhealthy
spec:
containers:
- name: mock-cerberus-unhealthy
image: python:3.9-slim
command: ["python3", "/app/server.py"]
ports:
- containerPort: 8080
name: http
volumeMounts:
- name: server-script
mountPath: /app
volumes:
- name: server-script
configMap:
name: mock-cerberus-unhealthy-server
defaultMode: 0755
---
apiVersion: v1
kind: Service
metadata:
name: mock-cerberus-unhealthy
namespace: default
spec:
selector:
app: mock-cerberus-unhealthy
ports:
- protocol: TCP
port: 8080
targetPort: 8080
type: ClusterIP


@@ -0,0 +1,79 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_cerberus_unhealthy {
echo "========================================"
echo "Starting Cerberus Unhealthy Test"
echo "========================================"
# Deploy mock cerberus unhealthy server
echo "Deploying mock cerberus unhealthy server..."
kubectl apply -f CI/templates/mock_cerberus_unhealthy.yaml
# Wait for mock cerberus unhealthy pod to be ready
echo "Waiting for mock cerberus unhealthy to be ready..."
kubectl wait --for=condition=ready pod -l app=mock-cerberus-unhealthy --timeout=300s
# Verify mock cerberus service is accessible
echo "Verifying mock cerberus unhealthy service..."
mock_cerberus_ip=$(kubectl get service mock-cerberus-unhealthy -o jsonpath='{.spec.clusterIP}')
echo "Mock Cerberus Unhealthy IP: $mock_cerberus_ip"
# Test cerberus endpoint from within the cluster (should return False)
kubectl run cerberus-unhealthy-test --image=curlimages/curl:latest --rm -i --restart=Never -- \
curl -s http://mock-cerberus-unhealthy.default.svc.cluster.local:8080/ || echo "Cerberus unhealthy test curl completed"
# Configure scenario for pod disruption with cerberus enabled
export scenario_type="pod_disruption_scenarios"
export scenario_file="scenarios/kind/pod_etcd.yml"
export post_config=""
# Generate config with cerberus enabled
envsubst < CI/config/common_test_config.yaml > CI/config/cerberus_unhealthy_test_config.yaml
# Enable cerberus in the config but DON'T set exit_on_failure (so the test can verify the behavior)
# Edit the generated config in place with yq
yq -i '.cerberus.cerberus_enabled = true' CI/config/cerberus_unhealthy_test_config.yaml
yq -i ".cerberus.cerberus_url = \"http://${mock_cerberus_ip}:8080\"" CI/config/cerberus_unhealthy_test_config.yaml
yq -i '.kraken.exit_on_failure = false' CI/config/cerberus_unhealthy_test_config.yaml
echo "========================================"
echo "Cerberus Unhealthy Configuration:"
yq '.cerberus' CI/config/cerberus_unhealthy_test_config.yaml
echo "exit_on_failure:"
yq '.kraken.exit_on_failure' CI/config/cerberus_unhealthy_test_config.yaml
echo "========================================"
# Run kraken with cerberus unhealthy (should detect unhealthy but not exit due to exit_on_failure=false)
echo "Running kraken with cerberus unhealthy integration..."
# We expect this to complete (not exit 1) because exit_on_failure is false
# But cerberus should log that the cluster is unhealthy
python3 -m coverage run -a run_kraken.py -c CI/config/cerberus_unhealthy_test_config.yaml || {
exit_code=$?
echo "Kraken exited with code: $exit_code"
# An exit code of 1 would be expected if exit_on_failure were true;
# with exit_on_failure=false kraken should complete without exiting 1
if [ $exit_code -eq 1 ]; then
echo "WARNING: Kraken exited with 1, which may indicate cerberus detected unhealthy cluster"
fi
}
# Verify cerberus was called by checking mock cerberus logs
echo "Checking mock cerberus unhealthy logs..."
kubectl logs -l app=mock-cerberus-unhealthy --tail=50
# Cleanup
echo "Cleaning up mock cerberus unhealthy..."
kubectl delete -f CI/templates/mock_cerberus_unhealthy.yaml || true
echo "========================================"
echo "Cerberus unhealthy functional test: Success"
echo "========================================"
}
functional_test_cerberus_unhealthy


@@ -0,0 +1,165 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_node_network_chaos {
echo "Starting node network chaos functional test"
# Get a worker node
get_node
export TARGET_NODE=$(echo $WORKER_NODE | awk '{print $1}')
echo "Target node: $TARGET_NODE"
# Deploy nginx workload on the target node
echo "Deploying nginx workload on $TARGET_NODE..."
kubectl create deployment nginx-node-net-chaos --image=nginx:latest
# Add node selector to ensure pod runs on target node
kubectl patch deployment nginx-node-net-chaos -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"'$TARGET_NODE'"}}}}}'
# Expose service
kubectl expose deployment nginx-node-net-chaos --port=80 --target-port=80 --name=nginx-node-net-chaos-svc
# Wait for nginx to be ready
echo "Waiting for nginx pod to be ready on $TARGET_NODE..."
kubectl wait --for=condition=ready pod -l app=nginx-node-net-chaos --timeout=120s
# Verify pod is on correct node
export POD_NAME=$(kubectl get pods -l app=nginx-node-net-chaos -o jsonpath='{.items[0].metadata.name}')
export POD_NODE=$(kubectl get pod $POD_NAME -o jsonpath='{.spec.nodeName}')
echo "Pod $POD_NAME is running on node $POD_NODE"
if [ "$POD_NODE" != "$TARGET_NODE" ]; then
echo "ERROR: Pod is not on target node (expected $TARGET_NODE, got $POD_NODE)"
kubectl get pods -l app=nginx-node-net-chaos -o wide
exit 1
fi
# Setup port-forward to access nginx
echo "Setting up port-forward to nginx service..."
kubectl port-forward service/nginx-node-net-chaos-svc 8091:80 &
PORT_FORWARD_PID=$!
sleep 3 # Give port-forward time to start
# Test baseline connectivity
echo "Testing baseline connectivity..."
response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 http://localhost:8091 || echo "000")
if [ "$response" != "200" ]; then
echo "ERROR: Nginx not responding correctly (got $response, expected 200)"
kubectl get pods -l app=nginx-node-net-chaos
kubectl describe pod $POD_NAME
exit 1
fi
echo "Baseline test passed: nginx responding with 200"
# Measure baseline latency
echo "Measuring baseline latency..."
baseline_start=$(date +%s%3N)
curl -s http://localhost:8091 > /dev/null || true
baseline_end=$(date +%s%3N)
baseline_latency=$((baseline_end - baseline_start))
echo "Baseline latency: ${baseline_latency}ms"
# Configure node network chaos scenario
echo "Configuring node network chaos scenario..."
yq -i '.[0].config.target="'$TARGET_NODE'"' scenarios/kube/node-network-chaos.yml
yq -i '.[0].config.namespace="default"' scenarios/kube/node-network-chaos.yml
yq -i '.[0].config.test_duration=20' scenarios/kube/node-network-chaos.yml
yq -i '.[0].config.latency="200ms"' scenarios/kube/node-network-chaos.yml
yq -i '.[0].config.loss=15' scenarios/kube/node-network-chaos.yml
yq -i '.[0].config.bandwidth="10mbit"' scenarios/kube/node-network-chaos.yml
yq -i '.[0].config.ingress=true' scenarios/kube/node-network-chaos.yml
yq -i '.[0].config.egress=true' scenarios/kube/node-network-chaos.yml
yq -i '.[0].config.force=false' scenarios/kube/node-network-chaos.yml
yq -i 'del(.[0].config.interfaces)' scenarios/kube/node-network-chaos.yml
# Prepare krkn config
export scenario_type="network_chaos_ng_scenarios"
export scenario_file="scenarios/kube/node-network-chaos.yml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/node_network_chaos_config.yaml
# Run krkn in background
echo "Starting krkn with node network chaos scenario..."
python3 -m coverage run -a run_kraken.py -c CI/config/node_network_chaos_config.yaml &
KRKN_PID=$!
echo "Krkn started with PID: $KRKN_PID"
# Wait for chaos to start (give it time to inject chaos)
echo "Waiting for chaos injection to begin..."
sleep 10
# Test during chaos - check for increased latency or packet loss effects
echo "Testing network behavior during chaos..."
chaos_test_count=0
chaos_success=0
for i in {1..5}; do
chaos_test_count=$((chaos_test_count + 1))
chaos_start=$(date +%s%3N)
response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 http://localhost:8091 || echo "000")
chaos_end=$(date +%s%3N)
chaos_latency=$((chaos_end - chaos_start))
echo "Attempt $i: HTTP $response, latency: ${chaos_latency}ms"
# We expect either increased latency or some failures due to packet loss
if [ "$response" == "200" ] || [ "$response" == "000" ]; then
chaos_success=$((chaos_success + 1))
fi
sleep 2
done
echo "Chaos test results: $chaos_success/$chaos_test_count requests processed"
# Verify node-level chaos affects pod
echo "Verifying node-level chaos affects pod on $TARGET_NODE..."
# The node chaos should affect all pods on the node
# Wait for krkn to complete
echo "Waiting for krkn to complete..."
wait $KRKN_PID || true
echo "Krkn completed"
# Wait a bit for cleanup
sleep 5
# Verify recovery - nginx should respond normally again
echo "Verifying service recovery..."
recovery_attempts=0
max_recovery_attempts=10
while [ $recovery_attempts -lt $max_recovery_attempts ]; do
recovery_attempts=$((recovery_attempts + 1))
response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 http://localhost:8091 || echo "000")
if [ "$response" == "200" ]; then
echo "Recovery verified: nginx responding normally (attempt $recovery_attempts)"
break
fi
echo "Recovery attempt $recovery_attempts/$max_recovery_attempts: got $response, retrying..."
sleep 3
done
if [ "$response" != "200" ]; then
echo "ERROR: Service did not recover after chaos (got $response)"
kubectl get pods -l app=nginx-node-net-chaos
kubectl describe pod $POD_NAME
exit 1
fi
# Cleanup
echo "Cleaning up test resources..."
kill $PORT_FORWARD_PID 2>/dev/null || true
kubectl delete deployment nginx-node-net-chaos --ignore-not-found=true
kubectl delete service nginx-node-net-chaos-svc --ignore-not-found=true
echo "Node network chaos test: Success"
}
functional_test_node_network_chaos


@@ -7,14 +7,15 @@ trap finish EXIT
function functional_test_pod_crash {
export scenario_type="pod_disruption_scenarios"
export scenario_file="scenarios/kind/pod_etcd.yml"
export scenario_file="scenarios/kind/pod_path_provisioner.yml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/pod_config.yaml
python3 -m coverage run -a run_kraken.py -c CI/config/pod_config.yaml
echo "Pod disruption scenario test: Success"
date
kubectl get pods -n kube-system -l component=etcd -o yaml
kubectl get pods -n local-path-storage -l app=local-path-provisioner -o yaml
}
functional_test_pod_crash


@@ -1,4 +1,5 @@
source CI/tests/common.sh
trap error ERR
@@ -8,7 +9,9 @@ function functional_test_pod_error {
export scenario_type="pod_disruption_scenarios"
export scenario_file="scenarios/kind/pod_etcd.yml"
export post_config=""
# this test will check if krkn exits with an error when too many pods are targeted
yq -i '.[0].config.kill=5' scenarios/kind/pod_etcd.yml
yq -i '.[0].config.krkn_pod_recovery_time=1' scenarios/kind/pod_etcd.yml
envsubst < CI/config/common_test_config.yaml > CI/config/pod_config.yaml
cat CI/config/pod_config.yaml


@@ -0,0 +1,143 @@
set -xeEo pipefail
source CI/tests/common.sh
trap error ERR
trap finish EXIT
function functional_test_pod_network_chaos {
echo "Starting pod network chaos functional test"
# Deploy nginx workload
echo "Deploying nginx workload..."
kubectl create deployment nginx-pod-net-chaos --image=nginx:latest
kubectl expose deployment nginx-pod-net-chaos --port=80 --target-port=80 --name=nginx-pod-net-chaos-svc
# Wait for nginx to be ready
echo "Waiting for nginx pod to be ready..."
kubectl wait --for=condition=ready pod -l app=nginx-pod-net-chaos --timeout=120s
# Get pod name
export POD_NAME=$(kubectl get pods -l app=nginx-pod-net-chaos -o jsonpath='{.items[0].metadata.name}')
echo "Target pod: $POD_NAME"
# Setup port-forward to access nginx
echo "Setting up port-forward to nginx service..."
kubectl port-forward service/nginx-pod-net-chaos-svc 8090:80 &
PORT_FORWARD_PID=$!
sleep 3 # Give port-forward time to start
# Test baseline connectivity
echo "Testing baseline connectivity..."
response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 http://localhost:8090 || echo "000")
if [ "$response" != "200" ]; then
echo "ERROR: Nginx not responding correctly (got $response, expected 200)"
kubectl get pods -l app=nginx-pod-net-chaos
kubectl describe pod $POD_NAME
exit 1
fi
echo "Baseline test passed: nginx responding with 200"
# Measure baseline latency
echo "Measuring baseline latency..."
baseline_start=$(date +%s%3N)
curl -s http://localhost:8090 > /dev/null || true
baseline_end=$(date +%s%3N)
baseline_latency=$((baseline_end - baseline_start))
echo "Baseline latency: ${baseline_latency}ms"
# Configure pod network chaos scenario
echo "Configuring pod network chaos scenario..."
yq -i '.[0].config.target="'$POD_NAME'"' scenarios/kube/pod-network-chaos.yml
yq -i '.[0].config.namespace="default"' scenarios/kube/pod-network-chaos.yml
yq -i '.[0].config.test_duration=20' scenarios/kube/pod-network-chaos.yml
yq -i '.[0].config.latency="200ms"' scenarios/kube/pod-network-chaos.yml
yq -i '.[0].config.loss=15' scenarios/kube/pod-network-chaos.yml
yq -i '.[0].config.bandwidth="10mbit"' scenarios/kube/pod-network-chaos.yml
yq -i '.[0].config.ingress=true' scenarios/kube/pod-network-chaos.yml
yq -i '.[0].config.egress=true' scenarios/kube/pod-network-chaos.yml
yq -i 'del(.[0].config.interfaces)' scenarios/kube/pod-network-chaos.yml
# Prepare krkn config
export scenario_type="network_chaos_ng_scenarios"
export scenario_file="scenarios/kube/pod-network-chaos.yml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/pod_network_chaos_config.yaml
# Run krkn in background
echo "Starting krkn with pod network chaos scenario..."
python3 -m coverage run -a run_kraken.py -c CI/config/pod_network_chaos_config.yaml &
KRKN_PID=$!
echo "Krkn started with PID: $KRKN_PID"
# Wait for chaos to start (give it time to inject chaos)
echo "Waiting for chaos injection to begin..."
sleep 10
# Test during chaos - check for increased latency or packet loss effects
echo "Testing network behavior during chaos..."
chaos_test_count=0
chaos_success=0
for i in {1..5}; do
chaos_test_count=$((chaos_test_count + 1))
chaos_start=$(date +%s%3N)
response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 http://localhost:8090 || echo "000")
chaos_end=$(date +%s%3N)
chaos_latency=$((chaos_end - chaos_start))
echo "Attempt $i: HTTP $response, latency: ${chaos_latency}ms"
# We expect either increased latency or some failures due to packet loss
if [ "$response" == "200" ] || [ "$response" == "000" ]; then
chaos_success=$((chaos_success + 1))
fi
sleep 2
done
echo "Chaos test results: $chaos_success/$chaos_test_count requests processed"
# Wait for krkn to complete
echo "Waiting for krkn to complete..."
wait $KRKN_PID || true
echo "Krkn completed"
# Wait a bit for cleanup
sleep 5
# Verify recovery - nginx should respond normally again
echo "Verifying service recovery..."
recovery_attempts=0
max_recovery_attempts=10
while [ $recovery_attempts -lt $max_recovery_attempts ]; do
recovery_attempts=$((recovery_attempts + 1))
response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 http://localhost:8090 || echo "000")
if [ "$response" == "200" ]; then
echo "Recovery verified: nginx responding normally (attempt $recovery_attempts)"
break
fi
echo "Recovery attempt $recovery_attempts/$max_recovery_attempts: got $response, retrying..."
sleep 3
done
if [ "$response" != "200" ]; then
echo "ERROR: Service did not recover after chaos (got $response)"
kubectl get pods -l app=nginx-pod-net-chaos
kubectl describe pod $POD_NAME
exit 1
fi
# Cleanup
echo "Cleaning up test resources..."
kill $PORT_FORWARD_PID 2>/dev/null || true
kubectl delete deployment nginx-pod-net-chaos --ignore-not-found=true
kubectl delete service nginx-pod-net-chaos-svc --ignore-not-found=true
echo "Pod network chaos test: Success"
}
functional_test_pod_network_chaos


@@ -18,14 +18,13 @@ function functional_test_telemetry {
yq -i '.performance_monitoring.prometheus_url="http://localhost:9090"' CI/config/common_test_config.yaml
yq -i '.telemetry.run_tag=env(RUN_TAG)' CI/config/common_test_config.yaml
export scenario_type="hog_scenarios"
export scenario_file="scenarios/kube/cpu-hog.yml"
export scenario_type="pod_disruption_scenarios"
export scenario_file="scenarios/kind/pod_path_provisioner.yml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/telemetry.yaml
retval=$(python3 -m coverage run -a run_kraken.py -c CI/config/telemetry.yaml)
RUN_FOLDER=`cat CI/out/test_telemetry.out | grep amazonaws.com | sed -rn "s#.*https:\/\/.*\/files/(.*)#\1#p"`
RUN_FOLDER=`cat CI/out/test_telemetry.out | grep amazonaws.com | sed -rn "s#.*https:\/\/.*\/files/(.*)#\1#p" | sed 's/\x1b\[[0-9;]*m//g'`
$AWS_CLI s3 ls "s3://$AWS_BUCKET/$RUN_FOLDER/" | awk '{ print $4 }' > s3_remote_files
echo "checking if telemetry files are uploaded on s3"
cat s3_remote_files | grep critical-alerts-00.log || ( echo "FAILED: critical-alerts-00.log not uploaded" && exit 1 )


@@ -0,0 +1,175 @@
# Adding a New Scenario Test (CI/tests_v2)
This guide explains how to add a new chaos scenario test to the v2 pytest framework. The layout is **folder-per-scenario**: each scenario has its own directory under `scenarios/<scenario_name>/` containing the test file, Kubernetes resources, and the Krkn scenario base YAML.
## Option 1: Scaffold script (recommended)
From the **repository root**:
```bash
python CI/tests_v2/scaffold.py --scenario service_hijacking
```
This creates:
- `CI/tests_v2/scenarios/service_hijacking/test_service_hijacking.py` — A test class extending `BaseScenarioTest` with a stub `test_happy_path` and `WORKLOAD_MANIFEST` pointing to the folder's `resource.yaml`.
- `CI/tests_v2/scenarios/service_hijacking/resource.yaml` — A placeholder Deployment (namespace is patched at deploy time).
- `CI/tests_v2/scenarios/service_hijacking/scenario_base.yaml` — A placeholder Krkn scenario; edit this with the structure expected by your scenario type.
The script automatically registers the marker in `CI/tests_v2/pytest.ini`. For example, it adds:
```
service_hijacking: marks a test as a service_hijacking scenario test
```
**Next steps after scaffolding:**
1. Verify the marker was added to `pytest.ini` (the scaffold does this automatically).
2. Edit `scenario_base.yaml` with the structure your Krkn scenario type expects (see `scenarios/application_outage/scenario_base.yaml` and `scenarios/pod_disruption/scenario_base.yaml` for examples). The top-level key should match `SCENARIO_NAME`.
3. If your scenario uses a **list** structure (like pod_disruption) instead of a **dict** with a top-level key, set `NAMESPACE_KEY_PATH` (e.g. `[0, "config", "namespace_pattern"]`) and `NAMESPACE_IS_REGEX = True` if the namespace is a regex pattern.
4. The generated `test_happy_path` already uses `self.run_scenario(self.tmp_path, ns)` and assertions. Add more test methods (e.g. negative tests with `@pytest.mark.no_workload`) as needed.
5. Adjust `resource.yaml` if your scenario needs a different workload (e.g. specific image or labels).
If your Kraken scenario type string is not `<scenario>_scenarios`, pass it explicitly:
```bash
python CI/tests_v2/scaffold.py --scenario node_disruption --scenario-type node_scenarios
```
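The `NAMESPACE_KEY_PATH` mechanics can be sketched as a small helper that walks a mixed dict/list path and sets the final key — an illustration of how a path like `["application_outage", "namespace"]` or `[0, "config", "namespace_pattern"]` addresses the namespace field, not the framework's actual patching code:

```python
def set_by_path(doc, key_path, value):
    """Walk a mixed dict/list key path and set the value at the final key.

    Illustrative only: shows how a dict-based path (top-level key matches
    SCENARIO_NAME) and a list-based path (like pod_disruption) both resolve
    to the namespace field in scenario_base.yaml.
    """
    node = doc
    for key in key_path[:-1]:
        node = node[key]  # int keys index lists, str keys index dicts
    node[key_path[-1]] = value
    return doc


# Dict-based scenario: NAMESPACE_KEY_PATH = ["application_outage", "namespace"]
dict_scenario = {"application_outage": {"namespace": "placeholder", "duration": 60}}
set_by_path(dict_scenario, ["application_outage", "namespace"], "krkn-test-abc123")

# List-based scenario: NAMESPACE_KEY_PATH = [0, "config", "namespace_pattern"]
# (NAMESPACE_IS_REGEX = True, so the value is an anchored regex pattern)
list_scenario = [{"config": {"namespace_pattern": ".*"}}]
set_by_path(list_scenario, [0, "config", "namespace_pattern"], "^krkn-test-abc123$")
```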
## Option 2: Manual setup
1. **Create the scenario folder**
`CI/tests_v2/scenarios/<scenario_name>/`.
2. **Add resource.yaml**
Kubernetes manifest(s) for the workload (Deployment or Pod). Use a distinct label (e.g. `app: <scenario>-target`). Omit or leave `metadata.namespace`; the framework patches it at deploy time.
3. **Add scenario_base.yaml**
The canonical Krkn scenario structure. Tests will load this, patch namespace (and any overrides), write to `tmp_path`, and pass to `build_config`. See existing scenarios for the format your scenario type expects.
4. **Add test_<scenario>.py**
- Import `BaseScenarioTest` from `lib.base` and helpers from `lib.utils` (e.g. `assert_kraken_success`, `get_pods_list`, `scenario_dir` if needed).
- Define a class extending `BaseScenarioTest` with:
- `WORKLOAD_MANIFEST = "CI/tests_v2/scenarios/<scenario_name>/resource.yaml"`
- `WORKLOAD_IS_PATH = True`
- `LABEL_SELECTOR = "app=<label>"`
- `SCENARIO_NAME = "<scenario_name>"`
- `SCENARIO_TYPE = "<scenario_type>"` (e.g. `application_outages_scenarios`)
- `NAMESPACE_KEY_PATH`: path to the namespace field (e.g. `["application_outage", "namespace"]` for dict-based, or `[0, "config", "namespace_pattern"]` for list-based)
- `NAMESPACE_IS_REGEX = False` (or `True` for regex patterns like pod_disruption)
- `OVERRIDES_KEY_PATH = ["<top-level key>"]` if the scenario supports overrides (e.g. duration, block).
- Add `@pytest.mark.functional` and `@pytest.mark.<scenario>` on the class.
- In at least one test, call `self.run_scenario(self.tmp_path, self.ns)` and assert with `assert_kraken_success`, `assert_pod_count_unchanged`, and `assert_all_pods_running_and_ready`. Use `self.k8s_core`, `self.tmp_path`, etc. (injected by the base class).
5. **Register the marker**
In `CI/tests_v2/pytest.ini`, under `markers`:
```
<scenario>: marks a test as a <scenario> scenario test
```
## Conventions
- **Folder-per-scenario**: One directory per scenario under `scenarios/`. All assets (test, resource.yaml, scenario_base.yaml, and any extra YAMLs) live there for easy tracking and onboarding.
- **Ephemeral namespace**: Every test gets a unique `krkn-test-<uuid>` namespace. The base class deploys the workload into it before the test; no manual deploy is required.
- **Negative tests**: For tests that don't need a workload (e.g. invalid scenario, bad namespace), use `@pytest.mark.no_workload`. The test will still get a namespace but no workload will be deployed.
- **Scenario type**: `SCENARIO_TYPE` must match the key in Kraken's config (e.g. `application_outages_scenarios`, `pod_disruption_scenarios`). See `CI/tests_v2/config/common_test_config.yaml` and the scenario plugin's `get_scenario_types()`.
- **Assertions**: Use `assert_kraken_success(result, context=f"namespace={ns}", tmp_path=self.tmp_path)` so failures include stdout/stderr and optional log files.
- **Timeouts**: Use constants from `lib.base` (`READINESS_TIMEOUT`, `POLICY_WAIT_TIMEOUT`, etc.) instead of magic numbers.
## Exit Code Handling
Kraken uses the following exit codes: **0** = success; **1** = scenario failure (e.g. post scenarios still failing); **2** = critical alerts fired; **3+** = health check / KubeVirt check failures; **-1** = infrastructure error (bad config, no kubeconfig).
- **Happy-path tests**: Use `assert_kraken_success(result, ...)`. By default only exit code 0 is accepted.
- **Alert-aware tests**: If you enable `check_critical_alerts` and expect alerts, use `assert_kraken_success(result, allowed_codes=(0, 2), ...)` so exit code 2 is treated as acceptable.
- **Expected-failure tests**: Use `assert_kraken_failure(result, context=..., tmp_path=self.tmp_path)` for negative tests (invalid scenario, bad namespace, etc.). This gives the same diagnostic quality (log dump, tmp_path hint) as success assertions. Prefer this over a bare `assert result.returncode != 0`.
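The `allowed_codes` idea can be illustrated with a minimal checker; the real `assert_kraken_success` lives in `lib.utils` and additionally dumps logs and the `tmp_path` hint on failure:

```python
from types import SimpleNamespace


def check_kraken_result(result, allowed_codes=(0,), context=""):
    """Minimal sketch of an assert_kraken_success-style helper.

    Exit codes per this guide: 0 success, 1 scenario failure, 2 critical
    alerts, 3+ health check / KubeVirt failures, -1 infrastructure error.
    """
    if result.returncode not in allowed_codes:
        raise AssertionError(
            f"kraken exited {result.returncode}, allowed {allowed_codes} ({context})"
        )


# Happy-path test: only exit code 0 is accepted.
check_kraken_result(SimpleNamespace(returncode=0))

# Alert-aware test: exit code 2 (critical alerts fired) is also acceptable.
check_kraken_result(SimpleNamespace(returncode=2), allowed_codes=(0, 2))
```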
## Running your new tests
```bash
pytest CI/tests_v2/ -v -m <scenario>
```
For debugging with logs and keeping failed namespaces:
```bash
pytest CI/tests_v2/ -v -m <scenario> --log-cli-level=DEBUG --keep-ns-on-fail
```
---
## Naming Conventions
Follow these conventions so the framework stays consistent as new scenarios are added.
### Quick Reference
| Element | Pattern | Example |
|---|---|---|
| Scenario folder | `scenarios/<snake_case>/` | `scenarios/node_disruption/` |
| Test file | `test_<scenario>.py` | `test_node_disruption.py` |
| Test class | `Test<CamelCase>(BaseScenarioTest)` | `TestNodeDisruption` |
| Pytest marker | `@pytest.mark.<scenario>` (matches folder) | `@pytest.mark.node_disruption` |
| Scenario YAML | `scenario_base.yaml` | — |
| Workload YAML | `resource.yaml` | — |
| Extra YAMLs | `<descriptive_name>.yaml` | `nginx_http.yaml` |
| Lib modules | `lib/<concern>.py` | `lib/deploy.py` |
| Public fixtures | `<verb>_<noun>` or `<noun>` | `run_kraken`, `test_namespace` |
| Private/autouse fixtures | `_<descriptive>` | `_cleanup_stale_namespaces` |
| Assertion helpers | `assert_<condition>` | `assert_pod_count_unchanged` |
| Query helpers | `get_<resource>` or `find_<resource>_by_<criteria>` | `get_pods_list`, `find_network_policy_by_prefix` |
| Env var overrides | `KRKN_TEST_<NAME>` | `KRKN_TEST_READINESS_TIMEOUT` |
### Folders
- One folder per scenario under `scenarios/`. The folder name is `snake_case` and must match the `SCENARIO_NAME` class attribute in the test.
- Shared framework code lives in `lib/`. Each module covers a single concern (`k8s`, `namespace`, `deploy`, `kraken`, `utils`, `base`, `preflight`).
- Do **not** add scenario-specific code to `lib/`; keep it in the scenario folder as module-level helpers.
### Files
- Test files: `test_<scenario>.py`. This is required for pytest discovery (`test_*.py`).
- Workload manifests: always `resource.yaml`. If a scenario needs additional K8s resources (e.g. a Service for traffic testing), use a descriptive name like `nginx_http.yaml`.
- Scenario config: always `scenario_base.yaml`. This is the template that `load_and_patch_scenario` loads and patches.
### Classes
- One test class per file: `Test<CamelCase>` extending `BaseScenarioTest`.
- The class name must be the PascalCase equivalent of the folder name (e.g. `pod_disruption` -> `TestPodDisruption`).
### Test Methods
- Prefix: `test_` (pytest requirement).
- Use descriptive names that convey **what is being verified**, not implementation details.
- Good: `test_pod_crash_and_recovery`, `test_traffic_blocked_during_outage`, `test_invalid_scenario_fails`.
- Avoid: `test_run_1`, `test_scenario`, `test_it_works`.
### Fixtures
- **Public fixtures** (intended for use in tests): use `<verb>_<noun>` or plain `<noun>`. Examples: `run_kraken`, `deploy_workload`, `test_namespace`, `kubectl`.
- **Private/autouse fixtures** (framework internals): prefix with `_`. Examples: `_kube_config_loaded`, `_preflight_checks`, `_inject_common_fixtures`.
- K8s client fixtures use the `k8s_` prefix: `k8s_core`, `k8s_apps`, `k8s_networking`, `k8s_client`.
### Helpers and Utilities
- **Assertions**: `assert_<what_is_expected>`. Always raise `AssertionError` with a message that includes the namespace.
- **K8s queries**: `get_<resource>_list` for direct API calls, `find_<resource>_by_<criteria>` for filtered lookups.
- **Private helpers**: prefix with `_` for module-internal functions (e.g. `_pods`, `_policies`, `_get_nested`).
### Constants and Environment Variables
- Timeout constants: `UPPER_CASE` in `lib/base.py`. Each is overridable via an env var prefixed `KRKN_TEST_`.
- Feature flags: `KRKN_TEST_DRY_RUN`, `KRKN_TEST_COVERAGE`. Always use the `KRKN_TEST_` prefix so all tunables are discoverable with `grep KRKN_TEST_`.
### Markers
- Every test class gets `@pytest.mark.functional` (framework-wide) and `@pytest.mark.<scenario>` (scenario-specific).
- The scenario marker name matches the folder name exactly.
- Behavioral modifiers use plain descriptive names: `no_workload`, `order`.
- Register all custom markers in `pytest.ini` to avoid warnings.
## Adding Dependencies
- **Runtime (Kraken needs it)**: Add to the **root** `requirements.txt`. Pin a version (e.g. `package==1.2.3` or `package>=1.2,<2`).
- **Test-only (only CI/tests_v2 needs it)**: Add to **`CI/tests_v2/requirements.txt`**. Pin a version there as well.
- After changing either file, run `make setup` (or `make -f CI/tests_v2/Makefile setup`) from the repo root to verify both files install cleanly together.

CI/tests_v2/Makefile Normal file

@@ -0,0 +1,97 @@
# CI/tests_v2 functional tests - single entry point.
# Run from repo root: make -f CI/tests_v2/Makefile <target>
# Or from CI/tests_v2: make <target> (REPO_ROOT is resolved automatically).
# Resolve repo root: go to Makefile dir then up two levels (CI/tests_v2 -> repo root)
REPO_ROOT := $(shell cd "$(dir $(firstword $(MAKEFILE_LIST)))" && cd ../.. && pwd)
VENV := $(REPO_ROOT)/venv
PYTHON := $(VENV)/bin/python
PIP := $(VENV)/bin/pip
CLUSTER_NAME ?= ci-krkn
TESTS_DIR := $(REPO_ROOT)/CI/tests_v2
.PHONY: setup preflight test test-fast test-debug test-scenario test-dry-run clean help
help:
@echo "CI/tests_v2 functional tests - usage: make [target]"
@echo ""
@echo "Targets:"
@echo " setup Create venv (if missing), install Python deps, create KinD cluster (kind-config-dev.yml)."
@echo " Run once before first test. Override cluster config: KIND_CONFIG=path make setup"
@echo ""
@echo " preflight Check Python 3.9+, kind, kubectl, Docker, cluster reachability, test deps."
@echo " Invoked automatically by test targets; run standalone to validate environment."
@echo ""
@echo " test Full run: retries (2), timeout 300s, HTML report, JUnit XML, coverage."
@echo " Use for CI or final verification. Output: report.html, results.xml"
@echo ""
@echo " test-fast Quick run: no retries, 120s timeout, no report. For fast local iteration."
@echo ""
@echo " test-debug Debug run: verbose (-s), keep failed namespaces (--keep-ns-on-fail), DEBUG logging."
@echo " Use when investigating failures; inspect kept namespaces with kubectl."
@echo ""
@echo " test-scenario Run only one scenario. Requires SCENARIO=<marker>."
@echo " Example: make test-scenario SCENARIO=pod_disruption"
@echo ""
@echo " test-dry-run Validate scenario plumbing only (no Kraken execution). Sets KRKN_TEST_DRY_RUN=1."
@echo ""
@echo " clean Delete KinD cluster $(CLUSTER_NAME) and remove report.html, results.xml."
@echo ""
@echo " help Show this help."
@echo ""
@echo "Run from repo root: make -f CI/tests_v2/Makefile <target>"
@echo "Or from CI/tests_v2: make <target>"
setup: $(VENV)/.installed
@echo "Running cluster setup..."
$(MAKE) -f $(TESTS_DIR)/Makefile preflight
cd $(REPO_ROOT) && ./CI/tests_v2/setup_env.sh
@echo "Setup complete. Run 'make test' or 'make -f CI/tests_v2/Makefile test' from repo root."
$(VENV)/.installed: $(REPO_ROOT)/requirements.txt $(TESTS_DIR)/requirements.txt
@if [ ! -d "$(VENV)" ]; then python3 -m venv $(VENV); echo "Created venv at $(VENV)"; fi
$(PYTHON) -m pip install -q --upgrade pip
# Root = Kraken runtime; tests_v2 = test-only plugins; both required for functional tests.
$(PIP) install -q -r $(REPO_ROOT)/requirements.txt
$(PIP) install -q -r $(TESTS_DIR)/requirements.txt
@touch $(VENV)/.installed
@echo "Python deps installed."
preflight:
@echo "Preflight: checking Python, tools, and cluster..."
@command -v python3 >/dev/null 2>&1 || { echo "Error: python3 not found."; exit 1; }
@python3 -c "import sys; exit(0 if sys.version_info >= (3, 9) else 1)" || { echo "Error: Python 3.9+ required."; exit 1; }
@command -v kind >/dev/null 2>&1 || { echo "Error: kind not installed."; exit 1; }
@command -v kubectl >/dev/null 2>&1 || { echo "Error: kubectl not installed."; exit 1; }
@docker info >/dev/null 2>&1 || { echo "Error: Docker not running (required for KinD)."; exit 1; }
@if kind get clusters 2>/dev/null | grep -qx "$(CLUSTER_NAME)"; then \
kubectl cluster-info >/dev/null 2>&1 || { echo "Error: Cluster $(CLUSTER_NAME) exists but cluster-info failed."; exit 1; }; \
else \
echo "Note: Cluster $(CLUSTER_NAME) not found. Run 'make setup' to create it."; \
fi
@$(PYTHON) -c "import pytest_rerunfailures, pytest_html, pytest_timeout, pytest_order" 2>/dev/null || \
{ echo "Error: Install test deps with 'make setup' or pip install -r CI/tests_v2/requirements.txt"; exit 1; }
@echo "Preflight OK."
test: preflight
cd $(REPO_ROOT) && KRKN_TEST_COVERAGE=1 $(PYTHON) -m pytest $(TESTS_DIR)/ -v --timeout=300 --reruns=2 --reruns-delay=10 \
--html=$(TESTS_DIR)/report.html -n auto --junitxml=$(TESTS_DIR)/results.xml
test-fast: preflight
cd $(REPO_ROOT) && $(PYTHON) -m pytest $(TESTS_DIR)/ -v -p no:rerunfailures -n auto --timeout=120
test-debug: preflight
cd $(REPO_ROOT) && $(PYTHON) -m pytest $(TESTS_DIR)/ -v -s -p no:rerunfailures --timeout=300 \
--keep-ns-on-fail --log-cli-level=DEBUG
test-scenario: preflight
@if [ -z "$(SCENARIO)" ]; then echo "Error: set SCENARIO=pod_disruption (or application_outage, etc.)"; exit 1; fi
cd $(REPO_ROOT) && $(PYTHON) -m pytest $(TESTS_DIR)/ -v -m "$(SCENARIO)" --timeout=300 --reruns=2 --reruns-delay=10
test-dry-run: preflight
cd $(REPO_ROOT) && KRKN_TEST_DRY_RUN=1 $(PYTHON) -m pytest $(TESTS_DIR)/ -v
clean:
@kind delete cluster --name $(CLUSTER_NAME) 2>/dev/null || true
@rm -f $(TESTS_DIR)/report.html $(TESTS_DIR)/results.xml
@echo "Cleaned cluster and report artifacts."

CI/tests_v2/README.md Normal file

@@ -0,0 +1,198 @@
# Pytest Functional Tests (tests_v2)
This directory contains a pytest-based functional test framework that runs **alongside** the existing bash tests in `CI/tests/`. It covers the **pod disruption** and **application outage** scenarios with proper assertions, retries, and reporting.
Each test runs in its **own ephemeral Kubernetes namespace** (`krkn-test-<uuid>`). Before the test, the framework creates the namespace, deploys the target workload, and waits for pods to be ready. After the test, the namespace is deleted (cascading all resources). **You do not need to deploy any workloads manually.**
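The naming scheme above can be sketched in a few lines (a hypothetical helper for illustration; the framework's actual namespace lifecycle lives in `lib/namespace.py`):

```python
import re
import uuid

def make_test_namespace() -> str:
    # Hypothetical helper mirroring the "krkn-test-<uuid>" scheme described
    # above; a short uuid suffix keeps names unique and parallel-safe.
    return f"krkn-test-{uuid.uuid4().hex[:8]}"

ns = make_test_namespace()
assert re.fullmatch(r"krkn-test-[0-9a-f]{8}", ns)
```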
## Prerequisites
Without a cluster, tests that need one will **skip** with a clear message (e.g. *"Could not load kube config"*).
- **KinD cluster** (or any Kubernetes cluster) running with `kubectl` configured (e.g. `KUBECONFIG` or default `~/.kube/config`).
- **Python 3.9+** and main repo deps: `pip install -r requirements.txt`.
### Supported clusters
- **KinD** (recommended): Use `make -f CI/tests_v2/Makefile setup` from the repo root. Fastest for local dev; uses a 2-node dev config by default. Override with `KIND_CONFIG=/path/to/kind-config.yml` for a larger cluster.
- **Minikube**: Should work; ensure `kubectl` context is set. Not tested in CI.
- **Remote/cloud cluster**: Tests create and delete namespaces; use with caution. Use `--require-kind` to avoid accidentally running against production (tests will skip unless context is kind/minikube).
### Setting up the cluster
**Option A: Use the setup script (recommended)**
From the repository root, with `kind` and `kubectl` installed:
```bash
# Create KinD cluster (defaults to CI/tests_v2/kind-config-dev.yml; override with KIND_CONFIG=...)
./CI/tests_v2/setup_env.sh
```
Then in the same shell (or after `export KUBECONFIG=~/.kube/config` in another terminal), activate your venv and install Python deps:
```bash
python3 -m venv venv
source venv/bin/activate # or: source venv/Scripts/activate on Windows
pip install -r requirements.txt
pip install -r CI/tests_v2/requirements.txt
```
**Option B: Manual setup**
1. Install [kind](https://kind.sigs.k8s.io/docs/user/quick-start/) and [kubectl](https://kubernetes.io/docs/tasks/tools/).
2. Create a cluster (from repo root):
```bash
kind create cluster --name kind --config kind-config.yml
```
3. Wait for the cluster:
```bash
kubectl wait --for=condition=Ready nodes --all --timeout=120s
```
4. Create a virtualenv, activate it, and install dependencies (as in Option A).
5. Run tests from repo root: `pytest CI/tests_v2/ -v ...`
## Install test dependencies
From the repository root:
```bash
pip install -r CI/tests_v2/requirements.txt
```
This adds `pytest-rerunfailures`, `pytest-html`, `pytest-timeout`, and `pytest-order` (pytest and coverage come from the main `requirements.txt`).
## Dependency Management
Dependencies are split into two files:
- **Root `requirements.txt`** — Kraken runtime (cloud SDKs, Kubernetes client, krkn-lib, pytest, coverage, etc.). Required to run Kraken.
- **`CI/tests_v2/requirements.txt`** — Test-only pytest plugins (rerunfailures, html, timeout, order, xdist). Not needed by Kraken itself.
**Rule of thumb:** If Kraken needs it at runtime, add to root. If only the functional tests need it, add to `CI/tests_v2/requirements.txt`.
Running `make -f CI/tests_v2/Makefile setup` (or `make setup` from `CI/tests_v2`) creates the venv and installs **both** files automatically; you do not need to install them separately. The Makefile re-installs when either file changes (via the `.installed` sentinel).
## Run tests
All commands below are from the **repository root**.
### Basic run (with retries and HTML report)
```bash
pytest CI/tests_v2/ -v --timeout=300 --reruns=2 --reruns-delay=10 --html=CI/tests_v2/report.html --junitxml=CI/tests_v2/results.xml
```
- Failed tests are **retried up to 2 times** with a 10s delay (configurable in `CI/tests_v2/pytest.ini`).
- Each test has a **5-minute timeout**.
- Open `CI/tests_v2/report.html` in a browser for a detailed report.
### Run in parallel (faster suite)
```bash
pytest CI/tests_v2/ -v -n 4 --timeout=300
```
Ephemeral namespaces make tests parallel-safe; use `-n` with the number of workers (e.g. 4).
### Run without retries (for debugging)
```bash
pytest CI/tests_v2/ -v -p no:rerunfailures
```
### Run with coverage
```bash
python -m coverage run -m pytest CI/tests_v2/ -v
python -m coverage report
```
To append to existing coverage from unit tests, ensure coverage was started with `coverage run -a` for earlier runs, or run the full test suite in one go.
### Run only pod disruption tests
```bash
pytest CI/tests_v2/ -v -m pod_disruption
```
### Run only application outage tests
```bash
pytest CI/tests_v2/ -v -m application_outage
```
### Run with verbose output and no capture
```bash
pytest CI/tests_v2/ -v -s
```
### Keep failed test namespaces for debugging
When a test fails, its ephemeral namespace is normally deleted. To **keep** the namespace so you can inspect pods, logs, and network policies:
```bash
pytest CI/tests_v2/ -v --keep-ns-on-fail
```
On failure, the namespace name is printed (e.g. `[keep-ns-on-fail] Keeping namespace krkn-test-a1b2c3d4 for debugging`). Use `kubectl get pods -n krkn-test-a1b2c3d4` (and similar) to debug, then delete the namespace manually when done.
### Logging and cluster options
- **Structured logging**: Use `--log-cli-level=DEBUG` to see namespace creation, workload deploy, and readiness in the console. Use `--log-file=test.log` to capture logs to a file.
- **Require dev cluster**: To avoid running against the wrong cluster, use `--require-kind`. Tests will skip unless the current kube context cluster name contains "kind" or "minikube".
- **Stale namespace cleanup**: At session start, namespaces matching `krkn-test-*` that are older than 30 minutes are deleted (e.g. from a previous crashed run).
- **Timeout overrides**: Set env vars to tune timeouts (e.g. in CI): `KRKN_TEST_READINESS_TIMEOUT`, `KRKN_TEST_DEPLOY_TIMEOUT`, `KRKN_TEST_NS_CLEANUP_TIMEOUT`, `KRKN_TEST_POLICY_WAIT_TIMEOUT`, `KRKN_TEST_KRAKEN_PROC_WAIT_TIMEOUT`, `KRKN_TEST_TIMEOUT_BUDGET`.
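The stale-namespace sweep above boils down to an age predicate; a simplified stand-in (the real cleanup in `lib/namespace.py` reads creation timestamps from the Kubernetes API):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=30)

def is_stale(name: str, created_at: datetime, now: datetime) -> bool:
    # Only krkn-test-* namespaces older than 30 minutes qualify for deletion;
    # everything else (including fresh test namespaces) is left alone.
    return name.startswith("krkn-test-") and (now - created_at) > STALE_AFTER

now = datetime.now(timezone.utc)
assert is_stale("krkn-test-a1b2c3d4", now - timedelta(hours=1), now)
assert not is_stale("kube-system", now - timedelta(hours=1), now)
```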
## Architecture
- **Folder-per-scenario**: Each scenario lives under `scenarios/<scenario_name>/` with:
- **test_<scenario>.py** — Test class extending `BaseScenarioTest`; sets `WORKLOAD_MANIFEST`, `SCENARIO_NAME`, `SCENARIO_TYPE`, `NAMESPACE_KEY_PATH`, and optionally `OVERRIDES_KEY_PATH`.
- **resource.yaml** — Kubernetes resources (Deployment/Pod) for the scenario; namespace is patched at deploy time.
- **scenario_base.yaml** — Canonical Krkn scenario; the base class loads it, patches namespace (and overrides), and passes it to Kraken via `run_scenario()`. Optional extra YAMLs (e.g. `nginx_http.yaml` for application_outage) can live in the same folder.
- **lib/**: Shared framework — `lib/base.py` defines `BaseScenarioTest`, timeout constants (env-overridable), and scenario helpers (`load_and_patch_scenario`, `run_scenario`); `lib/utils.py` provides assertion and K8s helpers; `lib/k8s.py` provides K8s client fixtures; `lib/namespace.py` provides namespace lifecycle; `lib/deploy.py` provides `deploy_workload`, `wait_for_pods_running`, `wait_for_deployment_replicas`; `lib/kraken.py` provides `run_kraken`, `build_config` (using `CI/tests_v2/config/common_test_config.yaml`).
- **conftest.py**: Re-exports fixtures from the lib modules and defines `pytest_addoption`, logging, and `repo_root`.
- **Adding a new scenario**: Use the scaffold script (see [CONTRIBUTING_TESTS.md](CONTRIBUTING_TESTS.md)) to create `scenarios/<name>/` with test file, `resource.yaml`, and `scenario_base.yaml`, or copy an existing scenario folder and adapt.
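The key-path patching that `load_and_patch_scenario` performs can be illustrated standalone (a simplified re-implementation for clarity, not the framework code itself):

```python
def set_nested(obj, path, value):
    # Walk the path (mixed list indices and dict keys) to the parent,
    # then assign the final key -- e.g. NAMESPACE_KEY_PATH for pod disruption
    # is [0, "config", "namespace_pattern"].
    for key in path[:-1]:
        obj = obj[key]
    obj[path[-1]] = value

scenario = [{"config": {"namespace_pattern": "^default$"}}]
set_nested(scenario, [0, "config", "namespace_pattern"], "^krkn-test-a1b2c3d4$")
assert scenario[0]["config"]["namespace_pattern"] == "^krkn-test-a1b2c3d4$"
```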
## What is tested
Each test runs in an isolated ephemeral namespace; workloads are deployed automatically before the test and the namespace is deleted after (unless `--keep-ns-on-fail` is set and the test failed).
- **scenarios/pod_disruption/**
Pod disruption scenario. `resource.yaml` is a deployment with label `app=krkn-pod-disruption-target`; `scenario_base.yaml` is loaded and `namespace_pattern` is patched to the test namespace. The test:
1. Records baseline pod UIDs and restart counts.
2. Runs Kraken with the pod disruption scenario.
3. Asserts that chaos had an effect (UIDs changed or restart count increased).
4. Waits for pods to be Running and all containers Ready.
5. Asserts pod count is unchanged and all pods are healthy.
- **scenarios/application_outage/**
Application outage scenario (block Ingress/Egress to target pods, then restore). `resource.yaml` is the main workload (outage pod); `scenario_base.yaml` is loaded and patched with namespace (and duration/block as needed). Optional `nginx_http.yaml` is used by the traffic test. Tests include:
- **test_app_outage_block_restore_and_variants**: Happy path with default, exclude_label, and block variants (Ingress, Egress, both); Krkn exit 0, pods still Running/Ready.
- **test_network_policy_created_then_deleted**: Policy with prefix `krkn-deny-` appears during run and is gone after.
- **test_traffic_blocked_during_outage** (disabled, planned): Deploys nginx with label `scenario=outage`, port-forwards; during outage curl fails, after run curl succeeds.
- **test_invalid_scenario_fails**: Invalid scenario file (missing `application_outage` key) causes Kraken to exit non-zero.
- **test_bad_namespace_fails**: Scenario targeting a non-existent namespace causes Kraken to exit non-zero.
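The "chaos had an effect" assertion in the pod disruption test (step 3 above) amounts to comparing pod identity and restart counts before and after the run; a hedged sketch, with names and the snapshot shape hypothetical:

```python
def chaos_had_effect(before: dict, after: dict) -> bool:
    # before/after map pod name -> (uid, restart_count); hypothetical shape.
    # Chaos counts as effective if any pod was recreated (UID set changed)
    # or any container restarted (total restart count increased).
    uids_before = {uid for uid, _ in before.values()}
    uids_after = {uid for uid, _ in after.values()}

    def total_restarts(snapshot):
        return sum(restarts for _, restarts in snapshot.values())

    return uids_before != uids_after or total_restarts(after) > total_restarts(before)

before = {"web-1": ("uid-a", 0)}
assert chaos_had_effect(before, {"web-2": ("uid-b", 0)})      # pod recreated
assert not chaos_had_effect(before, {"web-1": ("uid-a", 0)})  # nothing changed
```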
## Configuration
- **pytest.ini**: Markers (`functional`, `pod_disruption`, `application_outage`, `no_workload`). Use `--timeout=300`, `--reruns=2`, `--reruns-delay=10` on the command line for full runs.
- **conftest.py**: Re-exports fixtures from `lib/k8s.py`, `lib/namespace.py`, `lib/deploy.py`, `lib/kraken.py` (e.g. `test_namespace`, `deploy_workload`, `k8s_core`, `wait_for_pods_running`, `run_kraken`, `build_config`). Configs are built from `CI/tests_v2/config/common_test_config.yaml` with monitoring disabled for local runs. Timeout constants in `lib/base.py` can be overridden via env vars.
- **Cluster access**: Reads and applies use the Kubernetes Python client; `kubectl` is still used for `port-forward` and for running Kraken.
- **utils.py**: Pod/network policy helpers and assertion helpers (`assert_all_pods_running_and_ready`, `assert_pod_count_unchanged`, `assert_kraken_success`, `assert_kraken_failure`, `patch_namespace_in_docs`).
## Relationship to existing CI
- The **existing** bash tests in `CI/tests/` and `CI/run.sh` are **unchanged**. They continue to run as before in GitHub Actions.
- This framework is **additive**. To run it in CI later, add a separate job or step that runs `pytest CI/tests_v2/ ...` from the repo root.
## Troubleshooting
- **`pytest.skip: Could not load kube config`** — No cluster or bad KUBECONFIG. Run `make -f CI/tests_v2/Makefile setup` (or `make setup` from `CI/tests_v2`) or check `kubectl cluster-info`.
- **KinD cluster creation hangs** — Docker is not running. Start Docker Desktop or run `systemctl start docker`.
- **`Bind for 0.0.0.0:9090 failed: port is already allocated`** — Another process (e.g. Prometheus) is using the port. The default dev config (`kind-config-dev.yml`) no longer maps host ports; if you use `KIND_CONFIG=kind-config.yml` or a custom config with `extraPortMappings`, free the port or switch to `kind-config-dev.yml`.
- **`TimeoutError: Pods did not become ready`** — Slow image pull or node resource limits. Increase `KRKN_TEST_READINESS_TIMEOUT` or check node resources.
- **`ModuleNotFoundError: pytest_rerunfailures`** — Missing test deps. Run `pip install -r CI/tests_v2/requirements.txt` (or `make setup`).
- **Stale `krkn-test-*` namespaces** — Left over from a previous crashed run. They are auto-cleaned at session start (older than 30 min). To remove cluster and reports: `make -f CI/tests_v2/Makefile clean`.
- **Wrong cluster targeted** — Multiple kube contexts. Use `--require-kind` to skip unless context is kind/minikube, or set context explicitly: `kubectl config use-context kind-ci-krkn`.
- **`OSError: [Errno 48] Address already in use` when running tests in parallel** — Kraken normally starts an HTTP status server on port 8081. With `-n auto` (pytest-xdist), multiple Kraken processes would all try to bind to 8081. The test framework disables this server (`publish_kraken_status: False`) in the generated config, so parallel runs should not hit this. If you see it, ensure you're using the framework's `build_config` and not a config that has `publish_kraken_status: True`.


@@ -0,0 +1,74 @@
kraken:
distribution: kubernetes # Distribution can be kubernetes or openshift.
kubeconfig_path: ~/.kube/config # Path to kubeconfig.
exit_on_failure: False # Exit when a post action scenario fails.
publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
signal_address: 0.0.0.0 # Signal listening address
port: 8081 # Signal port
auto_rollback: True # Enable auto rollback for scenarios.
rollback_versions_directory: # Directory to store rollback version files. If empty, a secure temp directory is created automatically.
chaos_scenarios: # List of policies/chaos scenarios to load.
- $scenario_type: # List of chaos pod scenarios to load.
- $scenario_file
cerberus:
cerberus_enabled: False # Enable it when cerberus is previously installed.
cerberus_url: # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal.
performance_monitoring:
capture_metrics: False
metrics_profile_path: config/metrics-aggregated.yaml
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
uuid: # uuid for the run is generated by default if not set.
enable_alerts: True # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
enable_metrics: True
alert_profile: config/alerts.yaml # Path or URL to alert profile with the prometheus queries
metrics_profile: config/metrics-report.yaml
check_critical_alerts: True # When enabled, checks the cluster for critical alerts firing after chaos.
tunings:
wait_duration: 6 # Duration to wait between each chaos scenario.
iterations: 1 # Number of times to execute the scenarios.
daemon_mode: False # When True, iterations are ignored and kraken injects chaos forever.
telemetry:
enabled: False # enable/disables the telemetry collection feature
api_url: https://yvnn4rfoi7.execute-api.us-west-2.amazonaws.com/test #telemetry service endpoint
username: $TELEMETRY_USERNAME # telemetry service username
password: $TELEMETRY_PASSWORD # telemetry service password
prometheus_namespace: 'monitoring' # prometheus namespace
prometheus_pod_name: 'prometheus-kind-prometheus-kube-prome-prometheus-0' # prometheus pod_name
prometheus_container_name: 'prometheus'
prometheus_backup: True # enables/disables prometheus data collection
full_prometheus_backup: False # if set to False, only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
archive_path: # local path where the archive files will be temporarily stored. If empty, a secure temp directory is created automatically.
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 10000 # the size of each prometheus data archive file in KB. The lower the size, the higher the number of archives produced and uploaded in parallel.
logs_backup: True
logs_filter_patterns:
- "(\\w{3}\\s\\d{1,2}\\s\\d{2}:\\d{2}:\\d{2}\\.\\d+).+" # Sep 9 11:20:36.123425532
- "kinit (\\d+/\\d+/\\d+\\s\\d{2}:\\d{2}:\\d{2})\\s+" # kinit 2023/09/15 11:20:36 log
- "(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d+Z).+" # 2023-09-15T11:20:36.123425532Z log
oc_cli_path: /usr/bin/oc # optional, if not specified it will be searched for in $PATH
events_backup: True # enables/disables cluster events collection
telemetry_group: "funtests"
elastic:
enable_elastic: False
verify_certs: False
elastic_url: "https://192.168.39.196" # URL of the Elasticsearch server used to track results; telemetry details are posted when both the url and index are set
elastic_port: 32766
username: "elastic"
password: "test"
metrics_index: "krkn-metrics"
alerts_index: "krkn-alerts"
telemetry_index: "krkn-telemetry"
health_checks: # Utilizing health check endpoints to observe application behavior during chaos injection.
interval: # Interval in seconds to perform health checks, default value is 2 seconds
config: # Provide list of health check configurations for applications
- url: # Provide application endpoint
bearer_token: # Bearer token for authentication if any
auth: # Provide authentication credentials (username, password) as a tuple if any, e.g. ("admin","secretpassword")
exit_on_failure: # If value is True exits when health check failed for application, values can be True/False

CI/tests_v2/conftest.py Normal file

@@ -0,0 +1,67 @@
"""
Shared fixtures for pytest functional tests (CI/tests_v2).
Tests must be run from the repository root so run_kraken.py and config paths resolve.
"""
import logging
from pathlib import Path
import pytest
def pytest_addoption(parser):
parser.addoption(
"--keep-ns-on-fail",
action="store_true",
default=False,
help="Don't delete test namespaces on failure (for debugging)",
)
parser.addoption(
"--require-kind",
action="store_true",
default=False,
help="Skip tests unless current context is a known dev cluster (kind, minikube)",
)
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
outcome = yield
rep = outcome.get_result()
setattr(item, f"rep_{rep.when}", rep)
def _repo_root() -> Path:
"""Repository root (directory containing run_kraken.py and CI/)."""
return Path(__file__).resolve().parent.parent.parent
@pytest.fixture(scope="session")
def repo_root():
return _repo_root()
@pytest.fixture(scope="session", autouse=True)
def _configure_logging():
"""Set log format with timestamps for test runs."""
logging.basicConfig(
format="%(asctime)s %(levelname)s [%(name)s] %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
level=logging.INFO,
)
# Re-export fixtures from lib modules so pytest discovers them
from lib.deploy import deploy_workload, wait_for_pods_running # noqa: E402, F401
from lib.kraken import build_config, run_kraken, run_kraken_background # noqa: E402, F401
from lib.k8s import ( # noqa: E402, F401
_kube_config_loaded,
_log_cluster_context,
k8s_apps,
k8s_client,
k8s_core,
k8s_networking,
kubectl,
)
from lib.namespace import _cleanup_stale_namespaces, test_namespace # noqa: E402, F401
from lib.preflight import _preflight_checks # noqa: E402, F401


@@ -0,0 +1,8 @@
# Lean KinD config for local dev (faster than full 5-node). Use KIND_CONFIG to override.
# No extraPortMappings so setup works when 9090/30080 are in use (e.g. local Prometheus).
# For Prometheus/ES port mapping, use the repo root kind-config.yml.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker


@@ -0,0 +1,7 @@
# Shared framework for CI/tests_v2 functional tests.
# base: BaseScenarioTest, timeout constants
# utils: assertions, K8s helpers, patch_namespace_in_docs
# k8s: K8s client fixtures, cluster context checks
# namespace: test_namespace, stale namespace cleanup
# deploy: deploy_workload, wait_for_pods_running, wait_for_deployment_replicas
# kraken: run_kraken, run_kraken_background, build_config

CI/tests_v2/lib/base.py Normal file

@@ -0,0 +1,155 @@
"""
Base class for CI/tests_v2 scenario tests.
Encapsulates the shared lifecycle: ephemeral namespace, optional workload deploy, teardown.
"""
import copy
import logging
import os
import subprocess
from pathlib import Path
import pytest
import yaml
from lib.utils import load_scenario_base
logger = logging.getLogger(__name__)
def _get_nested(obj, path):
"""Walk path (list of keys/indices) and return the value. Supports list and dict."""
for key in path:
obj = obj[key]
return obj
def _set_nested(obj, path, value):
"""Walk path to the parent and set the last key to value."""
if not path:
return
parent_path, last_key = path[:-1], path[-1]
parent = obj
for key in parent_path:
parent = parent[key]
parent[last_key] = value
# Timeout constants (seconds). Override via env vars (e.g. KRKN_TEST_READINESS_TIMEOUT).
# Coordinate with pytest-timeout budget (e.g. 300s).
TIMEOUT_BUDGET = int(os.environ.get("KRKN_TEST_TIMEOUT_BUDGET", "300"))
DEPLOY_TIMEOUT = int(os.environ.get("KRKN_TEST_DEPLOY_TIMEOUT", "90"))
READINESS_TIMEOUT = int(os.environ.get("KRKN_TEST_READINESS_TIMEOUT", "90"))
NS_CLEANUP_TIMEOUT = int(os.environ.get("KRKN_TEST_NS_CLEANUP_TIMEOUT", "60"))
POLICY_WAIT_TIMEOUT = int(os.environ.get("KRKN_TEST_POLICY_WAIT_TIMEOUT", "30"))
KRAKEN_PROC_WAIT_TIMEOUT = int(os.environ.get("KRKN_TEST_KRAKEN_PROC_WAIT_TIMEOUT", "60"))
class BaseScenarioTest:
"""
Base class for scenario tests. Subclasses set:
- WORKLOAD_MANIFEST: path (str), or callable(namespace) -> YAML str for inline manifest
- WORKLOAD_IS_PATH: True if WORKLOAD_MANIFEST is a file path, False if inline YAML
- LABEL_SELECTOR: label selector for pods to wait on (e.g. "app=my-target")
- SCENARIO_NAME: e.g. "pod_disruption", "application_outage"
- SCENARIO_TYPE: e.g. "pod_disruption_scenarios", "application_outages_scenarios"
- NAMESPACE_KEY_PATH: path to namespace field, e.g. [0, "config", "namespace_pattern"] or ["application_outage", "namespace"]
- NAMESPACE_IS_REGEX: True to wrap namespace in ^...$
- OVERRIDES_KEY_PATH: path to dict for **overrides (e.g. ["application_outage"]), or [] if none
"""
WORKLOAD_MANIFEST = None
WORKLOAD_IS_PATH = True
LABEL_SELECTOR = None
SCENARIO_NAME = ""
SCENARIO_TYPE = ""
NAMESPACE_KEY_PATH = []
NAMESPACE_IS_REGEX = False
OVERRIDES_KEY_PATH = []
@pytest.fixture(autouse=True)
def _inject_common_fixtures(
self,
repo_root,
tmp_path,
build_config,
run_kraken,
run_kraken_background,
k8s_core,
k8s_apps,
k8s_networking,
k8s_client,
):
"""Inject common fixtures onto self so test methods don't need to declare them."""
self.repo_root = repo_root
self.tmp_path = tmp_path
self.build_config = build_config
self.run_kraken = run_kraken
self.run_kraken_background = run_kraken_background
self.k8s_core = k8s_core
self.k8s_apps = k8s_apps
self.k8s_networking = k8s_networking
self.k8s_client = k8s_client
yield
@pytest.fixture(autouse=True)
def _setup_workload(self, request, repo_root):
if "no_workload" in request.keywords:
request.instance.ns = request.getfixturevalue("test_namespace")
logger.debug("no_workload marker: skipping workload deploy, ns=%s", request.instance.ns)
yield
return
deploy = request.getfixturevalue("deploy_workload")
test_namespace = request.getfixturevalue("test_namespace")
manifest = self.WORKLOAD_MANIFEST
if callable(manifest):
manifest = manifest(test_namespace)
is_path = False
logger.info("Deploying inline workload in ns=%s, label_selector=%s", test_namespace, self.LABEL_SELECTOR)
else:
is_path = self.WORKLOAD_IS_PATH
if is_path and manifest and not Path(manifest).is_absolute():
manifest = repo_root / manifest
logger.info("Deploying workload from %s in ns=%s, label_selector=%s", manifest, test_namespace, self.LABEL_SELECTOR)
ns = deploy(manifest, self.LABEL_SELECTOR, is_path=is_path, timeout=DEPLOY_TIMEOUT)
request.instance.ns = ns
yield
def load_and_patch_scenario(self, repo_root, namespace, **overrides):
"""Load scenario_base.yaml and patch namespace (and overrides). Returns the scenario structure."""
scenario = copy.deepcopy(load_scenario_base(repo_root, self.SCENARIO_NAME))
ns_value = f"^{namespace}$" if self.NAMESPACE_IS_REGEX else namespace
if self.NAMESPACE_KEY_PATH:
_set_nested(scenario, self.NAMESPACE_KEY_PATH, ns_value)
if overrides and self.OVERRIDES_KEY_PATH:
target = _get_nested(scenario, self.OVERRIDES_KEY_PATH)
for key, value in overrides.items():
target[key] = value
return scenario
def write_scenario(self, tmp_path, scenario_data, suffix=""):
"""Write scenario data to a YAML file in tmp_path. Returns the path."""
filename = f"{self.SCENARIO_NAME}_scenario{suffix}.yaml"
path = tmp_path / filename
path.write_text(yaml.dump(scenario_data, default_flow_style=False, sort_keys=False))
return path
def run_scenario(self, tmp_path, namespace, *, overrides=None, config_filename=None):
"""Load, patch, write scenario; build config; run Kraken. Returns CompletedProcess."""
scenario = self.load_and_patch_scenario(self.repo_root, namespace, **(overrides or {}))
scenario_path = self.write_scenario(tmp_path, scenario)
config_path = self.build_config(
self.SCENARIO_TYPE,
str(scenario_path),
filename=config_filename or "test_config.yaml",
)
if os.environ.get("KRKN_TEST_DRY_RUN", "0") == "1":
logger.info(
"[dry-run] Would run Kraken with config=%s, scenario=%s",
config_path,
scenario_path,
)
return subprocess.CompletedProcess(
args=[], returncode=0, stdout="[dry-run] skipped", stderr=""
)
return self.run_kraken(config_path)

CI/tests_v2/lib/deploy.py Normal file

@@ -0,0 +1,145 @@
"""
Workload deploy and pod/deployment readiness fixtures for CI/tests_v2.
"""
import logging
import time
from pathlib import Path
import pytest
import yaml
from kubernetes import utils as k8s_utils
from lib.base import READINESS_TIMEOUT
from lib.utils import patch_namespace_in_docs
logger = logging.getLogger(__name__)
def wait_for_deployment_replicas(k8s_apps, namespace: str, name: str, timeout: int = 120) -> None:
"""
Poll until the deployment has ready_replicas >= spec.replicas.
Raises TimeoutError with diagnostic details on failure.
"""
deadline = time.monotonic() + timeout
last_dep = None
attempts = 0
while time.monotonic() < deadline:
try:
dep = k8s_apps.read_namespaced_deployment(name=name, namespace=namespace)
except Exception as e:
logger.debug("Deployment %s/%s poll attempt %s failed: %s", namespace, name, attempts, e)
time.sleep(2)
attempts += 1
continue
last_dep = dep
ready = dep.status.ready_replicas or 0
desired = dep.spec.replicas or 1
if ready >= desired:
logger.debug("Deployment %s/%s ready (%s/%s)", namespace, name, ready, desired)
return
logger.debug("Deployment %s/%s not ready yet: %s/%s", namespace, name, ready, desired)
time.sleep(2)
attempts += 1
diag = ""
if last_dep is not None and last_dep.status:
diag = f" ready_replicas={last_dep.status.ready_replicas}, desired={last_dep.spec.replicas}"
raise TimeoutError(
f"Deployment {namespace}/{name} did not become ready within {timeout}s.{diag}"
)
@pytest.fixture
def wait_for_pods_running(k8s_core):
"""
Poll until all matching pods are Running and all containers ready.
Uses exponential backoff: 1s, 2s, 4s, ... capped at 10s.
Raises TimeoutError with diagnostic details on failure.
"""
def _wait(namespace: str, label_selector: str, timeout: int = READINESS_TIMEOUT):
deadline = time.monotonic() + timeout
interval = 1.0
max_interval = 10.0
last_list = None
while time.monotonic() < deadline:
try:
pod_list = k8s_core.list_namespaced_pod(
namespace=namespace,
label_selector=label_selector,
)
except Exception:
time.sleep(min(interval, max_interval))
interval = min(interval * 2, max_interval)
continue
last_list = pod_list
items = pod_list.items or []
if not items:
time.sleep(min(interval, max_interval))
interval = min(interval * 2, max_interval)
continue
all_running = all(
(p.status and p.status.phase == "Running") for p in items
)
if not all_running:
time.sleep(min(interval, max_interval))
interval = min(interval * 2, max_interval)
continue
all_ready = True
for p in items:
if not p.status or not p.status.container_statuses:
all_ready = False
break
for cs in p.status.container_statuses:
if not getattr(cs, "ready", False):
all_ready = False
break
if all_ready:
return
time.sleep(min(interval, max_interval))
interval = min(interval * 2, max_interval)
diag = ""
if last_list and last_list.items:
p = last_list.items[0]
diag = f" e.g. pod {p.metadata.name}: phase={getattr(p.status, 'phase', None)}"
raise TimeoutError(
f"Pods in {namespace} with label {label_selector} did not become ready within {timeout}s.{diag}"
)
return _wait
@pytest.fixture(scope="function")
def deploy_workload(test_namespace, k8s_client, wait_for_pods_running, repo_root, tmp_path):
"""
Helper that applies a manifest into the test namespace and waits for pods.
Returns a callable: deploy(manifest_path_or_content, label_selector, *, is_path=True, timeout=READINESS_TIMEOUT)
which applies the manifest, waits for readiness, and returns the namespace name.
"""
def _deploy(manifest_path_or_content, label_selector, *, is_path=True, timeout=READINESS_TIMEOUT):
try:
if is_path:
path = Path(manifest_path_or_content)
if not path.is_absolute():
path = repo_root / path
with open(path) as f:
docs = list(yaml.safe_load_all(f))
else:
docs = list(yaml.safe_load_all(manifest_path_or_content))
docs = patch_namespace_in_docs(docs, test_namespace)
k8s_utils.create_from_yaml(
k8s_client,
yaml_objects=docs,
namespace=test_namespace,
)
except k8s_utils.FailToCreateError as e:
msgs = [str(exc) for exc in e.api_exceptions]
raise RuntimeError(f"Failed to create resources: {'; '.join(msgs)}") from e
logger.info("Workload applied in namespace=%s, waiting for pods with selector=%s", test_namespace, label_selector)
wait_for_pods_running(test_namespace, label_selector, timeout=timeout)
logger.info("Pods ready in namespace=%s", test_namespace)
return test_namespace
return _deploy

CI/tests_v2/lib/k8s.py Normal file

@@ -0,0 +1,88 @@
"""
Kubernetes client fixtures and cluster context checks for CI/tests_v2.
"""
import logging
import subprocess
from pathlib import Path
import pytest
from kubernetes import client, config
logger = logging.getLogger(__name__)
@pytest.fixture(scope="session")
def _kube_config_loaded():
"""Load kubeconfig once per session. Skips if cluster unreachable."""
try:
config.load_kube_config()
logger.info("Kube config loaded successfully")
except config.ConfigException as e:
logger.warning("Could not load kube config: %s", e)
pytest.skip(f"Could not load kube config (is a cluster running?): {e}")
@pytest.fixture(scope="session")
def k8s_core(_kube_config_loaded):
"""Kubernetes CoreV1Api for pods, etc. Uses default kubeconfig."""
return client.CoreV1Api()
@pytest.fixture(scope="session")
def k8s_networking(_kube_config_loaded):
"""Kubernetes NetworkingV1Api for network policies."""
return client.NetworkingV1Api()
@pytest.fixture(scope="session")
def k8s_client(_kube_config_loaded):
"""Kubernetes ApiClient for create_from_yaml and other generic API calls."""
return client.ApiClient()
@pytest.fixture(scope="session")
def k8s_apps(_kube_config_loaded):
"""Kubernetes AppsV1Api for deployment status polling."""
return client.AppsV1Api()
@pytest.fixture(scope="session", autouse=True)
def _log_cluster_context(request):
"""Log current cluster context at session start; skip if --require-kind and not a dev cluster."""
try:
contexts, active = config.list_kube_config_contexts()
except Exception as e:
logger.warning("Could not list kube config contexts: %s", e)
return
if not active:
return
context_name = active.get("name", "?")
cluster = (active.get("context") or {}).get("cluster", "?")
logger.info("Running tests against cluster: context=%s cluster=%s", context_name, cluster)
if not request.config.getoption("--require-kind", False):
return
cluster_lower = (cluster or "").lower()
if "kind" in cluster_lower or "minikube" in cluster_lower:
return
pytest.skip(
f"--require-kind: cluster '{cluster}' does not look like kind/minikube. "
"Switch the kube context to a dev cluster or drop the flag."
)
@pytest.fixture
def kubectl(repo_root):
"""Run kubectl with given args from repo root. Returns CompletedProcess."""
def run(args, timeout=120):
cmd = ["kubectl"] + (args if isinstance(args, list) else list(args))
return subprocess.run(
cmd,
cwd=repo_root,
capture_output=True,
text=True,
timeout=timeout,
)
return run
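
The `kubectl` fixture above returns a closure so each test can invoke the CLI tersely while the fixture owns the working directory and capture settings. A minimal standalone sketch of the same closure-over-`subprocess` pattern, substituting `sys.executable` for `kubectl` so the sketch runs without a cluster (the `make_runner` name is illustrative, not part of the framework):

```python
import subprocess
import sys

def make_runner(binary, cwd="."):
    """Return a callable that runs `binary` with extra args and captures output."""
    def run(args, timeout=30):
        cmd = [binary] + (args if isinstance(args, list) else list(args))
        # capture_output + text mirrors the kubectl fixture:
        # stdout/stderr come back as str on the CompletedProcess.
        return subprocess.run(
            cmd, cwd=cwd, capture_output=True, text=True, timeout=timeout
        )
    return run

run = make_runner(sys.executable)
result = run(["-c", "print('runner ok')"])
assert result.returncode == 0
assert "runner ok" in result.stdout
```

Because the fixture yields the inner function, tests simply call `kubectl(["get", "pods", "-n", ns])` and inspect the returned `CompletedProcess`.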

CI/tests_v2/lib/kraken.py

@@ -0,0 +1,94 @@
"""
Kraken execution and config building fixtures for CI/tests_v2.
"""
import os
import subprocess
import sys
from pathlib import Path
import pytest
import yaml
def _kraken_cmd(config_path: str, repo_root: Path):
"""Use the same Python as the test process so venv/.venv and coverage match."""
python = sys.executable
if os.environ.get("KRKN_TEST_COVERAGE", "0") == "1":
return [
python, "-m", "coverage", "run", "-a",
"run_kraken.py", "-c", str(config_path),
]
return [python, "run_kraken.py", "-c", str(config_path)]
@pytest.fixture
def run_kraken(repo_root):
"""Run Kraken with the given config path. Returns CompletedProcess. Default timeout 300s."""
def run(config_path, timeout=300, extra_args=None):
cmd = _kraken_cmd(config_path, repo_root)
if extra_args:
cmd.extend(extra_args)
return subprocess.run(
cmd,
cwd=repo_root,
capture_output=True,
text=True,
timeout=timeout,
)
return run
@pytest.fixture
def run_kraken_background(repo_root):
"""Start Kraken in background. Returns Popen. Call proc.terminate() or proc.wait() to stop."""
def start(config_path):
cmd = _kraken_cmd(config_path, repo_root)
return subprocess.Popen(
cmd,
cwd=repo_root,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
)
return start
@pytest.fixture
def build_config(repo_root, tmp_path):
"""
Build a Kraken config from tests_v2's common_test_config.yaml with scenario_type and scenario_file
substituted. Disables Prometheus/Elastic checks for local runs.
Returns the path to the written config file.
"""
common_path = repo_root / "CI" / "tests_v2" / "config" / "common_test_config.yaml"
def _build(scenario_type: str, scenario_file: str, filename: str = "test_config.yaml"):
content = common_path.read_text()
content = content.replace("$scenario_type", scenario_type)
content = content.replace("$scenario_file", scenario_file)
content = content.replace("$post_config", "")
config = yaml.safe_load(content)
if "kraken" in config:
# Disable status server so parallel test workers don't all bind to port 8081
config["kraken"]["publish_kraken_status"] = False
if "performance_monitoring" in config:
config["performance_monitoring"]["check_critical_alerts"] = False
config["performance_monitoring"]["enable_alerts"] = False
config["performance_monitoring"]["enable_metrics"] = False
if "elastic" in config:
config["elastic"]["enable_elastic"] = False
if "tunings" in config:
config["tunings"]["wait_duration"] = 1
out_path = tmp_path / filename
with open(out_path, "w") as f:
yaml.dump(config, f, default_flow_style=False, sort_keys=False)
return str(out_path)
return _build
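
`build_config` performs plain string substitution on the `$scenario_type` / `$scenario_file` / `$post_config` placeholders before parsing the result as YAML. A self-contained sketch of that templating step (the template fragment here is illustrative; the real file is `CI/tests_v2/config/common_test_config.yaml`):

```python
# Illustrative template fragment standing in for common_test_config.yaml.
template = (
    "kraken:\n"
    "  chaos_scenarios:\n"
    "    - $scenario_type:\n"
    "        - $scenario_file\n"
    "$post_config"
)

# Same substitution order as the _build helper: type, file, then post_config.
content = template.replace("$scenario_type", "application_outages_scenarios")
content = content.replace("$scenario_file", "/tmp/scenario.yaml")
content = content.replace("$post_config", "")

assert "$scenario_type" not in content
assert "application_outages_scenarios" in content
```

Substituting before `yaml.safe_load` keeps the common config a single file: each test stamps in its scenario type and generated scenario path, then the helper disables Prometheus/Elastic knobs on the parsed dict.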


@@ -0,0 +1,114 @@
"""
Namespace lifecycle fixtures for CI/tests_v2: create, delete, stale cleanup.
"""
import logging
import os
import time
import uuid
from datetime import datetime
import pytest
from kubernetes import client
from kubernetes.client.rest import ApiException
logger = logging.getLogger(__name__)
STALE_NS_AGE_MINUTES = 30
def _namespace_age_minutes(metadata) -> float:
"""Return age of namespace in minutes from its creation_timestamp."""
if not metadata or not metadata.creation_timestamp:
return 0.0
created = metadata.creation_timestamp
if hasattr(created, "timestamp"):
created_ts = created.timestamp()
else:
try:
dt = datetime.fromisoformat(created.replace("Z", "+00:00"))
created_ts = dt.timestamp()
except Exception:
return 0.0
return (time.time() - created_ts) / 60.0
def _wait_for_namespace_gone(k8s_core, name: str, timeout: int = 60):
"""Poll until the namespace no longer exists."""
deadline = time.monotonic() + timeout
while time.monotonic() < deadline:
try:
k8s_core.read_namespace(name=name)
except ApiException as e:
if e.status == 404:
return
raise
time.sleep(1)
raise TimeoutError(f"Namespace {name} did not disappear within {timeout}s")
@pytest.fixture(scope="function")
def test_namespace(request, k8s_core):
"""
Create an ephemeral namespace for the test. Deleted after the test unless
--keep-ns-on-fail is set and the test failed.
"""
name = f"krkn-test-{uuid.uuid4().hex[:8]}"
ns = client.V1Namespace(
metadata=client.V1ObjectMeta(
name=name,
labels={
"pod-security.kubernetes.io/audit": "privileged",
"pod-security.kubernetes.io/enforce": "privileged",
"pod-security.kubernetes.io/enforce-version": "v1.24",
"pod-security.kubernetes.io/warn": "privileged",
"security.openshift.io/scc.podSecurityLabelSync": "false",
},
)
)
k8s_core.create_namespace(body=ns)
logger.info("Created test namespace: %s", name)
yield name
keep_on_fail = request.config.getoption("--keep-ns-on-fail", False)
rep_call = getattr(request.node, "rep_call", None)
failed = rep_call is not None and rep_call.failed
if keep_on_fail and failed:
logger.info("[keep-ns-on-fail] Keeping namespace %s for debugging", name)
return
try:
k8s_core.delete_namespace(
name=name,
body=client.V1DeleteOptions(propagation_policy="Background"),
)
logger.debug("Scheduled background deletion for namespace: %s", name)
except Exception as e:
logger.warning("Failed to delete namespace %s: %s", name, e)
@pytest.fixture(scope="session", autouse=True)
def _cleanup_stale_namespaces(k8s_core):
"""Delete krkn-test-* namespaces older than STALE_NS_AGE_MINUTES at session start."""
if os.environ.get("PYTEST_XDIST_WORKER"):
return
try:
namespaces = k8s_core.list_namespace()
except Exception as e:
logger.warning("Could not list namespaces for stale cleanup: %s", e)
return
for ns in namespaces.items or []:
name = ns.metadata.name if ns.metadata else ""
if not name.startswith("krkn-test-"):
continue
if _namespace_age_minutes(ns.metadata) <= STALE_NS_AGE_MINUTES:
continue
try:
logger.warning("Deleting stale namespace: %s", name)
k8s_core.delete_namespace(
name=name,
body=client.V1DeleteOptions(propagation_policy="Background"),
)
except Exception as e:
logger.warning("Failed to delete stale namespace %s: %s", name, e)
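
The `test_namespace` fixture above reads `request.node.rep_call` during teardown, which pytest only provides if a conftest hook attaches the per-phase report to the item. This is the standard pattern from the pytest documentation; the sketch below assumes it lives in the tests' `conftest.py`:

```python
import pytest

# Attach each phase's TestReport to the test item so fixtures can inspect
# request.node.rep_setup / rep_call / rep_teardown in their teardown code.
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    setattr(item, f"rep_{rep.when}", rep)
```

Without this hook, `getattr(request.node, "rep_call", None)` always returns `None` and `--keep-ns-on-fail` can never trigger.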


@@ -0,0 +1,48 @@
"""
Preflight checks for CI/tests_v2: cluster reachability and test deps at session start.
"""
import logging
import subprocess
import pytest
logger = logging.getLogger(__name__)
@pytest.fixture(scope="session", autouse=True)
def _preflight_checks(repo_root):
"""
Verify cluster is reachable and test deps are importable at session start.
Skips the session if cluster-info fails or required plugins are missing.
"""
# Check test deps (pytest plugins)
try:
import pytest_rerunfailures # noqa: F401
import pytest_html # noqa: F401
import pytest_timeout # noqa: F401
import pytest_order # noqa: F401
import xdist # noqa: F401
except ImportError as e:
pytest.skip(
f"Missing test dependency: {e}. "
"Run: pip install -r CI/tests_v2/requirements.txt"
)
# Check cluster reachable and log server URL
result = subprocess.run(
["kubectl", "cluster-info"],
cwd=repo_root,
capture_output=True,
text=True,
timeout=10,
)
if result.returncode != 0:
pytest.skip(
f"Cluster not reachable (kubectl cluster-info failed). "
f"Start a cluster (e.g. make setup) or check KUBECONFIG. stderr: {result.stderr or '(none)'}"
)
# Log first line of cluster-info (server URL) for debugging
if result.stdout:
first_line = result.stdout.strip().split("\n")[0]
logger.info("Preflight: %s", first_line)

CI/tests_v2/lib/utils.py

@@ -0,0 +1,212 @@
"""
Shared helpers for CI/tests_v2 functional tests.
"""
import logging
import time
from pathlib import Path
from typing import List, Optional, Union
import pytest
import yaml
from kubernetes.client import V1NetworkPolicy, V1NetworkPolicyList, V1Pod, V1PodList
logger = logging.getLogger(__name__)
def _pods(pod_list: Union[V1PodList, List[V1Pod]]) -> List[V1Pod]:
"""Normalize V1PodList or list of V1Pod to list of V1Pod."""
return pod_list.items if hasattr(pod_list, "items") else pod_list
def _policies(
policy_list: Union[V1NetworkPolicyList, List[V1NetworkPolicy]],
) -> List[V1NetworkPolicy]:
"""Normalize V1NetworkPolicyList or list to list of V1NetworkPolicy."""
return policy_list.items if hasattr(policy_list, "items") else policy_list
def scenario_dir(repo_root: Path, scenario_name: str) -> Path:
"""Return the path to a scenario folder under CI/tests_v2/scenarios/."""
return repo_root / "CI" / "tests_v2" / "scenarios" / scenario_name
def load_scenario_base(
repo_root: Path,
scenario_name: str,
filename: str = "scenario_base.yaml",
) -> Union[dict, list]:
"""
Load and parse the scenario base YAML for a scenario.
Returns dict or list depending on the YAML structure.
"""
path = scenario_dir(repo_root, scenario_name) / filename
text = path.read_text()
data = yaml.safe_load(text)
if data is None:
raise ValueError(f"Empty or invalid YAML in {path}")
return data
def patch_namespace_in_docs(docs: list, namespace: str) -> list:
"""Override metadata.namespace in each doc so create_from_yaml respects target namespace."""
for doc in docs:
if isinstance(doc, dict) and doc.get("metadata") is not None:
doc["metadata"]["namespace"] = namespace
return docs
def get_pods_list(k8s_core, namespace: str, label_selector: str) -> V1PodList:
"""Return V1PodList from the Kubernetes API."""
return k8s_core.list_namespaced_pod(
namespace=namespace,
label_selector=label_selector,
)
def get_pods_or_skip(
k8s_core,
namespace: str,
label_selector: str,
no_pods_reason: Optional[str] = None,
) -> V1PodList:
"""
Get pods via Kubernetes API or skip if cluster unreachable or no matching pods.
Use at test start when prerequisites may be missing.
no_pods_reason: message when no pods match; if None, a default message is used.
"""
try:
pod_list = k8s_core.list_namespaced_pod(
namespace=namespace,
label_selector=label_selector,
)
except Exception as e:
pytest.skip(f"Cluster unreachable: {e}")
if not pod_list.items:
reason = (
no_pods_reason
if no_pods_reason
else f"No pods in {namespace} with label {label_selector}. "
"Start a KinD cluster with default storage (local-path-provisioner)."
)
pytest.skip(reason)
return pod_list
def pod_uids(pod_list: Union[V1PodList, List[V1Pod]]) -> list:
"""Return list of pod UIDs from V1PodList or list of V1Pod."""
return [p.metadata.uid for p in _pods(pod_list)]
def restart_counts(pod_list: Union[V1PodList, List[V1Pod]]) -> int:
"""Return total restart count across all containers in V1PodList or list of V1Pod."""
total = 0
for p in _pods(pod_list):
if not p.status or not p.status.container_statuses:
continue
for cs in p.status.container_statuses:
total += getattr(cs, "restart_count", 0)
return total
def get_network_policies_list(k8s_networking, namespace: str) -> V1NetworkPolicyList:
"""Return V1NetworkPolicyList from the Kubernetes API."""
return k8s_networking.list_namespaced_network_policy(namespace=namespace)
def find_network_policy_by_prefix(
policy_list: Union[V1NetworkPolicyList, List[V1NetworkPolicy]],
name_prefix: str,
) -> Optional[V1NetworkPolicy]:
"""Return the first NetworkPolicy whose name starts with name_prefix, or None."""
for policy in _policies(policy_list):
if (
policy.metadata
and policy.metadata.name
and policy.metadata.name.startswith(name_prefix)
):
return policy
return None
def assert_all_pods_running_and_ready(
pod_list: Union[V1PodList, List[V1Pod]],
namespace: str = "",
) -> None:
"""
Assert all pods are Running and all containers Ready.
Include namespace in assertion messages for debugging.
"""
ns_suffix = f" (namespace={namespace})" if namespace else ""
for pod in _pods(pod_list):
assert pod.status and pod.status.phase == "Running", (
f"Pod {pod.metadata.name} not Running after scenario: {pod.status}{ns_suffix}"
)
if pod.status.container_statuses:
for cs in pod.status.container_statuses:
assert getattr(cs, "ready", False) is True, (
f"Container {getattr(cs, 'name', '?')} not ready in pod {pod.metadata.name}{ns_suffix}"
)
def assert_pod_count_unchanged(
before: Union[V1PodList, List[V1Pod]],
after: Union[V1PodList, List[V1Pod]],
namespace: str = "",
) -> None:
"""Assert pod count is unchanged; include namespace in failure message."""
before_items = _pods(before)
after_items = _pods(after)
ns_suffix = f" (namespace={namespace})" if namespace else ""
assert len(after_items) == len(before_items), (
f"Pod count changed after scenario: expected {len(before_items)}, got {len(after_items)}.{ns_suffix}"
)
def assert_kraken_success(result, context: str = "", tmp_path=None, allowed_codes=(0,)) -> None:
"""
Assert Kraken run succeeded (returncode in allowed_codes). On failure, include stdout and stderr
in the assertion message and optionally write full output to tmp_path.
Default allowed_codes=(0,). For alert-aware tests, use allowed_codes=(0, 2).
"""
if result.returncode in allowed_codes:
return
if tmp_path is not None:
try:
(tmp_path / "kraken_stdout.log").write_text(result.stdout or "")
(tmp_path / "kraken_stderr.log").write_text(result.stderr or "")
except Exception as e:
logger.warning("Could not write Kraken logs to tmp_path: %s", e)
lines = (result.stdout or "").splitlines()
tail_stdout = "\n".join(lines[-20:]) if lines else "(empty)"
context_str = f" {context}" if context else ""
path_hint = f"\nFull logs: {tmp_path}/kraken_stdout.log, {tmp_path}/kraken_stderr.log" if tmp_path else ""
raise AssertionError(
f"Krkn failed (rc={result.returncode}){context_str}.{path_hint}\n"
f"--- stderr ---\n{result.stderr or '(empty)'}\n"
f"--- stdout (last 20 lines) ---\n{tail_stdout}"
)
def assert_kraken_failure(result, context: str = "", tmp_path=None) -> None:
"""
Assert Kraken run failed (returncode != 0). On failure (Kraken unexpectedly succeeded),
raise AssertionError with stdout/stderr and optional tmp_path log files for diagnostics.
"""
if result.returncode != 0:
return
if tmp_path is not None:
try:
(tmp_path / "kraken_stdout.log").write_text(result.stdout or "")
(tmp_path / "kraken_stderr.log").write_text(result.stderr or "")
except Exception as e:
logger.warning("Could not write Kraken logs to tmp_path: %s", e)
lines = (result.stdout or "").splitlines()
tail_stdout = "\n".join(lines[-20:]) if lines else "(empty)"
context_str = f" {context}" if context else ""
path_hint = f"\nFull logs: {tmp_path}/kraken_stdout.log, {tmp_path}/kraken_stderr.log" if tmp_path else ""
raise AssertionError(
f"Expected Krkn to fail but it succeeded (rc=0){context_str}.{path_hint}\n"
f"--- stderr ---\n{result.stderr or '(empty)'}\n"
f"--- stdout (last 20 lines) ---\n{tail_stdout}"
)
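
The pod helpers above are deliberately duck-typed: anything with an `.items` attribute or a plain list works, so they can be exercised against lightweight stubs without a cluster. A sketch of the restart-count logic using `SimpleNamespace` stand-ins for `V1Pod` (the stub shapes are assumptions for illustration, not real API objects):

```python
from types import SimpleNamespace

def restart_counts(pods):
    """Total container restarts; mirrors the helper's duck-typed traversal."""
    items = pods.items if hasattr(pods, "items") else pods
    total = 0
    for p in items:
        if not p.status or not p.status.container_statuses:
            continue
        for cs in p.status.container_statuses:
            total += getattr(cs, "restart_count", 0)
    return total

def fake_pod(restarts):
    cs = [SimpleNamespace(restart_count=r) for r in restarts]
    return SimpleNamespace(status=SimpleNamespace(container_statuses=cs))

before = [fake_pod([0]), fake_pod([1, 2])]
after = [fake_pod([0]), fake_pod([1, 3])]
assert restart_counts(before) == 3
assert restart_counts(after) == 4
# A scenario that crash-loops a container shows up as a restart delta:
assert restart_counts(after) - restart_counts(before) == 1
```

Tests typically snapshot the count before running a scenario and compare it afterwards, which is how "pod recovered without restarting" versus "pod crashed and restarted" is distinguished.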

CI/tests_v2/pytest.ini

@@ -0,0 +1,14 @@
[pytest]
testpaths = .
python_files = test_*.py
python_functions = test_*
# Install CI/tests_v2/requirements.txt for --timeout, --reruns, --reruns-delay.
# Example full run: pytest CI/tests_v2/ -v --timeout=300 --reruns=2 --reruns-delay=10 --html=... --junitxml=...
addopts = -v
markers =
functional: marks a test as a functional test (deselect with '-m "not functional"')
pod_disruption: marks a test as a pod disruption scenario test
application_outage: marks a test as an application outage scenario test
no_workload: skip workload deployment for this test (e.g. negative tests)
order: set test order (pytest-order)
junit_family = xunit2


@@ -0,0 +1,15 @@
# Pytest plugin deps for CI/tests_v2 functional tests.
#
# Kept separate from the root requirements.txt because:
# - Root deps are Kraken runtime (cloud SDKs, K8s client, etc.)
# - These are test-only plugins not needed by Kraken itself
# - Merging would bloat installs for users who don't run functional tests
# - Separate files reduce version-conflict risk between test and runtime deps
#
# pytest and coverage are already in root requirements.txt; do NOT duplicate here.
# The Makefile installs both files automatically via `make setup`.
pytest-rerunfailures>=14.0
pytest-html>=4.1.0
pytest-timeout>=2.2.0
pytest-order>=1.2.0
pytest-xdist>=3.5.0

CI/tests_v2/scaffold.py

@@ -0,0 +1,230 @@
#!/usr/bin/env python3
"""
Generate boilerplate for a new scenario test in CI/tests_v2.
Usage (from repository root):
python CI/tests_v2/scaffold.py --scenario service_hijacking
python CI/tests_v2/scaffold.py --scenario node_disruption --scenario-type node_scenarios
Creates (folder-per-scenario layout):
- CI/tests_v2/scenarios/<scenario>/test_<scenario>.py (BaseScenarioTest subclass + stub test)
- CI/tests_v2/scenarios/<scenario>/resource.yaml (placeholder workload)
- CI/tests_v2/scenarios/<scenario>/scenario_base.yaml (placeholder Krkn scenario; edit for your scenario_type)
- Adds the scenario marker to pytest.ini (if not already present)
"""
import argparse
import re
import sys
from pathlib import Path
def snake_to_camel(snake: str) -> str:
"""Convert snake_case to CamelCase."""
return "".join(word.capitalize() for word in snake.split("_"))
def scenario_type_default(scenario: str) -> str:
"""Default scenario_type for build_config (e.g. service_hijacking -> service_hijacking_scenarios)."""
return f"{scenario}_scenarios"
TEST_FILE_TEMPLATE = '''"""
Functional test for {scenario} scenario.
Each test runs in its own ephemeral namespace with workload deployed automatically.
"""
import pytest
from lib.base import BaseScenarioTest
from lib.utils import (
assert_all_pods_running_and_ready,
assert_kraken_failure,
assert_kraken_success,
assert_pod_count_unchanged,
get_pods_list,
)
@pytest.mark.functional
@pytest.mark.{marker}
class Test{class_name}(BaseScenarioTest):
"""{scenario} scenario."""
WORKLOAD_MANIFEST = "CI/tests_v2/scenarios/{scenario}/resource.yaml"
WORKLOAD_IS_PATH = True
LABEL_SELECTOR = "app={app_label}"
SCENARIO_NAME = "{scenario}"
SCENARIO_TYPE = "{scenario_type}"
NAMESPACE_KEY_PATH = {namespace_key_path}
NAMESPACE_IS_REGEX = {namespace_is_regex}
OVERRIDES_KEY_PATH = {overrides_key_path}
@pytest.mark.order(1)
def test_happy_path(self):
"""Run {scenario} scenario and assert pods remain healthy."""
ns = self.ns
before = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
result = self.run_scenario(self.tmp_path, ns)
assert_kraken_success(result, context=f"namespace={{ns}}", tmp_path=self.tmp_path)
after = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
assert_pod_count_unchanged(before, after, namespace=ns)
assert_all_pods_running_and_ready(after, namespace=ns)
'''
RESOURCE_YAML_TEMPLATE = '''# Target workload for {scenario} scenario tests.
# Namespace is patched at deploy time by the test framework.
apiVersion: apps/v1
kind: Deployment
metadata:
name: {app_label}
spec:
replicas: 1
selector:
matchLabels:
app: {app_label}
template:
metadata:
labels:
app: {app_label}
spec:
containers:
- name: app
image: nginx:alpine
ports:
- containerPort: 80
'''
SCENARIO_BASE_DICT_TEMPLATE = '''# Base scenario for {scenario} (used by build_config with scenario_type: {scenario_type}).
# Edit this file with the structure expected by Krkn. Top-level key must match SCENARIO_NAME.
# See scenarios/application_outage/scenario_base.yaml and scenarios/pod_disruption/scenario_base.yaml for examples.
{scenario}:
namespace: default
# Add fields required by your scenario plugin.
'''
SCENARIO_BASE_LIST_TEMPLATE = '''# Base scenario for {scenario} (list format). Tests patch config.namespace_pattern with ^<ns>$.
# Edit with the structure expected by your scenario plugin. See scenarios/pod_disruption/scenario_base.yaml.
- id: {scenario}-default
config:
namespace_pattern: "^default$"
# Add fields required by your scenario plugin.
'''
def main() -> int:
parser = argparse.ArgumentParser(description="Scaffold a new scenario test in CI/tests_v2 (folder-per-scenario)")
parser.add_argument(
"--scenario",
required=True,
help="Scenario name in snake_case (e.g. service_hijacking)",
)
parser.add_argument(
"--scenario-type",
default=None,
help="Kraken scenario_type for build_config (default: <scenario>_scenarios)",
)
parser.add_argument(
"--list-based",
action="store_true",
help="Use list-based scenario (NAMESPACE_KEY_PATH [0, 'config', 'namespace_pattern'], OVERRIDES_KEY_PATH [0, 'config'])",
)
parser.add_argument(
"--regex-namespace",
action="store_true",
help="Set NAMESPACE_IS_REGEX = True (namespace wrapped in ^...$)",
)
args = parser.parse_args()
scenario = args.scenario.strip().lower()
if not re.match(r"^[a-z][a-z0-9_]*$", scenario):
print("Error: --scenario must be snake_case (e.g. service_hijacking)", file=sys.stderr)
return 1
scenario_type = args.scenario_type or scenario_type_default(scenario)
class_name = snake_to_camel(scenario)
marker = scenario
app_label = scenario.replace("_", "-")
if args.list_based:
namespace_key_path = [0, "config", "namespace_pattern"]
namespace_is_regex = True
overrides_key_path = [0, "config"]
scenario_base_template = SCENARIO_BASE_LIST_TEMPLATE
else:
namespace_key_path = [scenario, "namespace"]
namespace_is_regex = args.regex_namespace
overrides_key_path = [scenario]
scenario_base_template = SCENARIO_BASE_DICT_TEMPLATE
repo_root = Path(__file__).resolve().parent.parent.parent
scenario_dir_path = repo_root / "CI" / "tests_v2" / "scenarios" / scenario
test_path = scenario_dir_path / f"test_{scenario}.py"
resource_path = scenario_dir_path / "resource.yaml"
scenario_base_path = scenario_dir_path / "scenario_base.yaml"
if scenario_dir_path.exists() and any(scenario_dir_path.iterdir()):
print(f"Error: scenario directory already exists and is non-empty: {scenario_dir_path}", file=sys.stderr)
return 1
if test_path.exists():
print(f"Error: {test_path} already exists", file=sys.stderr)
return 1
scenario_dir_path.mkdir(parents=True, exist_ok=True)
test_content = TEST_FILE_TEMPLATE.format(
scenario=scenario,
marker=marker,
class_name=class_name,
app_label=app_label,
scenario_type=scenario_type,
namespace_key_path=repr(namespace_key_path),
namespace_is_regex=namespace_is_regex,
overrides_key_path=repr(overrides_key_path),
)
resource_content = RESOURCE_YAML_TEMPLATE.format(scenario=scenario, app_label=app_label)
scenario_base_content = scenario_base_template.format(
scenario=scenario,
scenario_type=scenario_type,
)
test_path.write_text(test_content, encoding="utf-8")
resource_path.write_text(resource_content, encoding="utf-8")
scenario_base_path.write_text(scenario_base_content, encoding="utf-8")
# Auto-add marker to pytest.ini if not already present
pytest_ini_path = repo_root / "CI" / "tests_v2" / "pytest.ini"
marker_line = f" {marker}: marks a test as a {scenario} scenario test"
if pytest_ini_path.exists():
content = pytest_ini_path.read_text(encoding="utf-8")
if f" {marker}:" not in content and f"{marker}: marks" not in content:
lines = content.splitlines(keepends=True)
insert_at = None
for i, line in enumerate(lines):
if re.match(r"^ \w+:\s*.+", line):
insert_at = i + 1
if insert_at is not None:
lines.insert(insert_at, marker_line + "\n")
pytest_ini_path.write_text("".join(lines), encoding="utf-8")
print("Added marker to pytest.ini")
else:
print("Could not find markers block in pytest.ini; add manually:")
print(marker_line)
else:
print("Marker already in pytest.ini")
else:
print("pytest.ini not found; add this marker under 'markers':")
print(marker_line)
print(f"Created: {test_path}")
print(f"Created: {resource_path}")
print(f"Created: {scenario_base_path}")
print()
print("Then edit scenario_base.yaml with your scenario structure (top-level key should match SCENARIO_NAME).")
return 0
if __name__ == "__main__":
sys.exit(main())


@@ -0,0 +1,34 @@
# Nginx Deployment + Service for application outage traffic test.
# Namespace is patched at deploy time by the test framework.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-outage-http
spec:
replicas: 1
selector:
matchLabels:
app: nginx-outage-http
scenario: outage
template:
metadata:
labels:
app: nginx-outage-http
scenario: outage
spec:
containers:
- name: nginx
image: nginx:alpine
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx-outage-http
spec:
selector:
app: nginx-outage-http
ports:
- port: 80
targetPort: 80


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
name: outage
labels:
scenario: outage
spec:
containers:
- name: fedtools
image: quay.io/krkn-chaos/krkn:tools
command:
- /bin/sh
- -c
- |
sleep infinity


@@ -0,0 +1,10 @@
# Base application_outage scenario. Tests load this and patch namespace (and optionally duration, block, exclude_label).
application_outage:
duration: 10
namespace: default
pod_selector:
scenario: outage
block:
- Ingress
- Egress
exclude_label: ""


@@ -0,0 +1,229 @@
"""
Functional test for application outage scenario (block network to target pods, then restore).
Equivalent to CI/tests/test_app_outages.sh with proper assertions.
The main happy-path test reuses one namespace and workload across multiple scenario runs
(default, exclude_label, and block variants); the remaining tests each use their own
ephemeral namespace as needed.
"""
import time
import pytest
from lib.base import (
BaseScenarioTest,
KRAKEN_PROC_WAIT_TIMEOUT,
POLICY_WAIT_TIMEOUT,
)
from lib.utils import (
assert_all_pods_running_and_ready,
assert_kraken_failure,
assert_kraken_success,
assert_pod_count_unchanged,
find_network_policy_by_prefix,
get_network_policies_list,
get_pods_list,
)
def _wait_for_network_policy(k8s_networking, namespace: str, prefix: str, timeout: int = 30):
"""Poll until a NetworkPolicy with name starting with prefix exists. Return its name."""
deadline = time.monotonic() + timeout
while time.monotonic() < deadline:
policy_list = get_network_policies_list(k8s_networking, namespace)
policy = find_network_policy_by_prefix(policy_list, prefix)
if policy:
return policy.metadata.name
time.sleep(1)
raise TimeoutError(f"No NetworkPolicy with prefix {prefix!r} in {namespace} within {timeout}s")
def _assert_no_network_policy_with_prefix(k8s_networking, namespace: str, prefix: str):
policy_list = get_network_policies_list(k8s_networking, namespace)
policy = find_network_policy_by_prefix(policy_list, prefix)
name = policy.metadata.name if policy and policy.metadata else "?"
assert policy is None, (
f"Expected no NetworkPolicy with prefix {prefix!r} in namespace={namespace}, found {name}"
)
@pytest.mark.functional
@pytest.mark.application_outage
class TestApplicationOutage(BaseScenarioTest):
"""Application outage scenario: block network to target pods, then restore."""
WORKLOAD_MANIFEST = "CI/tests_v2/scenarios/application_outage/resource.yaml"
WORKLOAD_IS_PATH = True
LABEL_SELECTOR = "scenario=outage"
POLICY_PREFIX = "krkn-deny-"
SCENARIO_NAME = "application_outage"
SCENARIO_TYPE = "application_outages_scenarios"
NAMESPACE_KEY_PATH = ["application_outage", "namespace"]
NAMESPACE_IS_REGEX = False
OVERRIDES_KEY_PATH = ["application_outage"]
@pytest.mark.order(1)
def test_app_outage_block_restore_and_variants(self):
"""Default, exclude_label, and block-type variants (Ingress, Egress, both) run successfully in one namespace; each run restores and pods stay ready."""
ns = self.ns
before = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
cases = [
("default", {}, "app_outage_config.yaml"),
("exclude_label", {"exclude_label": {"env": "prod"}}, "app_outage_exclude_config.yaml"),
("block=Ingress", {"block": ["Ingress"]}, "app_outage_block_ingress_config.yaml"),
("block=Egress", {"block": ["Egress"]}, "app_outage_block_egress_config.yaml"),
("block=Ingress,Egress", {"block": ["Ingress", "Egress"]}, "app_outage_block_ingress_egress_config.yaml"),
]
for context_name, overrides, config_filename in cases:
result = self.run_scenario(
self.tmp_path, ns,
overrides=overrides if overrides else None,
config_filename=config_filename,
)
assert_kraken_success(
result, context=f"{context_name} namespace={ns}", tmp_path=self.tmp_path
)
after = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
assert_pod_count_unchanged(before, after, namespace=ns)
assert_all_pods_running_and_ready(after, namespace=ns)
def test_network_policy_created_then_deleted(self):
"""NetworkPolicy with prefix krkn-deny- is created during run and deleted after."""
ns = self.ns
scenario = self.load_and_patch_scenario(self.repo_root, ns, duration=12)
scenario_path = self.write_scenario(self.tmp_path, scenario, suffix="_np_lifecycle")
config_path = self.build_config(
self.SCENARIO_TYPE, str(scenario_path),
filename="app_outage_np_lifecycle.yaml",
)
proc = self.run_kraken_background(config_path)
try:
policy_name = _wait_for_network_policy(
self.k8s_networking, ns, self.POLICY_PREFIX, timeout=POLICY_WAIT_TIMEOUT
)
assert policy_name.startswith(self.POLICY_PREFIX), (
f"Policy name {policy_name!r} should start with {self.POLICY_PREFIX!r} (namespace={ns})"
)
policy_list = get_network_policies_list(self.k8s_networking, ns)
policy = find_network_policy_by_prefix(policy_list, self.POLICY_PREFIX)
assert policy is not None and policy.spec is not None, (
f"Expected NetworkPolicy with spec (namespace={ns})"
)
assert policy.spec.pod_selector is not None, f"Policy should have pod_selector (namespace={ns})"
assert policy.spec.policy_types is not None, f"Policy should have policy_types (namespace={ns})"
finally:
proc.wait(timeout=KRAKEN_PROC_WAIT_TIMEOUT)
_assert_no_network_policy_with_prefix(self.k8s_networking, ns, self.POLICY_PREFIX)
# def test_traffic_blocked_during_outage(self, request):
# """During outage, ingress to target pods is blocked; after run, traffic is restored."""
# ns = self.ns
# nginx_path = scenario_dir(self.repo_root, "application_outage") / "nginx_http.yaml"
# docs = list(yaml.safe_load_all(nginx_path.read_text()))
# docs = patch_namespace_in_docs(docs, ns)
# try:
# k8s_utils.create_from_yaml(
# self.k8s_client,
# yaml_objects=docs,
# namespace=ns,
# )
# except k8s_utils.FailToCreateError as e:
# msgs = [str(exc) for exc in e.api_exceptions]
# raise AssertionError(
# f"Failed to create nginx resources (namespace={ns}): {'; '.join(msgs)}"
# ) from e
# wait_for_deployment_replicas(self.k8s_apps, ns, "nginx-outage-http", timeout=READINESS_TIMEOUT)
# port = _get_free_port()
# pf_ref = []
# def _kill_port_forward():
# if pf_ref and pf_ref[0].poll() is None:
# pf_ref[0].terminate()
# try:
# pf_ref[0].wait(timeout=5)
# except subprocess.TimeoutExpired:
# pf_ref[0].kill()
# request.addfinalizer(_kill_port_forward)
# pf = subprocess.Popen(
# ["kubectl", "port-forward", "-n", ns, "service/nginx-outage-http", f"{port}:80"],
# cwd=self.repo_root,
# stdout=subprocess.DEVNULL,
# stderr=subprocess.DEVNULL,
# )
# pf_ref.append(pf)
# url = f"http://127.0.0.1:{port}/"
# try:
# time.sleep(2)
# baseline_ok = False
# for _ in range(10):
# try:
# resp = requests.get(url, timeout=3)
# if resp.ok:
# baseline_ok = True
# break
# except (requests.ConnectionError, requests.Timeout):
# pass
# time.sleep(1)
# assert baseline_ok, f"Baseline: HTTP request to nginx should succeed (namespace={ns})"
# scenario = self.load_and_patch_scenario(self.repo_root, ns, duration=15)
# scenario_path = self.write_scenario(self.tmp_path, scenario, suffix="_traffic")
# config_path = self.build_config(
# self.SCENARIO_TYPE, str(scenario_path),
# filename="app_outage_traffic_config.yaml",
# )
# proc = self.run_kraken_background(config_path)
# policy_name = _wait_for_network_policy(
# self.k8s_networking, ns, self.POLICY_PREFIX, timeout=POLICY_WAIT_TIMEOUT
# )
# assert policy_name, f"Expected policy to exist (namespace={ns})"
# time.sleep(2)
# failed = False
# for _ in range(5):
# try:
# resp = requests.get(url, timeout=2)
# if not resp.ok:
# failed = True
# break
# except (requests.ConnectionError, requests.Timeout):
# failed = True
# break
# time.sleep(1)
# assert failed, f"During outage, HTTP request to nginx should fail (namespace={ns})"
# proc.wait(timeout=KRAKEN_PROC_WAIT_TIMEOUT)
# time.sleep(1)
# resp = requests.get(url, timeout=5)
# assert resp.ok, f"After scenario, HTTP request to nginx should succeed (namespace={ns})"
# finally:
# pf.terminate()
# pf.wait(timeout=5)
@pytest.mark.no_workload
def test_invalid_scenario_fails(self):
"""Invalid scenario file (missing application_outage) causes Kraken to exit non-zero."""
invalid_scenario_path = self.tmp_path / "invalid_scenario.yaml"
invalid_scenario_path.write_text("foo: bar\n")
config_path = self.build_config(
self.SCENARIO_TYPE, str(invalid_scenario_path),
filename="invalid_config.yaml",
)
result = self.run_kraken(config_path)
assert_kraken_failure(
result, context=f"namespace={self.ns}", tmp_path=self.tmp_path
)
@pytest.mark.no_workload
def test_bad_namespace_fails(self):
"""Scenario targeting non-existent namespace causes Kraken to exit non-zero."""
scenario = self.load_and_patch_scenario(self.repo_root, "nonexistent-namespace-xyz-12345")
scenario_path = self.write_scenario(self.tmp_path, scenario, suffix="_bad_ns")
config_path = self.build_config(
self.SCENARIO_TYPE, str(scenario_path),
filename="app_outage_bad_ns_config.yaml",
)
result = self.run_kraken(config_path)
assert_kraken_failure(
result,
context=f"test namespace={self.ns}",
tmp_path=self.tmp_path,
)


@@ -0,0 +1,21 @@
# Single-pod deployment targeted by pod disruption scenario.
# Namespace is patched at deploy time by the test framework.
apiVersion: apps/v1
kind: Deployment
metadata:
name: krkn-pod-disruption-target
spec:
replicas: 1
selector:
matchLabels:
app: krkn-pod-disruption-target
template:
metadata:
labels:
app: krkn-pod-disruption-target
spec:
containers:
- name: app
image: nginx:alpine
ports:
- containerPort: 80


@@ -0,0 +1,7 @@
# Base pod_disruption scenario (list). Tests load this and patch namespace_pattern with ^<ns>$.
- id: kill-pods
config:
namespace_pattern: "^default$"
label_selector: app=krkn-pod-disruption-target
krkn_pod_recovery_time: 5
kill: 1


@@ -0,0 +1,58 @@
"""
Functional test for pod disruption scenario (pod crash and recovery).
Equivalent to CI/tests/test_pod.sh with proper before/after assertions.
Each test runs in its own ephemeral namespace with workload deployed automatically.
"""
import pytest
from lib.base import BaseScenarioTest, READINESS_TIMEOUT
from lib.utils import (
assert_all_pods_running_and_ready,
assert_kraken_success,
assert_pod_count_unchanged,
get_pods_list,
pod_uids,
restart_counts,
)
@pytest.mark.functional
@pytest.mark.pod_disruption
class TestPodDisruption(BaseScenarioTest):
"""Pod disruption scenario: kill pods and verify recovery."""
WORKLOAD_MANIFEST = "CI/tests_v2/scenarios/pod_disruption/resource.yaml"
WORKLOAD_IS_PATH = True
LABEL_SELECTOR = "app=krkn-pod-disruption-target"
SCENARIO_NAME = "pod_disruption"
SCENARIO_TYPE = "pod_disruption_scenarios"
NAMESPACE_KEY_PATH = [0, "config", "namespace_pattern"]
NAMESPACE_IS_REGEX = True
@pytest.mark.order(1)
def test_pod_crash_and_recovery(self, wait_for_pods_running):
ns = self.ns
before = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
before_uids = pod_uids(before)
before_restarts = restart_counts(before)
result = self.run_scenario(self.tmp_path, ns)
assert_kraken_success(result, context=f"namespace={ns}", tmp_path=self.tmp_path)
after = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
after_uids = pod_uids(after)
after_restarts = restart_counts(after)
uids_changed = set(after_uids) != set(before_uids)
restarts_increased = after_restarts > before_restarts
assert uids_changed or restarts_increased, (
f"Chaos had no effect in namespace={ns}: pod UIDs unchanged and restart count did not increase. "
f"Before UIDs: {before_uids}, restarts: {before_restarts}. "
f"After UIDs: {after_uids}, restarts: {after_restarts}."
)
wait_for_pods_running(ns, self.LABEL_SELECTOR, timeout=READINESS_TIMEOUT)
after_final = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
assert_pod_count_unchanged(before, after_final, namespace=ns)
assert_all_pods_running_and_ready(after_final, namespace=ns)

CI/tests_v2/setup_env.sh Executable file

@@ -0,0 +1,74 @@
#!/usr/bin/env bash
# Setup environment for CI/tests_v2 pytest functional tests.
# Run from the repository root: ./CI/tests_v2/setup_env.sh
#
# - Creates a KinD cluster using kind-config-dev.yml (override with KIND_CONFIG=...).
# - Waits for the cluster and for local-path-provisioner pods (required by pod disruption test).
# - Does not install Python deps; use a venv and pip install -r requirements.txt and CI/tests_v2/requirements.txt yourself.
set -e
REPO_ROOT="$(cd "$(dirname "$0")/../.." && pwd)"
KIND_CONFIG="${KIND_CONFIG:-${REPO_ROOT}/CI/tests_v2/kind-config-dev.yml}"
CLUSTER_NAME="${KIND_CLUSTER_NAME:-ci-krkn}"
echo "Repository root: $REPO_ROOT"
cd "$REPO_ROOT"
# Check required tools
command -v kind >/dev/null 2>&1 || { echo "Error: kind is not installed. Install from https://kind.sigs.k8s.io/docs/user/quick-start/"; exit 1; }
command -v kubectl >/dev/null 2>&1 || { echo "Error: kubectl is not installed."; exit 1; }
# Python 3.9+
python3 -c "import sys; exit(0 if sys.version_info >= (3, 9) else 1)" 2>/dev/null || { echo "Error: Python 3.9+ required. Check: python3 --version"; exit 1; }
# Docker running (required for KinD)
docker info >/dev/null 2>&1 || { echo "Error: Docker is not running. Start Docker Desktop or run: systemctl start docker"; exit 1; }
# Tool versions for reproducibility
echo "kind: $(kind --version 2>/dev/null || kind version 2>/dev/null)"
echo "kubectl: $(kubectl version --client --short 2>/dev/null || kubectl version --client 2>/dev/null)"
# Create cluster if it doesn't exist (use "kind get clusters" so we skip when nodes exist even if kubeconfig check would fail)
if kind get clusters 2>/dev/null | grep -qx "$CLUSTER_NAME"; then
echo "KinD cluster '$CLUSTER_NAME' already exists, skipping creation."
else
echo "Creating KinD cluster '$CLUSTER_NAME' from $KIND_CONFIG ..."
kind create cluster --name "$CLUSTER_NAME" --config "$KIND_CONFIG"
fi
# echo "Pre-pulling test workload images into KinD cluster..."
# docker pull nginx:alpine
# kind load docker-image nginx:alpine --name "$CLUSTER_NAME"
# kind merges into default kubeconfig (~/.kube/config), so kubectl should work in this shell.
# If you need to use this cluster from another terminal: export KUBECONFIG=~/.kube/config
# and ensure context: kubectl config use-context kind-$CLUSTER_NAME
echo "Waiting for cluster nodes to be Ready..."
kubectl wait --for=condition=Ready nodes --all --timeout=120s 2>/dev/null || true
echo "Waiting for local-path-provisioner pods (namespace local-path-storage, label app=local-path-provisioner)..."
for i in {1..60}; do
if kubectl get pods -n local-path-storage -l app=local-path-provisioner -o name 2>/dev/null | grep -q .; then
echo "Found local-path-provisioner pod(s). Waiting for Ready..."
kubectl wait --for=condition=ready pod -l app=local-path-provisioner -n local-path-storage --timeout=120s 2>/dev/null && break
fi
echo "Attempt $i: local-path-provisioner not ready yet..."
sleep 3
done
if ! kubectl get pods -n local-path-storage -l app=local-path-provisioner -o name 2>/dev/null | grep -q .; then
echo "Warning: No pods with label app=local-path-provisioner in local-path-storage."
echo "KinD usually deploys this by default. Check: kubectl get pods -n local-path-storage"
exit 1
fi
echo ""
echo "Cluster is ready for CI/tests_v2."
echo " kubectl uses the default kubeconfig (kind merged it). For another terminal: export KUBECONFIG=~/.kube/config"
echo ""
echo "Next: activate your venv, install deps, and run tests from repo root:"
echo " pip install -r requirements.txt"
echo " pip install -r CI/tests_v2/requirements.txt"
echo " pytest CI/tests_v2/ -v --timeout=300 --reruns=2 --reruns-delay=10"

CLAUDE.md Normal file

@@ -0,0 +1,273 @@
# CLAUDE.md - Krkn Chaos Engineering Framework
## Project Overview
Krkn (Kraken) is a chaos engineering tool for Kubernetes/OpenShift clusters. It injects deliberate failures to validate cluster resilience. Plugin-based architecture with multi-cloud support (AWS, Azure, GCP, IBM Cloud, VMware, Alibaba, OpenStack).
## Repository Structure
```
krkn/
├── krkn/
│ ├── scenario_plugins/ # Chaos scenario plugins (pod, node, network, hogs, etc.)
│ ├── utils/ # Utility functions
│ ├── rollback/ # Rollback management
│ ├── prometheus/ # Prometheus integration
│ └── cerberus/ # Health monitoring
├── tests/ # Unit tests (unittest framework)
├── scenarios/ # Example scenario configs (openshift/, kube/, kind/)
├── config/ # Configuration files
└── CI/ # CI/CD test scripts
```
## Quick Start
```bash
# Setup (ALWAYS use virtual environment)
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
# Run Krkn
python run_kraken.py --config config/config.yaml
# Note: Scenarios are specified in config.yaml under kraken.chaos_scenarios
# There is no --scenario flag; edit config/config.yaml to select scenarios
# Run tests
python -m unittest discover -s tests -v
python -m coverage run -a -m unittest discover -s tests -v
```
## Critical Requirements
### Python Environment
- **Python 3.9+** required
- **NEVER install packages globally** - always use virtual environment
- **CRITICAL**: `docker` must be <7.0 and `requests` must be <2.32 (Unix socket compatibility)
### Key Dependencies
- **krkn-lib** (5.1.13): Core library for Kubernetes/OpenShift operations
- **kubernetes** (34.1.0): Kubernetes Python client
- **docker** (<7.0), **requests** (<2.32): DO NOT upgrade without verifying compatibility
- Cloud SDKs: boto3 (AWS), azure-mgmt-* (Azure), google-cloud-compute (GCP), ibm_vpc (IBM), pyVmomi (VMware)
## Plugin Architecture (CRITICAL)
**Strictly enforced naming conventions:**
### Naming Rules
- **Module files**: Must end with `_scenario_plugin.py` and use snake_case
- Example: `pod_disruption_scenario_plugin.py`
- **Class names**: Must be CamelCase and end with `ScenarioPlugin`
- Example: `PodDisruptionScenarioPlugin`
- Must match module filename (snake_case ↔ CamelCase)
- **Directory structure**: Plugin dirs CANNOT contain "scenario" or "plugin"
- Location: `krkn/scenario_plugins/<plugin_name>/`
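The snake_case ↔ CamelCase pairing can be sanity-checked mechanically. A minimal sketch (a hypothetical helper, not the factory's actual code; assumes Python 3.9+ for `str.removesuffix`):

```python
def expected_class_name(module_filename: str) -> str:
    """Derive the plugin class name implied by a module filename such as
    'pod_disruption_scenario_plugin.py' (snake_case -> CamelCase)."""
    stem = module_filename.removesuffix(".py")
    if not stem.endswith("_scenario_plugin"):
        raise ValueError("module name must end with _scenario_plugin")
    # Capitalize each underscore-separated part and join them
    return "".join(part.capitalize() for part in stem.split("_"))

print(expected_class_name("pod_disruption_scenario_plugin.py"))
# PodDisruptionScenarioPlugin
```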
### Plugin Implementation
Every plugin MUST:
1. Extend `AbstractScenarioPlugin`
2. Implement `run()` method
3. Implement `get_scenario_types()` method
```python
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
class PodDisruptionScenarioPlugin(AbstractScenarioPlugin):
def run(self, config, scenarios_list, kubeconfig_path, wait_duration):
pass
def get_scenario_types(self):
return ["pod_scenarios", "pod_outage"]
```
### Creating a New Plugin
1. Create directory: `krkn/scenario_plugins/<plugin_name>/`
2. Create module: `<plugin_name>_scenario_plugin.py`
3. Create class: `<PluginName>ScenarioPlugin` extending `AbstractScenarioPlugin`
4. Implement `run()` and `get_scenario_types()`
5. Create unit test: `tests/test_<plugin_name>_scenario_plugin.py`
6. Add example scenario: `scenarios/<platform>/<scenario>.yaml`
**DO NOT**: Violate naming conventions (factory will reject), include "scenario"/"plugin" in directory names, create plugins without tests.
## Testing
### Unit Tests
```bash
# Run all tests
python -m unittest discover -s tests -v
# Specific test
python -m unittest tests.test_pod_disruption_scenario_plugin
# With coverage
python -m coverage run -a -m unittest discover -s tests -v
python -m coverage html
```
**Test requirements:**
- Naming: `test_<module>_scenario_plugin.py`
- Mock external dependencies (Kubernetes API, cloud providers)
- Test success, failure, and edge cases
- Keep tests isolated and independent
### Functional Tests
Located in `CI/tests/`. Can be run locally on a kind cluster with Prometheus and Elasticsearch set up.
**Setup for local testing:**
1. Deploy Prometheus and Elasticsearch on your kind cluster:
- Prometheus setup: https://krkn-chaos.dev/docs/developers-guide/testing-changes/#prometheus
- Elasticsearch setup: https://krkn-chaos.dev/docs/developers-guide/testing-changes/#elasticsearch
2. Or disable monitoring features in `config/config.yaml`:
```yaml
performance_monitoring:
enable_alerts: False
enable_metrics: False
check_critical_alerts: False
```
**Note:** Functional tests run automatically in CI with full monitoring enabled.
## Cloud Provider Implementations
Node chaos scenarios are cloud-specific. Each in `krkn/scenario_plugins/node_actions/<provider>_node_scenarios.py`:
- AWS, Azure, GCP, IBM Cloud, VMware, Alibaba, OpenStack, Bare Metal
Implement: stop, start, reboot, terminate instances.
**When modifying**: Maintain consistency with other providers, handle API errors, add logging, update tests.
### Adding Cloud Provider Support
1. Create: `krkn/scenario_plugins/node_actions/<provider>_node_scenarios.py`
2. Extend: `abstract_node_scenarios.AbstractNodeScenarios`
3. Implement: `stop_instances`, `start_instances`, `reboot_instances`, `terminate_instances`
4. Add SDK to `requirements.txt`
5. Create unit test with mocked SDK
6. Add example scenario: `scenarios/openshift/<provider>_node_scenarios.yml`
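A provider module built along those steps might start from a skeleton like this (hypothetical `ExampleCloud` names and a generic `client`; real implementations extend `abstract_node_scenarios.AbstractNodeScenarios` and call the provider's SDK):

```python
import logging

class ExampleCloudNodeScenarios:
    """Sketch of the stop/start/reboot/terminate surface a provider implements."""

    def __init__(self, client):
        # client is the provider SDK client; mock it in unit tests
        self.client = client

    def stop_instances(self, instance_ids):
        for instance_id in instance_ids:
            logging.info("Stopping instance %s", instance_id)
            self.client.stop(instance_id)

    def start_instances(self, instance_ids):
        for instance_id in instance_ids:
            logging.info("Starting instance %s", instance_id)
            self.client.start(instance_id)

    def reboot_instances(self, instance_ids):
        # naive fallback: stop then start; prefer the provider's native reboot API
        self.stop_instances(instance_ids)
        self.start_instances(instance_ids)

    def terminate_instances(self, instance_ids):
        for instance_id in instance_ids:
            logging.info("Terminating instance %s", instance_id)
            self.client.terminate(instance_id)
```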
## Configuration
**Main config**: `config/config.yaml`
- `kraken`: Core settings
- `cerberus`: Health monitoring
- `performance_monitoring`: Prometheus
- `elastic`: Elasticsearch telemetry
**Scenario configs**: `scenarios/` directory
```yaml
- config:
scenario_type: <type> # Must match plugin's get_scenario_types()
```
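At run time, the `scenario_type` string is matched against what each plugin returns from `get_scenario_types()`. A hedged sketch of that lookup (a hypothetical registry, not the actual plugin factory):

```python
class PodPlugin:
    """Stand-in plugin exposing the get_scenario_types() contract."""
    def get_scenario_types(self):
        return ["pod_scenarios", "pod_outage"]

def build_registry(plugins):
    """Map each declared scenario type string to its plugin instance."""
    registry = {}
    for plugin in plugins:
        for scenario_type in plugin.get_scenario_types():
            registry[scenario_type] = plugin
    return registry

registry = build_registry([PodPlugin()])
plugin = registry.get("pod_outage")  # the PodPlugin instance, or None if unknown
```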
## Code Style
- **Import order**: Standard library, third-party, local imports
- **Naming**: snake_case (functions/variables), CamelCase (classes)
- **Logging**: Use Python's `logging` module
- **Error handling**: Return appropriate exit codes
- **Docstrings**: Required for public functions/classes
## Exit Codes
Krkn uses specific exit codes to communicate execution status:
- `0`: Success - all scenarios passed, no critical alerts
- `1`: Scenario failure - one or more scenarios failed
- `2`: Critical alerts fired during execution
- `3+`: Health check failure (Cerberus monitoring detected issues)
**When implementing scenarios:**
- Return `0` on success
- Return `1` on scenario-specific failures
- Propagate health check failures appropriately
- Log exit code reasons clearly
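The convention can be sketched as a small mapping (illustrative only; the real logic lives in `run_kraken.py` and the health-check path):

```python
EXIT_SUCCESS = 0           # all scenarios passed, no critical alerts
EXIT_SCENARIO_FAILURE = 1  # one or more scenarios failed
EXIT_CRITICAL_ALERTS = 2   # critical alerts fired during execution

def compute_exit_code(failed_scenarios, critical_alerts_fired):
    """Map run results onto the exit-code convention above.
    Scenario failures take precedence over alert failures here; the
    actual precedence in Krkn may differ."""
    if failed_scenarios:
        return EXIT_SCENARIO_FAILURE
    if critical_alerts_fired:
        return EXIT_CRITICAL_ALERTS
    return EXIT_SUCCESS
```

A real run would pass the computed value to `sys.exit()`.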
## Container Support
Krkn can run inside a container. See `containers/` directory.
**Building custom image:**
```bash
cd containers
./compile_dockerfile.sh # Generates Dockerfile from template
docker build -t krkn:latest .
```
**Running containerized:**
```bash
docker run -v ~/.kube:/root/.kube:Z \
-v $(pwd)/config:/config:Z \
-v $(pwd)/scenarios:/scenarios:Z \
krkn:latest
```
## Git Workflow
- **NEVER commit directly to main**
- **NEVER use `--force` without approval**
- **ALWAYS create feature branches**: `git checkout -b feature/description`
- **ALWAYS run tests before pushing**
**Conventional commits**: `feat:`, `fix:`, `test:`, `docs:`, `refactor:`
```bash
git checkout main && git pull origin main
git checkout -b feature/your-feature-name
# Make changes, write tests
python -m unittest discover -s tests -v
git add <specific-files>
git commit -m "feat: description"
git push -u origin feature/your-feature-name
```
## Environment Variables
- `KUBECONFIG`: Path to kubeconfig
- `AWS_*`, `AZURE_*`, `GOOGLE_APPLICATION_CREDENTIALS`: Cloud credentials
- `PROMETHEUS_URL`, `ELASTIC_URL`, `ELASTIC_PASSWORD`: Monitoring config
**NEVER commit credentials or API keys.**
## Common Pitfalls
1. Missing virtual environment - always activate venv
2. Running functional tests without cluster setup
3. Ignoring exit codes
4. Modifying krkn-lib directly (it's a separate package)
5. Upgrading docker/requests beyond version constraints
## Before Writing Code
1. Check for existing implementations
2. Review existing plugins as examples
3. Maintain consistency with cloud provider patterns
4. Plan rollback logic
5. Write tests alongside code
6. Update documentation
## When Adding Dependencies
1. Check if functionality exists in krkn-lib or current dependencies
2. Verify compatibility with existing versions
3. Pin specific versions in `requirements.txt`
4. Check for security vulnerabilities
5. Test thoroughly for conflicts
## Common Development Tasks
### Modifying Existing Plugin
1. Read plugin code and corresponding test
2. Make changes
3. Update/add unit tests
4. Run: `python -m unittest tests.test_<plugin>_scenario_plugin`
### Writing Unit Tests
1. Create: `tests/test_<module>_scenario_plugin.py`
2. Import `unittest` and plugin class
3. Mock external dependencies
4. Test success, failure, and edge cases
5. Run: `python -m unittest tests.test_<module>_scenario_plugin`
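The steps above, sketched with a stand-in plugin and a mocked Kubernetes client (all names hypothetical):

```python
import unittest
from unittest.mock import MagicMock

class DemoScenarioPlugin:
    """Stand-in for a real plugin; run() returns an exit code."""
    def __init__(self, k8s_client):
        self.k8s_client = k8s_client

    def run(self):
        pods = self.k8s_client.list_pods("default")
        return 0 if pods else 1

class TestDemoScenarioPlugin(unittest.TestCase):
    def test_run_success(self):
        client = MagicMock()
        client.list_pods.return_value = ["pod-a"]
        self.assertEqual(DemoScenarioPlugin(client).run(), 0)

    def test_run_failure_when_no_pods(self):
        client = MagicMock()
        client.list_pods.return_value = []
        self.assertEqual(DemoScenarioPlugin(client).run(), 1)
```

Run it the same way as any other test module: `python -m unittest <module>`.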


@@ -1,83 +1,148 @@
# Krkn Project Governance
Krkn is a chaos and resiliency testing tool for Kubernetes that injects deliberate failures into clusters to validate their resilience under turbulent conditions. This governance document explains how the project is run.
- [Values](#values)
- [Community Roles](#community-roles)
- [Becoming a Maintainer](#becoming-a-maintainer)
- [Removing a Maintainer](#removing-a-maintainer)
- [Meetings](#meetings)
- [CNCF Resources](#cncf-resources)
- [Code of Conduct](#code-of-conduct)
- [Security Response Team](#security-response-team)
- [Voting](#voting)
- [Modifying this Charter](#modifying-this-charter)
The governance model adopted here is heavily influenced by a set of CNCF projects, and draws particular reference from [Kubernetes governance](https://github.com/kubernetes/community/blob/master/governance.md).
*For similar structures, some wording from the Kubernetes governance document is borrowed to preserve its originally intended meaning.*
## Values
Krkn and its leadership embrace the following values:
* **Openness**: Communication and decision-making happens in the open and is discoverable for future reference. As much as possible, all discussions and work take place in public forums and open repositories.
* **Fairness**: All stakeholders have the opportunity to provide feedback and submit contributions, which will be considered on their merits.
* **Community over Product or Company**: Sustaining and growing our community takes priority over shipping code or sponsors' organizational goals. Each contributor participates in the project as an individual.
* **Inclusivity**: We innovate through different perspectives and skill sets, which can only be accomplished in a welcoming and respectful environment.
* **Participation**: Responsibilities within the project are earned through participation, and there is a clear path up the contributor ladder into leadership positions.
## Community Roles
Krkn uses a tiered contributor model. Each level comes with increasing responsibilities and privileges.
### Contributor
Anyone can become a contributor by participating in discussions, reporting bugs, or submitting code or documentation.
**Responsibilities:**
- Adhere to the [Code of Conduct](CODE_OF_CONDUCT.md)
- Report bugs and suggest new features
- Contribute high-quality code and documentation
### Member
Members are active contributors who have demonstrated a solid understanding of the project's codebase and conventions.
**Responsibilities:**
- Review pull requests for correctness, quality, and adherence to project standards
- Provide constructive and timely feedback to contributors
- Ensure contributions are well-tested and documented
- Work with maintainers to support a smooth release process
### Maintainer
Maintainers are responsible for the overall health and direction of the project. They have write access to the [project GitHub repository](https://github.com/krkn-chaos/krkn) and can merge patches from themselves or others. The current maintainers are listed in [MAINTAINERS.md](./MAINTAINERS.md).
Maintainers collectively form the **Maintainer Council**, the governing body for the project.
A maintainer is not just someone who can make changes — they are someone who has demonstrated the ability to collaborate with the team, get the right people to review code and docs, contribute high-quality work, and follow through to fix issues.
**Responsibilities:**
- Set the technical direction and vision for the project
- Manage releases and ensure stability of the main branch
- Make decisions on feature inclusion and project priorities
- Mentor contributors and help grow the community
- Resolve disputes and make final decisions when consensus cannot be reached
### Owner
Owners have administrative access to the project and are the final decision-makers.
**Responsibilities:**
- Manage the core team of maintainers
- Set the overall vision and strategy for the project
- Handle administrative tasks such as managing the repository and other resources
- Represent the project in the broader open-source community
## Becoming a Maintainer
To become a Maintainer you need to demonstrate the following:
- **Commitment to the project:**
- Participate in discussions, contributions, code and documentation reviews for 3 months or more
- Perform reviews for at least 5 non-trivial pull requests
- Contribute at least 3 non-trivial pull requests that have been merged
- Ability to write quality code and/or documentation
- Ability to collaborate effectively with the team
- Understanding of how the team works (policies, processes for testing and code review, etc.)
- Understanding of the project's codebase and coding and documentation style
A new Maintainer must be proposed by an existing Maintainer by sending a message to the [maintainer mailing list](mailto:krkn.maintainers@gmail.com). A simple majority vote of existing Maintainers approves the application. Nominations will be evaluated without prejudice to employer or demographics.
Maintainers who are approved will be granted the necessary GitHub rights and invited to the [maintainer mailing list](mailto:krkn.maintainers@gmail.com).
## Removing a Maintainer
Maintainers may resign at any time if they feel they will not be able to continue fulfilling their project duties.
Maintainers may also be removed for inactivity, failure to fulfill their responsibilities, violating the Code of Conduct, or other reasons. Inactivity is defined as a period of very low or no activity in the project for a year or more, with no definite schedule to return to full Maintainer activity.
A Maintainer may be removed at any time by a 2/3 vote of the remaining Maintainers.
Depending on the reason for removal, a Maintainer may be converted to **Emeritus** status. Emeritus Maintainers will still be consulted on some project matters and can be rapidly returned to Maintainer status if their availability changes.
## Meetings
Maintainers are expected to participate in the public developer meeting, which occurs **once a month via Zoom**. Meeting details (link, agenda, and notes) are posted in the [#krkn channel on Kubernetes Slack](https://kubernetes.slack.com/messages/C05SFMHRWK1) prior to each meeting.
Maintainers will also hold closed meetings to discuss security reports or Code of Conduct violations. Such meetings should be scheduled by any Maintainer on receipt of a security issue or CoC report. All current Maintainers must be invited to such closed meetings, except for any Maintainer who is accused of a CoC violation.
## CNCF Resources
Any Maintainer may suggest a request for CNCF resources, either on the [mailing list](mailto:krkn.maintainers@gmail.com) or during a monthly meeting. A simple majority of Maintainers approves the request. The Maintainers may also choose to delegate working with the CNCF to non-Maintainer community members, who will then be added to the [CNCF's Maintainer List](https://github.com/cncf/foundation/blob/main/project-maintainers.csv) for that purpose.
## Code of Conduct
Krkn follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
Here is an excerpt:
> As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.
Code of Conduct violations by community members will be discussed and resolved on the [private maintainer mailing list](mailto:krkn.maintainers@gmail.com). If a Maintainer is directly involved in the report, two Maintainers will instead be designated to work with the CNCF Code of Conduct Committee in resolving it.
## Security Response Team
The Maintainers will appoint a Security Response Team to handle security reports. This committee may consist of the Maintainer Council itself. If this responsibility is delegated, the Maintainers will appoint a team of at least two contributors to handle it. The Maintainers will review the composition of this team at least once a year.
The Security Response Team is responsible for handling all reports of security holes and breaches according to the [security policy](SECURITY.md).
To report a security vulnerability, please follow the process outlined in [SECURITY.md](SECURITY.md) rather than filing a public GitHub issue.
## Voting
While most business in Krkn is conducted by "[lazy consensus](https://community.apache.org/committers/lazyConsensus.html)", periodically the Maintainers may need to vote on specific actions or changes. Any Maintainer may demand a vote be taken.
Votes on general project matters may be raised on the [maintainer mailing list](mailto:krkn.maintainers@gmail.com) or during a monthly meeting. Votes on security vulnerabilities or Code of Conduct violations must be conducted exclusively on the [private maintainer mailing list](mailto:krkn.maintainers@gmail.com) or in a closed Maintainer meeting, in order to prevent accidental public disclosure of sensitive information.
Most votes require a **simple majority** of all Maintainers to succeed, except where otherwise noted. Two-thirds majority votes mean at least two-thirds of all existing Maintainers.
| Action | Required Vote |
|--------|--------------|
| Adding a new Maintainer | Simple majority |
| Removing a Maintainer | 2/3 majority |
| Approving CNCF resource requests | Simple majority |
| Modifying this charter | 2/3 majority |
## Modifying this Charter
Changes to this Governance document and its supporting documents may be approved by a 2/3 vote of the Maintainers.


@@ -15,7 +15,7 @@ For detailed description of the roles, see [Governance](./GOVERNANCE.md) page.
| Pradeep Surisetty | [psuriset](https://github.com/psuriset) | psuriset@redhat.com | Owner |
| Paige Patton | [paigerube14](https://github.com/paigerube14) | prubenda@redhat.com | Maintainer |
| Tullio Sebastiani | [tsebastiani](https://github.com/tsebastiani) | tsebasti@redhat.com | Maintainer |
| Yogananth Subramanian | [yogananth-subramanian](https://github.com/yogananth-subramanian) | ysubrama@redhat.com |Maintainer |
| Yogananth Subramanian | [yogananth-subramanian](https://github.com/yogananth-subramanian) | ysubrama@redhat.com | Maintainer |
| Sahil Shah | [shahsahil264](https://github.com/shahsahil264) | sahshah@redhat.com | Member |
@@ -32,3 +32,64 @@ The roles are:
* Maintainer: A contributor who is responsible for the overall health and direction of the project.
* Owner: A contributor who has administrative ownership of the project.
## Maintainer Levels
### Contributor
Contributors contribute to the community. Anyone can become a contributor by participating in discussions, reporting bugs, or contributing code or documentation.
#### Responsibilities:
Be active in the community and adhere to the Code of Conduct.
Report bugs and suggest new features.
Contribute high-quality code and documentation.
### Member
Members are active contributors to the community. Members have demonstrated a strong understanding of the project's codebase and conventions.
#### Responsibilities:
Review pull requests for correctness, quality, and adherence to project standards.
Provide constructive and timely feedback to contributors.
Ensure that all contributions are well-tested and documented.
Work with maintainers to ensure a smooth and efficient release process.
### Maintainer
Maintainers are responsible for the overall health and direction of the project. They are long-standing contributors who have shown a deep commitment to the project's success.
#### Responsibilities:
Set the technical direction and vision for the project.
Manage releases and ensure the stability of the main branch.
Make decisions on feature inclusion and project priorities.
Mentor other contributors and help grow the community.
Resolve disputes and make final decisions when consensus cannot be reached.
### Owner
Owners have administrative access to the project and are the final decision-makers.
#### Responsibilities:
Manage the core team of maintainers and approvers.
Set the overall vision and strategy for the project.
Handle administrative tasks, such as managing the project's repository and other resources.
Represent the project in the broader open-source community.
## Email
If you'd like to contact the krkn maintainers about a specific issue you're having, please reach out to us at krkn.maintainers@gmail.com.


@@ -16,5 +16,11 @@ Following are a list of enhancements that we are planning to work on adding supp
- [x] [Krknctl - client for running Krkn scenarios with ease](https://github.com/krkn-chaos/krknctl)
- [x] [AI Chat bot to help get started with Krkn and commands](https://github.com/krkn-chaos/krkn-lightspeed)
- [ ] [Ability to roll back cluster to original state if chaos fails](https://github.com/krkn-chaos/krkn/issues/804)
- [ ] Add recovery time metrics to each scenario for each better regression analysis
- [ ] [Add resiliency scoring to chaos scenarios ran on cluster](https://github.com/krkn-chaos/krkn/issues/125)
- [ ] Add recovery time metrics to each scenario for better regression analysis
- [ ] [Add resiliency scoring to chaos scenarios ran on cluster](https://github.com/krkn-chaos/krkn/issues/125)
- [ ] [Add AI-based Chaos Configuration Generator](https://github.com/krkn-chaos/krkn/issues/1166)
- [ ] [Introduce Security Chaos Engineering Scenarios](https://github.com/krkn-chaos/krkn/issues/1165)
- [ ] [Add AWS-native Chaos Scenarios (S3, Lambda, Networking)](https://github.com/krkn-chaos/krkn/issues/1164)
- [ ] [Unify Krkn Ecosystem under krknctl for Enhanced UX](https://github.com/krkn-chaos/krknctl/issues/113)
- [ ] [Build Web UI for Creating, Monitoring, and Reviewing Chaos Scenarios](https://github.com/krkn-chaos/krkn/issues/1167)
- [ ] [Add Predefined Chaos Scenario Templates (KRKN Chaos Library)](https://github.com/krkn-chaos/krkn/issues/1168)

View File

@@ -40,4 +40,4 @@ The security team currently consists of the [Maintainers of Krkn](https://github
## Process and Supported Releases
The Krkn security team will investigate and provide a fix in a timely mannner depending on the severity. The fix will be included in the new release of Krkn and details will be included in the release notes.
The Krkn security team will investigate and provide a fix in a timely manner depending on the severity. The fix will be included in the new release of Krkn and details will be included in the release notes.

View File

@@ -39,7 +39,7 @@ cerberus:
Sunday:
slack_team_alias: # The slack team alias to be tagged while reporting failures in the slack channel when no watcher is assigned
custom_checks: # Relative paths of files conataining additional user defined checks
custom_checks: # Relative paths of files containing additional user defined checks
tunings:
timeout: 3 # Number of seconds before requests fail

View File

@@ -2,7 +2,7 @@ kraken:
kubeconfig_path: ~/.kube/config # Path to kubeconfig
exit_on_failure: False # Exit when a post action scenario fails
auto_rollback: True # Enable auto rollback for scenarios.
rollback_versions_directory: /tmp/kraken-rollback # Directory to store rollback version files.
rollback_versions_directory: # Directory to store rollback version files. If empty, a secure temp directory is created automatically.
publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
signal_address: 0.0.0.0 # Signal listening address
@@ -50,13 +50,22 @@ kraken:
- network_chaos_ng_scenarios:
- scenarios/kube/pod-network-filter.yml
- scenarios/kube/node-network-filter.yml
- scenarios/kube/node-network-chaos.yml
- scenarios/kube/pod-network-chaos.yml
- scenarios/kube/node_interface_down.yaml
- kubevirt_vm_outage:
- scenarios/kubevirt/kubevirt-vm-outage.yaml
- http_load_scenarios:
- scenarios/kube/http_load_scenario.yml
resiliency:
resiliency_run_mode: standalone # Options: standalone, detailed, disabled
resiliency_file: config/alerts.yaml # Path to SLO definitions, will resolve to performance_monitoring: alert_profile: if not specified
cerberus:
cerberus_enabled: False # Enable it when cerberus is previously installed
cerberus_url: # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal
check_applicaton_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
check_application_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
performance_monitoring:
prometheus_url: '' # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
@@ -77,6 +86,7 @@ elastic:
metrics_index: "krkn-metrics"
alerts_index: "krkn-alerts"
telemetry_index: "krkn-telemetry"
run_tag: ""
tunings:
wait_duration: 1 # Duration to wait between each chaos scenario
@@ -93,7 +103,7 @@ telemetry:
prometheus_pod_name: "" # name of the prometheus pod (if distribution is kubernetes)
full_prometheus_backup: False # if is set to False only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
archive_path: /tmp # local path where the archive files will be temporarly stored
archive_path: # local path where the archive files will be temporarily stored. If empty, a secure temp directory is created automatically.
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 500000
@@ -128,4 +138,5 @@ kubevirt_checks: # Utilizing virt che
disconnected: False # Boolean of how to try to connect to the VMIs; if True will use the ip_address to try ssh from within a node, if false will use the name and uses virtctl to try to connect; Default is False
ssh_node: "" # If set, will be a backup way to ssh to a node. Will want to set to a node that isn't targeted in chaos
node_names: ""
exit_on_failure: # If value is True and VMI's are failing post chaos returns failure, values can be True/False
exit_on_failure: # If value is True and VMI's are failing post chaos returns failure, values can be True/False
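The `archive_path` and `rollback_versions_directory` comments above say that an empty value makes Krkn create a secure temp directory automatically (per the `tempfile.mkdtemp()` commit in this compare). A minimal sketch of that fallback, assuming a hypothetical helper name — the actual krkn implementation may differ:

```python
import os
import tempfile

def resolve_archive_path(configured_path: str) -> str:
    """Hypothetical helper mirroring the 'secure temp directory' fallback
    described in the config comments above."""
    if configured_path:
        os.makedirs(configured_path, exist_ok=True)
        return configured_path
    # mkdtemp creates a directory readable/writable only by the owner (mode 0700),
    # avoiding the predictable world-writable /tmp paths the fix replaced
    return tempfile.mkdtemp(prefix="krkn-archive-")
```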

View File

@@ -13,7 +13,7 @@ kraken:
cerberus:
cerberus_enabled: False # Enable it when cerberus is previously installed
cerberus_url: # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal
check_applicaton_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
check_application_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
performance_monitoring:
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
@@ -32,7 +32,7 @@ tunings:
telemetry:
enabled: False # enable/disables the telemetry collection feature
archive_path: /tmp # local path where the archive files will be temporarly stored
archive_path: # local path where the archive files will be temporarily stored. If empty, a secure temp directory is created automatically.
events_backup: False # enables/disables cluster events collection
logs_backup: False

View File

@@ -14,7 +14,7 @@ kraken:
cerberus:
cerberus_enabled: False # Enable it when cerberus is previously installed
cerberus_url: # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal
check_applicaton_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
check_application_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
performance_monitoring:
prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.

View File

@@ -35,7 +35,7 @@ kraken:
cerberus:
cerberus_enabled: True # Enable it when cerberus is previously installed
cerberus_url: http://0.0.0.0:8080 # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal
check_applicaton_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
check_application_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
performance_monitoring:
deploy_dashboards: True # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
@@ -61,7 +61,7 @@ telemetry:
prometheus_backup: True # enables/disables prometheus data collection
full_prometheus_backup: False # if is set to False only the /prometheus/wal folder will be downloaded.
backup_threads: 5 # number of telemetry download/upload threads
archive_path: /tmp # local path where the archive files will be temporarly stored
archive_path: # local path where the archive files will be temporarily stored. If empty, a secure temp directory is created automatically.
max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
archive_size: 500000 # the size of the prometheus data archive size in KB. The lower the size of archive is

View File

@@ -33,6 +33,8 @@ RUN go mod edit -go 1.24.9 &&\
FROM fedora:40
ARG PR_NUMBER
ARG TAG
ARG PYTHON_VERSION=3.11
ENV PYTHON_CMD=python${PYTHON_VERSION}
RUN groupadd -g 1001 krkn && useradd -m -u 1001 -g krkn krkn
RUN dnf update -y
@@ -41,7 +43,7 @@ ENV KUBECONFIG /home/krkn/.kube/config
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
RUN dnf update && dnf install -y --setopt=install_weak_deps=False \
git python39 jq yq gettext wget which ipmitool openssh-server &&\
git python${PYTHON_VERSION} jq yq gettext wget which ipmitool openssh-server &&\
dnf clean all
# copy oc client binary from oc-build image
@@ -63,15 +65,15 @@ RUN if [ -n "$PR_NUMBER" ]; then git fetch origin pull/${PR_NUMBER}/head:pr-${PR
# if it is a TAG trigger checkout the tag
RUN if [ -n "$TAG" ]; then git checkout "$TAG";fi
RUN python3.9 -m ensurepip --upgrade --default-pip
RUN python3.9 -m pip install --upgrade pip setuptools==78.1.1
RUN ${PYTHON_CMD} -m ensurepip --upgrade --default-pip
RUN ${PYTHON_CMD} -m pip install --upgrade pip setuptools==78.1.1
# removes the vulnerable versions of setuptools and pip
RUN rm -rf "$(pip cache dir)"
RUN rm -rf /tmp/*
RUN rm -rf /usr/local/lib/python3.9/ensurepip/_bundled
RUN pip3.9 install -r requirements.txt
RUN pip3.9 install jsonschema
RUN rm -rf /usr/local/lib/${PYTHON_CMD}/ensurepip/_bundled
RUN ${PYTHON_CMD} -m pip install -r requirements.txt
RUN ${PYTHON_CMD} -m pip install jsonschema
LABEL krknctl.title.global="Krkn Base Image"
LABEL krknctl.description.global="This is the krkn base image."
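The `ARG PYTHON_VERSION` / `ENV PYTHON_CMD` pair added above composes the interpreter command from the version via parameter expansion; the same composition can be checked in plain shell:

```shell
# Mirrors the Dockerfile's ARG PYTHON_VERSION=3.11 / ENV PYTHON_CMD=python${PYTHON_VERSION}
PYTHON_VERSION="3.11"
PYTHON_CMD="python${PYTHON_VERSION}"
echo "$PYTHON_CMD"   # python3.11
```

Overriding the version at build time (`docker build --build-arg PYTHON_VERSION=3.12 ...`) changes every later `${PYTHON_CMD}` step in one place.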

View File

@@ -1,7 +1,8 @@
#!/bin/bash
set -e
# Run SSH setup
./containers/setup-ssh.sh
# Change to kraken directory
# Execute the main command
exec python3.9 run_kraken.py "$@"
exec "${PYTHON_CMD:-python3}" run_kraken.py "$@"
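The rewritten `exec` line falls back to `python3` when `PYTHON_CMD` is not supplied by the image environment; the `${VAR:-default}` expansion behaves like this:

```shell
# ${PYTHON_CMD:-python3} substitutes the default only when the variable
# is unset or empty, without assigning it
unset PYTHON_CMD
echo "${PYTHON_CMD:-python3}"     # python3 when unset
PYTHON_CMD=python3.11
echo "${PYTHON_CMD:-python3}"     # python3.11 once set
```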

View File

@@ -163,6 +163,15 @@
"default": "False",
"required": "false"
},
{
"name": "es-run-tag",
"short_description": "Elasticsearch run tag",
"description": "Elasticsearch run tag to compare similar runs",
"variable": "ES_RUN_TAG",
"type": "string",
"default": "",
"required": "false"
},
{
"name": "es-server",
"short_description": "Elasticsearch instance URL",
@@ -549,5 +558,31 @@
"separator": ",",
"default": "False",
"required": "false"
},
{
"name": "resiliency-score",
"short_description": "Enable resiliency score calculation",
"description": "The system outputs a detailed resiliency score as a single-line JSON object, facilitating easy aggregation across multiple test scenarios.",
"variable": "RESILIENCY_SCORE",
"type": "boolean",
"required": "false"
},
{
"name": "disable-resiliency-score",
"short_description": "Disable resiliency score calculation",
"description": "Disable resiliency score calculation",
"variable": "DISABLE_RESILIENCY_SCORE",
"type": "boolean",
"required": "false"
},
{
"name": "resiliency-file",
"short_description": "Resiliency Score metrics file",
"description": "Custom Resiliency score file",
"variable": "RESILIENCY_FILE",
"type": "file",
"required": "false",
"mount_path": "/home/krkn/resiliency-file.yaml"
}
]

View File

@@ -3,10 +3,16 @@ apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30000
hostPort: 9090
- containerPort: 32766
hostPort: 9200
- containerPort: 30036
hostPort: 8888
- containerPort: 30037
hostPort: 8889
- containerPort: 30080
hostPort: 30080
- role: control-plane
- role: control-plane
- role: worker

krkn/__init__.py Normal file
View File

@@ -0,0 +1,13 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@@ -1 +1,14 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .setup import *

View File

@@ -1,20 +1,47 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import requests
import sys
import json
from krkn_lib.utils.functions import get_yaml_item_value
check_application_routes = ""
cerberus_url = None
exit_on_failure = False
cerberus_enabled = False
def get_status(config, start_time, end_time):
def set_url(config):
global exit_on_failure
exit_on_failure = get_yaml_item_value(config["kraken"], "exit_on_failure", False)
global cerberus_enabled
cerberus_enabled = get_yaml_item_value(config["cerberus"],"cerberus_enabled", False)
if cerberus_enabled:
global cerberus_url
cerberus_url = get_yaml_item_value(config["cerberus"],"cerberus_url", "")
global check_application_routes
check_application_routes = \
get_yaml_item_value(config["cerberus"],"check_applicaton_routes","")
def get_status(start_time, end_time):
"""
Get cerberus status
"""
cerberus_status = True
check_application_routes = False
application_routes_status = True
if config["cerberus"]["cerberus_enabled"]:
cerberus_url = config["cerberus"]["cerberus_url"]
check_application_routes = \
config["cerberus"]["check_applicaton_routes"]
if cerberus_enabled:
if not cerberus_url:
logging.error(
"url where Cerberus publishes True/False signal "
@@ -61,40 +88,38 @@ def get_status(config, start_time, end_time):
return cerberus_status
def publish_kraken_status(config, failed_post_scenarios, start_time, end_time):
def publish_kraken_status(start_time, end_time):
"""
Publish kraken status to cerberus
"""
cerberus_status = get_status(config, start_time, end_time)
cerberus_status = get_status(start_time, end_time)
if not cerberus_status:
if failed_post_scenarios:
if config["kraken"]["exit_on_failure"]:
logging.info(
"Cerberus status is not healthy and post action scenarios "
"are still failing, exiting kraken run"
)
sys.exit(1)
else:
logging.info(
"Cerberus status is not healthy and post action scenarios "
"are still failing"
)
if exit_on_failure:
logging.info(
"Cerberus status is not healthy and post action scenarios "
"are still failing, exiting kraken run"
)
sys.exit(1)
else:
logging.info(
"Cerberus status is not healthy and post action scenarios "
"are still failing"
)
else:
if failed_post_scenarios:
if config["kraken"]["exit_on_failure"]:
logging.info(
"Cerberus status is healthy but post action scenarios "
"are still failing, exiting kraken run"
)
sys.exit(1)
else:
logging.info(
"Cerberus status is healthy but post action scenarios "
"are still failing"
)
if exit_on_failure:
logging.info(
"Cerberus status is healthy but post action scenarios "
"are still failing, exiting kraken run"
)
sys.exit(1)
else:
logging.info(
"Cerberus status is healthy but post action scenarios "
"are still failing"
)
def application_status(cerberus_url, start_time, end_time):
def application_status(start_time, end_time):
"""
Check application availability
"""

View File

@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .analysis import *
from .kraken_tests import *
from .prometheus import *

View File

@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import pandas as pd

View File

@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def get_entries_by_category(filename, category):
# Read the file
with open(filename, "r") as file:

View File

@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from prometheus_api_client import PrometheusConnect

View File

@@ -0,0 +1,13 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import subprocess
import logging
import sys

View File

@@ -1 +1,14 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .client import *

View File

@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import datetime
@@ -46,7 +59,7 @@ def alerts(
sys.exit(1)
for alert in profile_yaml:
if list(alert.keys()).sort() != ["expr", "description", "severity"].sort():
if sorted(alert.keys()) != sorted(["expr", "description", "severity"]):
logging.error(f"wrong alert {alert}, skipping")
continue
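The `sorted()` change above fixes a subtle bug: `list.sort()` sorts in place and returns `None`, so the old comparison evaluated `None != None`, which is always `False`, and the malformed-alert check never fired. A minimal demonstration:

```python
keys = ["description", "expr", "severity"]
required = ["expr", "description", "severity"]

# Buggy form: .sort() returns None, so both sides are None and always "equal"
buggy_equal = list(keys).sort() == list(required).sort()
print(buggy_equal)  # True regardless of the actual keys

# Fixed form: sorted() returns a new list, enabling a real comparison
print(sorted(keys) == sorted(required))          # True (same keys)
print(sorted(keys) == sorted(["expr", "name"]))  # False (different keys)
```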
@@ -205,8 +218,8 @@ def metrics(
query
)
elif (
list(metric_query.keys()).sort()
== ["query", "metricName"].sort()
sorted(metric_query.keys())
== sorted(["query", "metricName"])
):
metrics_result = prom_cli.process_prom_query_in_range(
query,
@@ -214,7 +227,7 @@ def metrics(
end_time=datetime.datetime.fromtimestamp(end_time), granularity=30
)
else:
logging.info('didnt match keys')
logging.info("didn't match keys")
continue
for returned_metric in metrics_result:
@@ -251,7 +264,17 @@ def metrics(
for k,v in pod.items():
metric[k] = v
metric['timestamp'] = str(datetime.datetime.now())
print('adding pod' + str(metric))
logging.debug("adding pod %s", metric)
metrics_list.append(metric.copy())
for k,v in scenario.get("affected_vmis", {}).items():
metric_name = "affected_vmis_recovery"
metric = {"metricName": metric_name, "type": k}
if type(v) is list:
for vmi in v:
for k,v in vmi.items():
metric[k] = v
metric['timestamp'] = str(datetime.datetime.now())
logging.debug("adding vmi %s", metric)
metrics_list.append(metric.copy())
for affected_node in scenario["affected_nodes"]:
metric_name = "affected_nodes_recovery"

View File

@@ -0,0 +1,92 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import datetime
import logging
from typing import Dict, Any, List, Optional
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus
# -----------------------------------------------------------------------------
# SLO evaluation helpers (used by krkn.resiliency)
# -----------------------------------------------------------------------------
def slo_passed(prometheus_result: List[Any]) -> Optional[bool]:
if not prometheus_result:
return None
has_samples = False
for series in prometheus_result:
if "values" in series:
has_samples = True
for _ts, val in series["values"]:
try:
if float(val) > 0:
return False
except (TypeError, ValueError):
continue
elif "value" in series:
has_samples = True
try:
return float(series["value"][1]) == 0
except (TypeError, ValueError):
return False
# If we reached here and never saw any samples, skip
return None if not has_samples else True
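The pass/fail semantics above handle both Prometheus range results (`"values"`) and instant results (`"value"`). Reproducing `slo_passed` from the hunk above so this sketch is runnable standalone:

```python
from typing import Any, List, Optional

def slo_passed(prometheus_result: List[Any]) -> Optional[bool]:
    # Copied from the hunk above for a self-contained illustration
    if not prometheus_result:
        return None
    has_samples = False
    for series in prometheus_result:
        if "values" in series:
            has_samples = True
            for _ts, val in series["values"]:
                try:
                    if float(val) > 0:
                        return False
                except (TypeError, ValueError):
                    continue
        elif "value" in series:
            has_samples = True
            try:
                return float(series["value"][1]) == 0
            except (TypeError, ValueError):
                return False
    return None if not has_samples else True

# Range query: any sample > 0 fails the SLO
print(slo_passed([{"values": [(1, "0"), (2, "3")]}]))   # False
# Instant query with value 0 passes
print(slo_passed([{"value": (1, "0")}]))                # True
# No data at all: undetermined (the caller treats this as a pass)
print(slo_passed([]))                                   # None
```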
def evaluate_slos(
prom_cli: KrknPrometheus,
slo_list: List[Dict[str, Any]],
start_time: datetime.datetime,
end_time: datetime.datetime,
) -> Dict[str, bool]:
"""Evaluate a list of SLO expressions against Prometheus.
Args:
prom_cli: Configured Prometheus client.
slo_list: List of dicts with keys ``name``, ``expr``.
start_time: Start timestamp.
end_time: End timestamp.
Returns:
Mapping name -> bool indicating pass status.
True means the SLO check passed; False means it failed.
"""
results: Dict[str, bool] = {}
logging.info("Evaluating %d SLOs over window %s %s", len(slo_list), start_time, end_time)
for slo in slo_list:
expr = slo["expr"]
name = slo["name"]
try:
response = prom_cli.process_prom_query_in_range(
expr,
start_time=start_time,
end_time=end_time,
)
passed = slo_passed(response)
if passed is None:
# Absence of data indicates the condition did not trigger; treat as pass.
logging.debug("SLO '%s' query returned no data; assuming pass.", name)
results[name] = True
else:
results[name] = passed
except Exception as exc:
logging.error("PromQL query failed for SLO '%s': %s", name, exc)
results[name] = False
return results

View File

@@ -0,0 +1,17 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""krkn.resiliency package public interface."""
from .resiliency import Resiliency # noqa: F401
from .score import calculate_resiliency_score # noqa: F401

View File

@@ -0,0 +1,379 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Resiliency evaluation orchestrator for Krkn chaos runs.
This module provides the `Resiliency` class which loads the canonical
`alerts.yaml`, executes every SLO expression against Prometheus in the
chaos-test time window, determines pass/fail status and calculates an
overall resiliency score using the generic weighted model implemented
in `krkn.resiliency.score`.
"""
from __future__ import annotations
import datetime
import logging
import os
from typing import Dict, List, Any, Optional, Tuple
import yaml
import json
import dataclasses
from krkn_lib.models.telemetry import ChaosRunTelemetry
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus
from krkn.prometheus.collector import evaluate_slos
from krkn.resiliency.score import calculate_resiliency_score
class Resiliency:
"""Central orchestrator for resiliency scoring."""
def __init__(self, alerts_yaml_path: str):
if not os.path.exists(alerts_yaml_path):
raise FileNotFoundError(f"alerts file not found: {alerts_yaml_path}")
with open(alerts_yaml_path, "r", encoding="utf-8") as fp:
raw_yaml_data = yaml.safe_load(fp)
logging.info("Loaded SLO configuration from %s", alerts_yaml_path)
self._slos = self._normalise_alerts(raw_yaml_data)
self._results: Dict[str, bool] = {}
self._score: Optional[int] = None
self._breakdown: Optional[Dict[str, int]] = None
self._health_check_results: Dict[str, bool] = {}
self.scenario_reports: List[Dict[str, Any]] = []
self.summary: Optional[Dict[str, Any]] = None
self.detailed_report: Optional[Dict[str, Any]] = None
# ---------------------------------------------------------------------
# Public API
# ---------------------------------------------------------------------
def calculate_score(
self,
*,
health_check_results: Optional[Dict[str, bool]] = None,
) -> int:
"""Calculate the resiliency score using collected SLO results."""
slo_defs = {slo["name"]: {"severity": slo["severity"], "weight": slo.get("weight")} for slo in self._slos}
score, breakdown = calculate_resiliency_score(
slo_definitions=slo_defs,
prometheus_results=self._results,
health_check_results=health_check_results or {},
)
self._score = score
self._breakdown = breakdown
self._health_check_results = health_check_results or {}
return score
def to_dict(self) -> Dict[str, Any]:
"""Return a dictionary ready for telemetry output."""
if self._score is None:
raise RuntimeError("calculate_score() must be called before to_dict()")
return {
"score": self._score,
"breakdown": self._breakdown,
"slo_results": self._results,
"health_check_results": getattr(self, "_health_check_results", {}),
}
# ------------------------------------------------------------------
# Scenario-based resiliency evaluation
# ------------------------------------------------------------------
def add_scenario_report(
self,
*,
scenario_name: str,
prom_cli: KrknPrometheus,
start_time: datetime.datetime,
end_time: datetime.datetime,
weight: float | int = 1,
health_check_results: Optional[Dict[str, bool]] = None,
) -> int:
"""
Evaluate SLOs for a single scenario window and store the result.
Args:
scenario_name: Human-friendly scenario identifier.
prom_cli: Initialized KrknPrometheus instance.
start_time: Window start.
end_time: Window end.
weight: Weight to use for the final weighted average calculation.
health_check_results: Optional mapping of custom health-check name ➡ bool.
Returns:
The calculated integer resiliency score (0-100) for this scenario.
"""
slo_results = evaluate_slos(
prom_cli=prom_cli,
slo_list=self._slos,
start_time=start_time,
end_time=end_time,
)
slo_defs = {slo["name"]: {"severity": slo["severity"], "weight": slo.get("weight")} for slo in self._slos}
score, breakdown = calculate_resiliency_score(
slo_definitions=slo_defs,
prometheus_results=slo_results,
health_check_results=health_check_results or {},
)
self.scenario_reports.append(
{
"name": scenario_name,
"window": {
"start": start_time.isoformat(),
"end": end_time.isoformat(),
},
"score": score,
"weight": weight,
"breakdown": breakdown,
"slo_results": slo_results,
"health_check_results": health_check_results or {},
}
)
return score
def finalize_report(
self,
*,
prom_cli: KrknPrometheus,
total_start_time: datetime.datetime,
total_end_time: datetime.datetime,
) -> None:
if not self.scenario_reports:
raise RuntimeError("No scenario reports added; nothing to finalize")
# ---------------- Weighted average (primary resiliency_score) ----------
total_weight = sum(rep["weight"] for rep in self.scenario_reports)
resiliency_score = int(
sum(rep["score"] * rep["weight"] for rep in self.scenario_reports) / total_weight
)
# ---------------- Overall SLO evaluation across full test window -----------------------------
full_slo_results = evaluate_slos(
prom_cli=prom_cli,
slo_list=self._slos,
start_time=total_start_time,
end_time=total_end_time,
)
slo_defs = {slo["name"]: {"severity": slo["severity"], "weight": slo.get("weight")} for slo in self._slos}
_overall_score, full_breakdown = calculate_resiliency_score(
slo_definitions=slo_defs,
prometheus_results=full_slo_results,
health_check_results={},
)
self.summary = {
"scenarios": {rep["name"]: rep["score"] for rep in self.scenario_reports},
"resiliency_score": resiliency_score,
"passed_slos": full_breakdown.get("passed", 0),
"total_slos": full_breakdown.get("passed", 0) + full_breakdown.get("failed", 0),
}
# Detailed report currently limited to per-scenario information; system stability section removed
self.detailed_report = {
"scenarios": self.scenario_reports,
}
def get_summary(self) -> Dict[str, Any]:
"""Return the concise resiliency_summary structure."""
if not hasattr(self, "summary") or self.summary is None:
raise RuntimeError("finalize_report() must be called first")
return self.summary
def get_detailed_report(self) -> Dict[str, Any]:
"""Return the full resiliency-report structure."""
if not hasattr(self, "detailed_report") or self.detailed_report is None:
raise RuntimeError("finalize_report() must be called first")
return self.detailed_report
@staticmethod
def compact_breakdown(report: Dict[str, Any]) -> Dict[str, int]:
"""Return a compact summary dict for a single scenario report."""
try:
passed = report["breakdown"]["passed"]
failed = report["breakdown"]["failed"]
score_val = report["score"]
except Exception:
passed = report.get("breakdown", {}).get("passed", 0)
failed = report.get("breakdown", {}).get("failed", 0)
score_val = report.get("score", 0)
return {
"resiliency_score": score_val,
"passed_slos": passed,
"total_slos": passed + failed,
}
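The compaction above reduces a full scenario report to three fields; a minimal sketch with sample numbers:

```python
# A scenario report as stored by add_scenario_report (values illustrative)
report = {"score": 75, "breakdown": {"passed": 3, "failed": 1}}
compact = {
    "resiliency_score": report["score"],
    "passed_slos": report["breakdown"]["passed"],
    "total_slos": report["breakdown"]["passed"] + report["breakdown"]["failed"],
}
print(compact)  # {'resiliency_score': 75, 'passed_slos': 3, 'total_slos': 4}
```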
def attach_compact_to_telemetry(self, chaos_telemetry: ChaosRunTelemetry) -> None:
"""Embed per-scenario compact resiliency reports into a ChaosRunTelemetry instance."""
score_map = {
rep["name"]: self.compact_breakdown(rep) for rep in self.scenario_reports
}
new_scenarios = []
for item in getattr(chaos_telemetry, "scenarios", []):
if isinstance(item, dict):
name = item.get("scenario")
if name in score_map:
item["resiliency_report"] = score_map[name]
new_scenarios.append(item)
else:
name = getattr(item, "scenario", None)
try:
item_dict = dataclasses.asdict(item)
except Exception:
item_dict = {
k: getattr(item, k)
for k in dir(item)
if not k.startswith("__") and not callable(getattr(item, k))
}
if name in score_map:
item_dict["resiliency_report"] = score_map[name]
new_scenarios.append(item_dict)
chaos_telemetry.scenarios = new_scenarios
def add_scenario_reports(
self,
*,
scenario_telemetries,
prom_cli: KrknPrometheus,
scenario_type: str,
batch_start_dt: datetime.datetime,
batch_end_dt: datetime.datetime,
weight: int | float = 1,
) -> None:
"""Evaluate SLOs for every telemetry item belonging to a scenario window,
store the result and enrich the telemetry list with a compact resiliency breakdown.
Args:
scenario_telemetries: Iterable with telemetry objects/dicts for the
current scenario batch window.
prom_cli: Pre-configured :class:`KrknPrometheus` instance.
scenario_type: Fallback scenario identifier in case individual
telemetry items do not provide one.
batch_start_dt: Fallback start timestamp for the batch window.
batch_end_dt: Fallback end timestamp for the batch window.
weight: Weight to assign to every scenario when calculating the final
weighted average.
"""
for tel in scenario_telemetries:
try:
# -------- Extract timestamps & scenario name --------------------
if isinstance(tel, dict):
st_ts = tel.get("start_timestamp")
en_ts = tel.get("end_timestamp")
scen_name = tel.get("scenario", scenario_type)
else:
st_ts = getattr(tel, "start_timestamp", None)
en_ts = getattr(tel, "end_timestamp", None)
scen_name = getattr(tel, "scenario", scenario_type)
if st_ts and en_ts:
st_dt = datetime.datetime.fromtimestamp(int(st_ts))
en_dt = datetime.datetime.fromtimestamp(int(en_ts))
else:
st_dt = batch_start_dt
en_dt = batch_end_dt
# -------- Calculate resiliency score for the scenario -----------
self.add_scenario_report(
scenario_name=str(scen_name),
prom_cli=prom_cli,
start_time=st_dt,
end_time=en_dt,
weight=weight,
health_check_results=None,
)
compact = self.compact_breakdown(self.scenario_reports[-1])
if isinstance(tel, dict):
tel["resiliency_report"] = compact
else:
setattr(tel, "resiliency_report", compact)
except Exception as exc:
logging.error("Resiliency per-scenario evaluation failed: %s", exc)
def finalize_and_save(
self,
*,
prom_cli: KrknPrometheus,
total_start_time: datetime.datetime,
total_end_time: datetime.datetime,
run_mode: str = "standalone",
detailed_path: str = "resiliency-report.json",
) -> Tuple[Dict[str, Any], Dict[str, Any]]:
"""Finalize resiliency scoring, persist reports and return them.
Args:
prom_cli: Pre-configured KrknPrometheus instance.
total_start_time: Start time for the full test window.
total_end_time: End time for the full test window.
run_mode: "detailed" or "standalone" mode.
detailed_path: Destination file for the detailed report in standalone mode.
Returns:
Tuple of (summary, detailed_report) dictionaries.
"""
try:
self.finalize_report(
prom_cli=prom_cli,
total_start_time=total_start_time,
total_end_time=total_end_time,
)
detailed = self.get_detailed_report()
if run_mode == "detailed":
# krknctl expects the detailed report on stdout in a special format
try:
detailed_json = json.dumps(detailed)
print(f"KRKN_RESILIENCY_REPORT_JSON:{detailed_json}")
logging.info("Resiliency report logged to stdout for krknctl.")
except Exception as exc:
logging.error("Failed to serialize and log detailed resiliency report: %s", exc)
else:
# Stand-alone mode write to files for post-run consumption
try:
with open(detailed_path, "w", encoding="utf-8") as fp:
json.dump(detailed, fp, indent=2)
logging.info("Resiliency report written: %s", detailed_path)
except Exception as io_exc:
logging.error("Failed to write resiliency report files: %s", io_exc)
return self.get_summary(), detailed
except Exception as exc:
logging.error("Failed to finalize resiliency scoring: %s", exc)
return {}, {}
# ------------------------------------------------------------------
# Internal helpers
# ------------------------------------------------------------------
@staticmethod
def _normalise_alerts(raw_alerts: Any) -> List[Dict[str, Any]]:
"""Convert raw YAML alerts data into internal SLO list structure."""
if not isinstance(raw_alerts, list):
raise ValueError("SLO configuration must be a list (top-level or under the 'slos' key)")
slos: List[Dict[str, Any]] = []
for idx, alert in enumerate(raw_alerts):
if not (isinstance(alert, dict) and "expr" in alert and "severity" in alert):
logging.warning("Skipping invalid alert entry at index %d: %s", idx, alert)
continue
name = alert.get("description") or f"slo_{idx}"
slos.append(
{
"name": name,
"expr": alert["expr"],
"severity": str(alert["severity"]).lower(),
"weight": alert.get("weight")
}
)
return slos
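A self-contained sketch of the normalisation above (the sample alert entries are illustrative):

```python
import logging

def normalise_alerts(raw_alerts):
    # Entries must be dicts carrying at least 'expr' and 'severity'
    if not isinstance(raw_alerts, list):
        raise ValueError("SLO configuration must be a list")
    slos = []
    for idx, alert in enumerate(raw_alerts):
        if not (isinstance(alert, dict) and "expr" in alert and "severity" in alert):
            logging.warning("Skipping invalid alert entry at index %d", idx)
            continue
        slos.append({
            "name": alert.get("description") or f"slo_{idx}",
            "expr": alert["expr"],
            "severity": str(alert["severity"]).lower(),
            "weight": alert.get("weight"),
        })
    return slos

raw = [
    {"expr": "up == 0", "severity": "CRITICAL", "description": "targets down"},
    {"expr": "rate(errors[5m]) > 0.1"},  # no severity -> skipped
]
print(normalise_alerts(raw))
```

Severity is lowercased and a missing description falls back to a positional `slo_<idx>` name.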

krkn/resiliency/score.py Normal file

@@ -0,0 +1,89 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
from typing import Dict, List, Tuple
DEFAULT_WEIGHTS = {"critical": 3, "warning": 1}
class SLOResult:
"""Simple container representing evaluation outcome for a single SLO."""
def __init__(self, name: str, severity: str, passed: bool, weight: int | None = None):
self.name = name
self.severity = severity
self.passed = passed
self._custom_weight = weight
def weight(self, severity_weights: Dict[str, int]) -> int:
"""Return the weight for this SLO. Uses custom weight if set, otherwise uses severity-based weight."""
if self._custom_weight is not None:
return self._custom_weight
return severity_weights.get(self.severity, severity_weights.get("warning", 1))
def calculate_resiliency_score(
slo_definitions: Dict[str, str] | Dict[str, Dict[str, int | str | None]],
prometheus_results: Dict[str, bool],
health_check_results: Dict[str, bool],
) -> Tuple[int, Dict[str, int]]:
"""Compute a resiliency score between 0-100 based on SLO pass/fail results.
Args:
slo_definitions: Mapping of SLO name -> severity ("critical" | "warning") OR
SLO name -> {"severity": str, "weight": int | None}.
prometheus_results: Mapping of SLO name -> bool indicating whether the SLO
passed. Any SLO missing in this mapping is treated as failed.
health_check_results: Mapping of custom health-check name -> bool pass flag.
These checks are always treated as *critical*.
Returns:
Tuple containing (final_score, breakdown) where *breakdown* is a dict with
the counts of passed/failed SLOs per severity.
"""
slo_objects: List[SLOResult] = []
for slo_name, slo_def in slo_definitions.items():
# Exclude SLOs that were not evaluated (query returned no data)
if slo_name not in prometheus_results:
continue
passed = bool(prometheus_results[slo_name])
# Support both old format (str) and new format (dict)
if isinstance(slo_def, str):
severity = slo_def
slo_weight = None
else:
severity = slo_def.get("severity", "warning")
slo_weight = slo_def.get("weight")
slo_objects.append(SLOResult(slo_name, severity, passed, weight=slo_weight))
# Health-check SLOs (by default keeping them critical)
for hc_name, hc_passed in health_check_results.items():
slo_objects.append(SLOResult(hc_name, "critical", bool(hc_passed)))
total_points = sum(slo.weight(DEFAULT_WEIGHTS) for slo in slo_objects)
points_lost = sum(slo.weight(DEFAULT_WEIGHTS) for slo in slo_objects if not slo.passed)
score = 0 if total_points == 0 else int(((total_points - points_lost) / total_points) * 100)
breakdown = {
"total_points": total_points,
"points_lost": points_lost,
"passed": len([s for s in slo_objects if s.passed]),
"failed": len([s for s in slo_objects if not s.passed]),
}
return score, breakdown
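The scoring rule above can be exercised with a standalone sketch that mirrors its logic outside the module (names and SLO examples are illustrative):

```python
DEFAULT_WEIGHTS = {"critical": 3, "warning": 1}

def resiliency_score(slo_defs, prom_results, health_checks):
    # (weight, passed) pairs; SLOs absent from prom_results were not evaluated
    entries = []
    for name, severity in slo_defs.items():
        if name not in prom_results:
            continue
        entries.append((DEFAULT_WEIGHTS.get(severity, 1), bool(prom_results[name])))
    for name, ok in health_checks.items():
        # health checks are always treated as critical, as above
        entries.append((DEFAULT_WEIGHTS["critical"], bool(ok)))
    total = sum(w for w, _ in entries)
    lost = sum(w for w, ok in entries if not ok)
    return 0 if total == 0 else int((total - lost) / total * 100)

# one passing critical SLO (3 pts kept) and one failing warning SLO (1 pt lost)
print(resiliency_score(
    {"api_latency": "critical", "error_rate": "warning"},
    {"api_latency": True, "error_rate": False},
    {},
))  # 3 of 4 points survive -> 75
```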


@@ -0,0 +1,13 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import logging
from typing import Optional, TYPE_CHECKING


@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
from dataclasses import dataclass


@@ -1,3 +1,19 @@
#!/usr/bin/env python
#
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
@@ -132,8 +148,8 @@ def execute_rollback_version_files(
:param ignore_auto_rollback_config: Flag to ignore auto rollback configuration. Will be set to True for manual execute-rollback calls.
"""
if not ignore_auto_rollback_config and RollbackConfig().auto is False:
logger.warning(f"Auto rollback is disabled, skipping execution for run_uuid={run_uuid or '*'}, scenario_type={scenario_type or '*'}")
return
logger.warning(f"Auto rollback is disabled, skipping execution for run_uuid={run_uuid or '*'}, scenario_type={scenario_type or '*'}")
return
# Get the rollback versions directory
version_files = RollbackConfig.search_rollback_version_files(run_uuid, scenario_type)


@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import os
import logging


@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Dict, Any, Optional
import threading
import signal


@@ -0,0 +1,13 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@@ -1,10 +1,24 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
import time
from abc import ABC, abstractmethod
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn import utils
from krkn import utils, cerberus
from krkn.rollback.handler import (
RollbackHandler,
execute_rollback_version_files,
@@ -30,7 +44,6 @@ class AbstractScenarioPlugin(ABC):
self,
run_uuid: str,
scenario: str,
krkn_config: dict[str, any],
lib_telemetry: KrknTelemetryOpenshift,
scenario_telemetry: ScenarioTelemetry,
) -> int:
@@ -87,6 +100,16 @@ class AbstractScenarioPlugin(ABC):
scenario_telemetry.scenario = scenario_config
scenario_telemetry.scenario_type = self.get_scenario_types()[0]
scenario_telemetry.start_timestamp = time.time()
if not os.path.exists(scenario_config):
logging.error(
f"scenario file not found: '{scenario_config}' -- "
f"check that the path is correct relative to the working directory: {os.getcwd()}"
)
failed_scenarios.append(scenario_config)
scenario_telemetry.exit_status = 1
scenario_telemetry.end_timestamp = time.time()
scenario_telemetries.append(scenario_telemetry)
continue
parsed_scenario_config = telemetry.set_parameters_base64(
scenario_telemetry, scenario_config
)
@@ -104,7 +127,6 @@ class AbstractScenarioPlugin(ABC):
return_value = self.run(
run_uuid=run_uuid,
scenario=scenario_config,
krkn_config=krkn_config,
lib_telemetry=telemetry,
scenario_telemetry=scenario_telemetry,
)
@@ -126,12 +148,14 @@ class AbstractScenarioPlugin(ABC):
)
scenario_telemetry.exit_status = return_value
scenario_telemetry.end_timestamp = time.time()
start_time = int(scenario_telemetry.start_timestamp)
end_time = int(scenario_telemetry.end_timestamp)
utils.collect_and_put_ocp_logs(
telemetry,
parsed_scenario_config,
telemetry.get_telemetry_request_id(),
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp),
start_time,
end_time
)
if events_backup:
@@ -139,15 +163,17 @@ class AbstractScenarioPlugin(ABC):
krkn_config,
parsed_scenario_config,
telemetry.get_lib_kubernetes(),
int(scenario_telemetry.start_timestamp),
int(scenario_telemetry.end_timestamp),
start_time,
end_time
)
if scenario_telemetry.exit_status != 0:
failed_scenarios.append(scenario_config)
scenario_telemetries.append(scenario_telemetry)
logging.info(f"wating {wait_duration} before running the next scenario")
cerberus.publish_kraken_status(start_time,end_time)
logging.info(f"waiting {wait_duration} before running the next scenario")
time.sleep(wait_duration)
return failed_scenarios, scenario_telemetries


@@ -0,0 +1,13 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import time
import yaml
@@ -5,7 +18,6 @@ from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import get_yaml_item_value, get_random_string
from jinja2 import Template
from krkn import cerberus
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
from krkn.rollback.config import RollbackContent
from krkn.rollback.handler import set_rollback_context_decorator
@@ -17,11 +29,9 @@ class ApplicationOutageScenarioPlugin(AbstractScenarioPlugin):
self,
run_uuid: str,
scenario: str,
krkn_config: dict[str, any],
lib_telemetry: KrknTelemetryOpenshift,
scenario_telemetry: ScenarioTelemetry,
) -> int:
wait_duration = krkn_config["tunings"]["wait_duration"]
try:
with open(scenario, "r") as f:
app_outage_config_yaml = yaml.full_load(f)
@@ -110,14 +120,8 @@ class ApplicationOutageScenarioPlugin(AbstractScenarioPlugin):
policy_name, namespace
)
logging.info(
"End of scenario. Waiting for the specified duration: %s"
% wait_duration
)
time.sleep(wait_duration)
end_time = int(time.time())
cerberus.publish_kraken_status(krkn_config, [], start_time, end_time)
except Exception as e:
logging.error(
"ApplicationOutageScenarioPlugin exiting due to Exception %s" % e


@@ -0,0 +1,13 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import random
import time
@@ -10,7 +23,6 @@ from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import get_yaml_item_value
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
@@ -19,7 +31,6 @@ class ContainerScenarioPlugin(AbstractScenarioPlugin):
self,
run_uuid: str,
scenario: str,
krkn_config: dict[str, any],
lib_telemetry: KrknTelemetryOpenshift,
scenario_telemetry: ScenarioTelemetry,
) -> int:


@@ -0,0 +1,13 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import logging
import queue
@@ -23,7 +36,7 @@ from krkn.rollback.handler import set_rollback_context_decorator
class HogsScenarioPlugin(AbstractScenarioPlugin):
@set_rollback_context_decorator
def run(self, run_uuid: str, scenario: str, krkn_config: dict[str, any], lib_telemetry: KrknTelemetryOpenshift,
def run(self, run_uuid: str, scenario: str, lib_telemetry: KrknTelemetryOpenshift,
scenario_telemetry: ScenarioTelemetry) -> int:
try:
with open(scenario, "r") as f:
@@ -53,7 +66,7 @@ class HogsScenarioPlugin(AbstractScenarioPlugin):
raise Exception("no available nodes to schedule workload")
if not has_selector:
available_nodes = [available_nodes[random.randint(0, len(available_nodes))]]
available_nodes = [available_nodes[random.randint(0, len(available_nodes) - 1)]]
if scenario_config.number_of_nodes and len(available_nodes) > scenario_config.number_of_nodes:
available_nodes = random.sample(available_nodes, scenario_config.number_of_nodes)


@@ -0,0 +1,563 @@
import base64
import json
import logging
import time
from typing import Dict, List, Any
import yaml
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import get_random_string
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
from krkn.rollback.config import RollbackContent
from krkn.rollback.handler import set_rollback_context_decorator
class HttpLoadScenarioPlugin(AbstractScenarioPlugin):
"""
HTTP Load Testing Scenario Plugin using Vegeta.
Deploys Vegeta load testing pods inside the Kubernetes cluster for distributed
HTTP load testing. Supports multiple concurrent pods, node affinity, authentication,
and comprehensive results collection.
"""
def __init__(self, scenario_type: str = "http_load_scenarios"):
super().__init__(scenario_type=scenario_type)
@set_rollback_context_decorator
def run(
self,
run_uuid: str,
scenario: str,
lib_telemetry: KrknTelemetryOpenshift,
scenario_telemetry: ScenarioTelemetry,
) -> int:
"""
Main entry point for HTTP load scenario execution.
Deploys Vegeta load testing pods inside the cluster for distributed load testing.
:param run_uuid: Unique identifier for this chaos run
:param scenario: Path to scenario configuration file
:param lib_telemetry: Telemetry object for Kubernetes operations
:param scenario_telemetry: Telemetry object for this scenario
:return: 0 on success, 1 on failure
"""
try:
# Load scenario configuration
with open(scenario, "r") as f:
scenario_configs = yaml.full_load(f)
if not scenario_configs:
logging.error("Empty scenario configuration file")
return 1
# Process each scenario configuration
for scenario_config in scenario_configs:
if not isinstance(scenario_config, dict):
logging.error(f"Invalid scenario configuration format: {scenario_config}")
return 1
# Get the http_load_scenario configuration
config = scenario_config.get("http_load_scenario", scenario_config)
# Validate configuration
if not self._validate_config(config):
return 1
# Execute the load test (deploy pods)
result = self._execute_distributed_load_test(
config,
lib_telemetry,
scenario_telemetry
)
if result != 0:
return result
logging.info("HTTP load test completed successfully")
return 0
except Exception as e:
logging.error(f"HTTP load scenario failed with exception: {e}")
import traceback
logging.error(traceback.format_exc())
return 1
def get_scenario_types(self) -> list[str]:
"""Return the scenario types this plugin handles."""
return ["http_load_scenarios"]
def _validate_config(self, config: Dict[str, Any]) -> bool:
"""
Validate scenario configuration.
:param config: Scenario configuration dictionary
:return: True if valid, False otherwise
"""
# Check for required fields
if "targets" not in config:
logging.error("Missing required field: targets")
return False
targets = config["targets"]
# Validate targets configuration
if "endpoints" not in targets:
logging.error("targets must contain 'endpoints'")
return False
endpoints = targets["endpoints"]
if not isinstance(endpoints, list) or len(endpoints) == 0:
logging.error("endpoints must be a non-empty list")
return False
# Validate each endpoint
for idx, endpoint in enumerate(endpoints):
if not isinstance(endpoint, dict):
logging.error(f"Endpoint {idx} must be a dictionary")
return False
if "url" not in endpoint:
logging.error(f"Endpoint {idx} missing required field: url")
return False
if "method" not in endpoint:
logging.error(f"Endpoint {idx} missing required field: method")
return False
# Validate rate format
if "rate" in config:
rate = config["rate"]
if not isinstance(rate, (str, int)):
logging.error("rate must be a string (e.g., '200/1s') or integer")
return False
# Validate duration format
if "duration" in config:
duration = config["duration"]
if not isinstance(duration, (str, int)):
logging.error("duration must be a string (e.g., '30s') or integer")
return False
return True
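A minimal configuration that satisfies these checks, expressed as the parsed dict the validator receives (field values are illustrative):

```python
config = {
    "targets": {
        "endpoints": [
            {"url": "http://svc/health", "method": "GET"},
        ]
    },
    "rate": "200/1s",
    "duration": "30s",
}

# The required-field checks above boil down to:
assert "targets" in config and "endpoints" in config["targets"]
assert all(
    isinstance(ep, dict) and "url" in ep and "method" in ep
    for ep in config["targets"]["endpoints"]
)
```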
def _execute_distributed_load_test(
self,
config: Dict[str, Any],
lib_telemetry: KrknTelemetryOpenshift,
scenario_telemetry: ScenarioTelemetry
) -> int:
"""
Execute distributed HTTP load test by deploying Vegeta pods.
:param config: Scenario configuration
:param lib_telemetry: Telemetry object for Kubernetes operations
:param scenario_telemetry: Telemetry object for recording results
:return: 0 on success, 1 on failure
"""
pod_names = []
namespace = config.get("namespace", "default")
try:
# Get number of pods to deploy
number_of_pods = config.get("number-of-pods", 1)
# Get container image
image = config.get("image", "quay.io/krkn-chaos/krkn-http-load:latest")
# Get endpoints
endpoints = config.get("targets", {}).get("endpoints", [])
if not endpoints:
logging.error("No endpoints specified in targets")
return 1
# Build Vegeta JSON targets for all endpoints (round-robin)
targets_json = self._build_vegeta_json_targets(endpoints)
targets_json_base64 = base64.b64encode(targets_json.encode()).decode()
target_urls = [ep["url"] for ep in endpoints]
logging.info(f"Targeting {len(endpoints)} endpoint(s): {target_urls}")
# Get node selectors for pod placement
node_selectors = config.get("attacker-nodes")
# Deploy multiple Vegeta pods
logging.info(f"Deploying {number_of_pods} HTTP load testing pod(s)")
for i in range(number_of_pods):
pod_name = f"http-load-{get_random_string(10)}"
logging.info(f"Deploying pod {i+1}/{number_of_pods}: {pod_name}")
# Deploy pod using krkn-lib
lib_telemetry.get_lib_kubernetes().deploy_http_load(
name=pod_name,
namespace=namespace,
image=image,
targets_json_base64=targets_json_base64,
duration=config.get("duration", "30s"),
rate=config.get("rate", "50/1s"),
workers=config.get("workers", 10),
max_workers=config.get("max_workers", 100),
connections=config.get("connections", 100),
timeout=config.get("timeout", "10s"),
keepalive=config.get("keepalive", True),
http2=config.get("http2", True),
insecure=config.get("insecure", False),
node_selectors=node_selectors,
timeout_sec=500
)
pod_names.append(pod_name)
# Set rollback callable for pod cleanup
rollback_data = base64.b64encode(json.dumps(pod_names).encode('utf-8')).decode('utf-8')
self.rollback_handler.set_rollback_callable(
self.rollback_http_load_pods,
RollbackContent(
namespace=namespace,
resource_identifier=rollback_data,
),
)
logging.info(f"Successfully deployed {len(pod_names)} HTTP load pod(s)")
# Wait for all pods to complete
logging.info("Waiting for all HTTP load pods to complete...")
self._wait_for_pods_completion(pod_names, namespace, lib_telemetry, config)
# Collect and aggregate results from all pods
metrics = self._collect_and_aggregate_results(pod_names, namespace, lib_telemetry)
if metrics:
# Log metrics summary
self._log_metrics_summary(metrics)
# Store metrics in telemetry
scenario_telemetry.additional_telemetry = metrics
logging.info("HTTP load test completed successfully")
return 0
except Exception as e:
logging.error(f"Error executing distributed load test: {e}")
import traceback
logging.error(traceback.format_exc())
return 1
def _build_vegeta_json_targets(self, endpoints: List[Dict[str, Any]]) -> str:
"""
Build newline-delimited Vegeta JSON targets from all endpoints.
Vegeta round-robins across targets when multiple are provided.
Each line is a JSON object: {"method":"GET","url":"...","header":{...},"body":"base64..."}
:param endpoints: List of endpoint configurations
:return: Newline-delimited JSON string
"""
lines = []
for ep in endpoints:
target = {
"method": ep.get("method", "GET"),
"url": ep["url"],
}
# Add headers
if "headers" in ep and ep["headers"]:
target["header"] = {k: [v] for k, v in ep["headers"].items()}
# Add body (base64 encoded as Vegeta JSON format expects)
if "body" in ep and ep["body"]:
target["body"] = base64.b64encode(ep["body"].encode()).decode()
lines.append(json.dumps(target, separators=(",", ":")))
return "\n".join(lines)
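Outside the class, the newline-delimited target format built here can be sketched as a standalone function (the endpoint dicts and URLs below are illustrative, not taken from the scenario schema):

```python
import base64
import json

def build_targets(endpoints):
    # Emit one compact JSON object per line; Vegeta's JSON targeter
    # round-robins across these lines during the attack.
    lines = []
    for ep in endpoints:
        target = {"method": ep.get("method", "GET"), "url": ep["url"]}
        if ep.get("headers"):
            # Vegeta expects header values as lists of strings
            target["header"] = {k: [v] for k, v in ep["headers"].items()}
        if ep.get("body"):
            # Bodies are base64-encoded in the JSON target format
            target["body"] = base64.b64encode(ep["body"].encode()).decode()
        lines.append(json.dumps(target, separators=(",", ":")))
    return "\n".join(lines)

targets = build_targets([
    {"url": "http://svc:8080/health"},
    {"url": "http://svc:8080/api", "method": "POST", "body": "{}",
     "headers": {"Content-Type": "application/json"}},
])
print(targets)
```

The resulting string is base64-encoded as a whole before being handed to the pod, which avoids any quoting issues when it is injected into the container environment.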
def _wait_for_pods_completion(
self,
pod_names: List[str],
namespace: str,
lib_telemetry: KrknTelemetryOpenshift,
config: Dict[str, Any]
):
"""
Wait for all HTTP load pods to complete.
:param pod_names: List of pod names to wait for
:param namespace: Namespace where pods are running
:param lib_telemetry: Telemetry object for Kubernetes operations
:param config: Scenario configuration
"""
lib_k8s = lib_telemetry.get_lib_kubernetes()
finished_pods = []
did_finish = False
# Calculate max wait time (duration + buffer)
duration_str = config.get("duration", "30s")
max_wait = self._parse_duration_to_seconds(duration_str) + 60 # Add 60s buffer
start_time = time.time()
while not did_finish:
for pod_name in pod_names:
if pod_name not in finished_pods:
if not lib_k8s.is_pod_running(pod_name, namespace):
finished_pods.append(pod_name)
logging.info(f"Pod {pod_name} has completed")
if set(pod_names) == set(finished_pods):
did_finish = True
break
# Check timeout
if time.time() - start_time > max_wait:
logging.warning(f"Timeout waiting for pods to complete (waited {max_wait}s)")
break
time.sleep(5)
logging.info(f"{len(finished_pods)}/{len(pod_names)} pods completed before the wait ended")
def _collect_and_aggregate_results(
self,
pod_names: List[str],
namespace: str,
lib_telemetry: KrknTelemetryOpenshift
) -> Dict[str, Any]:
"""
Collect results from all pods and aggregate metrics.
:param pod_names: List of pod names
:param namespace: Namespace where pods ran
:param lib_telemetry: Telemetry object for Kubernetes operations
:return: Aggregated metrics dictionary
"""
lib_k8s = lib_telemetry.get_lib_kubernetes()
all_metrics = []
logging.info("Collecting results from HTTP load pods...")
for pod_name in pod_names:
try:
# Read pod logs to get results
log_response = lib_k8s.get_pod_log(pod_name, namespace)
# Handle HTTPResponse object from kubernetes client
if hasattr(log_response, 'data'):
logs = log_response.data.decode('utf-8') if isinstance(log_response.data, bytes) else str(log_response.data)
elif hasattr(log_response, 'read'):
logs = log_response.read().decode('utf-8')
else:
logs = str(log_response)
# Parse JSON report from logs
metrics = self._parse_metrics_from_logs(logs)
if metrics:
all_metrics.append(metrics)
logging.info(f"Collected metrics from pod: {pod_name}")
else:
logging.warning(f"No metrics found in logs for pod: {pod_name}")
except Exception as e:
logging.warning(f"Failed to collect results from pod {pod_name}: {e}")
if not all_metrics:
logging.warning("No metrics collected from any pods")
return {}
# Aggregate metrics from all pods
aggregated = self._aggregate_metrics(all_metrics)
logging.info(f"Aggregated metrics from {len(all_metrics)} pod(s)")
return aggregated
def _parse_metrics_from_logs(self, logs: str) -> Optional[Dict[str, Any]]:
"""
Parse Vegeta JSON metrics from pod logs.
:param logs: Pod logs
:return: Metrics dictionary or None
"""
try:
# Look for JSON report section in logs
for line in logs.split('\n'):
line = line.strip()
if line.startswith('{') and '"latencies"' in line:
return json.loads(line)
return None
except Exception as e:
logging.warning(f"Failed to parse metrics from logs: {e}")
return None
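The log-scraping approach relies on the Vegeta JSON report being a single line containing a `latencies` key; a minimal sketch, with a fabricated log sample standing in for real pod output:

```python
import json

def parse_report(logs):
    # The report is one JSON object printed among other log lines;
    # keying on the "latencies" field distinguishes it from other JSON.
    for line in logs.split("\n"):
        line = line.strip()
        if line.startswith("{") and '"latencies"' in line:
            return json.loads(line)
    return None

sample_logs = """starting attack...
{"latencies":{"mean":2500000,"99th":9000000},"requests":1500,"success":0.998}
attack complete"""

report = parse_report(sample_logs)
```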
def _aggregate_metrics(self, metrics_list: List[Dict[str, Any]]) -> Dict[str, Any]:
"""
Aggregate metrics from multiple pods.
:param metrics_list: List of metrics dictionaries from each pod
:return: Aggregated metrics
"""
if not metrics_list:
return {}
# Sum totals
total_requests = sum(m.get("requests", 0) for m in metrics_list)
total_rate = sum(m.get("rate", 0) for m in metrics_list)
total_throughput = sum(m.get("throughput", 0) for m in metrics_list)
# Weighted-average the central latency stats by request count;
# max/min are taken across pods rather than averaged
latencies = {}
if total_requests > 0:
for stat in ["mean", "50th", "95th", "99th"]:
weighted_sum = sum(
m.get("latencies", {}).get(stat, 0) * m.get("requests", 0)
for m in metrics_list
)
latencies[stat] = weighted_sum / total_requests
latencies["max"] = max(m.get("latencies", {}).get("max", 0) for m in metrics_list)
latencies["min"] = min(m.get("latencies", {}).get("min", 0) for m in metrics_list)
# Average success rate (weighted by request count)
total_success = sum(
m.get("success", 0) * m.get("requests", 0)
for m in metrics_list
)
success_rate = total_success / total_requests if total_requests > 0 else 0
# Aggregate status codes
status_codes = {}
for metrics in metrics_list:
for code, count in metrics.get("status_codes", {}).items():
status_codes[code] = status_codes.get(code, 0) + count
# Aggregate bytes
bytes_in_total = sum(m.get("bytes_in", {}).get("total", 0) for m in metrics_list)
bytes_out_total = sum(m.get("bytes_out", {}).get("total", 0) for m in metrics_list)
# Aggregate errors
all_errors = []
for metrics in metrics_list:
all_errors.extend(metrics.get("errors", []))
return {
"requests": total_requests,
"rate": total_rate,
"throughput": total_throughput,
"latencies": latencies,
"success": success_rate,
"status_codes": status_codes,
"bytes_in": {"total": bytes_in_total},
"bytes_out": {"total": bytes_out_total},
"errors": all_errors[:10], # First 10 errors only
"pod_count": len(metrics_list)
}
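The request-weighted averaging used for the success rate can be checked in isolation; a small sketch with made-up pod numbers:

```python
def weighted_success(metrics_list):
    # Success rates are per-pod fractions; weighting by each pod's
    # request count yields the overall fraction of successful requests.
    total = sum(m["requests"] for m in metrics_list)
    if total == 0:
        return 0
    return sum(m["success"] * m["requests"] for m in metrics_list) / total

# Two pods: 1000 requests at 99% success, 3000 requests at 95% success
rate = weighted_success([
    {"requests": 1000, "success": 0.99},
    {"requests": 3000, "success": 0.95},
])
# (990 + 2850) / 4000 = 0.96
```

A plain (unweighted) mean of 0.99 and 0.95 would give 0.97, overstating the success rate because the busier pod saw more failures.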
def _parse_duration_to_seconds(self, duration: str) -> int:
"""
Parse duration string to seconds.
:param duration: Duration string like "30s", "5m", "1h"
:return: Duration in seconds
"""
import re
match = re.match(r'^(\d+)(s|m|h)$', str(duration))
if not match:
logging.warning(f"Invalid duration format: {duration}, defaulting to 30s")
return 30
value = int(match.group(1))
unit = match.group(2)
multipliers = {
"s": 1,
"m": 60,
"h": 3600,
}
return value * multipliers.get(unit, 1)
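A standalone sketch of the same single-unit duration grammar, useful for sanity-checking which forms are accepted (compound durations like "1h30m" fall back to the default):

```python
import re

def to_seconds(duration, default=30):
    # Accepts Vegeta-style single-unit durations: "30s", "5m", "1h"
    match = re.match(r"^(\d+)(s|m|h)$", str(duration))
    if not match:
        return default
    return int(match.group(1)) * {"s": 1, "m": 60, "h": 3600}[match.group(2)]
```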
@staticmethod
def rollback_http_load_pods(
rollback_content: RollbackContent,
lib_telemetry: KrknTelemetryOpenshift
):
"""
Rollback function to delete HTTP load pods.
:param rollback_content: Rollback content containing namespace and pod names
:param lib_telemetry: Instance of KrknTelemetryOpenshift for Kubernetes operations
"""
try:
namespace = rollback_content.namespace
pod_names = json.loads(
base64.b64decode(rollback_content.resource_identifier.encode('utf-8')).decode('utf-8')
)
logging.info(f"Rolling back HTTP load pods: {pod_names} in namespace: {namespace}")
for pod_name in pod_names:
try:
lib_telemetry.get_lib_kubernetes().delete_pod(pod_name, namespace)
logging.info(f"Deleted pod: {pod_name}")
except Exception as e:
logging.warning(f"Failed to delete pod {pod_name}: {e}")
logging.info("Rollback of HTTP load pods completed")
except Exception as e:
logging.error(f"Failed to rollback HTTP load pods: {e}")
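The rollback identifier is just the pod list serialized through JSON and base64, so it survives transport as a single opaque string; a round-trip sketch (pod names are hypothetical):

```python
import base64
import json

def encode_identifier(pod_names):
    # RollbackContent carries one string, so the list is JSON-encoded
    # and then base64-encoded for safe transport.
    return base64.b64encode(json.dumps(pod_names).encode("utf-8")).decode("utf-8")

def decode_identifier(identifier):
    # Inverse of encode_identifier, as done at rollback time
    return json.loads(base64.b64decode(identifier.encode("utf-8")).decode("utf-8"))

pods = ["http-load-abc123", "http-load-def456"]
token = encode_identifier(pods)
restored = decode_identifier(token)
```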
def _log_metrics_summary(self, metrics: Dict[str, Any]):
"""Log summary of test metrics."""
logging.info("=" * 60)
logging.info("HTTP Load Test Results Summary (Aggregated)")
logging.info("=" * 60)
# Pod count
pod_count = metrics.get("pod_count", 1)
logging.info(f"Load Generator Pods: {pod_count}")
# Request statistics
requests = metrics.get("requests", 0)
logging.info(f"Total Requests: {requests}")
# Success rate
success = metrics.get("success", 0.0)
logging.info(f"Success Rate: {success * 100:.2f}%")
# Latency statistics
latencies = metrics.get("latencies", {})
if latencies:
logging.info(f"Latency Mean: {latencies.get('mean', 0) / 1e6:.2f} ms")
logging.info(f"Latency P50: {latencies.get('50th', 0) / 1e6:.2f} ms")
logging.info(f"Latency P95: {latencies.get('95th', 0) / 1e6:.2f} ms")
logging.info(f"Latency P99: {latencies.get('99th', 0) / 1e6:.2f} ms")
logging.info(f"Latency Max: {latencies.get('max', 0) / 1e6:.2f} ms")
# Throughput
throughput = metrics.get("throughput", 0.0)
logging.info(f"Total Throughput: {throughput:.2f} req/s")
# Bytes
bytes_in = metrics.get("bytes_in", {})
bytes_out = metrics.get("bytes_out", {})
if bytes_in:
logging.info(f"Bytes In (total): {bytes_in.get('total', 0) / 1024 / 1024:.2f} MB")
if bytes_out:
logging.info(f"Bytes Out (total): {bytes_out.get('total', 0) / 1024 / 1024:.2f} MB")
# Status codes
status_codes = metrics.get("status_codes", {})
if status_codes:
logging.info("Status Code Distribution:")
for code, count in sorted(status_codes.items()):
logging.info(f" {code}: {count}")
# Errors
errors = metrics.get("errors", [])
if errors:
logging.warning(f"Errors encountered: {len(errors)}")
for error in errors[:5]: # Show first 5 errors
logging.warning(f" - {error}")
logging.info("=" * 60)

View File

@@ -0,0 +1,13 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@@ -1,15 +1,27 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import time
from typing import Dict, Any, Optional
from typing import Dict, Any
import random
import re
import yaml
from kubernetes.client.rest import ApiException
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import log_exception
from krkn_lib.models.k8s import AffectedPod, PodsStatus
from krkn_lib.models.k8s import AffectedVMI, VmisStatus
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
@@ -35,7 +47,6 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
self,
run_uuid: str,
scenario: str,
krkn_config: dict[str, any],
lib_telemetry: KrknTelemetryOpenshift,
scenario_telemetry: ScenarioTelemetry,
) -> int:
@@ -46,21 +57,21 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
try:
with open(scenario, "r") as f:
scenario_config = yaml.full_load(f)
self.init_clients(lib_telemetry.get_lib_kubernetes())
pods_status = PodsStatus()
vmis_status = VmisStatus()
for config in scenario_config["scenarios"]:
if config.get("scenario") == "kubevirt_vm_outage":
single_pods_status = self.execute_scenario(config, scenario_telemetry)
pods_status.merge(single_pods_status)
single_vmis_status = self.execute_scenario(config, scenario_telemetry)
vmis_status.merge(single_vmis_status)
scenario_telemetry.affected_pods = pods_status
if len(scenario_telemetry.affected_pods.unrecovered) > 0:
scenario_telemetry.affected_vmis = vmis_status
if len(scenario_telemetry.affected_vmis.unrecovered) > 0:
return 1
return 0
except Exception as e:
logging.error(f"KubeVirt VM Outage scenario failed: {e}")
log_exception(e)
log_exception(str(e))
return 1
def init_clients(self, k8s_client: KrknKubernetes):
@@ -71,77 +82,16 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
self.custom_object_client = k8s_client.custom_object_client
logging.info("Successfully initialized Kubernetes client for KubeVirt operations")
def get_vmi(self, name: str, namespace: str) -> Optional[Dict]:
"""
Get a Virtual Machine Instance by name and namespace.
:param name: Name of the VMI to retrieve
:param namespace: Namespace of the VMI
:return: The VMI object if found, None otherwise
"""
try:
vmi = self.custom_object_client.get_namespaced_custom_object(
group="kubevirt.io",
version="v1",
namespace=namespace,
plural="virtualmachineinstances",
name=name
)
return vmi
except ApiException as e:
if e.status == 404:
logging.warning(f"VMI {name} not found in namespace {namespace}")
return None
else:
logging.error(f"Error getting VMI {name}: {e}")
raise
except Exception as e:
logging.error(f"Unexpected error getting VMI {name}: {e}")
raise
def get_vmis(self, regex_name: str, namespace: str) -> Optional[Dict]:
"""
Get a Virtual Machine Instance by name and namespace.
:param name: Name of the VMI to retrieve
:param namespace: Namespace of the VMI
:return: The VMI object if found, None otherwise
"""
try:
namespaces = self.k8s_client.list_namespaces_by_regex(namespace)
for namespace in namespaces:
vmis = self.custom_object_client.list_namespaced_custom_object(
group="kubevirt.io",
version="v1",
namespace=namespace,
plural="virtualmachineinstances",
)
for vmi in vmis.get("items"):
vmi_name = vmi.get("metadata",{}).get("name")
match = re.match(regex_name, vmi_name)
if match:
self.vmis_list.append(vmi)
except ApiException as e:
if e.status == 404:
logging.warning(f"VMI {regex_name} not found in namespace {namespace}")
return []
else:
logging.error(f"Error getting VMI {regex_name}: {e}")
raise
except Exception as e:
logging.error(f"Unexpected error getting VMI {regex_name}: {e}")
raise
def execute_scenario(self, config: Dict[str, Any], scenario_telemetry: ScenarioTelemetry) -> int:
def execute_scenario(self, config: Dict[str, Any], scenario_telemetry: ScenarioTelemetry) -> VmisStatus:
"""
Execute a KubeVirt VM outage scenario based on the provided configuration.
:param config: The scenario configuration
:param scenario_telemetry: The telemetry object for recording metrics
:return: 0 for success, 1 for failure
:return: VmisStatus object containing recovered and unrecovered VMIs
"""
self.pods_status = PodsStatus()
self.vmis_status = VmisStatus()
try:
params = config.get("parameters", {})
vm_name = params.get("vm_name")
@@ -149,12 +99,12 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
timeout = params.get("timeout", 60)
kill_count = params.get("kill_count", 1)
disable_auto_restart = params.get("disable_auto_restart", False)
if not vm_name:
logging.error("vm_name parameter is required")
return 1
self.pods_status = PodsStatus()
self.get_vmis(vm_name,namespace)
return self.vmis_status
self.vmis_status = VmisStatus()
self.vmis_list = self.k8s_client.get_vmis(vm_name,namespace)
for _ in range(kill_count):
rand_int = random.randint(0, len(self.vmis_list) - 1)
@@ -163,44 +113,49 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
logging.info(f"Starting KubeVirt VM outage scenario for VM: {vm_name} in namespace: {namespace}")
vmi_name = vmi.get("metadata").get("name")
vmi_namespace = vmi.get("metadata").get("namespace")
if not self.validate_environment(vmi_name, vmi_namespace):
return 1
vmi = self.get_vmi(vmi_name, vmi_namespace)
self.affected_pod = AffectedPod(
pod_name=vmi_name,
# Create affected_vmi early so we can track failures
self.affected_vmi = AffectedVMI(
vmi_name=vmi_name,
namespace=vmi_namespace,
)
if not self.validate_environment(vmi_name, vmi_namespace):
self.vmis_status.unrecovered.append(self.affected_vmi)
continue
vmi = self.k8s_client.get_vmi(vmi_name, vmi_namespace)
if not vmi:
logging.error(f"VMI {vm_name} not found in namespace {namespace}")
return 1
self.vmis_status.unrecovered.append(self.affected_vmi)
continue
self.original_vmi = vmi
logging.info(f"Captured initial state of VMI: {vm_name}")
result = self.delete_vmi(vmi_name, vmi_namespace, disable_auto_restart)
if result != 0:
self.pods_status.unrecovered.append(self.affected_pod)
self.vmis_status.unrecovered.append(self.affected_vmi)
continue
result = self.wait_for_running(vmi_name,vmi_namespace, timeout)
if result != 0:
self.pods_status.unrecovered.append(self.affected_pod)
self.vmis_status.unrecovered.append(self.affected_vmi)
continue
self.affected_pod.total_recovery_time = (
self.affected_pod.pod_readiness_time
+ self.affected_pod.pod_rescheduling_time
self.affected_vmi.total_recovery_time = (
self.affected_vmi.vmi_readiness_time
+ self.affected_vmi.vmi_rescheduling_time
)
self.pods_status.recovered.append(self.affected_pod)
self.vmis_status.recovered.append(self.affected_vmi)
logging.info(f"Successfully completed KubeVirt VM outage scenario for VM: {vm_name}")
return self.pods_status
return self.vmis_status
except Exception as e:
logging.error(f"Error executing KubeVirt VM outage scenario: {e}")
log_exception(e)
return self.pods_status
log_exception(str(e))
return self.vmis_status
def validate_environment(self, vm_name: str, namespace: str) -> bool:
"""
@@ -212,15 +167,13 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
"""
try:
# Check if KubeVirt CRDs exist
crd_list = self.custom_object_client.list_namespaced_custom_object("kubevirt.io","v1",namespace,"virtualmachines")
kubevirt_crds = [crd for crd in crd_list.items() ]
kubevirt_crds = self.k8s_client.get_vms(vm_name, namespace)
if not kubevirt_crds:
logging.error("KubeVirt CRDs not found. Ensure KubeVirt/CNV is installed in the cluster")
return False
# Check if VMI exists
vmi = self.get_vmi(vm_name, namespace)
vmi = self.k8s_client.get_vmi(vm_name, namespace)
if not vmi:
logging.error(f"VMI {vm_name} not found in namespace {namespace}")
return False
@@ -243,13 +196,7 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
"""
try:
# Get the VM object first to get its current spec
vm = self.custom_object_client.get_namespaced_custom_object(
group="kubevirt.io",
version="v1",
namespace=namespace,
plural="virtualmachines",
name=vm_name
)
vm = self.k8s_client.get_vm(vm_name, namespace)
# Update the running state
if 'spec' not in vm:
@@ -257,14 +204,7 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
vm['spec']['running'] = running
# Apply the patch
self.custom_object_client.patch_namespaced_custom_object(
group="kubevirt.io",
version="v1",
namespace=namespace,
plural="virtualmachines",
name=vm_name,
body=vm
)
self.k8s_client.patch_vm(vm_name,namespace,vm)
return True
except ApiException as e:
@@ -293,43 +233,29 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
" - proceeding with deletion but VM may auto-restart")
start_creation_time = self.original_vmi.get('metadata', {}).get('creationTimestamp')
start_time = time.time()
try:
self.custom_object_client.delete_namespaced_custom_object(
group="kubevirt.io",
version="v1",
namespace=namespace,
plural="virtualmachineinstances",
name=vm_name
)
except ApiException as e:
if e.status == 404:
logging.warning(f"VMI {vm_name} not found during deletion")
return 1
else:
logging.error(f"API error during VMI deletion: {e}")
return 1
self.k8s_client.delete_vmi(vm_name, namespace)
# Wait for the VMI to be deleted
while time.time() - start_time < timeout:
deleted_vmi = self.get_vmi(vm_name, namespace)
deleted_vmi = self.k8s_client.get_vmi(vm_name, namespace)
if deleted_vmi:
if start_creation_time != deleted_vmi.get('metadata', {}).get('creationTimestamp'):
logging.info(f"VMI {vm_name} successfully recreated")
self.affected_pod.pod_rescheduling_time = time.time() - start_time
self.affected_vmi.vmi_rescheduling_time = time.time() - start_time
return 0
else:
logging.info(f"VMI {vm_name} successfully deleted")
time.sleep(1)
logging.error(f"Timed out waiting for VMI {vm_name} to be deleted")
self.pods_status.unrecovered.append(self.affected_pod)
self.vmis_status.unrecovered.append(self.affected_vmi)
return 1
except Exception as e:
logging.error(f"Error deleting VMI {vm_name}: {e}")
log_exception(e)
self.pods_status.unrecovered.append(self.affected_pod)
log_exception(str(e))
self.vmis_status.unrecovered.append(self.affected_vmi)
return 1
def wait_for_running(self, vm_name: str, namespace: str, timeout: int = 120) -> int:
@@ -337,12 +263,12 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
while time.time() - start_time < timeout:
# Check current state once since we've already waited for the duration
vmi = self.get_vmi(vm_name, namespace)
vmi = self.k8s_client.get_vmi(vm_name, namespace)
if vmi:
if vmi.get('status', {}).get('phase') == "Running":
end_time = time.time()
self.affected_pod.pod_readiness_time = end_time - start_time
self.affected_vmi.vmi_readiness_time = end_time - start_time
logging.info(f"VMI {vm_name} is already running")
return 0
@@ -378,13 +304,7 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
del metadata[field]
# Create the VMI
self.custom_object_client.create_namespaced_custom_object(
group="kubevirt.io",
version="v1",
namespace=namespace,
plural="virtualmachineinstances",
body=vmi_dict
)
self.k8s_client.create_vmi(vm_name, namespace, vmi_dict)
logging.info(f"Successfully recreated VMI {vm_name}")
# Wait for VMI to start running
@@ -395,7 +315,7 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
except Exception as e:
logging.error(f"Error recreating VMI {vm_name}: {e}")
log_exception(e)
log_exception(str(e))
return 1
else:
logging.error(f"Failed to recover VMI {vm_name}: No original state captured and auto-recovery did not occur")
@@ -403,5 +323,5 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
except Exception as e:
logging.error(f"Unexpected error recovering VMI {vm_name}: {e}")
log_exception(e)
log_exception(str(e))
return 1

View File

@@ -0,0 +1,13 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import random
import logging
from krkn_lib.k8s import KrknKubernetes

View File

@@ -1,5 +1,17 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import time
import yaml
from krkn_lib.k8s import KrknKubernetes
@@ -7,7 +19,6 @@ from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import get_yaml_item_value
from krkn import cerberus, utils
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
from krkn.scenario_plugins.managed_cluster.common_functions import get_managedcluster
from krkn.scenario_plugins.managed_cluster.scenarios import Scenarios
@@ -18,7 +29,6 @@ class ManagedClusterScenarioPlugin(AbstractScenarioPlugin):
self,
run_uuid: str,
scenario: str,
krkn_config: dict[str, any],
lib_telemetry: KrknTelemetryOpenshift,
scenario_telemetry: ScenarioTelemetry,
) -> int:
@@ -29,25 +39,27 @@ class ManagedClusterScenarioPlugin(AbstractScenarioPlugin):
lib_telemetry.get_lib_kubernetes()
)
if managedcluster_scenario["actions"]:
for action in managedcluster_scenario["actions"]:
start_time = int(time.time())
try:
self.inject_managedcluster_scenario(
action,
managedcluster_scenario,
managedcluster_scenario_object,
lib_telemetry.get_lib_kubernetes(),
)
end_time = int(time.time())
cerberus.get_status(krkn_config, start_time, end_time)
except Exception as e:
logging.error(
"ManagedClusterScenarioPlugin exiting due to Exception %s"
% e
)
return 1
else:
return 0
try:
self.inject_managedcluster_scenario(
action,
managedcluster_scenario,
managedcluster_scenario_object,
lib_telemetry.get_lib_kubernetes(),
)
except Exception as e:
logging.error(
"ManagedClusterScenarioPlugin exiting due to Exception %s"
% e
)
return 1
else:
logging.error(
"ManagedClusterScenarioPlugin: 'actions' must be defined and non-empty in the scenario config"
)
return 1
return 0
def inject_managedcluster_scenario(
self,

View File

@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from jinja2 import Environment, FileSystemLoader
import os
import time

View File

@@ -0,0 +1,13 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
from krkn.scenario_plugins.native.plugins import PLUGINS
from krkn_lib.models.telemetry import ScenarioTelemetry
@@ -12,7 +25,6 @@ class NativeScenarioPlugin(AbstractScenarioPlugin):
self,
run_uuid: str,
scenario: str,
krkn_config: dict[str, any],
lib_telemetry: KrknTelemetryOpenshift,
scenario_telemetry: ScenarioTelemetry,
) -> int:
@@ -21,7 +33,6 @@ class NativeScenarioPlugin(AbstractScenarioPlugin):
PLUGINS.run(
scenario,
lib_telemetry.get_lib_kubernetes().get_kubeconfig_path(),
krkn_config,
run_uuid,
)

View File

@@ -1,141 +0,0 @@
import logging
import requests
import sys
import json
def get_status(config, start_time, end_time):
"""
Function to get Cerberus status
Args:
config
- Kraken config dictionary
start_time
- The time when chaos is injected
end_time
- The time when chaos is removed
Returns:
Cerberus status
"""
cerberus_status = True
check_application_routes = False
application_routes_status = True
if config["cerberus"]["cerberus_enabled"]:
cerberus_url = config["cerberus"]["cerberus_url"]
check_application_routes = config["cerberus"]["check_applicaton_routes"]
if not cerberus_url:
logging.error("url where Cerberus publishes True/False signal is not provided.")
sys.exit(1)
cerberus_status = requests.get(cerberus_url, timeout=60).content
cerberus_status = True if cerberus_status == b"True" else False
# Fail if the application routes monitored by cerberus experience downtime during the chaos
if check_application_routes:
application_routes_status, unavailable_routes = application_status(cerberus_url, start_time, end_time)
if not application_routes_status:
logging.error(
"Application routes: %s monitored by cerberus encountered downtime during the run, failing"
% unavailable_routes
)
else:
logging.info("Application routes being monitored didn't encounter any downtime during the run!")
if not cerberus_status:
logging.error(
"Received a no-go signal from Cerberus, looks like "
"the cluster is unhealthy. Please check the Cerberus "
"report for more details. Test failed."
)
if not application_routes_status or not cerberus_status:
sys.exit(1)
else:
logging.info("Received a go signal from Ceberus, the cluster is healthy. " "Test passed.")
return cerberus_status
def publish_kraken_status(config, failed_post_scenarios, start_time, end_time):
"""
Function to publish Kraken status to Cerberus
Args:
config
- Kraken config dictionary
failed_post_scenarios
- String containing the failed post scenarios
start_time
- The time when chaos is injected
end_time
- The time when chaos is removed
"""
cerberus_status = get_status(config, start_time, end_time)
if not cerberus_status:
if failed_post_scenarios:
if config["kraken"]["exit_on_failure"]:
logging.info(
"Cerberus status is not healthy and post action scenarios " "are still failing, exiting kraken run"
)
sys.exit(1)
else:
logging.info("Cerberus status is not healthy and post action scenarios " "are still failing")
else:
if failed_post_scenarios:
if config["kraken"]["exit_on_failure"]:
logging.info(
"Cerberus status is healthy but post action scenarios " "are still failing, exiting kraken run"
)
sys.exit(1)
else:
logging.info("Cerberus status is healthy but post action scenarios " "are still failing")
def application_status(cerberus_url, start_time, end_time):
"""
Function to check application availability
Args:
cerberus_url
- url where Cerberus publishes True/False signal
start_time
- The time when chaos is injected
end_time
- The time when chaos is removed
Returns:
Application status and failed routes
"""
if not cerberus_url:
logging.error("url where Cerberus publishes True/False signal is not provided.")
sys.exit(1)
else:
duration = (end_time - start_time) / 60
url = cerberus_url + "/" + "history" + "?" + "loopback=" + str(duration)
logging.info("Scraping the metrics for the test duration from cerberus url: %s" % url)
try:
failed_routes = []
status = True
metrics = requests.get(url, timeout=60).content
metrics_json = json.loads(metrics)
for entry in metrics_json["history"]["failures"]:
if entry["component"] == "route":
name = entry["name"]
failed_routes.append(name)
status = False
else:
continue
except Exception as e:
logging.error("Failed to scrape metrics from cerberus API at %s: %s" % (url, e))
sys.exit(1)
return status, set(failed_routes)

View File

@@ -1,3 +1,16 @@
# Copyright 2025 The Krkn Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass, field
import yaml
import logging
@@ -8,12 +21,9 @@ import re
import random
from traceback import format_exc
from jinja2 import Environment, FileSystemLoader
from . import kubernetes_functions as kube_helper
from . import cerberus
from krkn_lib.k8s import KrknKubernetes
import typing
from arcaflow_plugin_sdk import validation, plugin
from kubernetes.client.api.core_v1_api import CoreV1Api as CoreV1Api
from kubernetes.client.api.batch_v1_api import BatchV1Api as BatchV1Api
@dataclass
@@ -100,13 +110,13 @@ class NetworkScenarioConfig:
default=None,
metadata={
"name": "Network Parameters",
"description": "The network filters that are applied on the interface. "
"The currently supported filters are latency, "
"loss and bandwidth",
},
"description":
"The network filters that are applied on the interface. "
"The currently supported filters are latency, "
"loss and bandwidth"
}
)
@dataclass
class NetworkScenarioSuccessOutput:
filter_direction: str = field(
@@ -151,7 +161,7 @@ class NetworkScenarioErrorOutput:
)
def get_default_interface(node: str, pod_template, cli: CoreV1Api, image: str) -> str:
def get_default_interface(node: str, pod_template, kubecli: KrknKubernetes, image: str) -> str:
"""
Function that returns a random interface from a node
@@ -163,20 +173,20 @@ def get_default_interface(node: str, pod_template, cli: CoreV1Api, image: str) -
- The YAML template used to instantiate a pod to query
the node's interface
cli (CoreV1Api)
- Object to interact with Kubernetes Python client's CoreV1 API
kubecli (KrknKubernetes)
- Object to interact with Kubernetes Python client
Returns:
Default interface (string) belonging to the node
"""
pod_name_regex = str(random.randint(0, 10000))
pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex,nodename=node, image=image))
pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
logging.info("Creating pod to query interface on node %s" % node)
kube_helper.create_pod(cli, pod_body, "default", 300)
kubecli.create_pod(pod_body, "default", 300)
pod_name = f"fedtools-{pod_name_regex}"
try:
cmd = ["ip", "r"]
output = kube_helper.exec_cmd_in_pod(cli, cmd, pod_name, "default")
output = kubecli.exec_cmd_in_pod(cmd, pod_name, "default")
if not output:
logging.error("Exception occurred while executing command in pod")
@@ -192,13 +202,13 @@ def get_default_interface(node: str, pod_template, cli: CoreV1Api, image: str) -
finally:
logging.info("Deleting pod to query interface on node")
kube_helper.delete_pod(cli, pod_name, "default")
kubecli.delete_pod(pod_name, "default")
return interfaces
def verify_interface(
input_interface_list: typing.List[str], node: str, pod_template, cli: CoreV1Api, image: str
input_interface_list: typing.List[str], node: str, pod_template, kubecli: KrknKubernetes, image: str
) -> typing.List[str]:
"""
Function that verifies whether a list of interfaces is present in the node.
@@ -215,21 +225,21 @@ def verify_interface(
- The YAML template used to instantiate a pod to query
the node's interfaces
cli (CoreV1Api)
- Object to interact with Kubernetes Python client's CoreV1 API
kubecli (KrknKubernetes)
- Object to interact with Kubernetes Python client
Returns:
The interface list for the node
"""
pod_name_regex = str(random.randint(0, 10000))
pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex,nodename=node, image=image))
pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
logging.info("Creating pod to query interface on node %s" % node)
kube_helper.create_pod(cli, pod_body, "default", 300)
kubecli.create_pod(pod_body, "default", 300)
pod_name = f"fedtools-{pod_name_regex}"
try:
if input_interface_list == []:
cmd = ["ip", "r"]
output = kube_helper.exec_cmd_in_pod(cli, cmd, pod_name, "default")
output = kubecli.exec_cmd_in_pod(cmd, pod_name, "default")
if not output:
logging.error("Exception occurred while executing command in pod")
@@ -245,7 +255,7 @@ def verify_interface(
else:
cmd = ["ip", "-br", "addr", "show"]
output = kube_helper.exec_cmd_in_pod(cli, cmd, pod_name, "default")
output = kubecli.exec_cmd_in_pod(cmd, pod_name, "default")
if not output:
logging.error("Exception occurred while executing command in pod")
@@ -268,7 +278,7 @@ def verify_interface(
)
finally:
logging.info("Deleting pod to query interface on node")
kube_helper.delete_pod(cli, pod_name, "default")
kubecli.delete_pod(pod_name, "default")
return input_interface_list
@@ -278,7 +288,7 @@ def get_node_interfaces(
label_selector: str,
instance_count: int,
pod_template,
cli: CoreV1Api,
kubecli: KrknKubernetes,
image: str
) -> typing.Dict[str, typing.List[str]]:
"""
@@ -306,8 +316,8 @@ def get_node_interfaces(
- The YAML template used to instantiate a pod to query
the node's interfaces
cli (CoreV1Api)
- Object to interact with Kubernetes Python client's CoreV1 API
kubecli (KrknKubernetes)
- Object to interact with Kubernetes Python client
Returns:
Filtered dictionary containing the test nodes and their test interfaces
@@ -318,22 +328,22 @@ def get_node_interfaces(
"If node names and interfaces aren't provided, "
"then the label selector must be provided"
)
nodes = kube_helper.get_node(None, label_selector, instance_count, cli)
nodes = kubecli.get_node(None, label_selector, instance_count)
node_interface_dict = {}
for node in nodes:
node_interface_dict[node] = get_default_interface(node, pod_template, cli, image)
node_interface_dict[node] = get_default_interface(node, pod_template, kubecli, image)
else:
node_name_list = node_interface_dict.keys()
filtered_node_list = []
for node in node_name_list:
filtered_node_list.extend(
kube_helper.get_node(node, label_selector, instance_count, cli)
kubecli.get_node(node, label_selector, instance_count)
)
for node in filtered_node_list:
node_interface_dict[node] = verify_interface(
node_interface_dict[node], node, pod_template, cli, image
node_interface_dict[node], node, pod_template, kubecli, image
)
return node_interface_dict
@@ -345,11 +355,10 @@ def apply_ingress_filter(
node: str,
pod_template,
job_template,
batch_cli: BatchV1Api,
cli: CoreV1Api,
kubecli: KrknKubernetes,
create_interfaces: bool = True,
param_selector: str = "all",
image:str = "quay.io/krkn-chaos/krkn:tools",
image: str = "quay.io/krkn-chaos/krkn:tools",
) -> str:
"""
Function that applies the filters to shape incoming traffic to
@@ -375,11 +384,8 @@ def apply_ingress_filter(
- The YAML template used to instantiate a job to apply and remove
the filters on the interfaces
batch_cli
- Object to interact with Kubernetes Python client's BatchV1 API
cli (CoreV1Api)
- Object to interact with Kubernetes Python client's CoreV1 API
kubecli (KrknKubernetes)
- Object to interact with Kubernetes Python client
param_selector (string)
- Used to specify what kind of filter to apply. Useful during
@@ -395,7 +401,7 @@ def apply_ingress_filter(
network_params = {param_selector: cfg.network_params[param_selector]}
if create_interfaces:
create_virtual_interfaces(cli, interface_list, node, pod_template, image)
create_virtual_interfaces(kubecli, interface_list, node, pod_template, image)
exec_cmd = get_ingress_cmd(
interface_list, network_params, duration=cfg.test_duration
@@ -404,7 +410,7 @@ def apply_ingress_filter(
job_body = yaml.safe_load(
job_template.render(jobname=str(hash(node))[:5], nodename=node, image=image, cmd=exec_cmd)
)
api_response = kube_helper.create_job(batch_cli, job_body)
api_response = kubecli.create_job(job_body)
if api_response is None:
raise Exception("Error creating job")
@@ -413,15 +419,15 @@ def apply_ingress_filter(
def create_virtual_interfaces(
cli: CoreV1Api, interface_list: typing.List[str], node: str, pod_template, image: str
kubecli: KrknKubernetes, interface_list: typing.List[str], node: str, pod_template, image: str
) -> None:
"""
Function that creates a privileged pod and uses it to create
virtual interfaces on the node
Args:
cli (CoreV1Api)
- Object to interact with Kubernetes Python client's CoreV1 API
kubecli (KrknKubernetes)
- Object to interact with Kubernetes Python client
interface_list (List of strings)
- The list of interfaces on the node for which virtual interfaces
@@ -435,37 +441,34 @@ def create_virtual_interfaces(
virtual interfaces on the node
"""
pod_name_regex = str(random.randint(0, 10000))
pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex,nodename=node, image=image))
kube_helper.create_pod(cli, pod_body, "default", 300)
pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
kubecli.create_pod(pod_body, "default", 300)
logging.info(
"Creating {0} virtual interfaces on node {1} using a pod".format(
len(interface_list), node
)
)
pod_name = f"modtools-{pod_name_regex}"
create_ifb(cli, len(interface_list), pod_name)
create_ifb(kubecli, len(interface_list), pod_name)
logging.info("Deleting pod used to create virtual interfaces")
kube_helper.delete_pod(cli, pod_name, "default")
kubecli.delete_pod(pod_name, "default")
def delete_virtual_interfaces(
cli: CoreV1Api, node_list: typing.List[str], pod_template, image: str
kubecli: KrknKubernetes, node_list: typing.List[str], pod_template, image: str
):
"""
Function that creates a privileged pod and uses it to delete all
virtual interfaces on the specified nodes
Args:
cli (CoreV1Api)
- Object to interact with Kubernetes Python client's CoreV1 API
kubecli (KrknKubernetes)
- Object to interact with Kubernetes Python client
node_list (List of strings)
- The list of nodes on which the list of virtual interfaces are
to be deleted
node (string)
- The node on which the virtual interfaces are created
pod_template (jinja2.environment.Template))
- The YAML template used to instantiate a pod to delete
virtual interfaces on the node
@@ -473,46 +476,45 @@ def delete_virtual_interfaces(
for node in node_list:
pod_name_regex = str(random.randint(0, 10000))
pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex,nodename=node, image=image))
kube_helper.create_pod(cli, pod_body, "default", 300)
pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
kubecli.create_pod(pod_body, "default", 300)
logging.info("Deleting all virtual interfaces on node {0}".format(node))
pod_name = f"modtools-{pod_name_regex}"
delete_ifb(cli, pod_name)
kube_helper.delete_pod(cli, pod_name, "default")
delete_ifb(kubecli, pod_name)
kubecli.delete_pod(pod_name, "default")
def create_ifb(cli: CoreV1Api, number: int, pod_name: str):
def create_ifb(kubecli: KrknKubernetes, number: int, pod_name: str):
"""
Function that creates virtual interfaces in a pod.
Makes use of modprobe commands
"""
exec_command = ["chroot", "/host", "modprobe", "ifb", "numifbs=" + str(number)]
kube_helper.exec_cmd_in_pod(cli, exec_command, pod_name, "default")
exec_command = ["/host", "modprobe", "ifb", "numifbs=" + str(number)]
kubecli.exec_cmd_in_pod(exec_command, pod_name, "default", base_command="chroot")
for i in range(0, number):
exec_command = ["chroot", "/host", "ip", "link", "set", "dev"]
exec_command += ["ifb" + str(i), "up"]
kube_helper.exec_cmd_in_pod(cli, exec_command, pod_name, "default")
exec_command = ["/host", "ip", "link", "set", "dev", "ifb" + str(i), "up"]
kubecli.exec_cmd_in_pod(exec_command, pod_name, "default", base_command="chroot")
def delete_ifb(cli: CoreV1Api, pod_name: str):
def delete_ifb(kubecli: KrknKubernetes, pod_name: str):
"""
Function that deletes all virtual interfaces in a pod.
Makes use of modprobe command
"""
exec_command = ["chroot", "/host", "modprobe", "-r", "ifb"]
kube_helper.exec_cmd_in_pod(cli, exec_command, pod_name, "default")
exec_command = ["/host", "modprobe", "-r", "ifb"]
kubecli.exec_cmd_in_pod(exec_command, pod_name, "default", base_command="chroot")
def get_job_pods(cli: CoreV1Api, api_response):
def get_job_pods(kubecli: KrknKubernetes, api_response):
"""
Function that gets the pod corresponding to the job
Args:
cli (CoreV1Api)
- Object to interact with Kubernetes Python client's CoreV1 API
kubecli (KrknKubernetes)
- Object to interact with Kubernetes Python client
api_response
- The API response for the job status
@@ -523,22 +525,22 @@ def get_job_pods(cli: CoreV1Api, api_response):
controllerUid = api_response.metadata.labels["controller-uid"]
pod_label_selector = "controller-uid=" + controllerUid
pods_list = kube_helper.list_pods(
cli, label_selector=pod_label_selector, namespace="default"
pods_list = kubecli.list_pods(
label_selector=pod_label_selector, namespace="default"
)
return pods_list[0]
def wait_for_job(
batch_cli: BatchV1Api, job_list: typing.List[str], timeout: int = 300
kubecli: KrknKubernetes, job_list: typing.List[str], timeout: int = 300
) -> None:
"""
Function that waits for a list of jobs to finish within a time period
Args:
batch_cli (BatchV1Api)
- Object to interact with Kubernetes Python client's BatchV1 API
kubecli (KrknKubernetes)
- Object to interact with Kubernetes Python client
job_list (List of strings)
- The list of jobs to check for completion
@@ -553,9 +555,7 @@ def wait_for_job(
while count != job_len:
for job_name in job_list:
try:
api_response = kube_helper.get_job_status(
batch_cli, job_name, namespace="default"
)
api_response = kubecli.get_job_status(job_name, namespace="default")
if (
api_response.status.succeeded is not None
or api_response.status.failed is not None
@@ -572,16 +572,13 @@ def wait_for_job(
time.sleep(5)
def delete_jobs(cli: CoreV1Api, batch_cli: BatchV1Api, job_list: typing.List[str]):
def delete_jobs(kubecli: KrknKubernetes, job_list: typing.List[str]):
"""
Function that deletes jobs
Args:
cli (CoreV1Api)
- Object to interact with Kubernetes Python client's CoreV1 API
batch_cli (BatchV1Api)
- Object to interact with Kubernetes Python client's BatchV1 API
kubecli (KrknKubernetes)
- Object to interact with Kubernetes Python client
job_list (List of strings)
- The list of jobs to delete
@@ -589,23 +586,19 @@ def delete_jobs(cli: CoreV1Api, batch_cli: BatchV1Api, job_list: typing.List[str
for job_name in job_list:
try:
api_response = kube_helper.get_job_status(
batch_cli, job_name, namespace="default"
)
api_response = kubecli.get_job_status(job_name, namespace="default")
if api_response.status.failed is not None:
pod_name = get_job_pods(cli, api_response)
pod_stat = kube_helper.read_pod(cli, name=pod_name, namespace="default")
pod_name = get_job_pods(kubecli, api_response)
pod_stat = kubecli.read_pod(name=pod_name, namespace="default")
logging.error(pod_stat.status.container_statuses)
pod_log_response = kube_helper.get_pod_log(
cli, name=pod_name, namespace="default"
pod_log_response = kubecli.get_pod_log(
name=pod_name, namespace="default"
)
pod_log = pod_log_response.data.decode("utf-8")
logging.error(pod_log)
except Exception as e:
logging.warning("Exception in getting job status: %s" % str(e))
api_response = kube_helper.delete_job(
batch_cli, name=job_name, namespace="default"
)
kubecli.delete_job(name=job_name, namespace="default")
def get_ingress_cmd(
@@ -716,7 +709,7 @@ def network_chaos(
job_template = env.get_template("job.j2")
pod_interface_template = env.get_template("pod_interface.j2")
pod_module_template = env.get_template("pod_module.j2")
cli, batch_cli = kube_helper.setup_kubernetes(cfg.kubeconfig_path)
kubecli = KrknKubernetes(kubeconfig_path=cfg.kubeconfig_path)
test_image = cfg.image
logging.info("Starting Ingress Network Chaos")
try:
@@ -725,7 +718,7 @@ def network_chaos(
cfg.label_selector,
cfg.instance_count,
pod_interface_template,
cli,
kubecli,
test_image
)
except Exception:
@@ -742,13 +735,12 @@ def network_chaos(
node,
pod_module_template,
job_template,
batch_cli,
cli,
test_image
kubecli,
image=test_image
)
)
logging.info("Waiting for parallel job to finish")
wait_for_job(batch_cli, job_list[:], cfg.test_duration + 100)
wait_for_job(kubecli, job_list[:], cfg.test_duration + 100)
elif cfg.execution_type == "serial":
create_interfaces = True
@@ -761,23 +753,20 @@ def network_chaos(
node,
pod_module_template,
job_template,
batch_cli,
cli,
kubecli,
create_interfaces=create_interfaces,
param_selector=param,
image=test_image
)
)
logging.info("Waiting for serial job to finish")
wait_for_job(batch_cli, job_list[:], cfg.test_duration + 100)
wait_for_job(kubecli, job_list[:], cfg.test_duration + 100)
logging.info("Deleting jobs")
delete_jobs(cli, batch_cli, job_list[:])
delete_jobs(kubecli, job_list[:])
job_list = []
logging.info("Waiting for wait_duration : %ss" % cfg.wait_duration)
time.sleep(cfg.wait_duration)
create_interfaces = False
else:
return "error", NetworkScenarioErrorOutput(
"Invalid execution type - serial and parallel are "
"the only accepted types"
@@ -792,6 +781,6 @@ def network_chaos(
logging.error("Ingress Network Chaos exiting due to Exception - %s" % e)
return "error", NetworkScenarioErrorOutput(format_exc())
finally:
delete_virtual_interfaces(cli, node_interface_dict.keys(), pod_module_template, test_image)
delete_virtual_interfaces(kubecli, node_interface_dict.keys(), pod_module_template, test_image)
logging.info("Deleting jobs(if any)")
delete_jobs(cli, batch_cli, job_list[:])
delete_jobs(kubecli, job_list[:])
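The `wait_for_job` polling pattern used above (count jobs whose status reports `succeeded` or `failed`, sleep, repeat until all are done or the deadline passes) can be sketched without a cluster. Here `get_status` is a hypothetical stand-in for `kubecli.get_job_status(job_name, namespace="default")`, returning a plain string instead of a Kubernetes job-status object:

```python
import time


def wait_for_jobs(get_status, job_list, timeout=300, poll=5):
    """Poll each job name until it reports a terminal state, or until
    the timeout elapses. A simplified sketch of wait_for_job in the
    diff above; get_status(name) is an assumed callable, not krkn API.
    """
    finished = set()
    deadline = time.time() + timeout
    while len(finished) < len(job_list) and time.time() < deadline:
        for name in job_list:
            if name in finished:
                continue
            # Terminal states: mirrors the succeeded/failed check above.
            if get_status(name) in ("succeeded", "failed"):
                finished.add(name)
        time.sleep(poll)
    return finished


done = wait_for_jobs(lambda _name: "succeeded",
                     ["job-a", "job-b"], timeout=5, poll=0)
# done == {"job-a", "job-b"}
```

The real function passes `cfg.test_duration + 100` as the timeout, giving the tc jobs the full test duration plus slack before the code moves on to deletion.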


@@ -1,284 +0,0 @@
from kubernetes import config, client
from kubernetes.client.rest import ApiException
from kubernetes.stream import stream
import sys
import time
import logging
import random
def setup_kubernetes(kubeconfig_path):
"""
Sets up the Kubernetes client
"""
if kubeconfig_path is None:
kubeconfig_path = config.KUBE_CONFIG_DEFAULT_LOCATION
config.load_kube_config(kubeconfig_path)
cli = client.CoreV1Api()
batch_cli = client.BatchV1Api()
return cli, batch_cli
def create_job(batch_cli, body, namespace="default"):
"""
Function used to create a job from a YAML config
"""
try:
api_response = batch_cli.create_namespaced_job(body=body, namespace=namespace)
return api_response
except ApiException as api:
logging.warning(
"Exception when calling \
BatchV1Api->create_job: %s"
% api
)
if api.status == 409:
logging.warning("Job already present")
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->create_namespaced_job: %s"
% e
)
raise
def delete_pod(cli, name, namespace):
"""
Function that deletes a pod and waits until deletion is complete
"""
try:
cli.delete_namespaced_pod(name=name, namespace=namespace)
while cli.read_namespaced_pod(name=name, namespace=namespace):
time.sleep(1)
except ApiException as e:
if e.status == 404:
logging.info("Pod deleted")
else:
logging.error("Failed to delete pod %s" % e)
raise e
def create_pod(cli, body, namespace, timeout=120):
"""
Function used to create a pod from a YAML config
"""
try:
pod_stat = None
pod_stat = cli.create_namespaced_pod(body=body, namespace=namespace)
end_time = time.time() + timeout
while True:
pod_stat = cli.read_namespaced_pod(name=body["metadata"]["name"], namespace=namespace)
if pod_stat.status.phase == "Running":
break
if time.time() > end_time:
raise Exception("Starting pod failed")
time.sleep(1)
except Exception as e:
logging.error("Pod creation failed %s" % e)
if pod_stat:
logging.error(pod_stat.status.container_statuses)
delete_pod(cli, body["metadata"]["name"], namespace)
sys.exit(1)
def exec_cmd_in_pod(cli, command, pod_name, namespace, container=None):
"""
Function used to execute a command in a running pod
"""
exec_command = command
try:
if container:
ret = stream(
cli.connect_get_namespaced_pod_exec,
pod_name,
namespace,
container=container,
command=exec_command,
stderr=True,
stdin=False,
stdout=True,
tty=False,
)
else:
ret = stream(
cli.connect_get_namespaced_pod_exec,
pod_name,
namespace,
command=exec_command,
stderr=True,
stdin=False,
stdout=True,
tty=False,
)
except Exception as e:
return False
return ret
def create_ifb(cli, number, pod_name):
"""
Function that creates virtual interfaces in a pod. Makes use of modprobe commands
"""
exec_command = ['chroot', '/host', 'modprobe', 'ifb','numifbs=' + str(number)]
resp = exec_cmd_in_pod(cli, exec_command, pod_name, 'default')
for i in range(0, number):
exec_command = ['chroot', '/host','ip','link','set','dev']
exec_command+= ['ifb' + str(i), 'up']
resp = exec_cmd_in_pod(cli, exec_command, pod_name, 'default')
def delete_ifb(cli, pod_name):
"""
Function that deletes all virtual interfaces in a pod. Makes use of modprobe command
"""
exec_command = ['chroot', '/host', 'modprobe', '-r', 'ifb']
resp = exec_cmd_in_pod(cli, exec_command, pod_name, 'default')
def list_pods(cli, namespace, label_selector=None):
"""
Function used to list pods in a given namespace and having a certain label
"""
pods = []
try:
if label_selector:
ret = cli.list_namespaced_pod(namespace, pretty=True, label_selector=label_selector)
else:
ret = cli.list_namespaced_pod(namespace, pretty=True)
except ApiException as e:
logging.error(
"Exception when calling \
CoreV1Api->list_namespaced_pod: %s\n"
% e
)
raise e
for pod in ret.items:
pods.append(pod.metadata.name)
return pods
def get_job_status(batch_cli, name, namespace="default"):
"""
Function that retrieves the status of a running job in a given namespace
"""
try:
return batch_cli.read_namespaced_job_status(name=name, namespace=namespace)
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->read_namespaced_job_status: %s"
% e
)
raise
def get_pod_log(cli, name, namespace="default"):
"""
Function that retrieves the logs of a running pod in a given namespace
"""
return cli.read_namespaced_pod_log(
name=name, namespace=namespace, _return_http_data_only=True, _preload_content=False
)
def read_pod(cli, name, namespace="default"):
"""
Function that retrieves the info of a running pod in a given namespace
"""
return cli.read_namespaced_pod(name=name, namespace=namespace)
def delete_job(batch_cli, name, namespace="default"):
"""
Deletes a job with the input name and namespace
"""
try:
api_response = batch_cli.delete_namespaced_job(
name=name,
namespace=namespace,
body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=0),
)
logging.debug("Job deleted. status='%s'" % str(api_response.status))
return api_response
except ApiException as api:
logging.warning(
"Exception when calling \
BatchV1Api->create_namespaced_job: %s"
% api
)
logging.warning("Job already deleted\n")
except Exception as e:
logging.error(
"Exception when calling \
BatchV1Api->delete_namespaced_job: %s\n"
% e
)
sys.exit(1)
def list_ready_nodes(cli, label_selector=None):
"""
Returns a list of ready nodes
"""
nodes = []
try:
if label_selector:
ret = cli.list_node(pretty=True, label_selector=label_selector)
else:
ret = cli.list_node(pretty=True)
except ApiException as e:
logging.error("Exception when calling CoreV1Api->list_node: %s\n" % e)
raise e
for node in ret.items:
for cond in node.status.conditions:
if str(cond.type) == "Ready" and str(cond.status) == "True":
nodes.append(node.metadata.name)
return nodes
def get_node(node_name, label_selector, instance_kill_count, cli):
"""
Returns active node(s) on which the scenario can be performed
"""
if node_name in list_ready_nodes(cli):
return [node_name]
elif node_name:
logging.info(
"Node with provided node_name does not exist or the node might "
"be in NotReady state."
)
nodes = list_ready_nodes(cli, label_selector)
if not nodes:
raise Exception("Ready nodes with the provided label selector do not exist")
logging.info(
"Ready nodes with the label selector %s: %s" % (label_selector, nodes)
)
number_of_nodes = len(nodes)
if instance_kill_count == number_of_nodes:
return nodes
nodes_to_return = []
for i in range(instance_kill_count):
node_to_add = nodes[random.randint(0, len(nodes) - 1)]
nodes_to_return.append(node_to_add)
nodes.remove(node_to_add)
return nodes_to_return
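The node-selection loop at the end of `get_node` samples `instance_kill_count` distinct nodes by repeated random pick-and-remove. A minimal sketch of just that loop (the ready-node filtering and label-selector lookup are assumed to have already happened):

```python
import random


def pick_nodes(nodes, instance_kill_count):
    """Sample instance_kill_count distinct nodes, as the tail of
    get_node above does. Copies the input list so the caller's list
    is not mutated; the original mutates its local list the same way.
    """
    nodes = list(nodes)
    if instance_kill_count >= len(nodes):
        return nodes
    picked = []
    for _ in range(instance_kill_count):
        # random.randint is inclusive on both ends, so len(nodes) - 1
        # is the last valid index.
        node = nodes[random.randint(0, len(nodes) - 1)]
        picked.append(node)
        nodes.remove(node)
    return picked
```

Removing each picked node before the next draw is what guarantees the returned nodes are distinct, without needing `random.sample`.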
