Mirror of https://github.com/krkn-chaos/krkn.git (synced 2026-03-16 08:30:37 +00:00)

Compare commits: v4.0.1...custom_wei (156 commits)
| SHA1 |
|---|
| b9d7c8ba12 |
| e8075743ab |
| ec5511b2db |
| 4e7dca9474 |
| edf0f3d1c9 |
| 8c9bce6987 |
| 5608482f1b |
| a14d3955a6 |
| f655ec1a73 |
| dfc350ac03 |
| c474b810b2 |
| 072e8d0e87 |
| aee61061ac |
| 544cac8bbb |
| 49b1affdb8 |
| c1dd43fe87 |
| 8dad2a3996 |
| cebc60f5a8 |
| 2065443622 |
| b6ef7fa052 |
| 4f305e78aa |
| b17e933134 |
| beea484597 |
| 0222b0f161 |
| f7e674d5ad |
| 7aea12ce6c |
| 625e1e90cf |
| a9f1ce8f1b |
| 66e364e293 |
| 898ce76648 |
| 4a0f4e7cab |
| 819191866d |
| 37ca4bbce7 |
| b9dd4e40d3 |
| 3fd249bb88 |
| 773107245c |
| 05bc201528 |
| 9a316550e1 |
| 9c261e2599 |
| 0cc82dc65d |
| 269e21e9eb |
| d0dbe3354a |
| 4a0686daf3 |
| 822bebac0c |
| a13150b0f5 |
| 0443637fe1 |
| 36585630f2 |
| 1401724312 |
| fa204a515c |
| b3a5fc2d53 |
| 05600b62b3 |
| 126599e02c |
| b3d6a19d24 |
| 65100f26a7 |
| 10b6e4663e |
| ce52183a26 |
| e9ab3b47b3 |
| 3e14fe07b7 |
| d9271a4bcc |
| 850930631e |
| 15eee80c55 |
| ff3c4f5313 |
| 4c74df301f |
| b60b66de43 |
| 2458022248 |
| 18385cba2b |
| e7fa6bdebc |
| c3f6b1a7ff |
| f2ba8b85af |
| ba3fdea403 |
| 42d18a8e04 |
| 4b3617bd8a |
| eb7a1e243c |
| 197ce43f9a |
| eecdeed73c |
| ef606d0f17 |
| 9981c26304 |
| 4ebfc5dde5 |
| 4527d073c6 |
| 93d6967331 |
| b462c46b28 |
| ab4ae85896 |
| 6acd6f9bd3 |
| 787759a591 |
| 957cb355be |
| 35609484d4 |
| 959337eb63 |
| f4bdbff9dc |
| 954202cab7 |
| a373dcf453 |
| d0c604a516 |
| 82582f5bc3 |
| 37f0f1eb8b |
| d2eab21f95 |
| d84910299a |
| 48f19c0a0e |
| eb86885bcd |
| 967fd14bd7 |
| 5cefe80286 |
| 9ee76ce337 |
| fd3e7ee2c8 |
| c85c435b5d |
| d5284ace25 |
| c3098ec80b |
| 6629c7ec33 |
| fb6af04b09 |
| dc1215a61b |
| f74aef18f8 |
| 166204e3c5 |
| fc7667aef1 |
| 3eea42770f |
| 77a46e3869 |
| b801308d4a |
| 97f4c1fd9c |
| c54390d8b1 |
| 543729b18a |
| a0ea4dc749 |
| a5459792ef |
| d434bb26fa |
| fee41d404e |
| 8663ee8893 |
| a072f0306a |
| 8221392356 |
| 671fc581dd |
| 11508ce017 |
| 0d78139fb6 |
| a3baffe8ee |
| 438b08fcd5 |
| 9b930a02a5 |
| 194e3b87ee |
| 8c05e44c23 |
| 88f8cf49f1 |
| 015ba4d90d |
| 26fdbef144 |
| d77e6dc79c |
| 2885645e77 |
| 84169e2d4e |
| 05bc404d32 |
| e8fd432fc5 |
| ec05675e3a |
| c91648d35c |
| 24aa9036b0 |
| 816363d151 |
| 90c52f907f |
| 4f250c9601 |
| 6480adc00a |
| 5002f210ae |
| 62c5afa9a2 |
| c109fc0b17 |
| fff675f3dd |
| c125e5acf7 |
| ca6995a1a1 |
| 50cf91ac9e |
| 11069c6982 |
| 106d9bf1ae |
| 17f832637c |
.coveragerc (new file, 5 lines)
@@ -0,0 +1,5 @@
[run]
omit =
    tests/*
    krkn/tests/**
    CI/tests_v2/*
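With this config in place, a local coverage run skips the test trees themselves. A minimal sketch, reusing the unittest invocation from the PR template later in this diff:

```bash
python -m coverage run -a -m unittest discover -s tests -v
python -m coverage report  # tests/*, krkn/tests/** and CI/tests_v2/* are excluded per .coveragerc
```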
.github/CODEOWNERS (vendored, new file, 1 line)
@@ -0,0 +1 @@
* @paigerube14 @tsebastiani @chaitanyaenr
.github/ISSUE_TEMPLATE/bug_report.md (vendored, new file, 43 lines)
@@ -0,0 +1,43 @@
---
name: Bug report
about: Create a report for an issue
title: "[BUG]"
labels: bug
---

# Bug Description

## **Describe the bug**

A clear and concise description of what the bug is.

## **To Reproduce**

Any specific steps used to reproduce the behavior

### Scenario File
Scenario file(s) that were specified in your config file (confidential information can be starred (*) out)
```yaml
<config>

```

### Config File
Config file you used when the error was seen (the default is config/config.yaml)

```yaml
<config>

```

## **Expected behavior**

A clear and concise description of what you expected to happen.

## **Krkn Output**

Krkn output to help show your problem

## **Additional context**

Add any other context about the problem
.github/ISSUE_TEMPLATE/feature.md (vendored, new file, 16 lines)
@@ -0,0 +1,16 @@
---
name: New Feature Request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to see added/changed. Ex. new parameter in [xxx] scenario, new scenario that does [xxx]

**Additional context**
Add any other context about the feature request here.
.github/PULL_REQUEST_TEMPLATE.md (vendored, 45 changed lines)
@@ -1,10 +1,47 @@
## Description
<!-- Provide a brief description of the changes made in this PR. -->
# Type of change

## Documentation
- [ ] Refactor
- [ ] New feature
- [ ] Bug fix
- [ ] Optimization

# Description
<!-- Provide a brief description of the changes made in this PR. -->

## Related Tickets & Documents
If no related issue exists, please create one and start the conversation about the proposed change.

- Related Issue #:
- Closes #:

# Documentation
- [ ] **Is documentation needed for this update?**

If checked, a documentation PR must be created and merged in the [website repository](https://github.com/krkn-chaos/website/).

## Related Documentation PR (if applicable)
<!-- Add the link to the corresponding documentation PR in the website repository -->

# Checklist before requesting a review
- [ ] Ensure the changes and proposed solution have been discussed in the relevant issue and have received acknowledgment from the community or maintainers. See [contributing guidelines](https://krkn-chaos.dev/docs/contribution-guidelines/)

See [testing your changes](https://krkn-chaos.dev/docs/developers-guide/testing-changes/) and run on any Kubernetes or OpenShift cluster to validate your changes
- [ ] I have performed a self-review of my code by running krkn and the specific scenario
- [ ] If it is a core feature, I have added thorough unit tests with above 80% coverage

*REQUIRED*:
Description of the combination of tests performed and the output of the run

```bash
python run_kraken.py
...
<---insert test results output--->
```

OR

```bash
python -m coverage run -a -m unittest discover -s tests -v
...
<---insert test results output--->
```
.github/workflows/release.yml (vendored, 13 changed lines)
@@ -16,6 +16,7 @@ jobs:
          PREVIOUS_TAG=$(git tag --sort=-creatordate | sed -n '2 p')
          echo $PREVIOUS_TAG
          echo "PREVIOUS_TAG=$PREVIOUS_TAG" >> "$GITHUB_ENV"

      - name: generate release notes from template
        id: release-notes
        env:
@@ -45,3 +46,15 @@ jobs:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh release create ${{ github.ref_name }} --title "${{ github.ref_name }}" -F release-notes.md

      - name: Install Syft
        run: |
          curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sudo sh -s -- -b /usr/local/bin

      - name: Generate SBOM
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          syft . --scope all-layers --output cyclonedx-json > sbom.json
          echo "SBOM generated successfully!"
          gh release upload ${{ github.ref_name }} sbom.json
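Once uploaded, the CycloneDX SBOM can be pulled down and inspected locally. A minimal sketch, assuming a released tag (the tag name below is illustrative) and standard CycloneDX component fields:

```bash
gh release download v4.0.1 --pattern sbom.json
jq -r '.components[] | "\(.name) \(.version)"' sbom.json | head  # list packaged components
```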
.github/workflows/stale.yml (vendored, new file, 52 lines)
@@ -0,0 +1,52 @@
name: Manage Stale Issues and Pull Requests

on:
  schedule:
    # Run daily at 1:00 AM UTC
    - cron: '0 1 * * *'
  workflow_dispatch:

permissions:
  issues: write
  pull-requests: write

jobs:
  stale:
    name: Mark and Close Stale Issues and PRs
    runs-on: ubuntu-latest
    steps:
      - name: Mark and close stale issues and PRs
        uses: actions/stale@v9
        with:
          days-before-issue-stale: 60
          days-before-issue-close: 14
          stale-issue-label: 'stale'
          stale-issue-message: |
            This issue has been automatically marked as stale because it has not had any activity in the last 60 days.
            It will be closed in 14 days if no further activity occurs.
            If this issue is still relevant, please leave a comment or remove the stale label.
            Thank you for your contributions to krkn!
          close-issue-message: |
            This issue has been automatically closed due to inactivity.
            If you believe this issue is still relevant, please feel free to reopen it or create a new issue with updated information.
            Thank you for your understanding!
          close-issue-reason: 'not_planned'

          days-before-pr-stale: 90
          days-before-pr-close: 14
          stale-pr-label: 'stale'
          stale-pr-message: |
            This pull request has been automatically marked as stale because it has not had any activity in the last 90 days.
            It will be closed in 14 days if no further activity occurs.
            If this PR is still relevant, please rebase it, address any pending reviews, or leave a comment.
            Thank you for your contributions to krkn!
          close-pr-message: |
            This pull request has been automatically closed due to inactivity.
            If you believe this PR is still relevant, please feel free to reopen it or create a new pull request with updated changes.
            Thank you for your understanding!

          # Exempt labels
          exempt-issue-labels: 'bug,enhancement,good first issue'
          exempt-pr-labels: 'pending discussions,hold'

          remove-stale-when-updated: true
.github/workflows/tests.yml (vendored, 103 changed lines)
@@ -14,27 +14,51 @@ jobs:
        uses: actions/checkout@v3
      - name: Create multi-node KinD cluster
        uses: redhat-chaos/actions/kind@main
      - name: Install Helm & add repos
        run: |
          curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
          helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
          helm repo add stable https://charts.helm.sh/stable
          helm repo update
      - name: Deploy prometheus & Port Forwarding
        uses: redhat-chaos/actions/prometheus@main
      - name: Deploy Elasticsearch
        with:
          ELASTIC_PORT: ${{ env.ELASTIC_PORT }}
          RUN_ID: ${{ github.run_id }}
        uses: redhat-chaos/actions/elastic@main
      - name: Download elastic password
        uses: actions/download-artifact@v4
        with:
          name: elastic_password_${{ github.run_id }}
      - name: Set elastic password on env
        run: |
          ELASTIC_PASSWORD=$(cat elastic_password.txt)
          echo "ELASTIC_PASSWORD=$ELASTIC_PASSWORD" >> "$GITHUB_ENV"
      - name: Install Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'
          python-version: '3.11'
          architecture: 'x64'
      - name: Install environment
        run: |
          sudo apt-get install build-essential python3-dev
          pip install --upgrade pip
          pip install -r requirements.txt
          pip install coverage

      - name: Deploy test workloads
        run: |
          # es_pod_name=$(kubectl get pods -l "app=elasticsearch-master" -o name)
          # echo "POD_NAME: $es_pod_name"
          # kubectl --namespace default port-forward $es_pod_name 9200 &
          # prom_name=$(kubectl get pods -n monitoring -l "app.kubernetes.io/name=prometheus" -o name)
          # kubectl --namespace monitoring port-forward $prom_name 9090 &

          # Wait for Elasticsearch to be ready
          echo "Waiting for Elasticsearch to be ready..."
          for i in {1..30}; do
            if curl -k -s -u elastic:$ELASTIC_PASSWORD https://localhost:9200/_cluster/health > /dev/null 2>&1; then
              echo "Elasticsearch is ready!"
              break
            fi
            echo "Attempt $i: Elasticsearch not ready yet, waiting..."
            sleep 2
          done
          kubectl apply -f CI/templates/outage_pod.yaml
          kubectl wait --for=condition=ready pod -l scenario=outage --timeout=300s
          kubectl apply -f CI/templates/container_scenario_pod.yaml
@@ -44,33 +68,43 @@ jobs:
          kubectl wait --for=condition=ready pod -l scenario=time-skew --timeout=300s
          kubectl apply -f CI/templates/service_hijacking.yaml
          kubectl wait --for=condition=ready pod -l "app.kubernetes.io/name=proxy" --timeout=300s
          kubectl apply -f CI/legacy/scenarios/volume_scenario.yaml
          kubectl wait --for=condition=ready pod kraken-test-pod -n kraken --timeout=300s
      - name: Get Kind nodes
        run: |
          kubectl get nodes --show-labels=true
      # Pull request only steps
      - name: Run unit tests
        if: github.event_name == 'pull_request'
        run: python -m coverage run -a -m unittest discover -s tests -v

      - name: Setup Pull Request Functional Tests
        if: |
          github.event_name == 'pull_request'
      - name: Setup Functional Tests
        run: |
          yq -i '.kraken.port="8081"' CI/config/common_test_config.yaml
          yq -i '.kraken.signal_address="0.0.0.0"' CI/config/common_test_config.yaml
          yq -i '.kraken.performance_monitoring="localhost:9090"' CI/config/common_test_config.yaml
          echo "test_service_hijacking" > ./CI/tests/functional_tests
          echo "test_app_outages" >> ./CI/tests/functional_tests
          echo "test_container" >> ./CI/tests/functional_tests
          echo "test_pod" >> ./CI/tests/functional_tests
          echo "test_namespace" >> ./CI/tests/functional_tests
          echo "test_net_chaos" >> ./CI/tests/functional_tests
          echo "test_time" >> ./CI/tests/functional_tests
          yq -i '.elastic.elastic_port=9200' CI/config/common_test_config.yaml
          yq -i '.elastic.elastic_url="https://localhost"' CI/config/common_test_config.yaml
          yq -i '.elastic.enable_elastic=False' CI/config/common_test_config.yaml
          yq -i '.elastic.password="${{env.ELASTIC_PASSWORD}}"' CI/config/common_test_config.yaml
          yq -i '.performance_monitoring.prometheus_url="http://localhost:9090"' CI/config/common_test_config.yaml
          echo "test_app_outages" > ./CI/tests/functional_tests
          echo "test_container" >> ./CI/tests/functional_tests
          echo "test_cpu_hog" >> ./CI/tests/functional_tests
          echo "test_memory_hog" >> ./CI/tests/functional_tests
          echo "test_customapp_pod" >> ./CI/tests/functional_tests
          echo "test_io_hog" >> ./CI/tests/functional_tests
          echo "test_memory_hog" >> ./CI/tests/functional_tests
          echo "test_namespace" >> ./CI/tests/functional_tests
          echo "test_net_chaos" >> ./CI/tests/functional_tests
          echo "test_node" >> ./CI/tests/functional_tests
          echo "test_service_hijacking" >> ./CI/tests/functional_tests
          echo "test_pod_network_filter" >> ./CI/tests/functional_tests

          echo "test_pod_server" >> ./CI/tests/functional_tests
          echo "test_time" >> ./CI/tests/functional_tests
          echo "test_node_network_chaos" >> ./CI/tests/functional_tests
          echo "test_pod_network_chaos" >> ./CI/tests/functional_tests
          echo "test_cerberus_unhealthy" >> ./CI/tests/functional_tests
          echo "test_pod_error" >> ./CI/tests/functional_tests
          echo "test_pod" >> ./CI/tests/functional_tests
          # echo "test_pvc" >> ./CI/tests/functional_tests


      # Push on main only steps + all other functional to collect coverage
      # for the badge
@@ -84,24 +118,9 @@ jobs:
      - name: Setup Post Merge Request Functional Tests
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: |
          yq -i '.kraken.port="8081"' CI/config/common_test_config.yaml
          yq -i '.kraken.signal_address="0.0.0.0"' CI/config/common_test_config.yaml
          yq -i '.kraken.performance_monitoring="localhost:9090"' CI/config/common_test_config.yaml
          yq -i '.telemetry.username="${{secrets.TELEMETRY_USERNAME}}"' CI/config/common_test_config.yaml
          yq -i '.telemetry.password="${{secrets.TELEMETRY_PASSWORD}}"' CI/config/common_test_config.yaml
          echo "test_telemetry" > ./CI/tests/functional_tests
          echo "test_service_hijacking" >> ./CI/tests/functional_tests
          echo "test_app_outages" >> ./CI/tests/functional_tests
          echo "test_container" >> ./CI/tests/functional_tests
          echo "test_pod" >> ./CI/tests/functional_tests
          echo "test_namespace" >> ./CI/tests/functional_tests
          echo "test_net_chaos" >> ./CI/tests/functional_tests
          echo "test_time" >> ./CI/tests/functional_tests
          echo "test_cpu_hog" >> ./CI/tests/functional_tests
          echo "test_memory_hog" >> ./CI/tests/functional_tests
          echo "test_io_hog" >> ./CI/tests/functional_tests
          echo "test_pod_network_filter" >> ./CI/tests/functional_tests

          echo "test_telemetry" >> ./CI/tests/functional_tests
      # Final common steps
      - name: Run Functional tests
        env:
@@ -111,32 +130,38 @@ jobs:
          cat ./CI/results.markdown >> $GITHUB_STEP_SUMMARY
          echo >> $GITHUB_STEP_SUMMARY
      - name: Upload CI logs
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: ci-logs
          path: CI/out
          if-no-files-found: error
      - name: Collect coverage report
        if: ${{ always() }}
        run: |
          python -m coverage html
          python -m coverage json
      - name: Publish coverage report to job summary
        if: ${{ always() }}
        run: |
          pip install html2text
          html2text --ignore-images --ignore-links -b 0 htmlcov/index.html >> $GITHUB_STEP_SUMMARY
      - name: Upload coverage data
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: coverage
          path: htmlcov
          if-no-files-found: error
      - name: Upload json coverage
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: coverage.json
          path: coverage.json
          if-no-files-found: error
      - name: Check CI results
        if: ${{ always() }}
        run: "! grep Fail CI/results.markdown"

  badge:
@@ -161,7 +186,7 @@ jobs:
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: 3.9
          python-version: '3.11'
      - name: Copy badge on GitHub Page Repo
        env:
          COLOR: yellow
.github/workflows/tests_v2.yml (vendored, new file, 53 lines)
@@ -0,0 +1,53 @@
name: Tests v2 (pytest functional)
on:
  pull_request:
  push:
    branches:
      - main
jobs:
  tests-v2:
    name: Tests v2 (pytest functional)
    runs-on: ubuntu-latest
    steps:
      - name: Check out code
        uses: actions/checkout@v3

      - name: Create KinD cluster
        uses: redhat-chaos/actions/kind@main

      - name: Pre-load test images into KinD
        run: |
          docker pull nginx:alpine
          kind load docker-image nginx:alpine
          docker pull quay.io/krkn-chaos/krkn:tools
          kind load docker-image quay.io/krkn-chaos/krkn:tools

      - name: Install Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.11'
          architecture: 'x64'
          cache: 'pip'

      - name: Install dependencies
        run: |
          sudo apt-get install -y build-essential python3-dev
          pip install --upgrade pip
          pip install -r requirements.txt
          pip install -r CI/tests_v2/requirements.txt

      - name: Run tests_v2
        run: |
          KRKN_TEST_COVERAGE=1 python -m pytest CI/tests_v2/ -v --timeout=300 --reruns=1 --reruns-delay=5 \
            --html=CI/tests_v2/report.html -n auto --junitxml=CI/tests_v2/results.xml

      - name: Upload tests_v2 artifacts
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: tests-v2-results
          path: |
            CI/tests_v2/report.html
            CI/tests_v2/results.xml
            CI/tests_v2/assets/
          if-no-files-found: ignore
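The same suite can also be run locally against an existing KinD cluster. A minimal sketch; the `-k` selector below is illustrative, any standard pytest filter works:

```bash
pip install -r CI/tests_v2/requirements.txt
python -m pytest CI/tests_v2/ -v --timeout=300 -k "pod"  # narrow to matching tests while iterating
```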
.gitignore (vendored, 5 changed lines)
@@ -17,6 +17,7 @@ __pycache__/*
kube-burner*
kube_burner*
recommender_*.json
resiliency*.json

# Project files
.ropeproject
@@ -64,6 +65,10 @@ CI/out/*
CI/ci_results
CI/legacy/*node.yaml
CI/results.markdown
# CI tests_v2 (pytest-html / pytest outputs)
CI/tests_v2/results.xml
CI/tests_v2/report.html
CI/tests_v2/assets/

#env
chaos/*
@@ -6,3 +6,4 @@ This is a list of organizations that have publicly acknowledged usage of Krkn an
|:-|:-|:-|:-|
| MarketAxess | 2024 | https://www.marketaxess.com/ | Kraken enables us to achieve our goal of increasing the reliability of our cloud products on Kubernetes. The tool allows us to automatically run various chaos scenarios, identify resilience and performance bottlenecks, and seamlessly restore the system to its original state once scenarios finish. These chaos scenarios include pod disruptions, node (EC2) outages, simulating availability zone (AZ) outages, and filling up storage spaces like EBS and EFS. The community is highly responsive to requests and works on expanding the tool's capabilities. MarketAxess actively contributes to the project, adding features such as the ability to leverage existing network ACLs and proposing several feature improvements to enhance test coverage. |
| Red Hat Openshift | 2020 | https://www.redhat.com/ | Kraken is a highly reliable chaos testing tool used to ensure the quality and resiliency of Red Hat Openshift. The engineering team runs all the test scenarios under Kraken on different cloud platforms on both self-managed and cloud services environments prior to the release of a new version of the product. The team also contributes to the Kraken project consistently which helps the test scenarios to keep up with the new features introduced to the product. Inclusion of this test coverage has contributed to gaining the trust of new and existing customers of the product. |
| IBM | 2023 | https://www.ibm.com/ | While working on AI for Chaos Testing at IBM Research, we closely collaborated with the Kraken (Krkn) team to advance intelligent chaos engineering. Our contributions included developing AI-enabled chaos injection strategies and integrating reinforcement learning (RL)-based fault search techniques into the Krkn tool, enabling it to identify and explore system vulnerabilities more efficiently. Kraken stands out as one of the most user-friendly and effective tools for chaos engineering, and the Kraken team’s deep technical involvement played a crucial role in the success of this collaboration—helping bridge cutting-edge AI research with practical, real-world system reliability testing. |
@@ -2,6 +2,12 @@ kraken:
    distribution: kubernetes # Distribution can be kubernetes or openshift.
    kubeconfig_path: ~/.kube/config # Path to kubeconfig.
    exit_on_failure: False # Exit when a post action scenario fails.
    publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
    signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
    signal_address: 0.0.0.0 # Signal listening address
    port: 8081 # Signal port
    auto_rollback: True # Enable auto rollback for scenarios.
    rollback_versions_directory: /tmp/kraken-rollback # Directory to store rollback version files.
    chaos_scenarios: # List of policies/chaos scenarios to load.
        - $scenario_type: # List of chaos pod scenarios to load.
            - $scenario_file
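As a sketch of the signal flow these keys control (the POST endpoints are taken verbatim from the CI scripts later in this diff; the GET on the root path is assumed to return the published status):

```bash
curl http://0.0.0.0:8081                 # read the published kraken status
curl -X POST http://0.0.0.0:8081/STOP    # stop a daemon-mode run
curl -X POST http://0.0.0.0:8081/RUN     # release a run started with signal_state: PAUSE
```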
@@ -10,15 +16,16 @@ cerberus:
    cerberus_url: # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal.

performance_monitoring:
    deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift.
    repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
    capture_metrics: False
    metrics_profile_path: config/metrics-aggregated.yaml
    prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
    uuid: # uuid for the run is generated by default if not set.
    enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error.
    alert_profile: config/alerts.yaml # Path to alert profile with the prometheus queries.
    enable_alerts: True # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
    enable_metrics: True
    alert_profile: config/alerts.yaml # Path or URL to alert profile with the prometheus queries
    metrics_profile: config/metrics-report.yaml
    check_critical_alerts: True # Check for critical alerts firing during the run.

tunings:
    wait_duration: 6 # Duration to wait between each chaos scenario.
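For reference, an entry in the alert profile referenced above typically looks like this; a minimal sketch assuming the expr/description/severity layout of krkn alert profiles, with an illustrative PromQL query:

```yaml
- expr: up{job="apiserver"} == 0             # PromQL query to evaluate (illustrative)
  description: API server instance is down   # printed when the query returns results
  severity: error                            # error exits 1 when enable_alerts is on
```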
@@ -29,13 +36,13 @@ telemetry:
    api_url: https://yvnn4rfoi7.execute-api.us-west-2.amazonaws.com/test # telemetry service endpoint
    username: $TELEMETRY_USERNAME # telemetry service username
    password: $TELEMETRY_PASSWORD # telemetry service password
    prometheus_namespace: 'prometheus-k8s' # prometheus namespace
    prometheus_namespace: 'monitoring' # prometheus namespace
    prometheus_pod_name: 'prometheus-kind-prometheus-kube-prome-prometheus-0' # prometheus pod_name
    prometheus_container_name: 'prometheus'
    prometheus_backup: True # enables/disables prometheus data collection
    full_prometheus_backup: False # if set to False only the /prometheus/wal folder will be downloaded.
    backup_threads: 5 # number of telemetry download/upload threads
    archive_path: /tmp # local path where the archive files will be temporarly stored
    archive_path: /tmp # local path where the archive files will be temporarily stored
    max_retries: 0 # maximum number of upload retries (if 0 will retry forever)
    run_tag: '' # if set, this will be appended to the run folder in the bucket (useful to group the runs)
    archive_size: 10000 # the size of the prometheus data archive size in KB. The lower the size of archive is
@@ -45,15 +45,45 @@ metadata:
  name: kraken-test-pod
  namespace: kraken
spec:
  securityContext:
    fsGroup: 1001
  # initContainer to fix permissions on the mounted volume
  initContainers:
    - name: fix-permissions
      image: 'quay.io/centos7/httpd-24-centos7:centos7'
      command:
        - sh
        - -c
        - |
          echo "Setting up permissions for /home/kraken..."
          # Create the directory if it doesn't exist
          mkdir -p /home/kraken
          # Set ownership to user 1001 and group 1001
          chown -R 1001:1001 /home/kraken
          # Set permissions to allow read/write
          chmod -R 755 /home/kraken
          rm -rf /home/kraken/*
          echo "Permissions fixed. Current state:"
          ls -la /home/kraken
      volumeMounts:
        - mountPath: "/home/kraken"
          name: kraken-test-pv
      securityContext:
        runAsUser: 0 # Run as root to fix permissions
  volumes:
    - name: kraken-test-pv
      persistentVolumeClaim:
        claimName: kraken-test-pvc
  containers:
    - name: kraken-test-container
      image: 'quay.io/centos7/httpd-24-centos7:latest'
      volumeMounts:
        - mountPath: "/home/krake-dir/"
          name: kraken-test-pv
      image: 'quay.io/centos7/httpd-24-centos7:centos7'
      securityContext:
        privileged: true
        runAsUser: 1001
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
      volumeMounts:
        - mountPath: "/home/kraken"
          name: kraken-test-pv
CI/templates/mock_cerberus.yaml (new file, 79 lines)
@@ -0,0 +1,79 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: mock-cerberus-server
  namespace: default
data:
  server.py: |
    #!/usr/bin/env python3
    from http.server import HTTPServer, BaseHTTPRequestHandler
    import json

    class MockCerberusHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == '/':
                # Return True to indicate cluster is healthy
                self.send_response(200)
                self.send_header('Content-type', 'text/plain')
                self.end_headers()
                self.wfile.write(b'True')
            elif self.path.startswith('/history'):
                # Return empty history (no failures)
                self.send_response(200)
                self.send_header('Content-type', 'application/json')
                self.end_headers()
                response = {
                    "history": {
                        "failures": []
                    }
                }
                self.wfile.write(json.dumps(response).encode())
            else:
                self.send_response(404)
                self.end_headers()

        def log_message(self, format, *args):
            print(f"[MockCerberus] {format % args}")

    if __name__ == '__main__':
        server = HTTPServer(('0.0.0.0', 8080), MockCerberusHandler)
        print("[MockCerberus] Starting mock cerberus server on port 8080...")
        server.serve_forever()
---
apiVersion: v1
kind: Pod
metadata:
  name: mock-cerberus
  namespace: default
  labels:
    app: mock-cerberus
spec:
  containers:
    - name: mock-cerberus
      image: python:3.9-slim
      command: ["python3", "/app/server.py"]
      ports:
        - containerPort: 8080
          name: http
      volumeMounts:
        - name: server-script
          mountPath: /app
  volumes:
    - name: server-script
      configMap:
        name: mock-cerberus-server
        defaultMode: 0755
---
apiVersion: v1
kind: Service
metadata:
  name: mock-cerberus
  namespace: default
spec:
  selector:
    app: mock-cerberus
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP
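To wire this healthy mock into a krkn run, it is applied and then pointed at via the cerberus config. A minimal sketch following the pattern of the unhealthy-variant test script below (the target config file is illustrative):

```bash
kubectl apply -f CI/templates/mock_cerberus.yaml
kubectl wait --for=condition=ready pod -l app=mock-cerberus --timeout=300s
CERBERUS_IP=$(kubectl get service mock-cerberus -o jsonpath='{.spec.clusterIP}')
yq -i '.cerberus.cerberus_enabled = true' CI/config/common_test_config.yaml
yq -i ".cerberus.cerberus_url = \"http://${CERBERUS_IP}:8080\"" CI/config/common_test_config.yaml
```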
CI/templates/mock_cerberus_unhealthy.yaml (new file, 85 lines)
@@ -0,0 +1,85 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: mock-cerberus-unhealthy-server
  namespace: default
data:
  server.py: |
    #!/usr/bin/env python3
    from http.server import HTTPServer, BaseHTTPRequestHandler
    import json

    class MockCerberusUnhealthyHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == '/':
                # Return False to indicate cluster is unhealthy
                self.send_response(200)
                self.send_header('Content-type', 'text/plain')
                self.end_headers()
                self.wfile.write(b'False')
            elif self.path.startswith('/history'):
                # Return history with failures
                self.send_response(200)
                self.send_header('Content-type', 'application/json')
                self.end_headers()
                response = {
                    "history": {
                        "failures": [
                            {
                                "component": "node",
                                "name": "test-node",
                                "timestamp": "2024-01-01T00:00:00Z"
                            }
                        ]
                    }
                }
                self.wfile.write(json.dumps(response).encode())
            else:
                self.send_response(404)
                self.end_headers()

        def log_message(self, format, *args):
            print(f"[MockCerberusUnhealthy] {format % args}")

    if __name__ == '__main__':
        server = HTTPServer(('0.0.0.0', 8080), MockCerberusUnhealthyHandler)
        print("[MockCerberusUnhealthy] Starting mock cerberus unhealthy server on port 8080...")
        server.serve_forever()
---
apiVersion: v1
kind: Pod
metadata:
  name: mock-cerberus-unhealthy
  namespace: default
  labels:
    app: mock-cerberus-unhealthy
spec:
  containers:
    - name: mock-cerberus-unhealthy
      image: python:3.9-slim
      command: ["python3", "/app/server.py"]
      ports:
        - containerPort: 8080
          name: http
      volumeMounts:
        - name: server-script
          mountPath: /app
  volumes:
    - name: server-script
      configMap:
        name: mock-cerberus-unhealthy-server
        defaultMode: 0755
---
apiVersion: v1
kind: Service
metadata:
  name: mock-cerberus-unhealthy
  namespace: default
spec:
  selector:
    app: mock-cerberus-unhealthy
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP
@@ -8,9 +8,9 @@ spec:
  hostNetwork: true
  containers:
    - name: fedtools
      image: docker.io/fedora/tools
      image: quay.io/krkn-chaos/krkn:tools
      command:
        - /bin/sh
        - -c
        - |
          sleep infinity
          sleep infinity
@@ -8,9 +8,9 @@ spec:
  hostNetwork: true
  containers:
    - name: fedtools
      image: docker.io/fedora/tools
      image: quay.io/krkn-chaos/krkn:tools
      command:
        - /bin/sh
        - -c
        - |
          sleep infinity
          sleep infinity
@@ -13,7 +13,13 @@ function functional_test_app_outage {
    export scenario_type="application_outages_scenarios"
    export scenario_file="scenarios/openshift/app_outage.yaml"
    export post_config=""

    kubectl get services -A

    kubectl get pods
    envsubst < CI/config/common_test_config.yaml > CI/config/app_outage.yaml
    cat $scenario_file
    cat CI/config/app_outage.yaml
    python3 -m coverage run -a run_kraken.py -c CI/config/app_outage.yaml
    echo "App outage scenario test: Success"
}
CI/tests/test_cerberus_unhealthy.sh (new executable file, 79 lines)
@@ -0,0 +1,79 @@
set -xeEo pipefail

source CI/tests/common.sh

trap error ERR
trap finish EXIT

function functional_test_cerberus_unhealthy {
    echo "========================================"
    echo "Starting Cerberus Unhealthy Test"
    echo "========================================"

    # Deploy mock cerberus unhealthy server
    echo "Deploying mock cerberus unhealthy server..."
    kubectl apply -f CI/templates/mock_cerberus_unhealthy.yaml

    # Wait for mock cerberus unhealthy pod to be ready
    echo "Waiting for mock cerberus unhealthy to be ready..."
    kubectl wait --for=condition=ready pod -l app=mock-cerberus-unhealthy --timeout=300s

    # Verify mock cerberus service is accessible
    echo "Verifying mock cerberus unhealthy service..."
    mock_cerberus_ip=$(kubectl get service mock-cerberus-unhealthy -o jsonpath='{.spec.clusterIP}')
    echo "Mock Cerberus Unhealthy IP: $mock_cerberus_ip"

    # Test cerberus endpoint from within the cluster (should return False)
    kubectl run cerberus-unhealthy-test --image=curlimages/curl:latest --rm -i --restart=Never -- \
        curl -s http://mock-cerberus-unhealthy.default.svc.cluster.local:8080/ || echo "Cerberus unhealthy test curl completed"

    # Configure scenario for pod disruption with cerberus enabled
    export scenario_type="pod_disruption_scenarios"
    export scenario_file="scenarios/kind/pod_etcd.yml"
    export post_config=""

    # Generate config with cerberus enabled
    envsubst < CI/config/common_test_config.yaml > CI/config/cerberus_unhealthy_test_config.yaml

    # Enable cerberus in the config but DON'T exit_on_failure (so the test can verify the behavior)
    # Using yq jq-wrapper syntax with -i -y
    yq -i '.cerberus.cerberus_enabled = true' CI/config/cerberus_unhealthy_test_config.yaml
    yq -i ".cerberus.cerberus_url = \"http://${mock_cerberus_ip}:8080\"" CI/config/cerberus_unhealthy_test_config.yaml
    yq -i '.kraken.exit_on_failure = false' CI/config/cerberus_unhealthy_test_config.yaml

    echo "========================================"
    echo "Cerberus Unhealthy Configuration:"
    yq '.cerberus' CI/config/cerberus_unhealthy_test_config.yaml
    echo "exit_on_failure:"
    yq '.kraken.exit_on_failure' CI/config/cerberus_unhealthy_test_config.yaml
    echo "========================================"

    # Run kraken with cerberus unhealthy (should detect unhealthy but not exit due to exit_on_failure=false)
    echo "Running kraken with cerberus unhealthy integration..."

    # We expect this to complete (not exit 1) because exit_on_failure is false
    # But cerberus should log that the cluster is unhealthy
    python3 -m coverage run -a run_kraken.py -c CI/config/cerberus_unhealthy_test_config.yaml || {
        exit_code=$?
        echo "Kraken exited with code: $exit_code"
        # If exit_code is 1, that's expected when cerberus reports unhealthy and exit_on_failure would be true
        # But since we set exit_on_failure=false, it should not exit
        if [ $exit_code -eq 1 ]; then
            echo "WARNING: Kraken exited with 1, which may indicate cerberus detected unhealthy cluster"
        fi
    }

    # Verify cerberus was called by checking mock cerberus logs
    echo "Checking mock cerberus unhealthy logs..."
    kubectl logs -l app=mock-cerberus-unhealthy --tail=50

    # Cleanup
    echo "Cleaning up mock cerberus unhealthy..."
    kubectl delete -f CI/templates/mock_cerberus_unhealthy.yaml || true

    echo "========================================"
    echo "Cerberus unhealthy functional test: Success"
    echo "========================================"
}

functional_test_cerberus_unhealthy
@@ -16,8 +16,10 @@ function functional_test_container_crash {
    export post_config=""
    envsubst < CI/config/common_test_config.yaml > CI/config/container_config.yaml

    python3 -m coverage run -a run_kraken.py -c CI/config/container_config.yaml
    python3 -m coverage run -a run_kraken.py -c CI/config/container_config.yaml -d True
    echo "Container scenario test: Success"

    kubectl get pods -n kube-system -l component=etcd
}

functional_test_container_crash
CI/tests/test_customapp_pod.sh (new executable file, 18 lines)
@@ -0,0 +1,18 @@
set -xeEo pipefail

source CI/tests/common.sh

trap error ERR
trap finish EXIT

function functional_test_customapp_pod_node_selector {
    export scenario_type="pod_disruption_scenarios"
    export scenario_file="scenarios/openshift/customapp_pod.yaml"
    export post_config=""
    envsubst < CI/config/common_test_config.yaml > CI/config/customapp_pod_config.yaml

    python3 -m coverage run -a run_kraken.py -c CI/config/customapp_pod_config.yaml -d True
    echo "Pod disruption with node_label_selector test: Success"
}

functional_test_customapp_pod_node_selector
CI/tests/test_node.sh (new executable file, 18 lines)
@@ -0,0 +1,18 @@
set -xeEo pipefail

source CI/tests/common.sh

trap error ERR
trap finish EXIT

function functional_test_node_stop_start {
    export scenario_type="node_scenarios"
    export scenario_file="scenarios/kind/node_scenarios_example.yml"
    export post_config=""
    envsubst < CI/config/common_test_config.yaml > CI/config/node_config.yaml
    cat CI/config/node_config.yaml
    python3 -m coverage run -a run_kraken.py -c CI/config/node_config.yaml
    echo "Node Stop/Start scenario test: Success"
}

functional_test_node_stop_start
CI/tests/test_node_network_chaos.sh (new executable file, 165 lines)
@@ -0,0 +1,165 @@
set -xeEo pipefail

source CI/tests/common.sh

trap error ERR
trap finish EXIT

function functional_test_node_network_chaos {
    echo "Starting node network chaos functional test"

    # Get a worker node
    get_node
    export TARGET_NODE=$(echo $WORKER_NODE | awk '{print $1}')
    echo "Target node: $TARGET_NODE"

    # Deploy nginx workload on the target node
    echo "Deploying nginx workload on $TARGET_NODE..."
    kubectl create deployment nginx-node-net-chaos --image=nginx:latest

    # Add node selector to ensure pod runs on target node
    kubectl patch deployment nginx-node-net-chaos -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"'$TARGET_NODE'"}}}}}'

    # Expose service
    kubectl expose deployment nginx-node-net-chaos --port=80 --target-port=80 --name=nginx-node-net-chaos-svc

    # Wait for nginx to be ready
    echo "Waiting for nginx pod to be ready on $TARGET_NODE..."
    kubectl wait --for=condition=ready pod -l app=nginx-node-net-chaos --timeout=120s

    # Verify pod is on correct node
    export POD_NAME=$(kubectl get pods -l app=nginx-node-net-chaos -o jsonpath='{.items[0].metadata.name}')
    export POD_NODE=$(kubectl get pod $POD_NAME -o jsonpath='{.spec.nodeName}')
    echo "Pod $POD_NAME is running on node $POD_NODE"

    if [ "$POD_NODE" != "$TARGET_NODE" ]; then
        echo "ERROR: Pod is not on target node (expected $TARGET_NODE, got $POD_NODE)"
        kubectl get pods -l app=nginx-node-net-chaos -o wide
        exit 1
    fi

    # Setup port-forward to access nginx
    echo "Setting up port-forward to nginx service..."
    kubectl port-forward service/nginx-node-net-chaos-svc 8091:80 &
    PORT_FORWARD_PID=$!
    sleep 3 # Give port-forward time to start

    # Test baseline connectivity
    echo "Testing baseline connectivity..."
    response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 http://localhost:8091 || echo "000")
    if [ "$response" != "200" ]; then
        echo "ERROR: Nginx not responding correctly (got $response, expected 200)"
        kubectl get pods -l app=nginx-node-net-chaos
        kubectl describe pod $POD_NAME
        exit 1
    fi
    echo "Baseline test passed: nginx responding with 200"

    # Measure baseline latency
    echo "Measuring baseline latency..."
    baseline_start=$(date +%s%3N)
    curl -s http://localhost:8091 > /dev/null || true
    baseline_end=$(date +%s%3N)
    baseline_latency=$((baseline_end - baseline_start))
    echo "Baseline latency: ${baseline_latency}ms"

    # Configure node network chaos scenario
    echo "Configuring node network chaos scenario..."
    yq -i '.[0].config.target="'$TARGET_NODE'"' scenarios/kube/node-network-chaos.yml
    yq -i '.[0].config.namespace="default"' scenarios/kube/node-network-chaos.yml
    yq -i '.[0].config.test_duration=20' scenarios/kube/node-network-chaos.yml
    yq -i '.[0].config.latency="200ms"' scenarios/kube/node-network-chaos.yml
    yq -i '.[0].config.loss=15' scenarios/kube/node-network-chaos.yml
    yq -i '.[0].config.bandwidth="10mbit"' scenarios/kube/node-network-chaos.yml
    yq -i '.[0].config.ingress=true' scenarios/kube/node-network-chaos.yml
    yq -i '.[0].config.egress=true' scenarios/kube/node-network-chaos.yml
    yq -i '.[0].config.force=false' scenarios/kube/node-network-chaos.yml
    yq -i 'del(.[0].config.interfaces)' scenarios/kube/node-network-chaos.yml

    # Prepare krkn config
    export scenario_type="network_chaos_ng_scenarios"
    export scenario_file="scenarios/kube/node-network-chaos.yml"
    export post_config=""
    envsubst < CI/config/common_test_config.yaml > CI/config/node_network_chaos_config.yaml

    # Run krkn in background
    echo "Starting krkn with node network chaos scenario..."
    python3 -m coverage run -a run_kraken.py -c CI/config/node_network_chaos_config.yaml &
    KRKN_PID=$!
    echo "Krkn started with PID: $KRKN_PID"

    # Wait for chaos to start (give it time to inject chaos)
    echo "Waiting for chaos injection to begin..."
    sleep 10

    # Test during chaos - check for increased latency or packet loss effects
    echo "Testing network behavior during chaos..."
    chaos_test_count=0
    chaos_success=0

    for i in {1..5}; do
        chaos_test_count=$((chaos_test_count + 1))
        chaos_start=$(date +%s%3N)
        response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 http://localhost:8091 || echo "000")
        chaos_end=$(date +%s%3N)
        chaos_latency=$((chaos_end - chaos_start))

        echo "Attempt $i: HTTP $response, latency: ${chaos_latency}ms"

        # We expect either increased latency or some failures due to packet loss
        if [ "$response" == "200" ] || [ "$response" == "000" ]; then
            chaos_success=$((chaos_success + 1))
        fi

        sleep 2
    done

    echo "Chaos test results: $chaos_success/$chaos_test_count requests processed"

    # Verify node-level chaos affects pod
    echo "Verifying node-level chaos affects pod on $TARGET_NODE..."
    # The node chaos should affect all pods on the node

    # Wait for krkn to complete
    echo "Waiting for krkn to complete..."
    wait $KRKN_PID || true
    echo "Krkn completed"

    # Wait a bit for cleanup
    sleep 5

    # Verify recovery - nginx should respond normally again
    echo "Verifying service recovery..."
    recovery_attempts=0
    max_recovery_attempts=10

    while [ $recovery_attempts -lt $max_recovery_attempts ]; do
        recovery_attempts=$((recovery_attempts + 1))
        response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 http://localhost:8091 || echo "000")

        if [ "$response" == "200" ]; then
            echo "Recovery verified: nginx responding normally (attempt $recovery_attempts)"
            break
        fi

        echo "Recovery attempt $recovery_attempts/$max_recovery_attempts: got $response, retrying..."
        sleep 3
    done

    if [ "$response" != "200" ]; then
        echo "ERROR: Service did not recover after chaos (got $response)"
        kubectl get pods -l app=nginx-node-net-chaos
        kubectl describe pod $POD_NAME
        exit 1
    fi

    # Cleanup
    echo "Cleaning up test resources..."
    kill $PORT_FORWARD_PID 2>/dev/null || true
    kubectl delete deployment nginx-node-net-chaos --ignore-not-found=true
    kubectl delete service nginx-node-net-chaos-svc --ignore-not-found=true

    echo "Node network chaos test: Success"
}

functional_test_node_network_chaos
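After the yq edits in the test above, the scenario file has roughly this shape; a sketch reconstructed from those edits only, with fields the test does not touch omitted:

```yaml
- config:
    target: kind-worker      # written at runtime from $TARGET_NODE
    namespace: default
    test_duration: 20
    latency: 200ms
    loss: 15
    bandwidth: 10mbit
    ingress: true
    egress: true
    force: false
```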
@@ -7,12 +7,15 @@ trap finish EXIT

function functional_test_pod_crash {
    export scenario_type="pod_disruption_scenarios"
    export scenario_file="scenarios/kind/pod_etcd.yml"
    export scenario_file="scenarios/kind/pod_path_provisioner.yml"

    export post_config=""
    envsubst < CI/config/common_test_config.yaml > CI/config/pod_config.yaml

    python3 -m coverage run -a run_kraken.py -c CI/config/pod_config.yaml
    echo "Pod disruption scenario test: Success"
    date
    kubectl get pods -n local-path-storage -l app=local-path-provisioner -o yaml
}

functional_test_pod_crash
CI/tests/test_pod_error.sh (new executable file, 31 lines)
@@ -0,0 +1,31 @@

source CI/tests/common.sh

trap error ERR
trap finish EXIT

function functional_test_pod_error {
    export scenario_type="pod_disruption_scenarios"
    export scenario_file="scenarios/kind/pod_etcd.yml"
    export post_config=""
    # this test will check if krkn exits with an error when too many pods are targeted
    yq -i '.[0].config.kill=5' scenarios/kind/pod_etcd.yml
    yq -i '.[0].config.krkn_pod_recovery_time=1' scenarios/kind/pod_etcd.yml
    envsubst < CI/config/common_test_config.yaml > CI/config/pod_config.yaml
    cat CI/config/pod_config.yaml

    cat scenarios/kind/pod_etcd.yml
    python3 -m coverage run -a run_kraken.py -c CI/config/pod_config.yaml

    ret=$?
    echo -e "\n\nret $ret"
    if [[ $ret -ge 1 ]]; then
        echo "Pod disruption error scenario test: Success"
    else
        echo "Pod disruption error scenario test: Failure"
        exit 1
    fi
}

functional_test_pod_error
CI/tests/test_pod_network_chaos.sh (new executable file, 143 lines)
@@ -0,0 +1,143 @@
set -xeEo pipefail

source CI/tests/common.sh

trap error ERR
trap finish EXIT

function functional_test_pod_network_chaos {
    echo "Starting pod network chaos functional test"

    # Deploy nginx workload
    echo "Deploying nginx workload..."
    kubectl create deployment nginx-pod-net-chaos --image=nginx:latest
    kubectl expose deployment nginx-pod-net-chaos --port=80 --target-port=80 --name=nginx-pod-net-chaos-svc

    # Wait for nginx to be ready
    echo "Waiting for nginx pod to be ready..."
    kubectl wait --for=condition=ready pod -l app=nginx-pod-net-chaos --timeout=120s

    # Get pod name
    export POD_NAME=$(kubectl get pods -l app=nginx-pod-net-chaos -o jsonpath='{.items[0].metadata.name}')
    echo "Target pod: $POD_NAME"

    # Setup port-forward to access nginx
    echo "Setting up port-forward to nginx service..."
    kubectl port-forward service/nginx-pod-net-chaos-svc 8090:80 &
    PORT_FORWARD_PID=$!
    sleep 3 # Give port-forward time to start

    # Test baseline connectivity
    echo "Testing baseline connectivity..."
    response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 http://localhost:8090 || echo "000")
    if [ "$response" != "200" ]; then
        echo "ERROR: Nginx not responding correctly (got $response, expected 200)"
        kubectl get pods -l app=nginx-pod-net-chaos
        kubectl describe pod $POD_NAME
        exit 1
    fi
    echo "Baseline test passed: nginx responding with 200"

    # Measure baseline latency
    echo "Measuring baseline latency..."
    baseline_start=$(date +%s%3N)
    curl -s http://localhost:8090 > /dev/null || true
    baseline_end=$(date +%s%3N)
    baseline_latency=$((baseline_end - baseline_start))
    echo "Baseline latency: ${baseline_latency}ms"

    # Configure pod network chaos scenario
    echo "Configuring pod network chaos scenario..."
    yq -i '.[0].config.target="'$POD_NAME'"' scenarios/kube/pod-network-chaos.yml
    yq -i '.[0].config.namespace="default"' scenarios/kube/pod-network-chaos.yml
    yq -i '.[0].config.test_duration=20' scenarios/kube/pod-network-chaos.yml
    yq -i '.[0].config.latency="200ms"' scenarios/kube/pod-network-chaos.yml
    yq -i '.[0].config.loss=15' scenarios/kube/pod-network-chaos.yml
    yq -i '.[0].config.bandwidth="10mbit"' scenarios/kube/pod-network-chaos.yml
    yq -i '.[0].config.ingress=true' scenarios/kube/pod-network-chaos.yml
    yq -i '.[0].config.egress=true' scenarios/kube/pod-network-chaos.yml
    yq -i 'del(.[0].config.interfaces)' scenarios/kube/pod-network-chaos.yml

    # Prepare krkn config
    export scenario_type="network_chaos_ng_scenarios"
    export scenario_file="scenarios/kube/pod-network-chaos.yml"
    export post_config=""
    envsubst < CI/config/common_test_config.yaml > CI/config/pod_network_chaos_config.yaml

    # Run krkn in background
    echo "Starting krkn with pod network chaos scenario..."
    python3 -m coverage run -a run_kraken.py -c CI/config/pod_network_chaos_config.yaml &
    KRKN_PID=$!
    echo "Krkn started with PID: $KRKN_PID"

    # Wait for chaos to start (give it time to inject chaos)
    echo "Waiting for chaos injection to begin..."
    sleep 10

    # Test during chaos - check for increased latency or packet loss effects
    echo "Testing network behavior during chaos..."
    chaos_test_count=0
    chaos_success=0

    for i in {1..5}; do
        chaos_test_count=$((chaos_test_count + 1))
        chaos_start=$(date +%s%3N)
        response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 http://localhost:8090 || echo "000")
        chaos_end=$(date +%s%3N)
        chaos_latency=$((chaos_end - chaos_start))

        echo "Attempt $i: HTTP $response, latency: ${chaos_latency}ms"

        # We expect either increased latency or some failures due to packet loss
        if [ "$response" == "200" ] || [ "$response" == "000" ]; then
            chaos_success=$((chaos_success + 1))
        fi

        sleep 2
    done

    echo "Chaos test results: $chaos_success/$chaos_test_count requests processed"

    # Wait for krkn to complete
    echo "Waiting for krkn to complete..."
    wait $KRKN_PID || true
    echo "Krkn completed"

    # Wait a bit for cleanup
    sleep 5

    # Verify recovery - nginx should respond normally again
    echo "Verifying service recovery..."
    recovery_attempts=0
    max_recovery_attempts=10

    while [ $recovery_attempts -lt $max_recovery_attempts ]; do
        recovery_attempts=$((recovery_attempts + 1))
        response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 http://localhost:8090 || echo "000")

        if [ "$response" == "200" ]; then
            echo "Recovery verified: nginx responding normally (attempt $recovery_attempts)"
            break
        fi

        echo "Recovery attempt $recovery_attempts/$max_recovery_attempts: got $response, retrying..."
        sleep 3
    done

    if [ "$response" != "200" ]; then
        echo "ERROR: Service did not recover after chaos (got $response)"
        kubectl get pods -l app=nginx-pod-net-chaos
        kubectl describe pod $POD_NAME
        exit 1
    fi

    # Cleanup
    echo "Cleaning up test resources..."
    kill $PORT_FORWARD_PID 2>/dev/null || true
    kubectl delete deployment nginx-pod-net-chaos --ignore-not-found=true
    kubectl delete service nginx-pod-net-chaos-svc --ignore-not-found=true

    echo "Pod network chaos test: Success"
}

functional_test_pod_network_chaos
@@ -11,7 +11,7 @@ function functional_pod_network_filter {
    yq -i '.[0].target="pod-network-filter-test"' scenarios/kube/pod-network-filter.yml
    yq -i '.[0].protocols=["tcp"]' scenarios/kube/pod-network-filter.yml
    yq -i '.[0].ports=[443]' scenarios/kube/pod-network-filter.yml

    yq -i '.performance_monitoring.check_critical_alerts=False' CI/config/pod_network_filter.yaml

    ## Test webservice deployment
    kubectl apply -f ./CI/templates/pod_network_filter.yaml
@@ -29,7 +29,9 @@ function functional_pod_network_filter {
    [ $COUNTER -eq "100" ] && echo "maximum number of retry reached, test failed" && exit 1
    done

    python3 -m coverage run -a run_kraken.py -c CI/config/pod_network_filter.yaml > /dev/null 2>&1 &
    cat scenarios/kube/pod-network-filter.yml

    python3 -m coverage run -a run_kraken.py -c CI/config/pod_network_filter.yaml > krkn_pod_network.out 2>&1 &
    PID=$!

    # wait until the dns resolution starts failing and the service returns 400
@@ -53,6 +55,7 @@ function functional_pod_network_filter {
    done

    wait $PID

}

functional_pod_network_filter
CI/tests/test_pod_server.sh (Executable file, 35 lines)
@@ -0,0 +1,35 @@
set -xeEo pipefail

source CI/tests/common.sh

trap error ERR
trap finish EXIT

function functional_test_pod_server {
  export scenario_type="pod_disruption_scenarios"
  export scenario_file="scenarios/kind/pod_etcd.yml"
  export post_config=""

  envsubst < CI/config/common_test_config.yaml > CI/config/pod_config.yaml
  yq -i '.[0].config.kill=1' scenarios/kind/pod_etcd.yml

  yq -i '.tunings.daemon_mode=True' CI/config/pod_config.yaml
  cat CI/config/pod_config.yaml
  python3 -m coverage run -a run_kraken.py -c CI/config/pod_config.yaml &
  sleep 15
  curl -X POST http://0.0.0.0:8081/STOP

  wait

  yq -i '.kraken.signal_state="PAUSE"' CI/config/pod_config.yaml
  yq -i '.tunings.daemon_mode=False' CI/config/pod_config.yaml
  cat CI/config/pod_config.yaml
  python3 -m coverage run -a run_kraken.py -c CI/config/pod_config.yaml &
  sleep 5
  curl -X POST http://0.0.0.0:8081/RUN
  wait

  echo "Pod disruption with server scenario test: Success"
}

functional_test_pod_server
CI/tests/test_pvc.sh (Executable file, 18 lines)
@@ -0,0 +1,18 @@
set -xeEo pipefail

source CI/tests/common.sh

trap error ERR
trap finish EXIT

function functional_test_pvc_fill {
  export scenario_type="pvc_scenarios"
  export scenario_file="scenarios/kind/pvc_scenario.yaml"
  export post_config=""
  envsubst < CI/config/common_test_config.yaml > CI/config/pvc_config.yaml
  cat CI/config/pvc_config.yaml
  python3 -m coverage run -a run_kraken.py -c CI/config/pvc_config.yaml --debug True
  echo "PVC Fill scenario test: Success"
}

functional_test_pvc_fill
@@ -39,7 +39,7 @@ function functional_test_service_hijacking {
export scenario_file="scenarios/kube/service_hijacking.yaml"
export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/service_hijacking.yaml
-python3 -m coverage run -a run_kraken.py -c CI/config/service_hijacking.yaml > /dev/null 2>&1 &
+python3 -m coverage run -a run_kraken.py -c CI/config/service_hijacking.yaml > /tmp/krkn.log 2>&1 &
PID=$!
# Waiting for the hijacking to take effect
COUNTER=0
@@ -100,8 +100,13 @@ function functional_test_service_hijacking {
[ "${PAYLOAD_PATCH_2//[$'\t\r\n ']}" == "${OUT_PATCH//[$'\t\r\n ']}" ] && echo "Step 2 PATCH Payload OK" || (echo "Step 2 PATCH Payload did not match. Test failed." && exit 1)
[ "$OUT_STATUS_CODE" == "$STATUS_CODE_PATCH_2" ] && echo "Step 2 PATCH Status Code OK" || (echo "Step 2 PATCH status code did not match. Test failed." && exit 1)
[ "$OUT_CONTENT" == "$TEXT_MIME" ] && echo "Step 2 PATCH MIME OK" || (echo "Step 2 PATCH MIME did not match. Test failed." && exit 1)

wait $PID

+cat /tmp/krkn.log

# now checking if the service has been restored correctly and nginx responds correctly
curl -s $SERVICE_URL | grep nginx! && echo "BODY: Service restored!" || (echo "BODY: failed to restore service" && exit 1)
OUT_STATUS_CODE=`curl -X GET -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL`
@@ -18,14 +18,13 @@ function functional_test_telemetry {
yq -i '.performance_monitoring.prometheus_url="http://localhost:9090"' CI/config/common_test_config.yaml
yq -i '.telemetry.run_tag=env(RUN_TAG)' CI/config/common_test_config.yaml

-export scenario_type="hog_scenarios"
-export scenario_file="scenarios/kube/cpu-hog.yml"
+export scenario_type="pod_disruption_scenarios"
+export scenario_file="scenarios/kind/pod_path_provisioner.yml"

export post_config=""
envsubst < CI/config/common_test_config.yaml > CI/config/telemetry.yaml
retval=$(python3 -m coverage run -a run_kraken.py -c CI/config/telemetry.yaml)
-RUN_FOLDER=`cat CI/out/test_telemetry.out | grep amazonaws.com | sed -rn "s#.*https:\/\/.*\/files/(.*)#\1#p"`
+RUN_FOLDER=`cat CI/out/test_telemetry.out | grep amazonaws.com | sed -rn "s#.*https:\/\/.*\/files/(.*)#\1#p" | sed 's/\x1b\[[0-9;]*m//g'`
$AWS_CLI s3 ls "s3://$AWS_BUCKET/$RUN_FOLDER/" | awk '{ print $4 }' > s3_remote_files
echo "checking if telemetry files are uploaded on s3"
cat s3_remote_files | grep critical-alerts-00.log || ( echo "FAILED: critical-alerts-00.log not uploaded" && exit 1 )
CI/tests_v2/CONTRIBUTING_TESTS.md (Normal file, 175 lines)
@@ -0,0 +1,175 @@
# Adding a New Scenario Test (CI/tests_v2)

This guide explains how to add a new chaos scenario test to the v2 pytest framework. The layout is **folder-per-scenario**: each scenario has its own directory under `scenarios/<scenario_name>/` containing the test file, Kubernetes resources, and the Krkn scenario base YAML.

## Option 1: Scaffold script (recommended)

From the **repository root**:

```bash
python CI/tests_v2/scaffold.py --scenario service_hijacking
```

This creates:

- `CI/tests_v2/scenarios/service_hijacking/test_service_hijacking.py` — A test class extending `BaseScenarioTest` with a stub `test_happy_path` and `WORKLOAD_MANIFEST` pointing to the folder's `resource.yaml`.
- `CI/tests_v2/scenarios/service_hijacking/resource.yaml` — A placeholder Deployment (namespace is patched at deploy time).
- `CI/tests_v2/scenarios/service_hijacking/scenario_base.yaml` — A placeholder Krkn scenario; edit this with the structure expected by your scenario type.

The script automatically registers the marker in `CI/tests_v2/pytest.ini`. For example, it adds:

```
service_hijacking: marks a test as a service_hijacking scenario test
```

**Next steps after scaffolding:**

1. Verify the marker was added to `pytest.ini` (the scaffold does this automatically).
2. Edit `scenario_base.yaml` with the structure your Krkn scenario type expects (see `scenarios/application_outage/scenario_base.yaml` and `scenarios/pod_disruption/scenario_base.yaml` for examples). The top-level key should match `SCENARIO_NAME`.
3. If your scenario uses a **list** structure (like pod_disruption) instead of a **dict** with a top-level key, set `NAMESPACE_KEY_PATH` (e.g. `[0, "config", "namespace_pattern"]`) and `NAMESPACE_IS_REGEX = True` if the namespace is a regex pattern (see the sketch after this list).
4. The generated `test_happy_path` already uses `self.run_scenario(self.tmp_path, ns)` and assertions. Add more test methods (e.g. negative tests with `@pytest.mark.no_workload`) as needed.
5. Adjust `resource.yaml` if your scenario needs a different workload (e.g. specific image or labels).
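
To make the list-vs-dict distinction in step 3 concrete, here is a minimal sketch of how a key path is resolved against the loaded scenario structure (it mirrors `_set_nested` in `lib/base.py`); the scenario data below is an illustrative stand-in, not a real scenario file:

```python
# Minimal sketch of how NAMESPACE_KEY_PATH addresses a field in the loaded
# scenario YAML; mirrors _set_nested in lib/base.py. The data is illustrative.
def set_nested(obj, path, value):
    parent = obj
    for key in path[:-1]:
        parent = parent[key]  # key may be a dict key or a list index
    parent[path[-1]] = value

# Dict-based scenario (top-level key, NAMESPACE_IS_REGEX = False):
dict_scenario = {"application_outage": {"namespace": "placeholder", "duration": 30}}
set_nested(dict_scenario, ["application_outage", "namespace"], "krkn-test-a1b2c3d4")

# List-based scenario (pod_disruption style, NAMESPACE_IS_REGEX = True, so the
# framework wraps the namespace in ^...$ before patching):
list_scenario = [{"config": {"namespace_pattern": "placeholder"}}]
set_nested(list_scenario, [0, "config", "namespace_pattern"], "^krkn-test-a1b2c3d4$")
```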

If your Kraken scenario type string is not `<scenario>_scenarios`, pass it explicitly:

```bash
python CI/tests_v2/scaffold.py --scenario node_disruption --scenario-type node_scenarios
```

## Option 2: Manual setup

1. **Create the scenario folder**
   `CI/tests_v2/scenarios/<scenario_name>/`.

2. **Add resource.yaml**
   Kubernetes manifest(s) for the workload (Deployment or Pod). Use a distinct label (e.g. `app: <scenario>-target`). Omit or leave `metadata.namespace`; the framework patches it at deploy time.

3. **Add scenario_base.yaml**
   The canonical Krkn scenario structure. Tests will load this, patch the namespace (and any overrides), write it to `tmp_path`, and pass it to `build_config`. See existing scenarios for the format your scenario type expects.

4. **Add test_<scenario>.py** (a minimal example follows this list)
   - Import `BaseScenarioTest` from `lib.base` and helpers from `lib.utils` (e.g. `assert_kraken_success`, `get_pods_list`, `scenario_dir` if needed).
   - Define a class extending `BaseScenarioTest` with:
     - `WORKLOAD_MANIFEST = "CI/tests_v2/scenarios/<scenario_name>/resource.yaml"`
     - `WORKLOAD_IS_PATH = True`
     - `LABEL_SELECTOR = "app=<label>"`
     - `SCENARIO_NAME = "<scenario_name>"`
     - `SCENARIO_TYPE = "<scenario_type>"` (e.g. `application_outages_scenarios`)
     - `NAMESPACE_KEY_PATH`: path to the namespace field (e.g. `["application_outage", "namespace"]` for dict-based, or `[0, "config", "namespace_pattern"]` for list-based)
     - `NAMESPACE_IS_REGEX = False` (or `True` for regex patterns like pod_disruption)
     - `OVERRIDES_KEY_PATH = ["<top-level key>"]` if the scenario supports overrides (e.g. duration, block).
   - Add `@pytest.mark.functional` and `@pytest.mark.<scenario>` on the class.
   - In at least one test, call `self.run_scenario(self.tmp_path, self.ns)` and assert with `assert_kraken_success`, `assert_pod_count_unchanged`, and `assert_all_pods_running_and_ready`. Use `self.k8s_core`, `self.tmp_path`, etc. (injected by the base class).

5. **Register the marker**
   In `CI/tests_v2/pytest.ini`, under `markers`:
   ```
   <scenario>: marks a test as a <scenario> scenario test
   ```
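
Putting those attributes together, a minimal test file might look like the following sketch. The scenario name, paths, label, and key path are hypothetical placeholders; only `run_scenario` and `assert_kraken_success` are taken from the framework as described above:

```python
# CI/tests_v2/scenarios/my_scenario/test_my_scenario.py (illustrative sketch)
import pytest

from lib.base import BaseScenarioTest
from lib.utils import assert_kraken_success


@pytest.mark.functional
@pytest.mark.my_scenario
class TestMyScenario(BaseScenarioTest):
    WORKLOAD_MANIFEST = "CI/tests_v2/scenarios/my_scenario/resource.yaml"
    WORKLOAD_IS_PATH = True
    LABEL_SELECTOR = "app=my-scenario-target"
    SCENARIO_NAME = "my_scenario"
    SCENARIO_TYPE = "my_scenario_scenarios"
    NAMESPACE_KEY_PATH = ["my_scenario", "namespace"]  # dict-based scenario
    NAMESPACE_IS_REGEX = False
    OVERRIDES_KEY_PATH = ["my_scenario"]

    def test_happy_path(self):
        # self.ns and self.tmp_path are injected by BaseScenarioTest.
        result = self.run_scenario(self.tmp_path, self.ns)
        assert_kraken_success(
            result, context=f"namespace={self.ns}", tmp_path=self.tmp_path
        )
```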

## Conventions

- **Folder-per-scenario**: One directory per scenario under `scenarios/`. All assets (test, resource.yaml, scenario_base.yaml, and any extra YAMLs) live there for easy tracking and onboarding.
- **Ephemeral namespace**: Every test gets a unique `krkn-test-<uuid>` namespace. The base class deploys the workload into it before the test; no manual deploy is required.
- **Negative tests**: For tests that don't need a workload (e.g. invalid scenario, bad namespace), use `@pytest.mark.no_workload`. The test will still get a namespace, but no workload will be deployed (see the sketch below).
- **Scenario type**: `SCENARIO_TYPE` must match the key in Kraken's config (e.g. `application_outages_scenarios`, `pod_disruption_scenarios`). See `CI/tests_v2/config/common_test_config.yaml` and the scenario plugin's `get_scenario_types()`.
- **Assertions**: Use `assert_kraken_success(result, context=f"namespace={ns}", tmp_path=self.tmp_path)` so failures include stdout/stderr and optional log files.
- **Timeouts**: Use constants from `lib.base` (`READINESS_TIMEOUT`, `POLICY_WAIT_TIMEOUT`, etc.) instead of magic numbers.
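
As a sketch of the negative-test convention above, a `no_workload` test on the hypothetical class from Option 2 might look like this (the namespace and attribute values are illustrative):

```python
# Illustrative negative test: no workload is deployed (the ephemeral namespace
# still exists), and Kraken is expected to exit non-zero because the scenario
# targets a namespace that does not exist.
import pytest

from lib.base import BaseScenarioTest
from lib.utils import assert_kraken_failure


class TestMyScenario(BaseScenarioTest):
    # ... WORKLOAD_MANIFEST, SCENARIO_NAME, etc. as in the earlier sketch ...

    @pytest.mark.no_workload
    def test_bad_namespace_fails(self):
        result = self.run_scenario(self.tmp_path, "no-such-namespace")
        assert_kraken_failure(
            result, context="namespace=no-such-namespace", tmp_path=self.tmp_path
        )
```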

## Exit Code Handling

Kraken uses the following exit codes: **0** = success; **1** = scenario failure (e.g. post scenarios still failing); **2** = critical alerts fired; **3+** = health check / KubeVirt check failures; **-1** = infrastructure error (bad config, no kubeconfig).

- **Happy-path tests**: Use `assert_kraken_success(result, ...)`. By default only exit code 0 is accepted.
- **Alert-aware tests**: If you enable `check_critical_alerts` and expect alerts, use `assert_kraken_success(result, allowed_codes=(0, 2), ...)` so exit code 2 is treated as acceptable.
- **Expected-failure tests**: Use `assert_kraken_failure(result, context=..., tmp_path=self.tmp_path)` for negative tests (invalid scenario, bad namespace, etc.). This gives the same diagnostic quality (log dump, tmp_path hint) as success assertions. Prefer this over a bare `assert result.returncode != 0`.
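
For example, a sketch of the alert-aware case inside a test method (`allowed_codes` as described above):

```python
# Accept a clean run (exit 0) or a run where critical alerts fired (exit 2).
result = self.run_scenario(self.tmp_path, self.ns)
assert_kraken_success(
    result,
    context=f"namespace={self.ns}",
    tmp_path=self.tmp_path,
    allowed_codes=(0, 2),
)
```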

## Running your new tests

```bash
pytest CI/tests_v2/ -v -m <scenario>
```

For debugging with logs and keeping failed namespaces:

```bash
pytest CI/tests_v2/ -v -m <scenario> --log-cli-level=DEBUG --keep-ns-on-fail
```

---

## Naming Conventions

Follow these conventions so the framework stays consistent as new scenarios are added.

### Quick Reference

| Element | Pattern | Example |
|---|---|---|
| Scenario folder | `scenarios/<snake_case>/` | `scenarios/node_disruption/` |
| Test file | `test_<scenario>.py` | `test_node_disruption.py` |
| Test class | `Test<CamelCase>(BaseScenarioTest)` | `TestNodeDisruption` |
| Pytest marker | `@pytest.mark.<scenario>` (matches folder) | `@pytest.mark.node_disruption` |
| Scenario YAML | `scenario_base.yaml` | — |
| Workload YAML | `resource.yaml` | — |
| Extra YAMLs | `<descriptive_name>.yaml` | `nginx_http.yaml` |
| Lib modules | `lib/<concern>.py` | `lib/deploy.py` |
| Public fixtures | `<verb>_<noun>` or `<noun>` | `run_kraken`, `test_namespace` |
| Private/autouse fixtures | `_<descriptive>` | `_cleanup_stale_namespaces` |
| Assertion helpers | `assert_<condition>` | `assert_pod_count_unchanged` |
| Query helpers | `get_<resource>` or `find_<resource>_by_<criteria>` | `get_pods_list`, `find_network_policy_by_prefix` |
| Env var overrides | `KRKN_TEST_<NAME>` | `KRKN_TEST_READINESS_TIMEOUT` |

### Folders

- One folder per scenario under `scenarios/`. The folder name is `snake_case` and must match the `SCENARIO_NAME` class attribute in the test.
- Shared framework code lives in `lib/`. Each module covers a single concern (`k8s`, `namespace`, `deploy`, `kraken`, `utils`, `base`, `preflight`).
- Do **not** add scenario-specific code to `lib/`; keep it in the scenario folder as module-level helpers.

### Files

- Test files: `test_<scenario>.py`. This is required for pytest discovery (`test_*.py`).
- Workload manifests: always `resource.yaml`. If a scenario needs additional K8s resources (e.g. a Service for traffic testing), use a descriptive name like `nginx_http.yaml`.
- Scenario config: always `scenario_base.yaml`. This is the template that `load_and_patch_scenario` loads and patches.

### Classes

- One test class per file: `Test<CamelCase>` extending `BaseScenarioTest`.
- The CamelCase name must be the PascalCase equivalent of the folder name (e.g. `pod_disruption` -> `TestPodDisruption`).

### Test Methods

- Prefix: `test_` (pytest requirement).
- Use descriptive names that convey **what is being verified**, not implementation details.
- Good: `test_pod_crash_and_recovery`, `test_traffic_blocked_during_outage`, `test_invalid_scenario_fails`.
- Avoid: `test_run_1`, `test_scenario`, `test_it_works`.

### Fixtures

- **Public fixtures** (intended for use in tests): use `<verb>_<noun>` or plain `<noun>`. Examples: `run_kraken`, `deploy_workload`, `test_namespace`, `kubectl`.
- **Private/autouse fixtures** (framework internals): prefix with `_`. Examples: `_kube_config_loaded`, `_preflight_checks`, `_inject_common_fixtures`.
- K8s client fixtures use the `k8s_` prefix: `k8s_core`, `k8s_apps`, `k8s_networking`, `k8s_client`.

### Helpers and Utilities

- **Assertions**: `assert_<what_is_expected>`. Always raise `AssertionError` with a message that includes the namespace.
- **K8s queries**: `get_<resource>_list` for direct API calls, `find_<resource>_by_<criteria>` for filtered lookups.
- **Private helpers**: prefix with `_` for module-internal functions (e.g. `_pods`, `_policies`, `_get_nested`).

### Constants and Environment Variables

- Timeout constants: `UPPER_CASE` in `lib/base.py`. Each is overridable via an env var prefixed `KRKN_TEST_`.
- Feature flags: `KRKN_TEST_DRY_RUN`, `KRKN_TEST_COVERAGE`. Always use the `KRKN_TEST_` prefix so all tunables are discoverable with `grep KRKN_TEST_`.

### Markers

- Every test class gets `@pytest.mark.functional` (framework-wide) and `@pytest.mark.<scenario>` (scenario-specific).
- The scenario marker name matches the folder name exactly.
- Behavioral modifiers use plain descriptive names: `no_workload`, `order`.
- Register all custom markers in `pytest.ini` to avoid warnings.

## Adding Dependencies

- **Runtime (Kraken needs it)**: Add to the **root** `requirements.txt`. Pin a version (e.g. `package==1.2.3` or `package>=1.2,<2`).
- **Test-only (only CI/tests_v2 needs it)**: Add to **`CI/tests_v2/requirements.txt`**. Pin a version there as well.
- After changing either file, run `make setup` (or `make -f CI/tests_v2/Makefile setup`) from the repo root to verify both files install cleanly together.
CI/tests_v2/Makefile (Normal file, 97 lines)
@@ -0,0 +1,97 @@
# CI/tests_v2 functional tests - single entry point.
# Run from repo root: make -f CI/tests_v2/Makefile <target>
# Or from CI/tests_v2: make <target> (REPO_ROOT is resolved automatically).

# Resolve repo root: go to Makefile dir then up two levels (CI/tests_v2 -> repo root)
REPO_ROOT := $(shell cd "$(dir $(firstword $(MAKEFILE_LIST)))" && cd ../.. && pwd)
VENV := $(REPO_ROOT)/venv
PYTHON := $(VENV)/bin/python
PIP := $(VENV)/bin/pip
CLUSTER_NAME ?= ci-krkn
TESTS_DIR := $(REPO_ROOT)/CI/tests_v2

.PHONY: setup preflight test test-fast test-debug test-scenario test-dry-run clean help

help:
	@echo "CI/tests_v2 functional tests - usage: make [target]"
	@echo ""
	@echo "Targets:"
	@echo "  setup          Create venv (if missing), install Python deps, create KinD cluster (kind-config-dev.yml)."
	@echo "                 Run once before first test. Override cluster config: KIND_CONFIG=path make setup"
	@echo ""
	@echo "  preflight      Check Python 3.9+, kind, kubectl, Docker, cluster reachability, test deps."
	@echo "                 Invoked automatically by test targets; run standalone to validate environment."
	@echo ""
	@echo "  test           Full run: retries (2), timeout 300s, HTML report, JUnit XML, coverage."
	@echo "                 Use for CI or final verification. Output: report.html, results.xml"
	@echo ""
	@echo "  test-fast      Quick run: no retries, 120s timeout, no report. For fast local iteration."
	@echo ""
	@echo "  test-debug     Debug run: verbose (-s), keep failed namespaces (--keep-ns-on-fail), DEBUG logging."
	@echo "                 Use when investigating failures; inspect kept namespaces with kubectl."
	@echo ""
	@echo "  test-scenario  Run only one scenario. Requires SCENARIO=<marker>."
	@echo "                 Example: make test-scenario SCENARIO=pod_disruption"
	@echo ""
	@echo "  test-dry-run   Validate scenario plumbing only (no Kraken execution). Sets KRKN_TEST_DRY_RUN=1."
	@echo ""
	@echo "  clean          Delete KinD cluster $(CLUSTER_NAME) and remove report.html, results.xml."
	@echo ""
	@echo "  help           Show this help."
	@echo ""
	@echo "Run from repo root: make -f CI/tests_v2/Makefile <target>"
	@echo "Or from CI/tests_v2: make <target>"

setup: $(VENV)/.installed
	@echo "Running cluster setup..."
	$(MAKE) -f $(TESTS_DIR)/Makefile preflight
	cd $(REPO_ROOT) && ./CI/tests_v2/setup_env.sh
	@echo "Setup complete. Run 'make test' or 'make -f CI/tests_v2/Makefile test' from repo root."

$(VENV)/.installed: $(REPO_ROOT)/requirements.txt $(TESTS_DIR)/requirements.txt
	@if [ ! -d "$(VENV)" ]; then python3 -m venv $(VENV); echo "Created venv at $(VENV)"; fi
	$(PYTHON) -m pip install -q --upgrade pip
	# Root = Kraken runtime; tests_v2 = test-only plugins; both required for functional tests.
	$(PIP) install -q -r $(REPO_ROOT)/requirements.txt
	$(PIP) install -q -r $(TESTS_DIR)/requirements.txt
	@touch $(VENV)/.installed
	@echo "Python deps installed."

preflight:
	@echo "Preflight: checking Python, tools, and cluster..."
	@command -v python3 >/dev/null 2>&1 || { echo "Error: python3 not found."; exit 1; }
	@python3 -c "import sys; exit(0 if sys.version_info >= (3, 9) else 1)" || { echo "Error: Python 3.9+ required."; exit 1; }
	@command -v kind >/dev/null 2>&1 || { echo "Error: kind not installed."; exit 1; }
	@command -v kubectl >/dev/null 2>&1 || { echo "Error: kubectl not installed."; exit 1; }
	@docker info >/dev/null 2>&1 || { echo "Error: Docker not running (required for KinD)."; exit 1; }
	@if kind get clusters 2>/dev/null | grep -qx "$(CLUSTER_NAME)"; then \
		kubectl cluster-info >/dev/null 2>&1 || { echo "Error: Cluster $(CLUSTER_NAME) exists but cluster-info failed."; exit 1; }; \
	else \
		echo "Note: Cluster $(CLUSTER_NAME) not found. Run 'make setup' to create it."; \
	fi
	@$(PYTHON) -c "import pytest_rerunfailures, pytest_html, pytest_timeout, pytest_order" 2>/dev/null || \
		{ echo "Error: Install test deps with 'make setup' or pip install -r CI/tests_v2/requirements.txt"; exit 1; }
	@echo "Preflight OK."

test: preflight
	cd $(REPO_ROOT) && KRKN_TEST_COVERAGE=1 $(PYTHON) -m pytest $(TESTS_DIR)/ -v --timeout=300 --reruns=2 --reruns-delay=10 \
		--html=$(TESTS_DIR)/report.html -n auto --junitxml=$(TESTS_DIR)/results.xml

test-fast: preflight
	cd $(REPO_ROOT) && $(PYTHON) -m pytest $(TESTS_DIR)/ -v -p no:rerunfailures -n auto --timeout=120

test-debug: preflight
	cd $(REPO_ROOT) && $(PYTHON) -m pytest $(TESTS_DIR)/ -v -s -p no:rerunfailures --timeout=300 \
		--keep-ns-on-fail --log-cli-level=DEBUG

test-scenario: preflight
	@if [ -z "$(SCENARIO)" ]; then echo "Error: set SCENARIO=pod_disruption (or application_outage, etc.)"; exit 1; fi
	cd $(REPO_ROOT) && $(PYTHON) -m pytest $(TESTS_DIR)/ -v -m "$(SCENARIO)" --timeout=300 --reruns=2 --reruns-delay=10

test-dry-run: preflight
	cd $(REPO_ROOT) && KRKN_TEST_DRY_RUN=1 $(PYTHON) -m pytest $(TESTS_DIR)/ -v

clean:
	@kind delete cluster --name $(CLUSTER_NAME) 2>/dev/null || true
	@rm -f $(TESTS_DIR)/report.html $(TESTS_DIR)/results.xml
	@echo "Cleaned cluster and report artifacts."
CI/tests_v2/README.md (Normal file, 198 lines)
@@ -0,0 +1,198 @@
# Pytest Functional Tests (tests_v2)

This directory contains a pytest-based functional test framework that runs **alongside** the existing bash tests in `CI/tests/`. It covers the **pod disruption** and **application outage** scenarios with proper assertions, retries, and reporting.

Each test runs in its **own ephemeral Kubernetes namespace** (`krkn-test-<uuid>`). Before the test, the framework creates the namespace, deploys the target workload, and waits for pods to be ready. After the test, the namespace is deleted (cascading all resources). **You do not need to deploy any workloads manually.**

## Prerequisites

Without a cluster, tests that need one will **skip** with a clear message (e.g. *"Could not load kube config"*). No manual workload deployment is required; workloads are deployed automatically into ephemeral namespaces per test.

- **KinD cluster** (or any Kubernetes cluster) running with `kubectl` configured (e.g. `KUBECONFIG` or default `~/.kube/config`).
- **Python 3.9+** and main repo deps: `pip install -r requirements.txt`.

### Supported clusters

- **KinD** (recommended): Use `make -f CI/tests_v2/Makefile setup` from the repo root. Fastest for local dev; uses a 2-node dev config by default. Override with `KIND_CONFIG=/path/to/kind-config.yml` for a larger cluster.
- **Minikube**: Should work; ensure the `kubectl` context is set. Not tested in CI.
- **Remote/cloud cluster**: Tests create and delete namespaces; use with caution. Use `--require-kind` to avoid accidentally running against production (tests will skip unless the context is kind/minikube).

### Setting up the cluster

**Option A: Use the setup script (recommended)**

From the repository root, with `kind` and `kubectl` installed:

```bash
# Create KinD cluster (defaults to CI/tests_v2/kind-config-dev.yml; override with KIND_CONFIG=...)
./CI/tests_v2/setup_env.sh
```

Then in the same shell (or after `export KUBECONFIG=~/.kube/config` in another terminal), activate your venv and install Python deps:

```bash
python3 -m venv venv
source venv/bin/activate  # or: source venv/Scripts/activate on Windows
pip install -r requirements.txt
pip install -r CI/tests_v2/requirements.txt
```

**Option B: Manual setup**

1. Install [kind](https://kind.sigs.k8s.io/docs/user/quick-start/) and [kubectl](https://kubernetes.io/docs/tasks/tools/).
2. Create a cluster (from repo root):
   ```bash
   kind create cluster --name kind --config kind-config.yml
   ```
3. Wait for the cluster:
   ```bash
   kubectl wait --for=condition=Ready nodes --all --timeout=120s
   ```
4. Create a virtualenv, activate it, and install dependencies (as in Option A).
5. Run tests from repo root: `pytest CI/tests_v2/ -v ...`

## Install test dependencies

From the repository root:

```bash
pip install -r CI/tests_v2/requirements.txt
```

This adds `pytest-rerunfailures`, `pytest-html`, `pytest-timeout`, and `pytest-order` (pytest and coverage come from the main `requirements.txt`).

## Dependency Management

Dependencies are split into two files:

- **Root `requirements.txt`** — Kraken runtime (cloud SDKs, Kubernetes client, krkn-lib, pytest, coverage, etc.). Required to run Kraken.
- **`CI/tests_v2/requirements.txt`** — Test-only pytest plugins (rerunfailures, html, timeout, order, xdist). Not needed by Kraken itself.

**Rule of thumb:** If Kraken needs it at runtime, add it to root. If only the functional tests need it, add it to `CI/tests_v2/requirements.txt`.

Running `make -f CI/tests_v2/Makefile setup` (or `make setup` from `CI/tests_v2`) creates the venv and installs **both** files automatically; you do not need to install them separately. The Makefile re-installs when either file changes (via the `.installed` sentinel).

## Run tests

All commands below are from the **repository root**.

### Basic run (with retries and HTML report)

```bash
pytest CI/tests_v2/ -v --timeout=300 --reruns=2 --reruns-delay=10 --html=CI/tests_v2/report.html --junitxml=CI/tests_v2/results.xml
```

- Failed tests are **retried up to 2 times** with a 10s delay (configurable in `CI/tests_v2/pytest.ini`).
- Each test has a **5-minute timeout**.
- Open `CI/tests_v2/report.html` in a browser for a detailed report.

### Run in parallel (faster suite)

```bash
pytest CI/tests_v2/ -v -n 4 --timeout=300
```

Ephemeral namespaces make tests parallel-safe; use `-n` with the number of workers (e.g. 4).

### Run without retries (for debugging)

```bash
pytest CI/tests_v2/ -v -p no:rerunfailures
```

### Run with coverage

```bash
python -m coverage run -m pytest CI/tests_v2/ -v
python -m coverage report
```

To append to existing coverage from unit tests, ensure coverage was started with `coverage run -a` for earlier runs, or run the full test suite in one go.

### Run only pod disruption tests

```bash
pytest CI/tests_v2/ -v -m pod_disruption
```

### Run only application outage tests

```bash
pytest CI/tests_v2/ -v -m application_outage
```

### Run with verbose output and no capture

```bash
pytest CI/tests_v2/ -v -s
```

### Keep failed test namespaces for debugging

When a test fails, its ephemeral namespace is normally deleted. To **keep** the namespace so you can inspect pods, logs, and network policies:

```bash
pytest CI/tests_v2/ -v --keep-ns-on-fail
```

On failure, the namespace name is printed (e.g. `[keep-ns-on-fail] Keeping namespace krkn-test-a1b2c3d4 for debugging`). Use `kubectl get pods -n krkn-test-a1b2c3d4` (and similar) to debug, then delete the namespace manually when done.

### Logging and cluster options

- **Structured logging**: Use `--log-cli-level=DEBUG` to see namespace creation, workload deploy, and readiness in the console. Use `--log-file=test.log` to capture logs to a file.
- **Require dev cluster**: To avoid running against the wrong cluster, use `--require-kind`. Tests will skip unless the current kube context cluster name contains "kind" or "minikube".
- **Stale namespace cleanup**: At session start, namespaces matching `krkn-test-*` that are older than 30 minutes are deleted (e.g. from a previous crashed run).
- **Timeout overrides**: Set env vars to tune timeouts (e.g. in CI): `KRKN_TEST_READINESS_TIMEOUT`, `KRKN_TEST_DEPLOY_TIMEOUT`, `KRKN_TEST_NS_CLEANUP_TIMEOUT`, `KRKN_TEST_POLICY_WAIT_TIMEOUT`, `KRKN_TEST_KRAKEN_PROC_WAIT_TIMEOUT`, `KRKN_TEST_TIMEOUT_BUDGET`.

## Architecture

- **Folder-per-scenario**: Each scenario lives under `scenarios/<scenario_name>/` with:
  - **test_<scenario>.py** — Test class extending `BaseScenarioTest`; sets `WORKLOAD_MANIFEST`, `SCENARIO_NAME`, `SCENARIO_TYPE`, `NAMESPACE_KEY_PATH`, and optionally `OVERRIDES_KEY_PATH`.
  - **resource.yaml** — Kubernetes resources (Deployment/Pod) for the scenario; the namespace is patched at deploy time.
  - **scenario_base.yaml** — Canonical Krkn scenario; the base class loads it, patches the namespace (and overrides), and passes it to Kraken via `run_scenario()`. Optional extra YAMLs (e.g. `nginx_http.yaml` for application_outage) can live in the same folder.
- **lib/**: Shared framework — `lib/base.py` defines `BaseScenarioTest`, timeout constants (env-overridable), and scenario helpers (`load_and_patch_scenario`, `run_scenario`); `lib/utils.py` provides assertion and K8s helpers; `lib/k8s.py` provides K8s client fixtures; `lib/namespace.py` provides the namespace lifecycle; `lib/deploy.py` provides `deploy_workload`, `wait_for_pods_running`, `wait_for_deployment_replicas`; `lib/kraken.py` provides `run_kraken`, `build_config` (using `CI/tests_v2/config/common_test_config.yaml`).
- **conftest.py**: Re-exports fixtures from the lib modules and defines `pytest_addoption`, logging, and `repo_root`.
- **Adding a new scenario**: Use the scaffold script (see [CONTRIBUTING_TESTS.md](CONTRIBUTING_TESTS.md)) to create `scenarios/<name>/` with the test file, `resource.yaml`, and `scenario_base.yaml`, or copy an existing scenario folder and adapt it.

## What is tested

Each test runs in an isolated ephemeral namespace; workloads are deployed automatically before the test and the namespace is deleted after (unless `--keep-ns-on-fail` is set and the test failed).

- **scenarios/pod_disruption/**
  Pod disruption scenario. `resource.yaml` is a deployment with label `app=krkn-pod-disruption-target`; `scenario_base.yaml` is loaded and `namespace_pattern` is patched to the test namespace. The test:
  1. Records baseline pod UIDs and restart counts.
  2. Runs Kraken with the pod disruption scenario.
  3. Asserts that chaos had an effect (UIDs changed or restart count increased; sketched after this list).
  4. Waits for pods to be Running and all containers Ready.
  5. Asserts the pod count is unchanged and all pods are healthy.

- **scenarios/application_outage/**
  Application outage scenario (block Ingress/Egress to target pods, then restore). `resource.yaml` is the main workload (outage pod); `scenario_base.yaml` is loaded and patched with the namespace (and duration/block as needed). Optional `nginx_http.yaml` is used by the traffic test. Tests include:
  - **test_app_outage_block_restore_and_variants**: Happy path with default, exclude_label, and block variants (Ingress, Egress, both); Krkn exit 0, pods still Running/Ready.
  - **test_network_policy_created_then_deleted**: A policy with prefix `krkn-deny-` appears during the run and is gone after.
  - **test_traffic_blocked_during_outage** (disabled, planned): Deploys nginx with label `scenario=outage`, port-forwards; during the outage curl fails, after the run curl succeeds.
  - **test_invalid_scenario_fails**: An invalid scenario file (missing `application_outage` key) causes Kraken to exit non-zero.
  - **test_bad_namespace_fails**: A scenario targeting a non-existent namespace causes Kraken to exit non-zero.
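
As a sketch of the "chaos had an effect" check in step 3 of the pod disruption test, the baseline comparison looks roughly like this (using the Kubernetes Python client directly, with `k8s_core` and `ns` as provided by the framework fixtures; the real test uses the `lib.utils` helpers, whose exact signatures may differ):

```python
# Fingerprint the target pods before and after the Kraken run: chaos is
# considered effective if pod UIDs changed or restart counts increased.
def pod_fingerprint(k8s_core, namespace, selector):
    pods = k8s_core.list_namespaced_pod(
        namespace=namespace, label_selector=selector
    ).items
    uids = {p.metadata.uid for p in pods}
    restarts = sum(
        cs.restart_count
        for p in pods
        for cs in (p.status.container_statuses or [])
    )
    return uids, restarts

selector = "app=krkn-pod-disruption-target"
before_uids, before_restarts = pod_fingerprint(k8s_core, ns, selector)
# ... run Kraken with the pod disruption scenario ...
after_uids, after_restarts = pod_fingerprint(k8s_core, ns, selector)
assert after_uids != before_uids or after_restarts > before_restarts, (
    f"chaos had no observable effect in namespace={ns}"
)
```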

## Configuration

- **pytest.ini**: Markers (`functional`, `pod_disruption`, `application_outage`, `no_workload`). Use `--timeout=300`, `--reruns=2`, `--reruns-delay=10` on the command line for full runs.
- **conftest.py**: Re-exports fixtures from `lib/k8s.py`, `lib/namespace.py`, `lib/deploy.py`, `lib/kraken.py` (e.g. `test_namespace`, `deploy_workload`, `k8s_core`, `wait_for_pods_running`, `run_kraken`, `build_config`). Configs are built from `CI/tests_v2/config/common_test_config.yaml` with monitoring disabled for local runs. Timeout constants in `lib/base.py` can be overridden via env vars.
- **Cluster access**: Reads and applies use the Kubernetes Python client; `kubectl` is still used for `port-forward` and for running Kraken.
- **utils.py**: Pod/network policy helpers and assertion helpers (`assert_all_pods_running_and_ready`, `assert_pod_count_unchanged`, `assert_kraken_success`, `assert_kraken_failure`, `patch_namespace_in_docs`).

## Relationship to existing CI

- The **existing** bash tests in `CI/tests/` and `CI/run.sh` are **unchanged**. They continue to run as before in GitHub Actions.
- This framework is **additive**. To run it in CI later, add a separate job or step that runs `pytest CI/tests_v2/ ...` from the repo root.

## Troubleshooting

- **`pytest.skip: Could not load kube config`** — No cluster or bad KUBECONFIG. Run `make -f CI/tests_v2/Makefile setup` (or `make setup` from `CI/tests_v2`) or check `kubectl cluster-info`.
- **KinD cluster creation hangs** — Docker is not running. Start Docker Desktop or run `systemctl start docker`.
- **`Bind for 0.0.0.0:9090 failed: port is already allocated`** — Another process (e.g. Prometheus) is using the port. The default dev config (`kind-config-dev.yml`) no longer maps host ports; if you use `KIND_CONFIG=kind-config.yml` or a custom config with `extraPortMappings`, free the port or switch to `kind-config-dev.yml`.
- **`TimeoutError: Pods did not become ready`** — Slow image pull or node resource limits. Increase `KRKN_TEST_READINESS_TIMEOUT` or check node resources.
- **`ModuleNotFoundError: pytest_rerunfailures`** — Missing test deps. Run `pip install -r CI/tests_v2/requirements.txt` (or `make setup`).
- **Stale `krkn-test-*` namespaces** — Left over from a previous crashed run. They are auto-cleaned at session start (older than 30 min). To remove the cluster and reports: `make -f CI/tests_v2/Makefile clean`.
- **Wrong cluster targeted** — Multiple kube contexts. Use `--require-kind` to skip unless the context is kind/minikube, or set the context explicitly: `kubectl config use-context kind-ci-krkn`.
- **`OSError: [Errno 48] Address already in use` when running tests in parallel** — Kraken normally starts an HTTP status server on port 8081. With `-n auto` (pytest-xdist), multiple Kraken processes would all try to bind to 8081. The test framework disables this server (`publish_kraken_status: False`) in the generated config, so parallel runs should not hit this. If you see it, ensure you're using the framework's `build_config` and not a config that has `publish_kraken_status: True`.
CI/tests_v2/config/common_test_config.yaml (Normal file, 74 lines)
@@ -0,0 +1,74 @@
kraken:
    distribution: kubernetes              # Distribution can be kubernetes or openshift.
    kubeconfig_path: ~/.kube/config       # Path to kubeconfig.
    exit_on_failure: False                # Exit when a post action scenario fails.
    publish_kraken_status: True           # Can be accessed at http://0.0.0.0:8081
    signal_state: RUN                     # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
    signal_address: 0.0.0.0               # Signal listening address
    port: 8081                            # Signal port
    auto_rollback: True                   # Enable auto rollback for scenarios.
    rollback_versions_directory: /tmp/kraken-rollback  # Directory to store rollback version files.
    chaos_scenarios:                      # List of policies/chaos scenarios to load.
        - $scenario_type:                 # List of chaos pod scenarios to load.
            - $scenario_file
cerberus:
    cerberus_enabled: False               # Enable it when cerberus is previously installed.
    cerberus_url:                         # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal.

performance_monitoring:
    capture_metrics: False
    metrics_profile_path: config/metrics-aggregated.yaml
    prometheus_url:                       # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token:              # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
    uuid:                                 # uuid for the run is generated by default if not set.
    enable_alerts: True                   # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
    enable_metrics: True
    alert_profile: config/alerts.yaml     # Path or URL to alert profile with the prometheus queries
    metrics_profile: config/metrics-report.yaml
    check_critical_alerts: True           # When enabled, checks Prometheus for critical alerts firing during the run (exit code 2).

tunings:
    wait_duration: 6                      # Duration to wait between each chaos scenario.
    iterations: 1                         # Number of times to execute the scenarios.
    daemon_mode: False                    # Iterations are set to infinity, which means that kraken will cause chaos forever.
telemetry:
    enabled: False                        # enables/disables the telemetry collection feature
    api_url: https://yvnn4rfoi7.execute-api.us-west-2.amazonaws.com/test # telemetry service endpoint
    username: $TELEMETRY_USERNAME         # telemetry service username
    password: $TELEMETRY_PASSWORD         # telemetry service password
    prometheus_namespace: 'monitoring'    # prometheus namespace
    prometheus_pod_name: 'prometheus-kind-prometheus-kube-prome-prometheus-0' # prometheus pod name
    prometheus_container_name: 'prometheus'
    prometheus_backup: True               # enables/disables prometheus data collection
    full_prometheus_backup: False         # if set to False only the /prometheus/wal folder will be downloaded.
    backup_threads: 5                     # number of telemetry download/upload threads
    archive_path: /tmp                    # local path where the archive files will be temporarily stored
    max_retries: 0                        # maximum number of upload retries (if 0 will retry forever)
    run_tag: ''                           # if set, this will be appended to the run folder in the bucket (useful to group the runs)
    archive_size: 10000                   # size of each prometheus data archive file in KB; the lower the size, the higher the number of archive files produced
    logs_backup: True
    logs_filter_patterns:
        - "(\\w{3}\\s\\d{1,2}\\s\\d{2}:\\d{2}:\\d{2}\\.\\d+).+"         # Sep 9 11:20:36.123425532
        - "kinit (\\d+/\\d+/\\d+\\s\\d{2}:\\d{2}:\\d{2})\\s+"           # kinit 2023/09/15 11:20:36 log
        - "(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d+Z).+"       # 2023-09-15T11:20:36.123425532Z log
    oc_cli_path: /usr/bin/oc              # optional, if not specified will be searched in $PATH
    events_backup: True                   # enables/disables cluster events collection
    telemetry_group: "funtests"
elastic:
    enable_elastic: False
    verify_certs: False
    elastic_url: "https://192.168.39.196" # To track results in elasticsearch, give url to server here; will post telemetry details when url and index not blank
    elastic_port: 32766
    username: "elastic"
    password: "test"
    metrics_index: "krkn-metrics"
    alerts_index: "krkn-alerts"
    telemetry_index: "krkn-telemetry"

health_checks:                            # Utilizing health check endpoints to observe application behavior during chaos injection.
    interval:                             # Interval in seconds to perform health checks, default value is 2 seconds
    config:                               # Provide a list of health check configurations for applications
        - url:                            # Provide the application endpoint
          bearer_token:                   # Bearer token for authentication, if any
          auth:                           # Provide authentication credentials (username, password) in tuple format if any, e.g. ("admin","secretpassword")
          exit_on_failure:                # If True, exit when the health check fails for the application; values can be True/False
CI/tests_v2/conftest.py (Normal file, 67 lines)
@@ -0,0 +1,67 @@
"""
Shared fixtures for pytest functional tests (CI/tests_v2).
Tests must be run from the repository root so run_kraken.py and config paths resolve.
"""

import logging
from pathlib import Path

import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--keep-ns-on-fail",
        action="store_true",
        default=False,
        help="Don't delete test namespaces on failure (for debugging)",
    )
    parser.addoption(
        "--require-kind",
        action="store_true",
        default=False,
        help="Skip tests unless current context is a known dev cluster (kind, minikube)",
    )


@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    rep = outcome.get_result()
    setattr(item, f"rep_{rep.when}", rep)


def _repo_root() -> Path:
    """Repository root (directory containing run_kraken.py and CI/)."""
    return Path(__file__).resolve().parent.parent.parent


@pytest.fixture(scope="session")
def repo_root():
    return _repo_root()


@pytest.fixture(scope="session", autouse=True)
def _configure_logging():
    """Set log format with timestamps for test runs."""
    logging.basicConfig(
        format="%(asctime)s %(levelname)s [%(name)s] %(message)s",
        datefmt="%Y-%m-%d %H:%M:%S",
        level=logging.INFO,
    )


# Re-export fixtures from lib modules so pytest discovers them
from lib.deploy import deploy_workload, wait_for_pods_running  # noqa: E402, F401
from lib.kraken import build_config, run_kraken, run_kraken_background  # noqa: E402, F401
from lib.k8s import (  # noqa: E402, F401
    _kube_config_loaded,
    _log_cluster_context,
    k8s_apps,
    k8s_client,
    k8s_core,
    k8s_networking,
    kubectl,
)
from lib.namespace import _cleanup_stale_namespaces, test_namespace  # noqa: E402, F401
from lib.preflight import _preflight_checks  # noqa: E402, F401
CI/tests_v2/kind-config-dev.yml (Normal file, 8 lines)
@@ -0,0 +1,8 @@
# Lean KinD config for local dev (faster than full 5-node). Use KIND_CONFIG to override.
# No extraPortMappings so setup works when 9090/30080 are in use (e.g. local Prometheus).
# For Prometheus/ES port mapping, use the repo root kind-config.yml.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
CI/tests_v2/lib/__init__.py (Normal file, 7 lines)
@@ -0,0 +1,7 @@
# Shared framework for CI/tests_v2 functional tests.
# base: BaseScenarioTest, timeout constants
# utils: assertions, K8s helpers, patch_namespace_in_docs
# k8s: K8s client fixtures, cluster context checks
# namespace: test_namespace, stale namespace cleanup
# deploy: deploy_workload, wait_for_pods_running, wait_for_deployment_replicas
# kraken: run_kraken, run_kraken_background, build_config
CI/tests_v2/lib/base.py (Normal file, 155 lines)
@@ -0,0 +1,155 @@
"""
Base class for CI/tests_v2 scenario tests.
Encapsulates the shared lifecycle: ephemeral namespace, optional workload deploy, teardown.
"""

import copy
import logging
import os
import subprocess
from pathlib import Path

import pytest
import yaml

from lib.utils import load_scenario_base

logger = logging.getLogger(__name__)


def _get_nested(obj, path):
    """Walk path (list of keys/indices) and return the value. Supports list and dict."""
    for key in path:
        obj = obj[key]
    return obj


def _set_nested(obj, path, value):
    """Walk path to the parent and set the last key to value."""
    if not path:
        return
    parent_path, last_key = path[:-1], path[-1]
    parent = obj
    for key in parent_path:
        parent = parent[key]
    parent[last_key] = value


# Timeout constants (seconds). Override via env vars (e.g. KRKN_TEST_READINESS_TIMEOUT).
# Coordinate with pytest-timeout budget (e.g. 300s).
TIMEOUT_BUDGET = int(os.environ.get("KRKN_TEST_TIMEOUT_BUDGET", "300"))
DEPLOY_TIMEOUT = int(os.environ.get("KRKN_TEST_DEPLOY_TIMEOUT", "90"))
READINESS_TIMEOUT = int(os.environ.get("KRKN_TEST_READINESS_TIMEOUT", "90"))
NS_CLEANUP_TIMEOUT = int(os.environ.get("KRKN_TEST_NS_CLEANUP_TIMEOUT", "60"))
POLICY_WAIT_TIMEOUT = int(os.environ.get("KRKN_TEST_POLICY_WAIT_TIMEOUT", "30"))
KRAKEN_PROC_WAIT_TIMEOUT = int(os.environ.get("KRKN_TEST_KRAKEN_PROC_WAIT_TIMEOUT", "60"))


class BaseScenarioTest:
    """
    Base class for scenario tests. Subclasses set:
    - WORKLOAD_MANIFEST: path (str), or callable(namespace) -> YAML str for inline manifest
    - WORKLOAD_IS_PATH: True if WORKLOAD_MANIFEST is a file path, False if inline YAML
    - LABEL_SELECTOR: label selector for pods to wait on (e.g. "app=my-target")
    - SCENARIO_NAME: e.g. "pod_disruption", "application_outage"
    - SCENARIO_TYPE: e.g. "pod_disruption_scenarios", "application_outages_scenarios"
    - NAMESPACE_KEY_PATH: path to namespace field, e.g. [0, "config", "namespace_pattern"] or ["application_outage", "namespace"]
    - NAMESPACE_IS_REGEX: True to wrap namespace in ^...$
    - OVERRIDES_KEY_PATH: path to dict for **overrides (e.g. ["application_outage"]), or [] if none
    """

    WORKLOAD_MANIFEST = None
    WORKLOAD_IS_PATH = True
    LABEL_SELECTOR = None
    SCENARIO_NAME = ""
    SCENARIO_TYPE = ""
    NAMESPACE_KEY_PATH = []
    NAMESPACE_IS_REGEX = False
    OVERRIDES_KEY_PATH = []

    @pytest.fixture(autouse=True)
    def _inject_common_fixtures(
        self,
        repo_root,
        tmp_path,
        build_config,
        run_kraken,
        run_kraken_background,
        k8s_core,
        k8s_apps,
        k8s_networking,
        k8s_client,
    ):
        """Inject common fixtures onto self so test methods don't need to declare them."""
        self.repo_root = repo_root
        self.tmp_path = tmp_path
        self.build_config = build_config
        self.run_kraken = run_kraken
        self.run_kraken_background = run_kraken_background
        self.k8s_core = k8s_core
        self.k8s_apps = k8s_apps
        self.k8s_networking = k8s_networking
        self.k8s_client = k8s_client
        yield

    @pytest.fixture(autouse=True)
    def _setup_workload(self, request, repo_root):
        if "no_workload" in request.keywords:
            request.instance.ns = request.getfixturevalue("test_namespace")
            logger.debug("no_workload marker: skipping workload deploy, ns=%s", request.instance.ns)
            yield
            return
        deploy = request.getfixturevalue("deploy_workload")
        test_namespace = request.getfixturevalue("test_namespace")
        manifest = self.WORKLOAD_MANIFEST
        if callable(manifest):
            manifest = manifest(test_namespace)
            is_path = False
            logger.info("Deploying inline workload in ns=%s, label_selector=%s", test_namespace, self.LABEL_SELECTOR)
        else:
            is_path = self.WORKLOAD_IS_PATH
            if is_path and manifest and not Path(manifest).is_absolute():
                manifest = repo_root / manifest
            logger.info("Deploying workload from %s in ns=%s, label_selector=%s", manifest, test_namespace, self.LABEL_SELECTOR)
        ns = deploy(manifest, self.LABEL_SELECTOR, is_path=is_path, timeout=DEPLOY_TIMEOUT)
        request.instance.ns = ns
        yield

    def load_and_patch_scenario(self, repo_root, namespace, **overrides):
        """Load scenario_base.yaml and patch namespace (and overrides). Returns the scenario structure."""
        scenario = copy.deepcopy(load_scenario_base(repo_root, self.SCENARIO_NAME))
        ns_value = f"^{namespace}$" if self.NAMESPACE_IS_REGEX else namespace
        if self.NAMESPACE_KEY_PATH:
            _set_nested(scenario, self.NAMESPACE_KEY_PATH, ns_value)
        if overrides and self.OVERRIDES_KEY_PATH:
            target = _get_nested(scenario, self.OVERRIDES_KEY_PATH)
            for key, value in overrides.items():
                target[key] = value
        return scenario

    def write_scenario(self, tmp_path, scenario_data, suffix=""):
        """Write scenario data to a YAML file in tmp_path. Returns the path."""
        filename = f"{self.SCENARIO_NAME}_scenario{suffix}.yaml"
        path = tmp_path / filename
        path.write_text(yaml.dump(scenario_data, default_flow_style=False, sort_keys=False))
        return path

    def run_scenario(self, tmp_path, namespace, *, overrides=None, config_filename=None):
        """Load, patch, write scenario; build config; run Kraken. Returns CompletedProcess."""
        scenario = self.load_and_patch_scenario(self.repo_root, namespace, **(overrides or {}))
        scenario_path = self.write_scenario(tmp_path, scenario)
        config_path = self.build_config(
            self.SCENARIO_TYPE,
            str(scenario_path),
            filename=config_filename or "test_config.yaml",
        )
        if os.environ.get("KRKN_TEST_DRY_RUN", "0") == "1":
            logger.info(
                "[dry-run] Would run Kraken with config=%s, scenario=%s",
                config_path,
                scenario_path,
            )
            return subprocess.CompletedProcess(
                args=[], returncode=0, stdout="[dry-run] skipped", stderr=""
            )
        return self.run_kraken(config_path)
CI/tests_v2/lib/deploy.py (Normal file, 145 lines)
@@ -0,0 +1,145 @@
"""
Workload deploy and pod/deployment readiness fixtures for CI/tests_v2.
"""

import logging
import time
from pathlib import Path

import pytest
import yaml
from kubernetes import utils as k8s_utils

from lib.base import READINESS_TIMEOUT
from lib.utils import patch_namespace_in_docs

logger = logging.getLogger(__name__)


def wait_for_deployment_replicas(k8s_apps, namespace: str, name: str, timeout: int = 120) -> None:
    """
    Poll until the deployment has ready_replicas >= spec.replicas.
    Raises TimeoutError with diagnostic details on failure.
    """
    deadline = time.monotonic() + timeout
    last_dep = None
    attempts = 0
    while time.monotonic() < deadline:
        try:
            dep = k8s_apps.read_namespaced_deployment(name=name, namespace=namespace)
        except Exception as e:
            logger.debug("Deployment %s/%s poll attempt %s failed: %s", namespace, name, attempts, e)
            time.sleep(2)
            attempts += 1
            continue
        last_dep = dep
        ready = dep.status.ready_replicas or 0
        desired = dep.spec.replicas or 1
        if ready >= desired:
            logger.debug("Deployment %s/%s ready (%s/%s)", namespace, name, ready, desired)
            return
        logger.debug("Deployment %s/%s not ready yet: %s/%s", namespace, name, ready, desired)
        time.sleep(2)
        attempts += 1
    diag = ""
    if last_dep is not None and last_dep.status:
        diag = f" ready_replicas={last_dep.status.ready_replicas}, desired={last_dep.spec.replicas}"
    raise TimeoutError(
        f"Deployment {namespace}/{name} did not become ready within {timeout}s.{diag}"
    )


@pytest.fixture
def wait_for_pods_running(k8s_core):
    """
    Poll until all matching pods are Running and all containers ready.
    Uses exponential backoff: 1s, 2s, 4s, ... capped at 10s.
    Raises TimeoutError with diagnostic details on failure.
    """

    def _wait(namespace: str, label_selector: str, timeout: int = READINESS_TIMEOUT):
        deadline = time.monotonic() + timeout
        interval = 1.0
        max_interval = 10.0
        last_list = None
        while time.monotonic() < deadline:
            try:
                pod_list = k8s_core.list_namespaced_pod(
                    namespace=namespace,
                    label_selector=label_selector,
                )
            except Exception:
                time.sleep(min(interval, max_interval))
                interval = min(interval * 2, max_interval)
                continue
            last_list = pod_list
            items = pod_list.items or []
            if not items:
                time.sleep(min(interval, max_interval))
                interval = min(interval * 2, max_interval)
                continue
            all_running = all(
                (p.status and p.status.phase == "Running") for p in items
            )
            if not all_running:
                time.sleep(min(interval, max_interval))
                interval = min(interval * 2, max_interval)
                continue
            all_ready = True
            for p in items:
                if not p.status or not p.status.container_statuses:
                    all_ready = False
                    break
                for cs in p.status.container_statuses:
                    if not getattr(cs, "ready", False):
                        all_ready = False
                        break
            if all_ready:
                return
            time.sleep(min(interval, max_interval))
            interval = min(interval * 2, max_interval)

        diag = ""
        if last_list and last_list.items:
            p = last_list.items[0]
            diag = f" e.g. pod {p.metadata.name}: phase={getattr(p.status, 'phase', None)}"
        raise TimeoutError(
            f"Pods in {namespace} with label {label_selector} did not become ready within {timeout}s.{diag}"
        )

    return _wait


@pytest.fixture(scope="function")
def deploy_workload(test_namespace, k8s_client, wait_for_pods_running, repo_root, tmp_path):
    """
    Helper that applies a manifest into the test namespace and waits for pods.
    Yields a callable: deploy(manifest_path_or_content, label_selector, *, is_path=True)
    which applies the manifest, waits for readiness, and returns the namespace name.
    """

    def _deploy(manifest_path_or_content, label_selector, *, is_path=True, timeout=READINESS_TIMEOUT):
        try:
            if is_path:
                path = Path(manifest_path_or_content)
                if not path.is_absolute():
                    path = repo_root / path
                with open(path) as f:
                    docs = list(yaml.safe_load_all(f))
            else:
                docs = list(yaml.safe_load_all(manifest_path_or_content))
            docs = patch_namespace_in_docs(docs, test_namespace)
            k8s_utils.create_from_yaml(
                k8s_client,
                yaml_objects=docs,
                namespace=test_namespace,
            )
        except k8s_utils.FailToCreateError as e:
            msgs = [str(exc) for exc in e.api_exceptions]
            raise RuntimeError(f"Failed to create resources: {'; '.join(msgs)}") from e
        logger.info("Workload applied in namespace=%s, waiting for pods with selector=%s", test_namespace, label_selector)
        wait_for_pods_running(test_namespace, label_selector, timeout=timeout)
        logger.info("Pods ready in namespace=%s", test_namespace)
        return test_namespace

    return _deploy
CI/tests_v2/lib/k8s.py (new file, 88 lines)
@@ -0,0 +1,88 @@
"""
Kubernetes client fixtures and cluster context checks for CI/tests_v2.
"""

import logging
import subprocess
from pathlib import Path

import pytest
from kubernetes import client, config

logger = logging.getLogger(__name__)


@pytest.fixture(scope="session")
def _kube_config_loaded():
    """Load kubeconfig once per session. Skips if cluster unreachable."""
    try:
        config.load_kube_config()
        logger.info("Kube config loaded successfully")
    except config.ConfigException as e:
        logger.warning("Could not load kube config: %s", e)
        pytest.skip(f"Could not load kube config (is a cluster running?): {e}")


@pytest.fixture(scope="session")
def k8s_core(_kube_config_loaded):
    """Kubernetes CoreV1Api for pods, etc. Uses default kubeconfig."""
    return client.CoreV1Api()


@pytest.fixture(scope="session")
def k8s_networking(_kube_config_loaded):
    """Kubernetes NetworkingV1Api for network policies."""
    return client.NetworkingV1Api()


@pytest.fixture(scope="session")
def k8s_client(_kube_config_loaded):
    """Kubernetes ApiClient for create_from_yaml and other generic API calls."""
    return client.ApiClient()


@pytest.fixture(scope="session")
def k8s_apps(_kube_config_loaded):
    """Kubernetes AppsV1Api for deployment status polling."""
    return client.AppsV1Api()


@pytest.fixture(scope="session", autouse=True)
def _log_cluster_context(request):
    """Log current cluster context at session start; skip if --require-kind and not a dev cluster."""
    try:
        contexts, active = config.list_kube_config_contexts()
    except Exception as e:
        logger.warning("Could not list kube config contexts: %s", e)
        return
    if not active:
        return
    context_name = active.get("name", "?")
    cluster = (active.get("context") or {}).get("cluster", "?")
    logger.info("Running tests against cluster: context=%s cluster=%s", context_name, cluster)
    if not request.config.getoption("--require-kind", False):
        return
    cluster_lower = (cluster or "").lower()
    if "kind" in cluster_lower or "minikube" in cluster_lower:
        return
    pytest.skip(
        f"Cluster '{cluster}' does not look like kind/minikube. "
        "Use default kubeconfig or pass --require-kind only on dev clusters."
    )


@pytest.fixture
def kubectl(repo_root):
    """Run kubectl with given args from repo root. Returns CompletedProcess."""

    def run(args, timeout=120):
        cmd = ["kubectl"] + (args if isinstance(args, list) else list(args))
        return subprocess.run(
            cmd,
            cwd=repo_root,
            capture_output=True,
            text=True,
            timeout=timeout,
        )

    return run
CI/tests_v2/lib/kraken.py (new file, 94 lines)
@@ -0,0 +1,94 @@
"""
Kraken execution and config building fixtures for CI/tests_v2.
"""

import os
import subprocess
import sys
from pathlib import Path

import pytest
import yaml


def _kraken_cmd(config_path: str, repo_root: Path):
    """Use the same Python as the test process so venv/.venv and coverage match."""
    python = sys.executable
    if os.environ.get("KRKN_TEST_COVERAGE", "0") == "1":
        return [
            python, "-m", "coverage", "run", "-a",
            "run_kraken.py", "-c", str(config_path),
        ]
    return [python, "run_kraken.py", "-c", str(config_path)]


@pytest.fixture
def run_kraken(repo_root):
    """Run Kraken with the given config path. Returns CompletedProcess. Default timeout 300s."""

    def run(config_path, timeout=300, extra_args=None):
        cmd = _kraken_cmd(config_path, repo_root)
        if extra_args:
            cmd.extend(extra_args)
        return subprocess.run(
            cmd,
            cwd=repo_root,
            capture_output=True,
            text=True,
            timeout=timeout,
        )

    return run


@pytest.fixture
def run_kraken_background(repo_root):
    """Start Kraken in background. Returns Popen. Call proc.terminate() or proc.wait() to stop."""

    def start(config_path):
        cmd = _kraken_cmd(config_path, repo_root)
        return subprocess.Popen(
            cmd,
            cwd=repo_root,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            text=True,
        )

    return start


@pytest.fixture
def build_config(repo_root, tmp_path):
    """
    Build a Kraken config from tests_v2's common_test_config.yaml with scenario_type and scenario_file
    substituted. Disables Prometheus/Elastic checks for local runs.
    Returns the path to the written config file.
    """
    common_path = repo_root / "CI" / "tests_v2" / "config" / "common_test_config.yaml"

    def _build(scenario_type: str, scenario_file: str, filename: str = "test_config.yaml"):
        content = common_path.read_text()
        content = content.replace("$scenario_type", scenario_type)
        content = content.replace("$scenario_file", scenario_file)
        content = content.replace("$post_config", "")

        config = yaml.safe_load(content)
        if "kraken" in config:
            # Disable status server so parallel test workers don't all bind to port 8081
            config["kraken"]["publish_kraken_status"] = False
        if "performance_monitoring" in config:
            config["performance_monitoring"]["check_critical_alerts"] = False
            config["performance_monitoring"]["enable_alerts"] = False
            config["performance_monitoring"]["enable_metrics"] = False
        if "elastic" in config:
            config["elastic"]["enable_elastic"] = False
        if "tunings" in config:
            config["tunings"]["wait_duration"] = 1

        out_path = tmp_path / filename
        with open(out_path, "w") as f:
            yaml.dump(config, f, default_flow_style=False, sort_keys=False)
        return str(out_path)

    return _build
CI/tests_v2/lib/namespace.py (new file, 114 lines)
@@ -0,0 +1,114 @@
"""
Namespace lifecycle fixtures for CI/tests_v2: create, delete, stale cleanup.
"""

import logging
import os
import time
import uuid
from datetime import datetime

import pytest
from kubernetes import client
from kubernetes.client.rest import ApiException

logger = logging.getLogger(__name__)

STALE_NS_AGE_MINUTES = 30


def _namespace_age_minutes(metadata) -> float:
    """Return age of namespace in minutes from its creation_timestamp."""
    if not metadata or not metadata.creation_timestamp:
        return 0.0
    created = metadata.creation_timestamp
    if hasattr(created, "timestamp"):
        created_ts = created.timestamp()
    else:
        try:
            dt = datetime.fromisoformat(created.replace("Z", "+00:00"))
            created_ts = dt.timestamp()
        except Exception:
            return 0.0
    return (time.time() - created_ts) / 60.0


def _wait_for_namespace_gone(k8s_core, name: str, timeout: int = 60):
    """Poll until the namespace no longer exists."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            k8s_core.read_namespace(name=name)
        except ApiException as e:
            if e.status == 404:
                return
            raise
        time.sleep(1)
    raise TimeoutError(f"Namespace {name} did not disappear within {timeout}s")


@pytest.fixture(scope="function")
def test_namespace(request, k8s_core):
    """
    Create an ephemeral namespace for the test. Deleted after the test unless
    --keep-ns-on-fail is set and the test failed.
    """
    name = f"krkn-test-{uuid.uuid4().hex[:8]}"
    ns = client.V1Namespace(
        metadata=client.V1ObjectMeta(
            name=name,
            labels={
                "pod-security.kubernetes.io/audit": "privileged",
                "pod-security.kubernetes.io/enforce": "privileged",
                "pod-security.kubernetes.io/enforce-version": "v1.24",
                "pod-security.kubernetes.io/warn": "privileged",
                "security.openshift.io/scc.podSecurityLabelSync": "false",
            },
        )
    )
    k8s_core.create_namespace(body=ns)
    logger.info("Created test namespace: %s", name)

    yield name

    keep_on_fail = request.config.getoption("--keep-ns-on-fail", False)
    rep_call = getattr(request.node, "rep_call", None)
    failed = rep_call is not None and rep_call.failed
    if keep_on_fail and failed:
        logger.info("[keep-ns-on-fail] Keeping namespace %s for debugging", name)
        return

    try:
        k8s_core.delete_namespace(
            name=name,
            body=client.V1DeleteOptions(propagation_policy="Background"),
        )
        logger.debug("Scheduled background deletion for namespace: %s", name)
    except Exception as e:
        logger.warning("Failed to delete namespace %s: %s", name, e)


@pytest.fixture(scope="session", autouse=True)
def _cleanup_stale_namespaces(k8s_core):
    """Delete krkn-test-* namespaces older than STALE_NS_AGE_MINUTES at session start."""
    if os.environ.get("PYTEST_XDIST_WORKER"):
        return
    try:
        namespaces = k8s_core.list_namespace()
    except Exception as e:
        logger.warning("Could not list namespaces for stale cleanup: %s", e)
        return
    for ns in namespaces.items or []:
        name = ns.metadata.name if ns.metadata else ""
        if not name.startswith("krkn-test-"):
            continue
        if _namespace_age_minutes(ns.metadata) <= STALE_NS_AGE_MINUTES:
            continue
        try:
            logger.warning("Deleting stale namespace: %s", name)
            k8s_core.delete_namespace(
                name=name,
                body=client.V1DeleteOptions(propagation_policy="Background"),
            )
        except Exception as e:
            logger.warning("Failed to delete stale namespace %s: %s", name, e)
CI/tests_v2/lib/preflight.py (new file, 48 lines)
@@ -0,0 +1,48 @@
"""
Preflight checks for CI/tests_v2: cluster reachability and test deps at session start.
"""

import logging
import subprocess

import pytest

logger = logging.getLogger(__name__)


@pytest.fixture(scope="session", autouse=True)
def _preflight_checks(repo_root):
    """
    Verify cluster is reachable and test deps are importable at session start.
    Skips the session if cluster-info fails or required plugins are missing.
    """
    # Check test deps (pytest plugins)
    try:
        import pytest_rerunfailures  # noqa: F401
        import pytest_html  # noqa: F401
        import pytest_timeout  # noqa: F401
        import pytest_order  # noqa: F401
        import xdist  # noqa: F401
    except ImportError as e:
        pytest.skip(
            f"Missing test dependency: {e}. "
            "Run: pip install -r CI/tests_v2/requirements.txt"
        )

    # Check cluster reachable and log server URL
    result = subprocess.run(
        ["kubectl", "cluster-info"],
        cwd=repo_root,
        capture_output=True,
        text=True,
        timeout=10,
    )
    if result.returncode != 0:
        pytest.skip(
            "Cluster not reachable (kubectl cluster-info failed). "
            f"Start a cluster (e.g. make setup) or check KUBECONFIG. stderr: {result.stderr or '(none)'}"
        )
    # Log first line of cluster-info (server URL) for debugging
    if result.stdout:
        first_line = result.stdout.strip().split("\n")[0]
        logger.info("Preflight: %s", first_line)
CI/tests_v2/lib/utils.py (new file, 212 lines)
@@ -0,0 +1,212 @@
"""
Shared helpers for CI/tests_v2 functional tests.
"""

import logging
import time
from pathlib import Path
from typing import List, Optional, Union

import pytest
import yaml
from kubernetes.client import V1NetworkPolicy, V1NetworkPolicyList, V1Pod, V1PodList

logger = logging.getLogger(__name__)


def _pods(pod_list: Union[V1PodList, List[V1Pod]]) -> List[V1Pod]:
    """Normalize V1PodList or list of V1Pod to list of V1Pod."""
    return pod_list.items if hasattr(pod_list, "items") else pod_list


def _policies(
    policy_list: Union[V1NetworkPolicyList, List[V1NetworkPolicy]],
) -> List[V1NetworkPolicy]:
    """Normalize V1NetworkPolicyList or list to list of V1NetworkPolicy."""
    return policy_list.items if hasattr(policy_list, "items") else policy_list


def scenario_dir(repo_root: Path, scenario_name: str) -> Path:
    """Return the path to a scenario folder under CI/tests_v2/scenarios/."""
    return repo_root / "CI" / "tests_v2" / "scenarios" / scenario_name


def load_scenario_base(
    repo_root: Path,
    scenario_name: str,
    filename: str = "scenario_base.yaml",
) -> Union[dict, list]:
    """
    Load and parse the scenario base YAML for a scenario.
    Returns dict or list depending on the YAML structure.
    """
    path = scenario_dir(repo_root, scenario_name) / filename
    text = path.read_text()
    data = yaml.safe_load(text)
    if data is None:
        raise ValueError(f"Empty or invalid YAML in {path}")
    return data


def patch_namespace_in_docs(docs: list, namespace: str) -> list:
    """Override metadata.namespace in each doc so create_from_yaml respects target namespace."""
    for doc in docs:
        if isinstance(doc, dict) and doc.get("metadata") is not None:
            doc["metadata"]["namespace"] = namespace
    return docs


def get_pods_list(k8s_core, namespace: str, label_selector: str) -> V1PodList:
    """Return V1PodList from the Kubernetes API."""
    return k8s_core.list_namespaced_pod(
        namespace=namespace,
        label_selector=label_selector,
    )


def get_pods_or_skip(
    k8s_core,
    namespace: str,
    label_selector: str,
    no_pods_reason: Optional[str] = None,
) -> V1PodList:
    """
    Get pods via Kubernetes API or skip if cluster unreachable or no matching pods.
    Use at test start when prerequisites may be missing.
    no_pods_reason: message when no pods match; if None, a default message is used.
    """
    try:
        pod_list = k8s_core.list_namespaced_pod(
            namespace=namespace,
            label_selector=label_selector,
        )
    except Exception as e:
        pytest.skip(f"Cluster unreachable: {e}")
    if not pod_list.items:
        reason = (
            no_pods_reason
            if no_pods_reason
            else f"No pods in {namespace} with label {label_selector}. "
            "Start a KinD cluster with default storage (local-path-provisioner)."
        )
        pytest.skip(reason)
    return pod_list


def pod_uids(pod_list: Union[V1PodList, List[V1Pod]]) -> list:
    """Return list of pod UIDs from V1PodList or list of V1Pod."""
    return [p.metadata.uid for p in _pods(pod_list)]


def restart_counts(pod_list: Union[V1PodList, List[V1Pod]]) -> int:
    """Return total restart count across all containers in V1PodList or list of V1Pod."""
    total = 0
    for p in _pods(pod_list):
        if not p.status or not p.status.container_statuses:
            continue
        for cs in p.status.container_statuses:
            # restart_count can be unset on freshly created containers; treat None as 0
            total += getattr(cs, "restart_count", 0) or 0
    return total


def get_network_policies_list(k8s_networking, namespace: str) -> V1NetworkPolicyList:
    """Return V1NetworkPolicyList from the Kubernetes API."""
    return k8s_networking.list_namespaced_network_policy(namespace=namespace)


def find_network_policy_by_prefix(
    policy_list: Union[V1NetworkPolicyList, List[V1NetworkPolicy]],
    name_prefix: str,
) -> Optional[V1NetworkPolicy]:
    """Return the first NetworkPolicy whose name starts with name_prefix, or None."""
    for policy in _policies(policy_list):
        if (
            policy.metadata
            and policy.metadata.name
            and policy.metadata.name.startswith(name_prefix)
        ):
            return policy
    return None


def assert_all_pods_running_and_ready(
    pod_list: Union[V1PodList, List[V1Pod]],
    namespace: str = "",
) -> None:
    """
    Assert all pods are Running and all containers Ready.
    Include namespace in assertion messages for debugging.
    """
    ns_suffix = f" (namespace={namespace})" if namespace else ""
    for pod in _pods(pod_list):
        assert pod.status and pod.status.phase == "Running", (
            f"Pod {pod.metadata.name} not Running after scenario: {pod.status}{ns_suffix}"
        )
        if pod.status.container_statuses:
            for cs in pod.status.container_statuses:
                assert getattr(cs, "ready", False) is True, (
                    f"Container {getattr(cs, 'name', '?')} not ready in pod {pod.metadata.name}{ns_suffix}"
                )


def assert_pod_count_unchanged(
    before: Union[V1PodList, List[V1Pod]],
    after: Union[V1PodList, List[V1Pod]],
    namespace: str = "",
) -> None:
    """Assert pod count is unchanged; include namespace in failure message."""
    before_items = _pods(before)
    after_items = _pods(after)
    ns_suffix = f" (namespace={namespace})" if namespace else ""
    assert len(after_items) == len(before_items), (
        f"Pod count changed after scenario: expected {len(before_items)}, got {len(after_items)}.{ns_suffix}"
    )


def assert_kraken_success(result, context: str = "", tmp_path=None, allowed_codes=(0,)) -> None:
    """
    Assert Kraken run succeeded (returncode in allowed_codes). On failure, include stdout and stderr
    in the assertion message and optionally write full output to tmp_path.
    Default allowed_codes=(0,). For alert-aware tests, use allowed_codes=(0, 2).
    """
    if result.returncode in allowed_codes:
        return
    if tmp_path is not None:
        try:
            (tmp_path / "kraken_stdout.log").write_text(result.stdout or "")
            (tmp_path / "kraken_stderr.log").write_text(result.stderr or "")
        except Exception as e:
            logger.warning("Could not write Kraken logs to tmp_path: %s", e)
    lines = (result.stdout or "").splitlines()
    tail_stdout = "\n".join(lines[-20:]) if lines else "(empty)"
    context_str = f" {context}" if context else ""
    path_hint = f"\nFull logs: {tmp_path}/kraken_stdout.log, {tmp_path}/kraken_stderr.log" if tmp_path else ""
    raise AssertionError(
        f"Krkn failed (rc={result.returncode}){context_str}.{path_hint}\n"
        f"--- stderr ---\n{result.stderr or '(empty)'}\n"
        f"--- stdout (last 20 lines) ---\n{tail_stdout}"
    )


def assert_kraken_failure(result, context: str = "", tmp_path=None) -> None:
    """
    Assert Kraken run failed (returncode != 0). On failure (Kraken unexpectedly succeeded),
    raise AssertionError with stdout/stderr and optional tmp_path log files for diagnostics.
    """
    if result.returncode != 0:
        return
    if tmp_path is not None:
        try:
            (tmp_path / "kraken_stdout.log").write_text(result.stdout or "")
            (tmp_path / "kraken_stderr.log").write_text(result.stderr or "")
        except Exception as e:
            logger.warning("Could not write Kraken logs to tmp_path: %s", e)
    lines = (result.stdout or "").splitlines()
    tail_stdout = "\n".join(lines[-20:]) if lines else "(empty)"
    context_str = f" {context}" if context else ""
    path_hint = f"\nFull logs: {tmp_path}/kraken_stdout.log, {tmp_path}/kraken_stderr.log" if tmp_path else ""
    raise AssertionError(
        f"Expected Krkn to fail but it succeeded (rc=0){context_str}.{path_hint}\n"
        f"--- stderr ---\n{result.stderr or '(empty)'}\n"
        f"--- stdout (last 20 lines) ---\n{tail_stdout}"
    )
CI/tests_v2/pytest.ini (new file, 14 lines)
@@ -0,0 +1,14 @@
[pytest]
testpaths = .
python_files = test_*.py
python_functions = test_*
# Install CI/tests_v2/requirements.txt for --timeout, --reruns, --reruns-delay.
# Example full run: pytest CI/tests_v2/ -v --timeout=300 --reruns=2 --reruns-delay=10 --html=... --junitxml=...
addopts = -v
markers =
    functional: marks a test as a functional test (deselect with '-m "not functional"')
    pod_disruption: marks a test as a pod disruption scenario test
    application_outage: marks a test as an application outage scenario test
    no_workload: skip workload deployment for this test (e.g. negative tests)
    order: set test order (pytest-order)
junit_family = xunit2
CI/tests_v2/requirements.txt (new file, 15 lines)
@@ -0,0 +1,15 @@
# Pytest plugin deps for CI/tests_v2 functional tests.
#
# Kept separate from the root requirements.txt because:
# - Root deps are Kraken runtime (cloud SDKs, K8s client, etc.)
# - These are test-only plugins not needed by Kraken itself
# - Merging would bloat installs for users who don't run functional tests
# - Separate files reduce version-conflict risk between test and runtime deps
#
# pytest and coverage are already in root requirements.txt; do NOT duplicate here.
# The Makefile installs both files automatically via `make setup`.
pytest-rerunfailures>=14.0
pytest-html>=4.1.0
pytest-timeout>=2.2.0
pytest-order>=1.2.0
pytest-xdist>=3.5.0
CI/tests_v2/scaffold.py (new file, 230 lines)
@@ -0,0 +1,230 @@
#!/usr/bin/env python3
"""
Generate boilerplate for a new scenario test in CI/tests_v2.

Usage (from repository root):
    python CI/tests_v2/scaffold.py --scenario service_hijacking
    python CI/tests_v2/scaffold.py --scenario node_disruption --scenario-type node_scenarios

Creates (folder-per-scenario layout):
- CI/tests_v2/scenarios/<scenario>/test_<scenario>.py (BaseScenarioTest subclass + stub test)
- CI/tests_v2/scenarios/<scenario>/resource.yaml (placeholder workload)
- CI/tests_v2/scenarios/<scenario>/scenario_base.yaml (placeholder Krkn scenario; edit for your scenario_type)
- Adds the scenario marker to pytest.ini (if not already present)
"""

import argparse
import re
import sys
from pathlib import Path


def snake_to_camel(snake: str) -> str:
    """Convert snake_case to CamelCase."""
    return "".join(word.capitalize() for word in snake.split("_"))


def scenario_type_default(scenario: str) -> str:
    """Default scenario_type for build_config (e.g. service_hijacking -> service_hijacking_scenarios)."""
    return f"{scenario}_scenarios"


TEST_FILE_TEMPLATE = '''"""
Functional test for {scenario} scenario.
Each test runs in its own ephemeral namespace with workload deployed automatically.
"""

import pytest

from lib.base import BaseScenarioTest
from lib.utils import (
    assert_all_pods_running_and_ready,
    assert_kraken_failure,
    assert_kraken_success,
    assert_pod_count_unchanged,
    get_pods_list,
)


@pytest.mark.functional
@pytest.mark.{marker}
class Test{class_name}(BaseScenarioTest):
    """{scenario} scenario."""

    WORKLOAD_MANIFEST = "CI/tests_v2/scenarios/{scenario}/resource.yaml"
    WORKLOAD_IS_PATH = True
    LABEL_SELECTOR = "app={app_label}"
    SCENARIO_NAME = "{scenario}"
    SCENARIO_TYPE = "{scenario_type}"
    NAMESPACE_KEY_PATH = {namespace_key_path}
    NAMESPACE_IS_REGEX = {namespace_is_regex}
    OVERRIDES_KEY_PATH = {overrides_key_path}

    @pytest.mark.order(1)
    def test_happy_path(self):
        """Run {scenario} scenario and assert pods remain healthy."""
        ns = self.ns
        before = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)

        result = self.run_scenario(self.tmp_path, ns)
        assert_kraken_success(result, context=f"namespace={{ns}}", tmp_path=self.tmp_path)

        after = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
        assert_pod_count_unchanged(before, after, namespace=ns)
        assert_all_pods_running_and_ready(after, namespace=ns)
'''

RESOURCE_YAML_TEMPLATE = '''# Target workload for {scenario} scenario tests.
# Namespace is patched at deploy time by the test framework.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {app_label}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {app_label}
  template:
    metadata:
      labels:
        app: {app_label}
    spec:
      containers:
        - name: app
          image: nginx:alpine
          ports:
            - containerPort: 80
'''

SCENARIO_BASE_DICT_TEMPLATE = '''# Base scenario for {scenario} (used by build_config with scenario_type: {scenario_type}).
# Edit this file with the structure expected by Krkn. Top-level key must match SCENARIO_NAME.
# See scenarios/application_outage/scenario_base.yaml and scenarios/pod_disruption/scenario_base.yaml for examples.
{scenario}:
  namespace: default
  # Add fields required by your scenario plugin.
'''

SCENARIO_BASE_LIST_TEMPLATE = '''# Base scenario for {scenario} (list format). Tests patch config.namespace_pattern with ^<ns>$.
# Edit with the structure expected by your scenario plugin. See scenarios/pod_disruption/scenario_base.yaml.
- id: {scenario}-default
  config:
    namespace_pattern: "^default$"
    # Add fields required by your scenario plugin.
'''


def main() -> int:
    parser = argparse.ArgumentParser(description="Scaffold a new scenario test in CI/tests_v2 (folder-per-scenario)")
    parser.add_argument(
        "--scenario",
        required=True,
        help="Scenario name in snake_case (e.g. service_hijacking)",
    )
    parser.add_argument(
        "--scenario-type",
        default=None,
        help="Kraken scenario_type for build_config (default: <scenario>_scenarios)",
    )
    parser.add_argument(
        "--list-based",
        action="store_true",
        help="Use list-based scenario (NAMESPACE_KEY_PATH [0, 'config', 'namespace_pattern'], OVERRIDES_KEY_PATH [0, 'config'])",
    )
    parser.add_argument(
        "--regex-namespace",
        action="store_true",
        help="Set NAMESPACE_IS_REGEX = True (namespace wrapped in ^...$)",
    )
    args = parser.parse_args()

    scenario = args.scenario.strip().lower()
    if not re.match(r"^[a-z][a-z0-9_]*$", scenario):
        print("Error: --scenario must be snake_case (e.g. service_hijacking)", file=sys.stderr)
        return 1

    scenario_type = args.scenario_type or scenario_type_default(scenario)
    class_name = snake_to_camel(scenario)
    marker = scenario
    app_label = scenario.replace("_", "-")

    if args.list_based:
        namespace_key_path = [0, "config", "namespace_pattern"]
        namespace_is_regex = True
        overrides_key_path = [0, "config"]
        scenario_base_template = SCENARIO_BASE_LIST_TEMPLATE
    else:
        namespace_key_path = [scenario, "namespace"]
        namespace_is_regex = args.regex_namespace
        overrides_key_path = [scenario]
        scenario_base_template = SCENARIO_BASE_DICT_TEMPLATE

    repo_root = Path(__file__).resolve().parent.parent.parent
    scenario_dir_path = repo_root / "CI" / "tests_v2" / "scenarios" / scenario
    test_path = scenario_dir_path / f"test_{scenario}.py"
    resource_path = scenario_dir_path / "resource.yaml"
    scenario_base_path = scenario_dir_path / "scenario_base.yaml"

    if scenario_dir_path.exists() and any(scenario_dir_path.iterdir()):
        print(f"Error: scenario directory already exists and is non-empty: {scenario_dir_path}", file=sys.stderr)
        return 1
    if test_path.exists():
        print(f"Error: {test_path} already exists", file=sys.stderr)
        return 1

    scenario_dir_path.mkdir(parents=True, exist_ok=True)

    test_content = TEST_FILE_TEMPLATE.format(
        scenario=scenario,
        marker=marker,
        class_name=class_name,
        app_label=app_label,
        scenario_type=scenario_type,
        namespace_key_path=repr(namespace_key_path),
        namespace_is_regex=namespace_is_regex,
        overrides_key_path=repr(overrides_key_path),
    )
    resource_content = RESOURCE_YAML_TEMPLATE.format(scenario=scenario, app_label=app_label)
    scenario_base_content = scenario_base_template.format(
        scenario=scenario,
        scenario_type=scenario_type,
    )

    test_path.write_text(test_content, encoding="utf-8")
    resource_path.write_text(resource_content, encoding="utf-8")
    scenario_base_path.write_text(scenario_base_content, encoding="utf-8")

    # Auto-add marker to pytest.ini if not already present
    pytest_ini_path = repo_root / "CI" / "tests_v2" / "pytest.ini"
    marker_line = f"    {marker}: marks a test as a {scenario} scenario test"
    if pytest_ini_path.exists():
        content = pytest_ini_path.read_text(encoding="utf-8")
        if f" {marker}:" not in content and f"{marker}: marks" not in content:
            lines = content.splitlines(keepends=True)
            insert_at = None
            for i, line in enumerate(lines):
                if re.match(r"^    \w+:\s*.+", line):
                    insert_at = i + 1
            if insert_at is not None:
                lines.insert(insert_at, marker_line + "\n")
                pytest_ini_path.write_text("".join(lines), encoding="utf-8")
                print("Added marker to pytest.ini")
            else:
                print("Could not find markers block in pytest.ini; add manually:")
                print(marker_line)
        else:
            print("Marker already in pytest.ini")
    else:
        print("pytest.ini not found; add this marker under 'markers':")
        print(marker_line)

    print(f"Created: {test_path}")
    print(f"Created: {resource_path}")
    print(f"Created: {scenario_base_path}")
    print()
    print("Then edit scenario_base.yaml with your scenario structure (top-level key should match SCENARIO_NAME).")
    return 0


if __name__ == "__main__":
    sys.exit(main())
CI/tests_v2/scenarios/application_outage/nginx_http.yaml (new file, 34 lines)
@@ -0,0 +1,34 @@
# Nginx Deployment + Service for application outage traffic test.
# Namespace is patched at deploy time by the test framework.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-outage-http
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-outage-http
      scenario: outage
  template:
    metadata:
      labels:
        app: nginx-outage-http
        scenario: outage
    spec:
      containers:
        - name: nginx
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-outage-http
spec:
  selector:
    app: nginx-outage-http
  ports:
    - port: 80
      targetPort: 80
CI/tests_v2/scenarios/application_outage/resource.yaml (new file, 15 lines)
@@ -0,0 +1,15 @@
apiVersion: v1
kind: Pod
metadata:
  name: outage
  labels:
    scenario: outage
spec:
  containers:
    - name: fedtools
      image: quay.io/krkn-chaos/krkn:tools
      command:
        - /bin/sh
        - -c
        - |
          sleep infinity
CI/tests_v2/scenarios/application_outage/scenario_base.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
# Base application_outage scenario. Tests load this and patch namespace (and optionally duration, block, exclude_label).
application_outage:
  duration: 10
  namespace: default
  pod_selector:
    scenario: outage
  block:
    - Ingress
    - Egress
  exclude_label: ""
CI/tests_v2/scenarios/application_outage/test_application_outage.py (new file, 229 lines; filename inferred from the folder-per-scenario layout)
@@ -0,0 +1,229 @@
"""
Functional test for application outage scenario (block network to target pods, then restore).
Equivalent to CI/tests/test_app_outages.sh with proper assertions.
The main happy-path test reuses one namespace and workload for multiple scenario runs
(default, exclude_label, block variants); other tests use their own ephemeral namespace as needed.
"""

import time

import pytest

from lib.base import (
    BaseScenarioTest,
    KRAKEN_PROC_WAIT_TIMEOUT,
    POLICY_WAIT_TIMEOUT,
)
from lib.utils import (
    assert_all_pods_running_and_ready,
    assert_kraken_failure,
    assert_kraken_success,
    assert_pod_count_unchanged,
    find_network_policy_by_prefix,
    get_network_policies_list,
    get_pods_list,
)


def _wait_for_network_policy(k8s_networking, namespace: str, prefix: str, timeout: int = 30):
    """Poll until a NetworkPolicy with name starting with prefix exists. Return its name."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        policy_list = get_network_policies_list(k8s_networking, namespace)
        policy = find_network_policy_by_prefix(policy_list, prefix)
        if policy:
            return policy.metadata.name
        time.sleep(1)
    raise TimeoutError(f"No NetworkPolicy with prefix {prefix!r} in {namespace} within {timeout}s")


def _assert_no_network_policy_with_prefix(k8s_networking, namespace: str, prefix: str):
    policy_list = get_network_policies_list(k8s_networking, namespace)
    policy = find_network_policy_by_prefix(policy_list, prefix)
    name = policy.metadata.name if policy and policy.metadata else "?"
    assert policy is None, (
        f"Expected no NetworkPolicy with prefix {prefix!r} in namespace={namespace}, found {name}"
    )


@pytest.mark.functional
@pytest.mark.application_outage
class TestApplicationOutage(BaseScenarioTest):
    """Application outage scenario: block network to target pods, then restore."""

    WORKLOAD_MANIFEST = "CI/tests_v2/scenarios/application_outage/resource.yaml"
    WORKLOAD_IS_PATH = True
    LABEL_SELECTOR = "scenario=outage"
    POLICY_PREFIX = "krkn-deny-"
    SCENARIO_NAME = "application_outage"
    SCENARIO_TYPE = "application_outages_scenarios"
    NAMESPACE_KEY_PATH = ["application_outage", "namespace"]
    NAMESPACE_IS_REGEX = False
    OVERRIDES_KEY_PATH = ["application_outage"]

    @pytest.mark.order(1)
    def test_app_outage_block_restore_and_variants(self):
        """Default, exclude_label, and block-type variants (Ingress, Egress, both) run
        successfully in one namespace; each run restores and pods stay ready."""
        ns = self.ns
        before = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)

        cases = [
            ("default", {}, "app_outage_config.yaml"),
            ("exclude_label", {"exclude_label": {"env": "prod"}}, "app_outage_exclude_config.yaml"),
            ("block=Ingress", {"block": ["Ingress"]}, "app_outage_block_ingress_config.yaml"),
            ("block=Egress", {"block": ["Egress"]}, "app_outage_block_egress_config.yaml"),
            ("block=Ingress,Egress", {"block": ["Ingress", "Egress"]}, "app_outage_block_ingress_egress_config.yaml"),
        ]
        for context_name, overrides, config_filename in cases:
            result = self.run_scenario(
                self.tmp_path, ns,
                overrides=overrides if overrides else None,
                config_filename=config_filename,
            )
            assert_kraken_success(
                result, context=f"{context_name} namespace={ns}", tmp_path=self.tmp_path
            )
            after = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
            assert_pod_count_unchanged(before, after, namespace=ns)
            assert_all_pods_running_and_ready(after, namespace=ns)

    def test_network_policy_created_then_deleted(self):
        """NetworkPolicy with prefix krkn-deny- is created during run and deleted after."""
        ns = self.ns
        scenario = self.load_and_patch_scenario(self.repo_root, ns, duration=12)
        scenario_path = self.write_scenario(self.tmp_path, scenario, suffix="_np_lifecycle")
        config_path = self.build_config(
            self.SCENARIO_TYPE, str(scenario_path),
            filename="app_outage_np_lifecycle.yaml",
        )
        proc = self.run_kraken_background(config_path)
        try:
            policy_name = _wait_for_network_policy(
                self.k8s_networking, ns, self.POLICY_PREFIX, timeout=POLICY_WAIT_TIMEOUT
            )
            assert policy_name.startswith(self.POLICY_PREFIX), (
                f"Policy name {policy_name!r} should start with {self.POLICY_PREFIX!r} (namespace={ns})"
            )
            policy_list = get_network_policies_list(self.k8s_networking, ns)
            policy = find_network_policy_by_prefix(policy_list, self.POLICY_PREFIX)
            assert policy is not None and policy.spec is not None, (
                f"Expected NetworkPolicy with spec (namespace={ns})"
            )
            assert policy.spec.pod_selector is not None, f"Policy should have pod_selector (namespace={ns})"
            assert policy.spec.policy_types is not None, f"Policy should have policy_types (namespace={ns})"
        finally:
            proc.wait(timeout=KRAKEN_PROC_WAIT_TIMEOUT)
        _assert_no_network_policy_with_prefix(self.k8s_networking, ns, self.POLICY_PREFIX)

    # def test_traffic_blocked_during_outage(self, request):
    #     """During outage, ingress to target pods is blocked; after run, traffic is restored."""
    #     ns = self.ns
    #     nginx_path = scenario_dir(self.repo_root, "application_outage") / "nginx_http.yaml"
    #     docs = list(yaml.safe_load_all(nginx_path.read_text()))
    #     docs = patch_namespace_in_docs(docs, ns)
    #     try:
    #         k8s_utils.create_from_yaml(
    #             self.k8s_client,
    #             yaml_objects=docs,
    #             namespace=ns,
    #         )
    #     except k8s_utils.FailToCreateError as e:
    #         msgs = [str(exc) for exc in e.api_exceptions]
    #         raise AssertionError(
    #             f"Failed to create nginx resources (namespace={ns}): {'; '.join(msgs)}"
    #         ) from e
    #     wait_for_deployment_replicas(self.k8s_apps, ns, "nginx-outage-http", timeout=READINESS_TIMEOUT)
    #     port = _get_free_port()
    #     pf_ref = []

    #     def _kill_port_forward():
    #         if pf_ref and pf_ref[0].poll() is None:
    #             pf_ref[0].terminate()
    #             try:
    #                 pf_ref[0].wait(timeout=5)
    #             except subprocess.TimeoutExpired:
    #                 pf_ref[0].kill()

    #     request.addfinalizer(_kill_port_forward)
    #     pf = subprocess.Popen(
    #         ["kubectl", "port-forward", "-n", ns, "service/nginx-outage-http", f"{port}:80"],
    #         cwd=self.repo_root,
    #         stdout=subprocess.DEVNULL,
    #         stderr=subprocess.DEVNULL,
    #     )
    #     pf_ref.append(pf)
    #     url = f"http://127.0.0.1:{port}/"
    #     try:
    #         time.sleep(2)
    #         baseline_ok = False
    #         for _ in range(10):
    #             try:
    #                 resp = requests.get(url, timeout=3)
    #                 if resp.ok:
    #                     baseline_ok = True
    #                     break
    #             except (requests.ConnectionError, requests.Timeout):
    #                 pass
    #             time.sleep(1)
    #         assert baseline_ok, f"Baseline: HTTP request to nginx should succeed (namespace={ns})"

    #         scenario = self.load_and_patch_scenario(self.repo_root, ns, duration=15)
    #         scenario_path = self.write_scenario(self.tmp_path, scenario, suffix="_traffic")
    #         config_path = self.build_config(
    #             self.SCENARIO_TYPE, str(scenario_path),
    #             filename="app_outage_traffic_config.yaml",
    #         )
    #         proc = self.run_kraken_background(config_path)
    #         policy_name = _wait_for_network_policy(
    #             self.k8s_networking, ns, self.POLICY_PREFIX, timeout=POLICY_WAIT_TIMEOUT
    #         )
    #         assert policy_name, f"Expected policy to exist (namespace={ns})"
    #         time.sleep(2)
    #         failed = False
    #         for _ in range(5):
    #             try:
    #                 resp = requests.get(url, timeout=2)
    #                 if not resp.ok:
    #                     failed = True
    #                     break
    #             except (requests.ConnectionError, requests.Timeout):
    #                 failed = True
    #                 break
    #             time.sleep(1)
    #         assert failed, f"During outage, HTTP request to nginx should fail (namespace={ns})"
    #         proc.wait(timeout=KRAKEN_PROC_WAIT_TIMEOUT)
    #         time.sleep(1)
    #         resp = requests.get(url, timeout=5)
    #         assert resp.ok, f"After scenario, HTTP request to nginx should succeed (namespace={ns})"
    #     finally:
    #         pf.terminate()
    #         pf.wait(timeout=5)

    @pytest.mark.no_workload
    def test_invalid_scenario_fails(self):
        """Invalid scenario file (missing application_outage) causes Kraken to exit non-zero."""
        invalid_scenario_path = self.tmp_path / "invalid_scenario.yaml"
        invalid_scenario_path.write_text("foo: bar\n")
        config_path = self.build_config(
            self.SCENARIO_TYPE, str(invalid_scenario_path),
            filename="invalid_config.yaml",
        )
        result = self.run_kraken(config_path)
        assert_kraken_failure(
            result, context=f"namespace={self.ns}", tmp_path=self.tmp_path
        )

    @pytest.mark.no_workload
    def test_bad_namespace_fails(self):
        """Scenario targeting non-existent namespace causes Kraken to exit non-zero."""
        scenario = self.load_and_patch_scenario(self.repo_root, "nonexistent-namespace-xyz-12345")
        scenario_path = self.write_scenario(self.tmp_path, scenario, suffix="_bad_ns")
        config_path = self.build_config(
            self.SCENARIO_TYPE, str(scenario_path),
            filename="app_outage_bad_ns_config.yaml",
        )
        result = self.run_kraken(config_path)
        assert_kraken_failure(
            result,
            context=f"test namespace={self.ns}",
            tmp_path=self.tmp_path,
        )
CI/tests_v2/scenarios/pod_disruption/resource.yaml (new file, 21 lines)
@@ -0,0 +1,21 @@
# Single-pod deployment targeted by pod disruption scenario.
# Namespace is patched at deploy time by the test framework.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: krkn-pod-disruption-target
spec:
  replicas: 1
  selector:
    matchLabels:
      app: krkn-pod-disruption-target
  template:
    metadata:
      labels:
        app: krkn-pod-disruption-target
    spec:
      containers:
        - name: app
          image: nginx:alpine
          ports:
            - containerPort: 80
CI/tests_v2/scenarios/pod_disruption/scenario_base.yaml (new file, 7 lines)
@@ -0,0 +1,7 @@
# Base pod_disruption scenario (list). Tests load this and patch namespace_pattern with ^<ns>$.
- id: kill-pods
  config:
    namespace_pattern: "^default$"
    label_selector: app=krkn-pod-disruption-target
    krkn_pod_recovery_time: 5
    kill: 1
CI/tests_v2/scenarios/pod_disruption/test_pod_disruption.py (new file, 58 lines)
@@ -0,0 +1,58 @@
"""
Functional test for pod disruption scenario (pod crash and recovery).
Equivalent to CI/tests/test_pod.sh with proper before/after assertions.
Each test runs in its own ephemeral namespace with workload deployed automatically.
"""

import pytest

from lib.base import BaseScenarioTest, READINESS_TIMEOUT
from lib.utils import (
    assert_all_pods_running_and_ready,
    assert_kraken_success,
    assert_pod_count_unchanged,
    get_pods_list,
    pod_uids,
    restart_counts,
)


@pytest.mark.functional
@pytest.mark.pod_disruption
class TestPodDisruption(BaseScenarioTest):
    """Pod disruption scenario: kill pods and verify recovery."""

    WORKLOAD_MANIFEST = "CI/tests_v2/scenarios/pod_disruption/resource.yaml"
    WORKLOAD_IS_PATH = True
    LABEL_SELECTOR = "app=krkn-pod-disruption-target"
    SCENARIO_NAME = "pod_disruption"
    SCENARIO_TYPE = "pod_disruption_scenarios"
    NAMESPACE_KEY_PATH = [0, "config", "namespace_pattern"]
    NAMESPACE_IS_REGEX = True

    @pytest.mark.order(1)
    def test_pod_crash_and_recovery(self, wait_for_pods_running):
        ns = self.ns
        before = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
        before_uids = pod_uids(before)
        before_restarts = restart_counts(before)

        result = self.run_scenario(self.tmp_path, ns)
        assert_kraken_success(result, context=f"namespace={ns}", tmp_path=self.tmp_path)

        after = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
        after_uids = pod_uids(after)
        after_restarts = restart_counts(after)
        uids_changed = set(after_uids) != set(before_uids)
        restarts_increased = after_restarts > before_restarts
        assert uids_changed or restarts_increased, (
            f"Chaos had no effect in namespace={ns}: pod UIDs unchanged and restart count did not increase. "
            f"Before UIDs: {before_uids}, restarts: {before_restarts}. "
            f"After UIDs: {after_uids}, restarts: {after_restarts}."
        )

        wait_for_pods_running(ns, self.LABEL_SELECTOR, timeout=READINESS_TIMEOUT)

        after_final = get_pods_list(self.k8s_core, ns, self.LABEL_SELECTOR)
        assert_pod_count_unchanged(before, after_final, namespace=ns)
        assert_all_pods_running_and_ready(after_final, namespace=ns)
CI/tests_v2/setup_env.sh (new executable file, 74 lines)
@@ -0,0 +1,74 @@
#!/usr/bin/env bash
# Setup environment for CI/tests_v2 pytest functional tests.
# Run from the repository root: ./CI/tests_v2/setup_env.sh
#
# - Creates a KinD cluster using kind-config-dev.yml (override with KIND_CONFIG=...).
# - Waits for the cluster and for local-path-provisioner pods (required by pod disruption test).
# - Does not install Python deps; use a venv and pip install -r requirements.txt and CI/tests_v2/requirements.txt yourself.

set -e

REPO_ROOT="$(cd "$(dirname "$0")/../.." && pwd)"
KIND_CONFIG="${KIND_CONFIG:-${REPO_ROOT}/CI/tests_v2/kind-config-dev.yml}"
CLUSTER_NAME="${KIND_CLUSTER_NAME:-ci-krkn}"

echo "Repository root: $REPO_ROOT"
cd "$REPO_ROOT"

# Check required tools
command -v kind >/dev/null 2>&1 || { echo "Error: kind is not installed. Install from https://kind.sigs.k8s.io/docs/user/quick-start/"; exit 1; }
command -v kubectl >/dev/null 2>&1 || { echo "Error: kubectl is not installed."; exit 1; }

# Python 3.9+
python3 -c "import sys; exit(0 if sys.version_info >= (3, 9) else 1)" 2>/dev/null || { echo "Error: Python 3.9+ required. Check: python3 --version"; exit 1; }

# Docker running (required for KinD)
docker info >/dev/null 2>&1 || { echo "Error: Docker is not running. Start Docker Desktop or run: systemctl start docker"; exit 1; }

# Tool versions for reproducibility
echo "kind: $(kind --version 2>/dev/null || kind version 2>/dev/null)"
echo "kubectl: $(kubectl version --client --short 2>/dev/null || kubectl version --client 2>/dev/null)"

# Create cluster if it doesn't exist (use "kind get clusters" so we skip when nodes exist even if kubeconfig check would fail)
if kind get clusters 2>/dev/null | grep -qx "$CLUSTER_NAME"; then
  echo "KinD cluster '$CLUSTER_NAME' already exists, skipping creation."
else
  echo "Creating KinD cluster '$CLUSTER_NAME' from $KIND_CONFIG ..."
  kind create cluster --name "$CLUSTER_NAME" --config "$KIND_CONFIG"
fi

# echo "Pre-pulling test workload images into KinD cluster..."
# docker pull nginx:alpine
# kind load docker-image nginx:alpine --name "$CLUSTER_NAME"

# kind merges into default kubeconfig (~/.kube/config), so kubectl should work in this shell.
# If you need to use this cluster from another terminal: export KUBECONFIG=~/.kube/config
# and ensure context: kubectl config use-context kind-$CLUSTER_NAME

echo "Waiting for cluster nodes to be Ready..."
kubectl wait --for=condition=Ready nodes --all --timeout=120s 2>/dev/null || true

echo "Waiting for local-path-provisioner pods (namespace local-path-storage, label app=local-path-provisioner)..."
for i in {1..60}; do
  if kubectl get pods -n local-path-storage -l app=local-path-provisioner -o name 2>/dev/null | grep -q .; then
    echo "Found local-path-provisioner pod(s). Waiting for Ready..."
    kubectl wait --for=condition=ready pod -l app=local-path-provisioner -n local-path-storage --timeout=120s 2>/dev/null && break
  fi
  echo "Attempt $i: local-path-provisioner not ready yet..."
  sleep 3
done

if ! kubectl get pods -n local-path-storage -l app=local-path-provisioner -o name 2>/dev/null | grep -q .; then
  echo "Warning: No pods with label app=local-path-provisioner in local-path-storage."
  echo "KinD usually deploys this by default. Check: kubectl get pods -n local-path-storage"
  exit 1
fi

echo ""
echo "Cluster is ready for CI/tests_v2."
echo "  kubectl uses the default kubeconfig (kind merged it). For another terminal: export KUBECONFIG=~/.kube/config"
echo ""
echo "Next: activate your venv, install deps, and run tests from repo root:"
echo "  pip install -r requirements.txt"
echo "  pip install -r CI/tests_v2/requirements.txt"
echo "  pytest CI/tests_v2/ -v --timeout=300 --reruns=2 --reruns-delay=10"
CLAUDE.md (new file, 273 lines)
@@ -0,0 +1,273 @@
|
||||
# CLAUDE.md - Krkn Chaos Engineering Framework
|
||||
|
||||
## Project Overview
|
||||
|
||||
Krkn (Kraken) is a chaos engineering tool for Kubernetes/OpenShift clusters. It injects deliberate failures to validate cluster resilience. Plugin-based architecture with multi-cloud support (AWS, Azure, GCP, IBM Cloud, VMware, Alibaba, OpenStack).
|
||||
|
||||
## Repository Structure
|
||||
|
||||
```
|
||||
krkn/
|
||||
├── krkn/
|
||||
│ ├── scenario_plugins/ # Chaos scenario plugins (pod, node, network, hogs, etc.)
|
||||
│ ├── utils/ # Utility functions
|
||||
│ ├── rollback/ # Rollback management
|
||||
│ ├── prometheus/ # Prometheus integration
|
||||
│ └── cerberus/ # Health monitoring
|
||||
├── tests/ # Unit tests (unittest framework)
|
||||
├── scenarios/ # Example scenario configs (openshift/, kube/, kind/)
|
||||
├── config/ # Configuration files
|
||||
└── CI/ # CI/CD test scripts
|
||||
```
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Setup (ALWAYS use virtual environment)
|
||||
python3 -m venv venv
|
||||
source venv/bin/activate
|
||||
pip install -r requirements.txt
|
||||
|
||||
# Run Krkn
|
||||
python run_kraken.py --config config/config.yaml
|
||||
|
||||
# Note: Scenarios are specified in config.yaml under kraken.chaos_scenarios
|
||||
# There is no --scenario flag; edit config/config.yaml to select scenarios
|
||||
|
||||
# Run tests
|
||||
python -m unittest discover -s tests -v
|
||||
python -m coverage run -a -m unittest discover -s tests -v
|
||||
```
|
||||
|
||||
## Critical Requirements
|
||||
|
||||
### Python Environment
|
||||
- **Python 3.9+** required
|
||||
- **NEVER install packages globally** - always use virtual environment
|
||||
- **CRITICAL**: `docker` must be <7.0 and `requests` must be <2.32 (Unix socket compatibility)
|
||||
|
||||
### Key Dependencies
|
||||
- **krkn-lib** (5.1.13): Core library for Kubernetes/OpenShift operations
|
||||
- **kubernetes** (34.1.0): Kubernetes Python client
|
||||
- **docker** (<7.0), **requests** (<2.32): DO NOT upgrade without verifying compatibility
|
||||
- Cloud SDKs: boto3 (AWS), azure-mgmt-* (Azure), google-cloud-compute (GCP), ibm_vpc (IBM), pyVmomi (VMware)
|
||||
|
||||
## Plugin Architecture (CRITICAL)
|
||||
|
||||
**Strictly enforced naming conventions:**
|
||||
|
||||
### Naming Rules
|
||||
- **Module files**: Must end with `_scenario_plugin.py` and use snake_case
|
||||
- Example: `pod_disruption_scenario_plugin.py`
|
||||
- **Class names**: Must be CamelCase and end with `ScenarioPlugin`
|
||||
- Example: `PodDisruptionScenarioPlugin`
|
||||
- Must match module filename (snake_case ↔ CamelCase)
|
||||
- **Directory structure**: Plugin dirs CANNOT contain "scenario" or "plugin"
|
||||
- Location: `krkn/scenario_plugins/<plugin_name>/`
|
||||
|
||||
### Plugin Implementation
|
||||
Every plugin MUST:
|
||||
1. Extend `AbstractScenarioPlugin`
|
||||
2. Implement `run()` method
|
||||
3. Implement `get_scenario_types()` method
|
||||
|
||||
```python
|
||||
from krkn.scenario_plugins import AbstractScenarioPlugin
|
||||
|
||||
class PodDisruptionScenarioPlugin(AbstractScenarioPlugin):
|
||||
def run(self, config, scenarios_list, kubeconfig_path, wait_duration):
|
||||
pass
|
||||
|
||||
def get_scenario_types(self):
|
||||
return ["pod_scenarios", "pod_outage"]
|
||||
```
|
||||
|
||||

### Creating a New Plugin
1. Create directory: `krkn/scenario_plugins/<plugin_name>/`
2. Create module: `<plugin_name>_scenario_plugin.py`
3. Create class: `<PluginName>ScenarioPlugin` extending `AbstractScenarioPlugin`
4. Implement `run()` and `get_scenario_types()`
5. Create unit test: `tests/test_<plugin_name>_scenario_plugin.py`
6. Add example scenario: `scenarios/<platform>/<scenario>.yaml`

**DO NOT**: Violate naming conventions (factory will reject), include "scenario"/"plugin" in directory names, create plugins without tests.

## Testing

### Unit Tests
```bash
# Run all tests
python -m unittest discover -s tests -v

# Specific test
python -m unittest tests.test_pod_disruption_scenario_plugin

# With coverage
python -m coverage run -a -m unittest discover -s tests -v
python -m coverage html
```

**Test requirements:**
- Naming: `test_<module>_scenario_plugin.py`
- Mock external dependencies (Kubernetes API, cloud providers)
- Test success, failure, and edge cases
- Keep tests isolated and independent

### Functional Tests
Located in `CI/tests/`. They can be run locally on a kind cluster with Prometheus and Elasticsearch set up.

**Setup for local testing:**
1. Deploy Prometheus and Elasticsearch on your kind cluster:
   - Prometheus setup: https://krkn-chaos.dev/docs/developers-guide/testing-changes/#prometheus
   - Elasticsearch setup: https://krkn-chaos.dev/docs/developers-guide/testing-changes/#elasticsearch

2. Or disable monitoring features in `config/config.yaml`:
   ```yaml
   performance_monitoring:
     enable_alerts: False
     enable_metrics: False
     check_critical_alerts: False
   ```

**Note:** Functional tests run automatically in CI with full monitoring enabled.

## Cloud Provider Implementations

Node chaos scenarios are cloud-specific. Each provider lives in `krkn/scenario_plugins/node_actions/<provider>_node_scenarios.py`:
- AWS, Azure, GCP, IBM Cloud, VMware, Alibaba, OpenStack, Bare Metal

Implement: stop, start, reboot, terminate instances.

**When modifying**: Maintain consistency with other providers, handle API errors, add logging, update tests.

### Adding Cloud Provider Support
1. Create: `krkn/scenario_plugins/node_actions/<provider>_node_scenarios.py`
2. Extend: `abstract_node_scenarios.AbstractNodeScenarios`
3. Implement: `stop_instances`, `start_instances`, `reboot_instances`, `terminate_instances` (see the sketch below)
4. Add SDK to `requirements.txt`
5. Create unit test with mocked SDK
6. Add example scenario: `scenarios/openshift/<provider>_node_scenarios.yml`
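
A minimal, hypothetical provider sketch under the conventions above ("ExampleCloud" and its SDK client are placeholders; verify the exact base-class signatures in `abstract_node_scenarios.py` before implementing):

```python
# Hypothetical sketch: krkn/scenario_plugins/node_actions/examplecloud_node_scenarios.py
import logging

from krkn.scenario_plugins.node_actions.abstract_node_scenarios import (
    AbstractNodeScenarios,
)


class ExampleCloudNodeScenarios(AbstractNodeScenarios):
    def __init__(self, sdk_client):
        # The provider SDK client is injected so unit tests can pass a mock.
        self.client = sdk_client

    def stop_instances(self, instance_ids):
        for instance_id in instance_ids:
            logging.info("Stopping instance %s", instance_id)
            self.client.stop(instance_id)  # hypothetical SDK call

    def start_instances(self, instance_ids):
        for instance_id in instance_ids:
            logging.info("Starting instance %s", instance_id)
            self.client.start(instance_id)  # hypothetical SDK call

    def reboot_instances(self, instance_ids):
        for instance_id in instance_ids:
            logging.info("Rebooting instance %s", instance_id)
            self.client.reboot(instance_id)  # hypothetical SDK call

    def terminate_instances(self, instance_ids):
        for instance_id in instance_ids:
            logging.info("Terminating instance %s", instance_id)
            self.client.terminate(instance_id)  # hypothetical SDK call
```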

## Configuration

**Main config**: `config/config.yaml`
- `kraken`: Core settings
- `cerberus`: Health monitoring
- `performance_monitoring`: Prometheus
- `elastic`: Elasticsearch telemetry

**Scenario configs**: `scenarios/` directory
```yaml
- config:
    scenario_type: <type>  # Must match plugin's get_scenario_types()
```

## Code Style

- **Import order**: Standard library, third-party, local imports
- **Naming**: snake_case (functions/variables), CamelCase (classes)
- **Logging**: Use Python's `logging` module
- **Error handling**: Return appropriate exit codes
- **Docstrings**: Required for public functions/classes

## Exit Codes

Krkn uses specific exit codes to communicate execution status:

- `0`: Success - all scenarios passed, no critical alerts
- `1`: Scenario failure - one or more scenarios failed
- `2`: Critical alerts fired during execution
- `3+`: Health check failure (Cerberus monitoring detected issues)

**When implementing scenarios:**
- Return `0` on success
- Return `1` on scenario-specific failures
- Propagate health check failures appropriately
- Log exit code reasons clearly (see the sketch below)
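
A minimal sketch of how a run could map outcomes to these codes (illustrative only; the real mapping lives inside Krkn and the names below are hypothetical):

```python
# Illustrative mapping of run outcomes to the exit codes documented above.
import sys


def exit_code(failed_scenarios: list, critical_alerts: int, health_check_failed: bool) -> int:
    if health_check_failed:
        return 3  # health check failure (Cerberus)
    if critical_alerts:
        return 2  # critical alerts fired during execution
    if failed_scenarios:
        return 1  # one or more scenarios failed
    return 0  # success


sys.exit(exit_code(failed_scenarios=[], critical_alerts=0, health_check_failed=False))
```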

## Container Support

Krkn can run inside a container. See the `containers/` directory.

**Building a custom image:**
```bash
cd containers
./compile_dockerfile.sh  # Generates Dockerfile from template
docker build -t krkn:latest .
```

**Running containerized:**
```bash
docker run -v ~/.kube:/root/.kube:Z \
  -v $(pwd)/config:/config:Z \
  -v $(pwd)/scenarios:/scenarios:Z \
  krkn:latest
```

## Git Workflow

- **NEVER commit directly to main**
- **NEVER use `--force` without approval**
- **ALWAYS create feature branches**: `git checkout -b feature/description`
- **ALWAYS run tests before pushing**

**Conventional commits**: `feat:`, `fix:`, `test:`, `docs:`, `refactor:`

```bash
git checkout main && git pull origin main
git checkout -b feature/your-feature-name
# Make changes, write tests
python -m unittest discover -s tests -v
git add <specific-files>
git commit -m "feat: description"
git push -u origin feature/your-feature-name
```

## Environment Variables

- `KUBECONFIG`: Path to kubeconfig
- `AWS_*`, `AZURE_*`, `GOOGLE_APPLICATION_CREDENTIALS`: Cloud credentials
- `PROMETHEUS_URL`, `ELASTIC_URL`, `ELASTIC_PASSWORD`: Monitoring config

**NEVER commit credentials or API keys.**
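
A quick, illustrative pre-flight check that the variables above are set (the variable names follow the list above; the script itself is not part of Krkn):

```python
# Illustrative pre-flight check for the environment variables listed above.
import os

for var in ("KUBECONFIG", "PROMETHEUS_URL", "ELASTIC_URL"):
    if not os.environ.get(var):
        print(f"warning: {var} is not set")
```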

## Common Pitfalls

1. Missing virtual environment - always activate venv
2. Running functional tests without cluster setup
3. Ignoring exit codes
4. Modifying krkn-lib directly (it's a separate package)
5. Upgrading docker/requests beyond version constraints

## Before Writing Code

1. Check for existing implementations
2. Review existing plugins as examples
3. Maintain consistency with cloud provider patterns
4. Plan rollback logic
5. Write tests alongside code
6. Update documentation

## When Adding Dependencies

1. Check if functionality exists in krkn-lib or current dependencies
2. Verify compatibility with existing versions
3. Pin specific versions in `requirements.txt`
4. Check for security vulnerabilities
5. Test thoroughly for conflicts

## Common Development Tasks

### Modifying Existing Plugin
1. Read plugin code and corresponding test
2. Make changes
3. Update/add unit tests
4. Run: `python -m unittest tests.test_<plugin>_scenario_plugin`

### Writing Unit Tests
1. Create: `tests/test_<module>_scenario_plugin.py`
2. Import `unittest` and plugin class
3. Mock external dependencies
4. Test success, failure, and edge cases
5. Run: `python -m unittest tests.test_<module>_scenario_plugin` (a skeleton sketch follows below)
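
A hypothetical skeleton for such a test (the plugin module/class names are placeholders that follow the naming rules above; a `run()` return value of 0 is assumed to indicate success):

```python
# Hypothetical skeleton: tests/test_example_scenario_plugin.py
import unittest
from unittest.mock import MagicMock

# Placeholder import -- replace with the real plugin under test.
from krkn.scenario_plugins.example.example_scenario_plugin import ExampleScenarioPlugin


class TestExampleScenarioPlugin(unittest.TestCase):
    def setUp(self):
        self.plugin = ExampleScenarioPlugin()

    def test_scenario_types(self):
        self.assertIn("example_scenarios", self.plugin.get_scenario_types())

    def test_run_success(self):
        # Mock external dependencies (Kubernetes API, cloud SDKs) rather than
        # touching a real cluster; success is assumed to be a 0 return value.
        result = self.plugin.run(
            config=MagicMock(),
            scenarios_list=[],
            kubeconfig_path="/tmp/kubeconfig",
            wait_duration=0,
        )
        self.assertEqual(result, 0)


if __name__ == "__main__":
    unittest.main()
```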

83 GOVERNANCE.md Normal file
@@ -0,0 +1,83 @@

The governance model adopted here is heavily influenced by a set of CNCF projects, drawing in particular on
[Kubernetes governance](https://github.com/kubernetes/community/blob/master/governance.md).
*For similar structures, some of the wording from the Kubernetes governance document is borrowed verbatim to
preserve its originally intended meaning.*

## Principles

- **Open**: Krkn is an open source community.
- **Welcoming and respectful**: See [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
- **Transparent and accessible**: Work and collaboration should be done in public.
  Changes to the Krkn organization, Krkn code repositories, and CNCF-related activities (e.g.
  level of involvement) are made in public.
- **Merit**: Ideas and contributions are accepted according to their technical merit
  and alignment with project objectives, scope and design principles.

## Code of Conduct

Krkn follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
Here is an excerpt:

> As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.

## Maintainer Levels

### Contributor
Contributors contribute to the community. Anyone can become a contributor by participating in discussions, reporting bugs, or contributing code or documentation.

#### Responsibilities:

- Be active in the community and adhere to the Code of Conduct.
- Report bugs and suggest new features.
- Contribute high-quality code and documentation.

### Member
Members are active contributors to the community. Members have demonstrated a strong understanding of the project's codebase and conventions.

#### Responsibilities:

- Review pull requests for correctness, quality, and adherence to project standards.
- Provide constructive and timely feedback to contributors.
- Ensure that all contributions are well-tested and documented.
- Work with maintainers to ensure a smooth and efficient release process.

### Maintainer
Maintainers are responsible for the overall health and direction of the project. They are long-standing contributors who have shown a deep commitment to the project's success.

#### Responsibilities:

- Set the technical direction and vision for the project.
- Manage releases and ensure the stability of the main branch.
- Make decisions on feature inclusion and project priorities.
- Mentor other contributors and help grow the community.
- Resolve disputes and make final decisions when consensus cannot be reached.

### Owner
Owners have administrative access to the project and are the final decision-makers.

#### Responsibilities:

- Manage the core team of maintainers and approvers.
- Set the overall vision and strategy for the project.
- Handle administrative tasks, such as managing the project's repository and other resources.
- Represent the project in the broader open-source community.

# Credits
Sections of this document have been borrowed from [Kubernetes governance](https://github.com/kubernetes/community/blob/master/governance.md).

@@ -1,12 +1,34 @@
## Overview

This document contains a list of maintainers in this repo.
This file lists the maintainers and committers of the Krkn project.

In short, maintainers are people who are in charge of the maintenance of the Krkn project. Committers are active community members who have shown that they are committed to the continuous development of the project through ongoing engagement with the community.

For a detailed description of the roles, see the [Governance](./GOVERNANCE.md) page.

## Current Maintainers

| Maintainer        | GitHub ID                                       | Email               |
|-------------------|-------------------------------------------------|---------------------|
| Ravi Elluri       | [chaitanyaenr](https://github.com/chaitanyaenr) | nelluri@redhat.com  |
| Pradeep Surisetty | [psuriset](https://github.com/psuriset)         | psuriset@redhat.com |
| Paige Rubendall   | [paigerube14](https://github.com/paigerube14)   | prubenda@redhat.com |
| Tullio Sebastiani | [tsebastiani](https://github.com/tsebastiani)   | tsebasti@redhat.com |
| Maintainer            | GitHub ID                                                          | Email               | Contribution Level |
|-----------------------|--------------------------------------------------------------------|---------------------|--------------------|
| Ravi Elluri           | [chaitanyaenr](https://github.com/chaitanyaenr)                    | nelluri@redhat.com  | Owner              |
| Pradeep Surisetty     | [psuriset](https://github.com/psuriset)                            | psuriset@redhat.com | Owner              |
| Paige Patton          | [paigerube14](https://github.com/paigerube14)                      | prubenda@redhat.com | Maintainer         |
| Tullio Sebastiani     | [tsebastiani](https://github.com/tsebastiani)                      | tsebasti@redhat.com | Maintainer         |
| Yogananth Subramanian | [yogananth-subramanian](https://github.com/yogananth-subramanian) | ysubrama@redhat.com | Maintainer         |
| Sahil Shah            | [shahsahil264](https://github.com/shahsahil264)                    | sahshah@redhat.com  | Member             |

Note: It is mandatory for all Krkn community members to follow our [Code of Conduct](./CODE_OF_CONDUCT.md).

## Contributor Ladder
This project follows a contributor ladder model, where contributors can take on more responsibilities as they gain experience and demonstrate their commitment to the project.
The roles are:
* Contributor: A contributor to the community, whether with code, docs, or issues.

* Member: A contributor who is active in the community and reviews pull requests.

* Maintainer: A contributor who is responsible for the overall health and direction of the project.

* Owner: A contributor who has administrative ownership of the project.

10 README.md
@@ -22,14 +22,8 @@ Kraken injects deliberate failures into Kubernetes clusters to check if it is re
Instructions on how to setup, configure and run Kraken can be found in the [documentation](https://krkn-chaos.dev/docs/).


### Blogs and other useful resources
- Blog post on introduction to Kraken: https://www.openshift.com/blog/introduction-to-kraken-a-chaos-tool-for-openshift/kubernetes
- Discussion and demo on how Kraken can be leveraged to ensure OpenShift is reliable, performant and scalable: https://www.youtube.com/watch?v=s1PvupI5sD0&ab_channel=OpenShift
- Blog post emphasizing the importance of making Chaos part of Performance and Scale runs to mimic the production environments: https://www.openshift.com/blog/making-chaos-part-of-kubernetes/openshift-performance-and-scalability-tests
- Blog post on findings from Chaos test runs: https://cloud.redhat.com/blog/openshift/kubernetes-chaos-stories
- Discussion with CNCF TAG App Delivery on Krkn workflow, features and addition to CNCF sandbox: [Github](https://github.com/cncf/sandbox/issues/44), [Tracker](https://github.com/cncf/tag-app-delivery/issues/465), [recording](https://www.youtube.com/watch?v=nXQkBFK_MWc&t=722s)
- Blog post on supercharging chaos testing using AI integration in Krkn: https://www.redhat.com/en/blog/supercharging-chaos-testing-using-ai
- Blog post announcing Krkn joining CNCF Sandbox: https://www.redhat.com/en/blog/krknchaos-joining-cncf-sandbox
### Blogs, podcasts and interviews
Additional resources, including blog posts, podcasts, and community interviews, can be found on the [website](https://krkn-chaos.dev/blog)


### Roadmap

55 RELEASE.md Normal file
@@ -0,0 +1,55 @@

### Release Protocol: The Community-First Cycle

This document outlines the project's release protocol, a methodology designed to ensure a responsive and transparent development process that is closely aligned with the needs of our users and contributors. This protocol is tailored for projects in their early stages, prioritizing agility and community feedback over a rigid, time-boxed schedule.

#### 1. Key Principles

* **Community as the Compass:** The primary driver for all development is feedback from our user and contributor community.
* **Prioritization by Impact:** Tasks are prioritized based on their impact on user experience, the urgency of bug fixes, and the value of community-contributed features.
* **Event-Driven Releases:** Releases are not bound by a fixed calendar. New versions are published when a significant body of work is complete, a critical issue is resolved, or a new feature is ready for adoption.
* **Transparency and Communication:** All development decisions, progress, and plans are communicated openly through our issue tracker, pull requests, and community channels.

#### 2. The Release Lifecycle

The release cycle is a continuous flow of activities rather than a series of sequential phases.

**2.1. Discovery & Prioritization**
* New features and bug fixes are identified through user feedback on our issue tracker, community discussions, and direct contributions.
* The core maintainers, in collaboration with the community, continuously evaluate and tag issues to create an open and dynamic backlog.

**2.2. Development & Code Review**
* Work is initiated based on the highest-priority items in the backlog.
* All code contributions are made via pull requests (PRs).
* PRs are reviewed by maintainers and other contributors to ensure code quality, adherence to project standards, and overall stability.

**2.3. Release Readiness**
A new release is considered ready when one of the following conditions is met:
* A major new feature has been completed and thoroughly tested.
* A critical security vulnerability or bug has been addressed.
* A sufficient number of smaller improvements and fixes have been merged, providing meaningful value to users.

**2.4. Versioning**
We adhere to [**Semantic Versioning 2.0.0**](https://semver.org/).
* **Major version (`X.y.z`)**: Reserved for releases that introduce breaking changes.
* **Minor version (`x.Y.z`)**: Used for new features or significant non-breaking changes.
* **Patch version (`x.y.Z`)**: Used for bug fixes and small, non-functional improvements.

#### 3. Roles and Responsibilities

* **Members:** Active community members listed in the [core team](https://github.com/krkn-chaos/krkn/blob/main/MAINTAINERS.md). Their duties include:
  * Reviewing pull requests.
  * Contributing code and documentation via pull requests.
  * Engaging in discussions and providing feedback.
* **Maintainers and Owners:** The [core team](https://github.com/krkn-chaos/krkn/blob/main/MAINTAINERS.md) responsible for the project's health. Their duties include:
  * Facilitating community discussions and prioritization.
  * Reviewing and merging pull requests.
  * Cutting and announcing official releases.
* **Contributors:** The community. Their duties include:
  * Reporting bugs and suggesting new features.
  * Contributing code and documentation via pull requests.
  * Engaging in discussions and providing feedback.

#### 4. Adoption and Future Evolution

This protocol is designed for the current stage of the project. As the project matures and the contributor base grows, the maintainers will evaluate the need for a more structured methodology to ensure continued scalability and stability.

16 ROADMAP.md
@@ -2,11 +2,11 @@

Following is a list of enhancements that we are planning to add support for in Krkn. Of course, any help/contributions are greatly appreciated.

- [ ] [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/krkn-chaos/krkn/issues/424)
- [x] [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/krkn-chaos/krkn/issues/424)
- [x] [Centralized storage for chaos experiments artifacts](https://github.com/krkn-chaos/krkn/issues/423)
- [ ] [Support for causing DNS outages](https://github.com/krkn-chaos/krkn/issues/394)
- [x] [Support for causing DNS outages](https://github.com/krkn-chaos/krkn/issues/394)
- [x] [Chaos recommender](https://github.com/krkn-chaos/krkn/tree/main/utils/chaos-recommender) to suggest scenarios having probability of impacting the service under test using profiling results
- [ ] Chaos AI integration to improve test coverage while reducing fault space to save costs and execution time
- [x] Chaos AI integration to improve test coverage while reducing fault space to save costs and execution time [krkn-chaos-ai](https://github.com/krkn-chaos/krkn-chaos-ai)
- [x] [Support for pod level network traffic shaping](https://github.com/krkn-chaos/krkn/issues/393)
- [ ] [Ability to visualize the metrics that are being captured by Kraken and stored in Elasticsearch](https://github.com/krkn-chaos/krkn/issues/124)
- [x] Support for running all the scenarios of Kraken on Kubernetes distribution - see https://github.com/krkn-chaos/krkn/issues/185, https://github.com/redhat-chaos/krkn/issues/186
@@ -14,3 +14,13 @@ Following are a list of enhancements that we are planning to work on adding supp
- [x] [Switch documentation references to Kubernetes](https://github.com/krkn-chaos/krkn/issues/495)
- [x] [OCP and Kubernetes functionalities segregation](https://github.com/krkn-chaos/krkn/issues/497)
- [x] [Krknctl - client for running Krkn scenarios with ease](https://github.com/krkn-chaos/krknctl)
- [x] [AI Chat bot to help get started with Krkn and commands](https://github.com/krkn-chaos/krkn-lightspeed)
- [ ] [Ability to roll back cluster to original state if chaos fails](https://github.com/krkn-chaos/krkn/issues/804)
- [ ] Add recovery time metrics to each scenario for better regression analysis
- [ ] [Add resiliency scoring to chaos scenarios ran on cluster](https://github.com/krkn-chaos/krkn/issues/125)
- [ ] [Add AI-based Chaos Configuration Generator](https://github.com/krkn-chaos/krkn/issues/1166)
- [ ] [Introduce Security Chaos Engineering Scenarios](https://github.com/krkn-chaos/krkn/issues/1165)
- [ ] [Add AWS-native Chaos Scenarios (S3, Lambda, Networking)](https://github.com/krkn-chaos/krkn/issues/1164)
- [ ] [Unify Krkn Ecosystem under krknctl for Enhanced UX](https://github.com/krkn-chaos/krknctl/issues/113)
- [ ] [Build Web UI for Creating, Monitoring, and Reviewing Chaos Scenarios](https://github.com/krkn-chaos/krkn/issues/1167)
- [ ] [Add Predefined Chaos Scenario Templates (KRKN Chaos Library)](https://github.com/krkn-chaos/krkn/issues/1168)

@@ -40,4 +40,4 @@ The security team currently consists of the [Maintainers of Krkn](https://github

## Process and Supported Releases

The Krkn security team will investigate and provide a fix in a timely mannner depending on the severity. The fix will be included in the new release of Krkn and details will be included in the release notes.
The Krkn security team will investigate and provide a fix in a timely manner depending on the severity. The fix will be included in the new release of Krkn and details will be included in the release notes.

@@ -39,7 +39,7 @@ cerberus:
    Sunday:
    slack_team_alias:  # The slack team alias to be tagged while reporting failures in the slack channel when no watcher is assigned

    custom_checks:  # Relative paths of files conataining additional user defined checks
    custom_checks:  # Relative paths of files containing additional user defined checks

tunings:
    timeout: 3  # Number of seconds before requests fail

@@ -1,64 +1,67 @@
kraken:
    kubeconfig_path: ~/.kube/config  # Path to kubeconfig
    exit_on_failure: False  # Exit when a post action scenario fails
    auto_rollback: True  # Enable auto rollback for scenarios.
    rollback_versions_directory: /tmp/kraken-rollback  # Directory to store rollback version files.
    publish_kraken_status: True  # Can be accessed at http://0.0.0.0:8081
    signal_state: RUN  # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
    signal_address: 0.0.0.0  # Signal listening address
    port: 8081  # Signal port
    chaos_scenarios:
        # List of policies/chaos scenarios to load
        - hog_scenarios:
            - scenarios/kube/cpu-hog.yml
            - scenarios/kube/memory-hog.yml
            - scenarios/kube/io-hog.yml
        - application_outages_scenarios:
            - scenarios/openshift/app_outage.yaml
        - container_scenarios:  # List of chaos pod scenarios to load
            - scenarios/openshift/container_etcd.yml
        - pod_network_scenarios:
            - scenarios/openshift/network_chaos_ingress.yml
            - scenarios/openshift/pod_network_outage.yml
        - pod_disruption_scenarios:
            - scenarios/openshift/etcd.yml
            - scenarios/openshift/regex_openshift_pod_kill.yml
            - scenarios/openshift/prom_kill.yml
            - scenarios/openshift/openshift-apiserver.yml
            - scenarios/openshift/openshift-kube-apiserver.yml
        - node_scenarios:  # List of chaos node scenarios to load
            - scenarios/openshift/aws_node_scenarios.yml
            - scenarios/openshift/vmware_node_scenarios.yml
            - scenarios/openshift/ibmcloud_node_scenarios.yml
        - time_scenarios:  # List of chaos time scenarios to load
            - scenarios/openshift/time_scenarios_example.yml
        - cluster_shut_down_scenarios:
            - scenarios/openshift/cluster_shut_down_scenario.yml
        - service_disruption_scenarios:
            - scenarios/openshift/regex_namespace.yaml
            - scenarios/openshift/ingress_namespace.yaml
        - zone_outages_scenarios:
            - scenarios/openshift/zone_outage.yaml
        - pvc_scenarios:
            - scenarios/openshift/pvc_scenario.yaml
        - network_chaos_scenarios:
            - scenarios/openshift/network_chaos.yaml
        - service_hijacking_scenarios:
            - scenarios/kube/service_hijacking.yaml
        - syn_flood_scenarios:
            - scenarios/kube/syn_flood.yaml
        - network_chaos_ng_scenarios:
            - scenarios/kube/network-filter.yml
        - kubevirt_vm_outage:
            - scenarios/kubevirt/kubevirt-vm-outage.yaml
        # List of policies/chaos scenarios to load
        - hog_scenarios:
            - scenarios/kube/cpu-hog.yml
            - scenarios/kube/memory-hog.yml
            - scenarios/kube/io-hog.yml
        - application_outages_scenarios:
            - scenarios/openshift/app_outage.yaml
        - container_scenarios:  # List of chaos pod scenarios to load
            - scenarios/openshift/container_etcd.yml
        - pod_network_scenarios:
            - scenarios/openshift/network_chaos_ingress.yml
            - scenarios/openshift/pod_network_outage.yml
        - pod_disruption_scenarios:
            - scenarios/openshift/etcd.yml
            - scenarios/openshift/regex_openshift_pod_kill.yml
            - scenarios/openshift/prom_kill.yml
            - scenarios/openshift/openshift-apiserver.yml
            - scenarios/openshift/openshift-kube-apiserver.yml
        - node_scenarios:  # List of chaos node scenarios to load
            - scenarios/openshift/aws_node_scenarios.yml
            - scenarios/openshift/vmware_node_scenarios.yml
            - scenarios/openshift/ibmcloud_node_scenarios.yml
        - time_scenarios:  # List of chaos time scenarios to load
            - scenarios/openshift/time_scenarios_example.yml
        - cluster_shut_down_scenarios:
            - scenarios/openshift/cluster_shut_down_scenario.yml
        - service_disruption_scenarios:
            - scenarios/openshift/regex_namespace.yaml
            - scenarios/openshift/ingress_namespace.yaml
        - zone_outages_scenarios:
            - scenarios/openshift/zone_outage.yaml
        - pvc_scenarios:
            - scenarios/openshift/pvc_scenario.yaml
        - network_chaos_scenarios:
            - scenarios/openshift/network_chaos.yaml
        - service_hijacking_scenarios:
            - scenarios/kube/service_hijacking.yaml
        - syn_flood_scenarios:
            - scenarios/kube/syn_flood.yaml
        - network_chaos_ng_scenarios:
            - scenarios/kube/pod-network-filter.yml
            - scenarios/kube/node-network-filter.yml
            - scenarios/kube/node-network-chaos.yml
            - scenarios/kube/pod-network-chaos.yml
        - kubevirt_vm_outage:
            - scenarios/kubevirt/kubevirt-vm-outage.yaml

cerberus:
    cerberus_enabled: False  # Enable it when cerberus is previously installed
    cerberus_url:  # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal
    check_applicaton_routes: False  # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
    check_application_routes: False  # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run

performance_monitoring:
    deploy_dashboards: False  # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
    repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
    prometheus_url: ''  # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_url: ''  # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token:  # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
    uuid:  # uuid for the run is generated by default if not set
    enable_alerts: False  # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
@@ -76,9 +79,10 @@ elastic:
    metrics_index: "krkn-metrics"
    alerts_index: "krkn-alerts"
    telemetry_index: "krkn-telemetry"
    run_tag: ""

tunings:
    wait_duration: 60  # Duration to wait between each chaos scenario
    wait_duration: 1  # Duration to wait between each chaos scenario
    iterations: 1  # Number of times to execute the scenarios
    daemon_mode: False  # Iterations are set to infinity which means that the kraken will cause chaos forever
telemetry:
@@ -92,7 +96,7 @@ telemetry:
    prometheus_pod_name: ""  # name of the prometheus pod (if distribution is kubernetes)
    full_prometheus_backup: False  # if is set to False only the /prometheus/wal folder will be downloaded.
    backup_threads: 5  # number of telemetry download/upload threads
    archive_path: /tmp  # local path where the archive files will be temporarly stored
    archive_path: /tmp  # local path where the archive files will be temporarily stored
    max_retries: 0  # maximum number of upload retries (if 0 will retry forever)
    run_tag: ''  # if set, this will be appended to the run folder in the bucket (useful to group the runs)
    archive_size: 500000
@@ -118,3 +122,14 @@ health_checks: # Utilizing health c
    bearer_token:  # Bearer token for authentication if any
    auth:  # Provide authentication credentials (username , password) in tuple format if any, ex:("admin","secretpassword")
    exit_on_failure:  # If value is True exits when health check failed for application, values can be True/False

kubevirt_checks:  # Utilizing virt check endpoints to observe ssh ability to VMI's during chaos injection.
    interval: 2  # Interval in seconds to perform virt checks, default value is 2 seconds
    namespace:  # Namespace where to find VMI's
    name:  # Regex Name style of VMI's to watch, optional, will watch all VMI names in the namespace if left blank
    only_failures: False  # Boolean of whether to show all VMI's failures and successful ssh connection (False), or only failure status' (True)
    disconnected: False  # Boolean of how to try to connect to the VMIs; if True will use the ip_address to try ssh from within a node, if false will use the name and uses virtctl to try to connect; Default is False
    ssh_node: ""  # If set, will be a backup way to ssh to a node. Will want to set to a node that isn't targeted in chaos
    node_names: ""
    exit_on_failure:  # If value is True and VMI's are failing post chaos returns failure, values can be True/False

@@ -7,26 +7,33 @@ kraken:
    signal_state: RUN  # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
    signal_address: 0.0.0.0  # Signal listening address
    chaos_scenarios:  # List of policies/chaos scenarios to load
        - plugin_scenarios:
            - scenarios/kind/scheduler.yml
        - node_scenarios:
            - scenarios/kind/node_scenarios_example.yml
        - pod_disruption_scenarios:
            - scenarios/kube/pod.yml

cerberus:
    cerberus_enabled: False  # Enable it when cerberus is previously installed
    cerberus_url:  # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal
    check_applicaton_routes: False  # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
    check_application_routes: False  # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run

performance_monitoring:
    deploy_dashboards: False  # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
    repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
    prometheus_url:  # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token:  # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
    uuid:  # uuid for the run is generated by default if not set
    enable_alerts: False  # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
    alert_profile: config/alerts.yaml  # Path to alert profile with the prometheus queries

elastic:
    enable_elastic: False

tunings:
    wait_duration: 60  # Duration to wait between each chaos scenario
    iterations: 1  # Number of times to execute the scenarios
    daemon_mode: False  # Iterations are set to infinity which means that the kraken will cause chaos forever

telemetry:
    enabled: False  # enable/disables the telemetry collection feature
    archive_path: /tmp  # local path where the archive files will be temporarily stored
    events_backup: False  # enables/disables cluster events collection
    logs_backup: False

health_checks:  # Utilizing health check endpoints to observe application behavior during chaos injection.

@@ -14,11 +14,9 @@ kraken:
cerberus:
    cerberus_enabled: False  # Enable it when cerberus is previously installed
    cerberus_url:  # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal
    check_applicaton_routes: False  # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
    check_application_routes: False  # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run

performance_monitoring:
    deploy_dashboards: False  # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
    repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
    prometheus_url:  # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token:  # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
    uuid:  # uuid for the run is generated by default if not set

@@ -35,7 +35,7 @@ kraken:
cerberus:
    cerberus_enabled: True  # Enable it when cerberus is previously installed
    cerberus_url: http://0.0.0.0:8080  # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal
    check_applicaton_routes: False  # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run
    check_application_routes: False  # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run

performance_monitoring:
    deploy_dashboards: True  # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
@@ -61,7 +61,7 @@ telemetry:
    prometheus_backup: True  # enables/disables prometheus data collection
    full_prometheus_backup: False  # if is set to False only the /prometheus/wal folder will be downloaded.
    backup_threads: 5  # number of telemetry download/upload threads
    archive_path: /tmp  # local path where the archive files will be temporarly stored
    archive_path: /tmp  # local path where the archive files will be temporarily stored
    max_retries: 0  # maximum number of upload retries (if 0 will retry forever)
    run_tag: ''  # if set, this will be appended to the run folder in the bucket (useful to group the runs)
    archive_size: 500000  # the size of the prometheus data archive size in KB. The lower the size of archive is

@@ -1,22 +1,35 @@
# oc build
FROM golang:1.23.1 AS oc-build
FROM golang:1.24.9 AS oc-build
RUN apt-get update && apt-get install -y --no-install-recommends libkrb5-dev
WORKDIR /tmp
# oc build
RUN git clone --branch release-4.18 https://github.com/openshift/oc.git
WORKDIR /tmp/oc
RUN go mod edit -go 1.23.1 &&\
    go get github.com/moby/buildkit@v0.12.5 &&\
    go get github.com/containerd/containerd@v1.7.11&&\
    go get github.com/docker/docker@v25.0.6&&\
    go get github.com/opencontainers/runc@v1.1.14&&\
    go get github.com/go-git/go-git/v5@v5.13.0&&\
    go get golang.org/x/net@v0.36.0&&\
    go get github.com/containerd/containerd@v1.7.27&&\
    go get golang.org/x/oauth2@v0.27.0&&\
    go get golang.org/x/crypto@v0.35.0&&\
RUN go mod edit -go 1.24.9 &&\
    go mod edit -require github.com/moby/buildkit@v0.12.5 &&\
    go mod edit -require github.com/containerd/containerd@v1.7.29&&\
    go mod edit -require github.com/docker/docker@v27.5.1+incompatible&&\
    go mod edit -require github.com/opencontainers/runc@v1.2.8&&\
    go mod edit -require github.com/go-git/go-git/v5@v5.13.0&&\
    go mod edit -require github.com/opencontainers/selinux@v1.13.0&&\
    go mod edit -require github.com/ulikunitz/xz@v0.5.15&&\
    go mod edit -require golang.org/x/net@v0.38.0&&\
    go mod edit -require github.com/containerd/containerd@v1.7.27&&\
    go mod edit -require golang.org/x/oauth2@v0.27.0&&\
    go mod edit -require golang.org/x/crypto@v0.35.0&&\
    go mod edit -replace github.com/containerd/containerd@v1.7.27=github.com/containerd/containerd@v1.7.29&&\
    go mod tidy && go mod vendor

RUN make GO_REQUIRED_MIN_VERSION:= oc

# virtctl build
WORKDIR /tmp
RUN git clone https://github.com/kubevirt/kubevirt.git
WORKDIR /tmp/kubevirt
RUN go mod edit -go 1.24.9 &&\
    go work use &&\
    go build -o virtctl ./cmd/virtctl/

FROM fedora:40
ARG PR_NUMBER
ARG TAG
@@ -28,16 +41,20 @@ ENV KUBECONFIG /home/krkn/.kube/config

# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
RUN dnf update && dnf install -y --setopt=install_weak_deps=False \
    git python39 jq yq gettext wget which &&\
    git python3.11 jq yq gettext wget which ipmitool openssh-server &&\
    dnf clean all

# copy oc client binary from oc-build image
COPY --from=oc-build /tmp/oc/oc /usr/bin/oc
COPY --from=oc-build /tmp/kubevirt/virtctl /usr/bin/virtctl

# krkn build
RUN git clone https://github.com/krkn-chaos/krkn.git /home/krkn/kraken && \
    mkdir -p /home/krkn/.kube

RUN mkdir -p /home/krkn/.ssh && \
    chmod 700 /home/krkn/.ssh

WORKDIR /home/krkn/kraken

# default behaviour will be to build main
@@ -46,17 +63,28 @@ RUN if [ -n "$PR_NUMBER" ]; then git fetch origin pull/${PR_NUMBER}/head:pr-${PR
# if it is a TAG trigger checkout the tag
RUN if [ -n "$TAG" ]; then git checkout "$TAG";fi

RUN python3.9 -m ensurepip --upgrade --default-pip
RUN python3.9 -m pip install --upgrade pip setuptools==70.0.0
RUN pip3.9 install -r requirements.txt
RUN pip3.9 install jsonschema
RUN python3.11 -m ensurepip --upgrade --default-pip
RUN python3.11 -m pip install --upgrade pip setuptools==78.1.1

# removes the vulnerable versions of setuptools and pip
RUN rm -rf "$(pip cache dir)"
RUN rm -rf /tmp/*
RUN rm -rf /usr/local/lib/python3.11/ensurepip/_bundled
RUN pip3.11 install -r requirements.txt
RUN pip3.11 install jsonschema

LABEL krknctl.title.global="Krkn Base Image"
LABEL krknctl.description.global="This is the krkn base image."
LABEL krknctl.input_fields.global='$KRKNCTL_INPUT'

# SSH setup script
RUN chmod +x /home/krkn/kraken/containers/setup-ssh.sh

# Main entrypoint script
RUN chmod +x /home/krkn/kraken/containers/entrypoint.sh

RUN chown -R krkn:krkn /home/krkn && chmod 755 /home/krkn
USER krkn
ENTRYPOINT ["python3.9", "run_kraken.py"]

ENTRYPOINT ["/bin/bash", "/home/krkn/kraken/containers/entrypoint.sh"]
CMD ["--config=config/config.yaml"]

8 containers/entrypoint.sh Normal file
@@ -0,0 +1,8 @@
#!/bin/bash
set -e
# Run SSH setup
./containers/setup-ssh.sh
# Change to kraken directory

# Execute the main command
exec python3.9 run_kraken.py "$@"

@@ -31,6 +31,24 @@
    "separator": ",",
    "required": "false"
  },
  {
    "name": "ssh-public-key",
    "short_description": "Krkn ssh public key path",
    "description": "Sets the path where krkn will search for ssh public key (in container)",
    "variable": "KRKN_SSH_PUBLIC",
    "type": "string",
    "default": "",
    "required": "false"
  },
  {
    "name": "ssh-private-key",
    "short_description": "Krkn ssh private key path",
    "description": "Sets the path where krkn will search for ssh private key (in container)",
    "variable": "KRKN_SSH_PRIVATE",
    "type": "string",
    "default": "",
    "required": "false"
  },
  {
    "name": "krkn-kubeconfig",
    "short_description": "Krkn kubeconfig path",
@@ -67,6 +85,24 @@
    "default": "False",
    "required": "false"
  },
  {
    "name": "prometheus-url",
    "short_description": "Prometheus url",
    "description": "Prometheus url for when running on kubernetes",
    "variable": "PROMETHEUS_URL",
    "type": "string",
    "default": "",
    "required": "false"
  },
  {
    "name": "prometheus-token",
    "short_description": "Prometheus bearer token",
    "description": "Prometheus bearer token for prometheus url authentication",
    "variable": "PROMETHEUS_TOKEN",
    "type": "string",
    "default": "",
    "required": "false"
  },
  {
    "name": "uuid",
    "short_description": "Sets krkn run uuid",
@@ -425,6 +461,84 @@
    "default": "False",
    "required": "false"
  },
  {
    "name": "kubevirt-check-interval",
    "short_description": "Kube Virt check interval",
    "description": "How often to check the KubeVirt VMs ssh status",
    "variable": "KUBE_VIRT_CHECK_INTERVAL",
    "type": "number",
    "default": "2",
    "required": "false"
  },
  {
    "name": "kubevirt-namespace",
    "short_description": "KubeVirt namespace to check",
    "description": "KubeVirt namespace to check the health of",
    "variable": "KUBE_VIRT_NAMESPACE",
    "type": "string",
    "default": "",
    "required": "false"
  },
  {
    "name": "kubevirt-name",
    "short_description": "KubeVirt regex names to watch",
    "description": "KubeVirt regex names to check VMs",
    "variable": "KUBE_VIRT_NAME",
    "type": "string",
    "default": "",
    "required": "false"
  },
  {
    "name": "kubevirt-only-failures",
    "short_description": "KubeVirt checks only report if failure occurs",
    "description": "KubeVirt checks only report if failure occurs",
    "variable": "KUBE_VIRT_FAILURES",
    "type": "enum",
    "allowed_values": "True,False,true,false",
    "separator": ",",
    "default": "False",
    "required": "false"
  },
  {
    "name": "kubevirt-disconnected",
    "short_description": "KubeVirt checks in disconnected mode",
    "description": "KubeVirt checks in disconnected mode, bypassing the cluster's API",
    "variable": "KUBE_VIRT_DISCONNECTED",
    "type": "enum",
    "allowed_values": "True,False,true,false",
    "separator": ",",
    "default": "False",
    "required": "false"
  },
  {
    "name": "kubevirt-ssh-node",
    "short_description": "KubeVirt node to ssh from",
    "description": "KubeVirt node to ssh from, should be available for the whole chaos run",
    "variable": "KUBE_VIRT_SSH_NODE",
    "type": "string",
    "default": "",
    "required": "false"
  },
  {
    "name": "kubevirt-exit-on-failure",
    "short_description": "KubeVirt fail if failed vms at end of run",
    "description": "KubeVirt fails the run if VMs still have a false status",
    "variable": "KUBE_VIRT_EXIT_ON_FAIL",
    "type": "enum",
    "allowed_values": "True,False,true,false",
    "separator": ",",
    "default": "False",
    "required": "false"
  },
  {
    "name": "kubevirt-node-node",
    "short_description": "KubeVirt node to filter vms on",
    "description": "Only track VMs in KubeVirt on given node name",
    "variable": "KUBE_VIRT_NODE_NAME",
    "type": "string",
    "default": "",
    "required": "false"
  },
  {
    "name": "krkn-debug",
    "short_description": "Krkn debug mode",

73 containers/setup-ssh.sh Normal file
@@ -0,0 +1,73 @@
#!/bin/bash
# Setup SSH key if mounted
# Support multiple mount locations
MOUNTED_PRIVATE_KEY_ALT="/secrets/id_rsa"
MOUNTED_PRIVATE_KEY="/home/krkn/.ssh/id_rsa"
MOUNTED_PUBLIC_KEY="/home/krkn/.ssh/id_rsa.pub"
WORKING_KEY="/home/krkn/.ssh/id_rsa.key"

# Determine which source to use
SOURCE_KEY=""
if [ -f "$MOUNTED_PRIVATE_KEY_ALT" ]; then
    SOURCE_KEY="$MOUNTED_PRIVATE_KEY_ALT"
    echo "Found SSH key at alternative location: $SOURCE_KEY"
elif [ -f "$MOUNTED_PRIVATE_KEY" ]; then
    SOURCE_KEY="$MOUNTED_PRIVATE_KEY"
    echo "Found SSH key at default location: $SOURCE_KEY"
fi

# Setup SSH private key and create config for outbound connections
if [ -n "$SOURCE_KEY" ]; then
    echo "Setting up SSH private key from: $SOURCE_KEY"

    # Check current permissions and ownership
    ls -la "$SOURCE_KEY"

    # Since the mounted key might be owned by root and we run as krkn user,
    # we cannot modify it directly. Copy to a new location we can control.
    echo "Copying SSH key to working location: $WORKING_KEY"

    # Try to copy - if readable by anyone, this will work
    if cp "$SOURCE_KEY" "$WORKING_KEY" 2>/dev/null || cat "$SOURCE_KEY" > "$WORKING_KEY" 2>/dev/null; then
        chmod 600 "$WORKING_KEY"
        echo "SSH key copied successfully"
        ls -la "$WORKING_KEY"

        # Verify the key is readable
        if ssh-keygen -y -f "$WORKING_KEY" > /dev/null 2>&1; then
            echo "SSH private key verified successfully"
        else
            echo "Warning: SSH key verification failed, but continuing anyway"
        fi

        # Create SSH config to use the working key
        cat > /home/krkn/.ssh/config <<EOF
Host *
    IdentityFile $WORKING_KEY
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
        chmod 600 /home/krkn/.ssh/config
        echo "SSH config created with default identity: $WORKING_KEY"
    else
        echo "ERROR: Cannot read SSH key at $SOURCE_KEY"
        echo "Key is owned by: $(stat -c '%U:%G' "$SOURCE_KEY" 2>/dev/null || stat -f '%Su:%Sg' "$SOURCE_KEY" 2>/dev/null)"
        echo ""
        echo "Solutions:"
        echo "1. Mount with world-readable permissions (less secure): chmod 644 /path/to/key"
        echo "2. Mount to /secrets/id_rsa instead of /home/krkn/.ssh/id_rsa"
        echo "3. Change ownership on host: chown \$(id -u):\$(id -g) /path/to/key"
        exit 1
    fi
fi

# Setup SSH public key if mounted (for inbound server access)
if [ -f "$MOUNTED_PUBLIC_KEY" ]; then
    echo "SSH public key already present at $MOUNTED_PUBLIC_KEY"
    # Try to fix permissions (will fail silently if file is mounted read-only or owned by another user)
    chmod 600 "$MOUNTED_PUBLIC_KEY" 2>/dev/null
    if [ ! -f "/home/krkn/.ssh/authorized_keys" ]; then
        cp "$MOUNTED_PUBLIC_KEY" /home/krkn/.ssh/authorized_keys
        chmod 600 /home/krkn/.ssh/authorized_keys
    fi
fi

@@ -3,10 +3,16 @@ apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 9090
  - containerPort: 32766
    hostPort: 9200
  - containerPort: 30036
    hostPort: 8888
  - containerPort: 30037
    hostPort: 8889
  - containerPort: 30080
    hostPort: 30080
- role: control-plane
- role: control-plane
- role: worker

@@ -2,19 +2,33 @@ import logging
import requests
import sys
import json
from krkn_lib.utils.functions import get_yaml_item_value

check_application_routes = ""
cerberus_url = None
exit_on_failure = False
cerberus_enabled = False

def get_status(config, start_time, end_time):
def set_url(config):
    global exit_on_failure
    exit_on_failure = get_yaml_item_value(config["kraken"], "exit_on_failure", False)
    global cerberus_enabled
    cerberus_enabled = get_yaml_item_value(config["cerberus"],"cerberus_enabled", False)
    if cerberus_enabled:
        global cerberus_url
        cerberus_url = get_yaml_item_value(config["cerberus"],"cerberus_url", "")
        global check_application_routes
        check_application_routes = \
            get_yaml_item_value(config["cerberus"],"check_applicaton_routes","")

def get_status(start_time, end_time):
    """
    Get cerberus status
    """
    cerberus_status = True
    check_application_routes = False
    application_routes_status = True
    if config["cerberus"]["cerberus_enabled"]:
        cerberus_url = config["cerberus"]["cerberus_url"]
        check_application_routes = \
            config["cerberus"]["check_applicaton_routes"]
    if cerberus_enabled:
        if not cerberus_url:
            logging.error(
                "url where Cerberus publishes True/False signal "
@@ -61,40 +75,38 @@ def get_status(config, start_time, end_time):
    return cerberus_status


def publish_kraken_status(config, failed_post_scenarios, start_time, end_time):
def publish_kraken_status( start_time, end_time):
    """
    Publish kraken status to cerberus
    """
    cerberus_status = get_status(config, start_time, end_time)
    cerberus_status = get_status(start_time, end_time)
    if not cerberus_status:
        if failed_post_scenarios:
            if config["kraken"]["exit_on_failure"]:
                logging.info(
                    "Cerberus status is not healthy and post action scenarios "
                    "are still failing, exiting kraken run"
                )
                sys.exit(1)
            else:
                logging.info(
                    "Cerberus status is not healthy and post action scenarios "
                    "are still failing"
                )
        if exit_on_failure:
            logging.info(
                "Cerberus status is not healthy and post action scenarios "
                "are still failing, exiting kraken run"
            )
            sys.exit(1)
        else:
            logging.info(
                "Cerberus status is not healthy and post action scenarios "
                "are still failing"
            )
    else:
        if failed_post_scenarios:
            if config["kraken"]["exit_on_failure"]:
                logging.info(
                    "Cerberus status is healthy but post action scenarios "
                    "are still failing, exiting kraken run"
                )
                sys.exit(1)
            else:
                logging.info(
                    "Cerberus status is healthy but post action scenarios "
                    "are still failing"
                )
        if exit_on_failure:
            logging.info(
                "Cerberus status is healthy but post action scenarios "
                "are still failing, exiting kraken run"
            )
            sys.exit(1)
        else:
            logging.info(
                "Cerberus status is healthy but post action scenarios "
                "are still failing"
            )


def application_status(cerberus_url, start_time, end_time):
def application_status( start_time, end_time):
    """
    Check application availability
    """

@@ -15,13 +15,11 @@ def invoke(command, timeout=None):


# Invokes a given command and returns the stdout
def invoke_no_exit(command, timeout=None):
def invoke_no_exit(command, timeout=15):
    output = ""
    try:
        output = subprocess.check_output(command, shell=True, universal_newlines=True, timeout=timeout)
        logging.info("output " + str(output))
        output = subprocess.check_output(command, shell=True, universal_newlines=True, timeout=timeout, stderr=subprocess.DEVNULL)
    except Exception as e:
        logging.error("Failed to run %s, error: %s" % (command, e))
        return str(e)
    return output

@@ -1,28 +0,0 @@
import subprocess
import logging
import git
import sys


# Installs a mutable grafana on the Kubernetes/OpenShift cluster and loads the performance dashboards
def setup(repo, distribution):
    if distribution == "kubernetes":
        command = "cd performance-dashboards/dittybopper && ./k8s-deploy.sh"
    elif distribution == "openshift":
        command = "cd performance-dashboards/dittybopper && ./deploy.sh"
    else:
        logging.error("Provided distribution: %s is not supported" % (distribution))
        sys.exit(1)
    delete_repo = "rm -rf performance-dashboards || exit 0"
    logging.info(
        "Cloning, installing mutable grafana on the cluster and loading the dashboards"
    )
    try:
        # delete repo to clone the latest copy if exists
        subprocess.run(delete_repo, shell=True, universal_newlines=True, timeout=45)
        # clone the repo
        git.Repo.clone_from(repo, "performance-dashboards")
        # deploy performance dashboards
        subprocess.run(command, shell=True, universal_newlines=True)
    except Exception as e:
        logging.error("Failed to install performance-dashboards, error: %s" % (e))
@@ -46,7 +46,7 @@ def alerts(
|
||||
sys.exit(1)
|
||||
|
||||
for alert in profile_yaml:
|
||||
if list(alert.keys()).sort() != ["expr", "description", "severity"].sort():
|
||||
if sorted(alert.keys()) != sorted(["expr", "description", "severity"]):
|
||||
logging.error(f"wrong alert {alert}, skipping")
|
||||
continue
|
||||
|
||||
@@ -75,10 +75,12 @@ def alerts(
|
||||
def critical_alerts(
|
||||
prom_cli: KrknPrometheus,
|
||||
summary: ChaosRunAlertSummary,
|
||||
elastic: KrknElastic,
|
||||
run_id,
|
||||
scenario,
|
||||
start_time,
|
||||
end_time,
|
||||
elastic_alerts_index
|
||||
):
|
||||
summary.scenario = scenario
|
||||
summary.run_id = run_id
|
||||
@@ -113,7 +115,6 @@ def critical_alerts(
|
||||
summary.chaos_alerts.append(alert)
|
||||
|
||||
post_critical_alerts = prom_cli.process_query(query)
|
||||
|
||||
for alert in post_critical_alerts:
|
||||
if "metric" in alert:
|
||||
alertname = (
|
||||
@@ -136,6 +137,21 @@ def critical_alerts(
|
||||
)
|
||||
alert = ChaosRunAlert(alertname, alertstate, namespace, severity)
|
||||
summary.post_chaos_alerts.append(alert)
|
||||
if elastic:
|
||||
elastic_alert = ElasticAlert(
|
||||
run_uuid=run_id,
|
||||
severity=severity,
|
||||
alert=alertname,
|
||||
created_at=end_time,
|
||||
namespace=namespace,
|
||||
alertstate=alertstate,
|
||||
phase="post_chaos"
|
||||
)
|
||||
result = elastic.push_alert(elastic_alert, elastic_alerts_index)
|
||||
if result == -1:
|
||||
logging.error("failed to save alert on ElasticSearch")
|
||||
pass
|
||||
|
||||
|
||||
during_critical_alerts_count = len(during_critical_alerts)
|
||||
post_critical_alerts_count = len(post_critical_alerts)
|
||||
@@ -149,8 +165,8 @@ def critical_alerts(
|
||||
|
||||
if not firing_alerts:
|
||||
logging.info("No critical alerts are firing!!")
|
||||
|
||||
|
||||
|
||||
|
||||
def metrics(
|
||||
prom_cli: KrknPrometheus,
|
||||
elastic: KrknElastic,
|
||||
@@ -189,8 +205,8 @@ def metrics(
|
||||
query
|
||||
)
|
||||
elif (
|
||||
list(metric_query.keys()).sort()
|
||||
== ["query", "metricName"].sort()
|
||||
sorted(metric_query.keys())
|
||||
== sorted(["query", "metricName"])
|
||||
):
|
||||
metrics_result = prom_cli.process_prom_query_in_range(
|
||||
query,
|
||||
@@ -198,7 +214,7 @@ def metrics(
|
||||
end_time=datetime.datetime.fromtimestamp(end_time), granularity=30
|
||||
)
|
||||
else:
|
||||
logging.info('didnt match keys')
|
||||
logging.info("didn't match keys")
|
||||
continue
|
||||
|
||||
for returned_metric in metrics_result:
|
||||
@@ -252,6 +268,14 @@ def metrics(
|
||||
metric[k] = v
|
||||
metric['timestamp'] = str(datetime.datetime.now())
|
||||
metrics_list.append(metric.copy())
|
||||
if telemetry_json['virt_checks']:
|
||||
for virt_check in telemetry_json["virt_checks"]:
|
||||
metric_name = "virt_check_recovery"
|
||||
metric = {"metricName": metric_name}
|
||||
for k,v in virt_check.items():
|
||||
metric[k] = v
|
||||
metric['timestamp'] = str(datetime.datetime.now())
|
||||
metrics_list.append(metric.copy())
|
||||
|
||||
save_metrics = False
|
||||
if elastic is not None and elastic_metrics_index is not None:
|
||||
|
krkn/prometheus/collector.py (new file, 79 lines)
@@ -0,0 +1,79 @@
from __future__ import annotations

import datetime
import logging
from typing import Dict, Any, List, Optional

from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus


# -----------------------------------------------------------------------------
# SLO evaluation helpers (used by krkn.resiliency)
# -----------------------------------------------------------------------------


def slo_passed(prometheus_result: List[Any]) -> Optional[bool]:
    if not prometheus_result:
        return None
    has_samples = False
    for series in prometheus_result:
        if "values" in series:
            has_samples = True
            for _ts, val in series["values"]:
                try:
                    if float(val) > 0:
                        return False
                except (TypeError, ValueError):
                    continue
        elif "value" in series:
            has_samples = True
            try:
                return float(series["value"][1]) == 0
            except (TypeError, ValueError):
                return False

    # If we reached here and never saw any samples, skip
    return None if not has_samples else True


def evaluate_slos(
    prom_cli: KrknPrometheus,
    slo_list: List[Dict[str, Any]],
    start_time: datetime.datetime,
    end_time: datetime.datetime,
) -> Dict[str, bool]:
    """Evaluate a list of SLO expressions against Prometheus.

    Args:
        prom_cli: Configured Prometheus client.
        slo_list: List of dicts with keys ``name`` and ``expr``.
        start_time: Start timestamp.
        end_time: End timestamp.

    Returns:
        Mapping of SLO name -> bool pass status: True means the SLO passed
        over the window, False means it failed.
    """
    results: Dict[str, bool] = {}
    logging.info("Evaluating %d SLOs over window %s – %s", len(slo_list), start_time, end_time)
    for slo in slo_list:
        expr = slo["expr"]
        name = slo["name"]
        try:
            response = prom_cli.process_prom_query_in_range(
                expr,
                start_time=start_time,
                end_time=end_time,
            )

            passed = slo_passed(response)
            if passed is None:
                # Absence of data indicates the condition did not trigger; treat as pass.
                logging.debug("SLO '%s' query returned no data; assuming pass.", name)
                results[name] = True
            else:
                results[name] = passed
        except Exception as exc:
            logging.error("PromQL query failed for SLO '%s': %s", name, exc)
            results[name] = False
    return results
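To make the two Prometheus result shapes that slo_passed handles concrete, a minimal sketch (sample data invented; not part of the diff). Range-query series carry a "values" list of (timestamp, value) pairs; instant-query series carry a single "value" pair:

range_result = [{"metric": {"name": "api_errors"}, "values": [[1700000000, "0"], [1700000030, "0"]]}]
instant_result = [{"metric": {"name": "api_errors"}, "value": [1700000000, "2"]}]

assert slo_passed(range_result) is True     # no sample exceeded 0 -> SLO passed
assert slo_passed(instant_result) is False  # non-zero instant value -> SLO failed
assert slo_passed([]) is None               # no data -> evaluate_slos treats it as a pass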
krkn/resiliency/__init__.py (new file, 4 lines)
@@ -0,0 +1,4 @@
"""krkn.resiliency package public interface."""

from .resiliency import Resiliency  # noqa: F401
from .score import calculate_resiliency_score  # noqa: F401
krkn/resiliency/resiliency.py (new file, 366 lines)
@@ -0,0 +1,366 @@
"""Resiliency evaluation orchestrator for Krkn chaos runs.

This module provides the `Resiliency` class which loads the canonical
`alerts.yaml`, executes every SLO expression against Prometheus in the
chaos-test time window, determines pass/fail status and calculates an
overall resiliency score using the generic weighted model implemented
in `krkn.resiliency.score`.
"""

from __future__ import annotations

import dataclasses
import datetime
import json
import logging
import os
from typing import Dict, List, Any, Optional

import yaml
from krkn_lib.models.telemetry import ChaosRunTelemetry
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus

from krkn.prometheus.collector import evaluate_slos
from krkn.resiliency.score import calculate_resiliency_score


class Resiliency:
    """Central orchestrator for resiliency scoring."""

    def __init__(self, alerts_yaml_path: str):
        if not os.path.exists(alerts_yaml_path):
            raise FileNotFoundError(f"alerts file not found: {alerts_yaml_path}")
        with open(alerts_yaml_path, "r", encoding="utf-8") as fp:
            raw_yaml_data = yaml.safe_load(fp)
        logging.info("Loaded SLO configuration from %s", alerts_yaml_path)

        self._slos = self._normalise_alerts(raw_yaml_data)
        self._results: Dict[str, bool] = {}
        self._score: Optional[int] = None
        self._breakdown: Optional[Dict[str, int]] = None
        self._health_check_results: Dict[str, bool] = {}
        self.scenario_reports: List[Dict[str, Any]] = []
        self.summary: Optional[Dict[str, Any]] = None
        self.detailed_report: Optional[Dict[str, Any]] = None

    # ---------------------------------------------------------------------
    # Public API
    # ---------------------------------------------------------------------

    def calculate_score(
        self,
        *,
        health_check_results: Optional[Dict[str, bool]] = None,
    ) -> int:
        """Calculate the resiliency score using collected SLO results."""
        slo_defs = {slo["name"]: {"severity": slo["severity"], "weight": slo.get("weight")} for slo in self._slos}
        score, breakdown = calculate_resiliency_score(
            slo_definitions=slo_defs,
            prometheus_results=self._results,
            health_check_results=health_check_results or {},
        )
        self._score = score
        self._breakdown = breakdown
        self._health_check_results = health_check_results or {}
        return score

    def to_dict(self) -> Dict[str, Any]:
        """Return a dictionary ready for telemetry output."""
        if self._score is None:
            raise RuntimeError("calculate_score() must be called before to_dict()")
        return {
            "score": self._score,
            "breakdown": self._breakdown,
            "slo_results": self._results,
            "health_check_results": getattr(self, "_health_check_results", {}),
        }

    # ------------------------------------------------------------------
    # Scenario-based resiliency evaluation
    # ------------------------------------------------------------------
    def add_scenario_report(
        self,
        *,
        scenario_name: str,
        prom_cli: KrknPrometheus,
        start_time: datetime.datetime,
        end_time: datetime.datetime,
        weight: float | int = 1,
        health_check_results: Optional[Dict[str, bool]] = None,
    ) -> int:
        """
        Evaluate SLOs for a single scenario window and store the result.

        Args:
            scenario_name: Human-friendly scenario identifier.
            prom_cli: Initialized KrknPrometheus instance.
            start_time: Window start.
            end_time: Window end.
            weight: Weight to use for the final weighted average calculation.
            health_check_results: Optional mapping of custom health-check name -> bool.

        Returns:
            The calculated integer resiliency score (0-100) for this scenario.
        """
        slo_results = evaluate_slos(
            prom_cli=prom_cli,
            slo_list=self._slos,
            start_time=start_time,
            end_time=end_time,
        )
        slo_defs = {slo["name"]: {"severity": slo["severity"], "weight": slo.get("weight")} for slo in self._slos}
        score, breakdown = calculate_resiliency_score(
            slo_definitions=slo_defs,
            prometheus_results=slo_results,
            health_check_results=health_check_results or {},
        )
        self.scenario_reports.append(
            {
                "name": scenario_name,
                "window": {
                    "start": start_time.isoformat(),
                    "end": end_time.isoformat(),
                },
                "score": score,
                "weight": weight,
                "breakdown": breakdown,
                "slo_results": slo_results,
                "health_check_results": health_check_results or {},
            }
        )
        return score

    def finalize_report(
        self,
        *,
        prom_cli: KrknPrometheus,
        total_start_time: datetime.datetime,
        total_end_time: datetime.datetime,
    ) -> None:
        if not self.scenario_reports:
            raise RuntimeError("No scenario reports added – nothing to finalize")

        # ---------------- Weighted average (primary resiliency_score) ----------
        total_weight = sum(rep["weight"] for rep in self.scenario_reports)
        resiliency_score = int(
            sum(rep["score"] * rep["weight"] for rep in self.scenario_reports) / total_weight
        )

        # ---------------- Overall SLO evaluation across full test window -------
        full_slo_results = evaluate_slos(
            prom_cli=prom_cli,
            slo_list=self._slos,
            start_time=total_start_time,
            end_time=total_end_time,
        )
        slo_defs = {slo["name"]: {"severity": slo["severity"], "weight": slo.get("weight")} for slo in self._slos}
        _overall_score, full_breakdown = calculate_resiliency_score(
            slo_definitions=slo_defs,
            prometheus_results=full_slo_results,
            health_check_results={},
        )

        self.summary = {
            "scenarios": {rep["name"]: rep["score"] for rep in self.scenario_reports},
            "resiliency_score": resiliency_score,
            "passed_slos": full_breakdown.get("passed", 0),
            "total_slos": full_breakdown.get("passed", 0) + full_breakdown.get("failed", 0),
        }

        # Detailed report currently limited to per-scenario information; system stability section removed
        self.detailed_report = {
            "scenarios": self.scenario_reports,
        }

    def get_summary(self) -> Dict[str, Any]:
        """Return the concise resiliency_summary structure."""
        if not hasattr(self, "summary") or self.summary is None:
            raise RuntimeError("finalize_report() must be called first")
        return self.summary

    def get_detailed_report(self) -> Dict[str, Any]:
        """Return the full resiliency-report structure."""
        if not hasattr(self, "detailed_report") or self.detailed_report is None:
            raise RuntimeError("finalize_report() must be called first")
        return self.detailed_report

    @staticmethod
    def compact_breakdown(report: Dict[str, Any]) -> Dict[str, int]:
        """Return a compact summary dict for a single scenario report."""
        try:
            passed = report["breakdown"]["passed"]
            failed = report["breakdown"]["failed"]
            score_val = report["score"]
        except Exception:
            passed = report.get("breakdown", {}).get("passed", 0)
            failed = report.get("breakdown", {}).get("failed", 0)
            score_val = report.get("score", 0)
        return {
            "resiliency_score": score_val,
            "passed_slos": passed,
            "total_slos": passed + failed,
        }

    def attach_compact_to_telemetry(self, chaos_telemetry: ChaosRunTelemetry) -> None:
        """Embed per-scenario compact resiliency reports into a ChaosRunTelemetry instance."""
        score_map = {
            rep["name"]: self.compact_breakdown(rep) for rep in self.scenario_reports
        }
        new_scenarios = []
        for item in getattr(chaos_telemetry, "scenarios", []):
            if isinstance(item, dict):
                name = item.get("scenario")
                if name in score_map:
                    item["resiliency_report"] = score_map[name]
                new_scenarios.append(item)
            else:
                name = getattr(item, "scenario", None)
                try:
                    item_dict = dataclasses.asdict(item)
                except Exception:
                    item_dict = {
                        k: getattr(item, k)
                        for k in dir(item)
                        if not k.startswith("__") and not callable(getattr(item, k))
                    }
                if name in score_map:
                    item_dict["resiliency_report"] = score_map[name]
                new_scenarios.append(item_dict)
        chaos_telemetry.scenarios = new_scenarios

    def add_scenario_reports(
        self,
        *,
        scenario_telemetries,
        prom_cli: KrknPrometheus,
        scenario_type: str,
        batch_start_dt: datetime.datetime,
        batch_end_dt: datetime.datetime,
        weight: int | float = 1,
    ) -> None:
        """Evaluate SLOs for every telemetry item belonging to a scenario window,
        store the result and enrich the telemetry list with a compact resiliency breakdown.

        Args:
            scenario_telemetries: Iterable with telemetry objects/dicts for the
                current scenario batch window.
            prom_cli: Pre-configured :class:`KrknPrometheus` instance.
            scenario_type: Fallback scenario identifier in case individual
                telemetry items do not provide one.
            batch_start_dt: Fallback start timestamp for the batch window.
            batch_end_dt: Fallback end timestamp for the batch window.
            weight: Weight to assign to every scenario when calculating the final
                weighted average.
        """
        for tel in scenario_telemetries:
            try:
                # -------- Extract timestamps & scenario name --------------------
                if isinstance(tel, dict):
                    st_ts = tel.get("start_timestamp")
                    en_ts = tel.get("end_timestamp")
                    scen_name = tel.get("scenario", scenario_type)
                else:
                    st_ts = getattr(tel, "start_timestamp", None)
                    en_ts = getattr(tel, "end_timestamp", None)
                    scen_name = getattr(tel, "scenario", scenario_type)

                if st_ts and en_ts:
                    st_dt = datetime.datetime.fromtimestamp(int(st_ts))
                    en_dt = datetime.datetime.fromtimestamp(int(en_ts))
                else:
                    st_dt = batch_start_dt
                    en_dt = batch_end_dt

                # -------- Calculate resiliency score for the scenario -----------
                self.add_scenario_report(
                    scenario_name=str(scen_name),
                    prom_cli=prom_cli,
                    start_time=st_dt,
                    end_time=en_dt,
                    weight=weight,
                    health_check_results=None,
                )

                compact = self.compact_breakdown(self.scenario_reports[-1])
                if isinstance(tel, dict):
                    tel["resiliency_report"] = compact
                else:
                    setattr(tel, "resiliency_report", compact)
            except Exception as exc:
                logging.error("Resiliency per-scenario evaluation failed: %s", exc)

    def finalize_and_save(
        self,
        *,
        prom_cli: KrknPrometheus,
        total_start_time: datetime.datetime,
        total_end_time: datetime.datetime,
        run_mode: str = "standalone",
        detailed_path: str = "resiliency-report.json",
    ) -> None:
        """Finalize resiliency scoring and persist the detailed report.

        Args:
            prom_cli: Pre-configured KrknPrometheus instance.
            total_start_time: Start time for the full test window.
            total_end_time: End time for the full test window.
            run_mode: "controller" or "standalone" mode.
            detailed_path: Output path for the detailed JSON report
                (standalone mode only).
        """
        try:
            self.finalize_report(
                prom_cli=prom_cli,
                total_start_time=total_start_time,
                total_end_time=total_end_time,
            )
            detailed = self.get_detailed_report()

            if run_mode == "controller":
                # krknctl expects the detailed report on stdout in a special format
                try:
                    detailed_json = json.dumps(detailed)
                    print(f"KRKN_RESILIENCY_REPORT_JSON:{detailed_json}")
                    logging.info("Resiliency report logged to stdout for krknctl.")
                except Exception as exc:
                    logging.error("Failed to serialize and log detailed resiliency report: %s", exc)
            else:
                # Stand-alone mode – write to a file for post-run consumption
                try:
                    with open(detailed_path, "w", encoding="utf-8") as fp:
                        json.dump(detailed, fp, indent=2)
                    logging.info("Resiliency report written: %s", detailed_path)
                except Exception as io_exc:
                    logging.error("Failed to write resiliency report files: %s", io_exc)

        except Exception as exc:
            logging.error("Failed to finalize resiliency scoring: %s", exc)

    # ------------------------------------------------------------------
    # Internal helpers
    # ------------------------------------------------------------------
    @staticmethod
    def _normalise_alerts(raw_alerts: Any) -> List[Dict[str, Any]]:
        """Convert raw YAML alerts data into the internal SLO list structure."""
        if not isinstance(raw_alerts, list):
            raise ValueError("SLO configuration must be a top-level list of alerts")

        slos: List[Dict[str, Any]] = []
        for idx, alert in enumerate(raw_alerts):
            if not (isinstance(alert, dict) and "expr" in alert and "severity" in alert):
                logging.warning("Skipping invalid alert entry at index %d: %s", idx, alert)
                continue
            name = alert.get("description") or f"slo_{idx}"
            slos.append(
                {
                    "name": name,
                    "expr": alert["expr"],
                    "severity": str(alert["severity"]).lower(),
                    "weight": alert.get("weight")
                }
            )
        return slos
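For orientation, a minimal usage sketch of the orchestrator above. The alerts path, Prometheus endpoint and scenario name are invented, and the KrknPrometheus constructor signature (URL as first argument) is assumed:

import datetime
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus
from krkn.resiliency import Resiliency

prom = KrknPrometheus("http://localhost:9090")       # assumed endpoint
resiliency = Resiliency("config/alerts.yaml")        # hypothetical alerts file

end = datetime.datetime.now()
start = end - datetime.timedelta(minutes=10)

# Score one scenario window; weight feeds the final weighted average
resiliency.add_scenario_report(
    scenario_name="pod-disruption",
    prom_cli=prom,
    start_time=start,
    end_time=end,
    weight=2,
)

# Standalone mode writes resiliency-report.json next to the run
resiliency.finalize_and_save(
    prom_cli=prom,
    total_start_time=start,
    total_end_time=end,
    run_mode="standalone",
)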
krkn/resiliency/score.py (new file, 76 lines)
@@ -0,0 +1,76 @@
from __future__ import annotations

from typing import Dict, List, Tuple

DEFAULT_WEIGHTS = {"critical": 3, "warning": 1}


class SLOResult:
    """Simple container representing the evaluation outcome for a single SLO."""

    def __init__(self, name: str, severity: str, passed: bool, weight: int | None = None):
        self.name = name
        self.severity = severity
        self.passed = passed
        self._custom_weight = weight

    def weight(self, severity_weights: Dict[str, int]) -> int:
        """Return the weight for this SLO: the custom weight if set, otherwise the severity-based weight."""
        if self._custom_weight is not None:
            return self._custom_weight
        return severity_weights.get(self.severity, severity_weights.get("warning", 1))


def calculate_resiliency_score(
    slo_definitions: Dict[str, str] | Dict[str, Dict[str, int | str | None]],
    prometheus_results: Dict[str, bool],
    health_check_results: Dict[str, bool],
) -> Tuple[int, Dict[str, int]]:
    """Compute a resiliency score between 0-100 based on SLO pass/fail results.

    Args:
        slo_definitions: Mapping of SLO name -> severity ("critical" | "warning") OR
            SLO name -> {"severity": str, "weight": int | None}.
        prometheus_results: Mapping of SLO name -> bool indicating whether the SLO
            passed. Any SLO missing from this mapping is skipped (treated as not
            evaluated).
        health_check_results: Mapping of custom health-check name -> bool pass flag.
            These checks are always treated as *critical*.

    Returns:
        Tuple containing (final_score, breakdown) where *breakdown* is a dict with
        the counts of passed/failed SLOs and the point totals.
    """
    slo_objects: List[SLOResult] = []
    for slo_name, slo_def in slo_definitions.items():
        # Exclude SLOs that were not evaluated (query returned no data)
        if slo_name not in prometheus_results:
            continue
        passed = bool(prometheus_results[slo_name])

        # Support both the old format (str) and the new format (dict)
        if isinstance(slo_def, str):
            severity = slo_def
            slo_weight = None
        else:
            severity = slo_def.get("severity", "warning")
            slo_weight = slo_def.get("weight")

        slo_objects.append(SLOResult(slo_name, severity, passed, weight=slo_weight))

    # Health-check results are always treated as critical SLOs
    for hc_name, hc_passed in health_check_results.items():
        slo_objects.append(SLOResult(hc_name, "critical", bool(hc_passed)))

    total_points = sum(slo.weight(DEFAULT_WEIGHTS) for slo in slo_objects)
    points_lost = sum(slo.weight(DEFAULT_WEIGHTS) for slo in slo_objects if not slo.passed)

    score = 0 if total_points == 0 else int(((total_points - points_lost) / total_points) * 100)

    breakdown = {
        "total_points": total_points,
        "points_lost": points_lost,
        "passed": len([s for s in slo_objects if s.passed]),
        "failed": len([s for s in slo_objects if not s.passed]),
    }
    return score, breakdown
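To make the weighting concrete, a small worked example (SLO names invented). A passing critical SLO contributes 3 points, a failing warning SLO loses 1 of the 4 total points, so the score is int(3/4 * 100) = 75:

from krkn.resiliency.score import calculate_resiliency_score

slo_defs = {
    "etcd_leader_changes": {"severity": "critical", "weight": None},  # default weight 3
    "api_latency": {"severity": "warning", "weight": None},           # default weight 1
}
results = {"etcd_leader_changes": True, "api_latency": False}

score, breakdown = calculate_resiliency_score(
    slo_definitions=slo_defs,
    prometheus_results=results,
    health_check_results={},
)
assert score == 75
assert breakdown == {"total_points": 4, "points_lost": 1, "passed": 1, "failed": 1}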
krkn/rollback/command.py (new file, 113 lines)
@@ -0,0 +1,113 @@
import os
import logging
from typing import Optional, TYPE_CHECKING

from krkn.rollback.config import RollbackConfig
from krkn.rollback.handler import execute_rollback_version_files


if TYPE_CHECKING:
    from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift


def list_rollback(run_uuid: Optional[str] = None, scenario_type: Optional[str] = None):
    """
    List rollback version files in a tree-like format.

    :param run_uuid: Optional run UUID to filter by
    :param scenario_type: Optional scenario type to filter by
    :return: Exit code (0 for success, 1 for error)
    """
    logging.info("Listing rollback version files")

    versions_directory = RollbackConfig().versions_directory

    logging.info(f"Rollback versions directory: {versions_directory}")

    # Check if the directory exists first
    if not os.path.exists(versions_directory):
        logging.info(f"Rollback versions directory does not exist: {versions_directory}")
        return 0

    # List all directories and files
    try:
        # Get all run directories
        run_dirs = []
        for item in os.listdir(versions_directory):
            item_path = os.path.join(versions_directory, item)
            if os.path.isdir(item_path):
                # Apply run_uuid filter if specified
                if run_uuid is None or run_uuid in item:
                    run_dirs.append(item)

        if not run_dirs:
            if run_uuid:
                logging.info(f"No rollback directories found for run_uuid: {run_uuid}")
            else:
                logging.info("No rollback directories found")
            return 0

        # Sort directories for consistent output
        run_dirs.sort()

        print(f"\n{versions_directory}/")
        for i, run_dir in enumerate(run_dirs):
            is_last_dir = (i == len(run_dirs) - 1)
            dir_prefix = "└── " if is_last_dir else "├── "
            print(f"{dir_prefix}{run_dir}/")

            # List files in this directory
            run_dir_path = os.path.join(versions_directory, run_dir)
            try:
                files = []
                for file in os.listdir(run_dir_path):
                    file_path = os.path.join(run_dir_path, file)
                    if os.path.isfile(file_path):
                        # Apply scenario_type filter if specified
                        if scenario_type is None or file.startswith(scenario_type):
                            files.append(file)

                files.sort()
                for j, file in enumerate(files):
                    is_last_file = (j == len(files) - 1)
                    # Continuation bar depends on the directory position,
                    # the branch glyph on the file position within it
                    dir_cont = "    " if is_last_dir else "│   "
                    file_prefix = dir_cont + ("└── " if is_last_file else "├── ")
                    print(f"{file_prefix}{file}")

            except PermissionError:
                file_prefix = "    └── " if is_last_dir else "│   └── "
                print(f"{file_prefix}[Permission Denied]")

    except Exception as e:
        logging.error(f"Error listing rollback directory: {e}")
        return 1

    return 0


def execute_rollback(telemetry_ocp: "KrknTelemetryOpenshift", run_uuid: Optional[str] = None, scenario_type: Optional[str] = None):
    """
    Execute rollback version files and clean up if successful.

    :param telemetry_ocp: Instance of KrknTelemetryOpenshift
    :param run_uuid: Optional run UUID to filter by
    :param scenario_type: Optional scenario type to filter by
    :return: Exit code (0 for success, 1 for error)
    """
    logging.info("Executing rollback version files")
    logging.info(f"Executing rollback for run_uuid={run_uuid or '*'}, scenario_type={scenario_type or '*'}")

    try:
        # Execute rollback version files
        execute_rollback_version_files(
            telemetry_ocp,
            run_uuid,
            scenario_type,
            ignore_auto_rollback_config=True
        )
        return 0

    except Exception as e:
        logging.error(f"Error during rollback execution: {e}")
        return 1
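As a sketch of what list_rollback prints (directory and file names invented; assumes RollbackConfig was registered earlier in the process):

from krkn.rollback.config import RollbackConfig
from krkn.rollback.command import list_rollback

RollbackConfig.register(auto=False, versions_directory="/tmp/krkn-rollback")
list_rollback(run_uuid=None, scenario_type=None)
# /tmp/krkn-rollback/
# ├── 1700000000000000000-abcd1234/
# │   └── pod_scenarios_1700000000123456789_a1b2c3d4.py
# └── 1700000001000000000-efgh5678/
#     └── node_scenarios_1700000001123456789_e5f6g7h8.py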
krkn/rollback/config.py (new file, 259 lines)
@@ -0,0 +1,259 @@
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Callable, TYPE_CHECKING, Optional
from typing_extensions import TypeAlias
import time
import os
import logging

from krkn_lib.utils import get_random_string

logger = logging.getLogger(__name__)

if TYPE_CHECKING:
    from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

RollbackCallable: TypeAlias = Callable[
    ["RollbackContent", "KrknTelemetryOpenshift"], None
]


class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


@dataclass(frozen=True)
class RollbackContent:
    """
    RollbackContent is a dataclass that defines the necessary fields for rollback operations.
    """

    resource_identifier: str
    namespace: Optional[str] = None

    def __str__(self):
        namespace = f'"{self.namespace}"' if self.namespace else "None"
        resource_identifier = f'"{self.resource_identifier}"'
        return f"RollbackContent(namespace={namespace}, resource_identifier={resource_identifier})"


class RollbackContext(str):
    """
    RollbackContext is a string formatted as '<timestamp (ns)>-<run_uuid>'.
    It represents the context for rollback operations, uniquely identifying a run.
    """

    def __new__(cls, run_uuid: str):
        return super().__new__(cls, f"{time.time_ns()}-{run_uuid}")


class RollbackConfig(metaclass=SingletonMeta):
    """Configuration for the rollback scenarios."""

    def __init__(self):
        self._auto = False
        self._versions_directory = ""
        self._registered = False

    @property
    def auto(self):
        return self._auto

    @auto.setter
    def auto(self, value):
        if self._registered:
            raise AttributeError("Can't modify 'auto' after registration")
        self._auto = value

    @property
    def versions_directory(self):
        return self._versions_directory

    @versions_directory.setter
    def versions_directory(self, value):
        if self._registered:
            raise AttributeError("Can't modify 'versions_directory' after registration")
        self._versions_directory = value

    @classmethod
    def register(cls, auto=False, versions_directory=""):
        """Initialize and return the singleton instance with the given configuration."""
        instance = cls()
        instance.auto = auto
        instance.versions_directory = versions_directory
        instance._registered = True
        return instance

    @classmethod
    def get_rollback_versions_directory(cls, rollback_context: RollbackContext) -> str:
        """
        Get the rollback context directory for a given rollback context.

        :param rollback_context: The rollback context string.
        :return: The path to the rollback context directory.
        """
        return f"{cls().versions_directory}/{rollback_context}"

    @classmethod
    def is_rollback_version_file_format(cls, file_name: str, expected_scenario_type: str | None = None) -> bool:
        """
        Validate the format of a rollback version file name.

        Expected format: <scenario_type>_<timestamp>_<hash_suffix>.py
        where:
        - scenario_type: string (can include underscores)
        - timestamp: integer (nanoseconds since epoch)
        - hash_suffix: alphanumeric string (length 8)
        - .py: file extension

        :param file_name: The name of the file to validate.
        :param expected_scenario_type: The expected scenario type (if any) to validate against.
        :return: True if the file name matches the expected format, False otherwise.
        """
        if not file_name.endswith(".py"):
            return False

        parts = file_name.split("_")
        if len(parts) < 3:
            return False

        scenario_type = "_".join(parts[:-2])
        timestamp_str = parts[-2]
        hash_suffix_with_ext = parts[-1]
        hash_suffix = hash_suffix_with_ext[:-3]

        if expected_scenario_type and scenario_type != expected_scenario_type:
            return False

        if not timestamp_str.isdigit():
            return False

        if len(hash_suffix) != 8 or not hash_suffix.isalnum():
            return False

        return True

    @classmethod
    def is_rollback_context_directory_format(cls, directory_name: str, expected_run_uuid: str | None = None) -> bool:
        """
        Validate the format of a rollback context directory name.

        Expected format: <timestamp>-<run_uuid>
        where:
        - timestamp: integer (nanoseconds since epoch)
        - run_uuid: alphanumeric string

        :param directory_name: The name of the directory to validate.
        :param expected_run_uuid: The expected run UUID (if any) to validate against.
        :return: True if the directory name matches the expected format, False otherwise.
        """
        parts = directory_name.split("-", 1)
        if len(parts) != 2:
            return False

        timestamp_str, run_uuid = parts

        # Validate timestamp is numeric
        if not timestamp_str.isdigit():
            return False

        # Validate run_uuid
        if expected_run_uuid and expected_run_uuid != run_uuid:
            return False

        return True

    @classmethod
    def search_rollback_version_files(cls, run_uuid: str | None = None, scenario_type: str | None = None) -> list[str]:
        """
        Search for rollback version files based on run_uuid and scenario_type.

        1. Search directories with "run_uuid" in the name under "cls.versions_directory".
        2. Search files that start with "scenario_type" in the directories matched in step 1.

        :param run_uuid: Unique identifier for the run.
        :param scenario_type: Type of the scenario.
        :return: List of version file paths.
        """
        if not os.path.exists(cls().versions_directory):
            return []

        rollback_context_directories = []
        for dir in os.listdir(cls().versions_directory):
            if cls.is_rollback_context_directory_format(dir, run_uuid):
                rollback_context_directories.append(dir)
            else:
                logger.warning(f"Directory {dir} does not match expected pattern of <timestamp>-<run_uuid>")

        if not rollback_context_directories:
            logger.warning(f"No rollback context directories found for run UUID {run_uuid}")
            return []

        version_files = []
        for rollback_context_dir in rollback_context_directories:
            rollback_context_dir = os.path.join(cls().versions_directory, rollback_context_dir)

            for file in os.listdir(rollback_context_dir):
                # Skip known non-rollback files/directories
                if file == "__pycache__" or file.endswith(".executed"):
                    continue

                if cls.is_rollback_version_file_format(file, scenario_type):
                    version_files.append(
                        os.path.join(rollback_context_dir, file)
                    )
                else:
                    logger.warning(
                        f"File {file} does not match expected pattern of <{scenario_type or '*'}>_<timestamp>_<hash_suffix>.py"
                    )
        return version_files


@dataclass(frozen=True)
class Version:
    scenario_type: str
    rollback_context: RollbackContext
    # default_factory ensures a fresh timestamp and suffix per instance rather
    # than a single value captured once at class-definition time
    timestamp: int = field(default_factory=time.time_ns)  # current timestamp in nanoseconds
    hash_suffix: str = field(default_factory=lambda: get_random_string(8))  # random string of 8 characters

    @property
    def version_file_name(self) -> str:
        """
        Generate a version file name based on the timestamp and hash suffix.
        :return: The generated version file name.
        """
        return f"{self.scenario_type}_{self.timestamp}_{self.hash_suffix}.py"

    @property
    def version_file_full_path(self) -> str:
        """
        Get the full path for the version file based on the version object and current context.

        :return: The generated version file full path.
        """
        return f"{RollbackConfig.get_rollback_versions_directory(self.rollback_context)}/{self.version_file_name}"

    @staticmethod
    def new_version(scenario_type: str, rollback_context: RollbackContext) -> "Version":
        """
        Create a new Version for the given scenario type and rollback context.
        :return: A Version instance with a fresh timestamp and hash suffix.
        """
        return Version(
            scenario_type=scenario_type,
            rollback_context=rollback_context,
        )
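A quick sketch of the version-file naming round-trip (values invented; assumes get_random_string returns alphanumeric characters):

from krkn.rollback.config import RollbackConfig, RollbackContext, Version

ctx = RollbackContext("abcd1234")                    # e.g. "1700000000000000000-abcd1234"
version = Version.new_version("pod_scenarios", ctx)
print(version.version_file_name)                     # e.g. "pod_scenarios_1700000000123456789_a1b2c3d4.py"

# The generated name round-trips through the validator
assert RollbackConfig.is_rollback_version_file_format(
    version.version_file_name, expected_scenario_type="pod_scenarios"
)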
krkn/rollback/handler.py (new file, 255 lines)
@@ -0,0 +1,255 @@
from __future__ import annotations

import logging
from typing import cast, TYPE_CHECKING
import os
import importlib.util
import inspect

from krkn.rollback.config import RollbackConfig, RollbackContext, Version


logger = logging.getLogger(__name__)


if TYPE_CHECKING:
    from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

    from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
    from krkn.rollback.config import RollbackContent, RollbackCallable
    from krkn.rollback.serialization import Serializer


def set_rollback_context_decorator(func):
    """
    Decorator to automatically set and clear the rollback context.
    It extracts run_uuid from the function arguments and sets the context in rollback_handler
    before executing the function, and clears it after execution.

    Usage:

    .. code-block:: python
        from krkn.rollback.handler import set_rollback_context_decorator

        # for any scenario plugin that inherits from AbstractScenarioPlugin
        @set_rollback_context_decorator
        def run(
            self,
            run_uuid: str,
            scenario: str,
            krkn_config: dict[str, any],
            lib_telemetry: KrknTelemetryOpenshift,
            scenario_telemetry: ScenarioTelemetry,
        ):
            # Your scenario logic here
            pass
    """

    def wrapper(self, *args, **kwargs):
        self = cast("AbstractScenarioPlugin", self)
        # Since `AbstractScenarioPlugin.run_scenarios` calls `self.run` and passes all parameters as `kwargs`,
        # we can safely assume that `run_uuid` will be present in `kwargs`
        logger.debug(f"kwargs of ScenarioPlugin.run: {kwargs}")
        run_uuid = kwargs.get("run_uuid", None)
        assert run_uuid is not None, "run_uuid must be provided in kwargs"

        # Set context if run_uuid is available and rollback_handler exists
        if run_uuid and hasattr(self, "rollback_handler"):
            self.rollback_handler = cast("RollbackHandler", self.rollback_handler)
            self.rollback_handler.set_context(run_uuid)

        try:
            # Execute the `run` method with the original arguments
            result = func(self, *args, **kwargs)
            return result
        finally:
            # Clear context after function execution, regardless of success or failure
            if hasattr(self, "rollback_handler"):
                self.rollback_handler = cast("RollbackHandler", self.rollback_handler)
                self.rollback_handler.clear_context()

    return wrapper


def _parse_rollback_module(version_file_path: str) -> tuple[RollbackCallable, RollbackContent]:
    """
    Parse a rollback module to extract the rollback function and RollbackContent.

    :param version_file_path: Path to the rollback version file
    :return: Tuple of (rollback_callable, rollback_content)
    """
    # Create a unique module name based on the file path
    module_name = f"rollback_module_{os.path.basename(version_file_path).replace('.py', '').replace('-', '_')}"

    # Load the module using importlib
    spec = importlib.util.spec_from_file_location(module_name, version_file_path)
    if spec is None or spec.loader is None:
        raise ImportError(f"Could not load module from {version_file_path}")

    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)

    # Find the rollback function
    rollback_callable = None
    for name, obj in inspect.getmembers(module):
        if inspect.isfunction(obj) and name.startswith('rollback_'):
            # Check function signature
            sig = inspect.signature(obj)
            params = list(sig.parameters.values())
            if (len(params) == 2 and
                    'RollbackContent' in str(params[0].annotation) and
                    'KrknTelemetryOpenshift' in str(params[1].annotation)):
                rollback_callable = obj
                logger.debug(f"Found rollback function: {name}")
                break

    if rollback_callable is None:
        raise ValueError(f"No valid rollback function found in {version_file_path}")

    # Find the rollback_content variable
    if not hasattr(module, 'rollback_content'):
        raise ValueError("Could not find variable named 'rollback_content' in the module")

    rollback_content = getattr(module, 'rollback_content', None)
    if rollback_content is None:
        raise ValueError("Variable 'rollback_content' is None")

    logger.debug(f"Found rollback_content variable in module: {rollback_content}")
    return rollback_callable, rollback_content


def execute_rollback_version_files(
    telemetry_ocp: "KrknTelemetryOpenshift",
    run_uuid: str | None = None,
    scenario_type: str | None = None,
    ignore_auto_rollback_config: bool = False
):
    """
    Execute rollback version files for the given run_uuid and scenario_type.
    This function is called when a signal is received to perform rollback operations.

    :param run_uuid: Unique identifier for the run.
    :param scenario_type: Type of the scenario being rolled back.
    :param ignore_auto_rollback_config: Flag to ignore the auto rollback configuration. Set to True for manual execute-rollback calls.
    """
    if not ignore_auto_rollback_config and RollbackConfig().auto is False:
        logger.warning(f"Auto rollback is disabled, skipping execution for run_uuid={run_uuid or '*'}, scenario_type={scenario_type or '*'}")
        return

    # Get the rollback version files
    version_files = RollbackConfig.search_rollback_version_files(run_uuid, scenario_type)
    if not version_files:
        logger.warning(f"Skip execution for run_uuid={run_uuid or '*'}, scenario_type={scenario_type or '*'}")
        return

    # Execute all version files in the directory
    logger.info(f"Executing rollback version files for run_uuid={run_uuid or '*'}, scenario_type={scenario_type or '*'}")
    for version_file in version_files:
        try:
            logger.info(f"Executing rollback version file: {version_file}")

            # Parse the rollback module to get the function and content
            rollback_callable, rollback_content = _parse_rollback_module(version_file)
            # Execute the rollback function
            logger.info('Executing rollback callable...')
            rollback_callable(rollback_content, telemetry_ocp)
            logger.info('Rollback completed.')
            success = True
        except Exception as e:
            success = False
            logger.error(f"Failed to execute rollback version file {version_file}: {e}")
            raise

        # Rename the version file with a .executed suffix if successful
        if success:
            try:
                executed_file = f"{version_file}.executed"
                os.rename(version_file, executed_file)
                logger.info(f"Renamed {version_file} to {executed_file} successfully.")
            except Exception as e:
                logger.error(f"Failed to rename rollback version file {version_file}: {e}")
                raise


def cleanup_rollback_version_files(run_uuid: str, scenario_type: str):
    """
    Clean up rollback version files for the given run_uuid and scenario_type.
    This function is called to remove the rollback version files after successful scenario execution in run_scenarios.

    :param run_uuid: Unique identifier for the run.
    :param scenario_type: Type of the scenario being rolled back.
    """
    # Get the rollback version files
    version_files = RollbackConfig.search_rollback_version_files(run_uuid, scenario_type)
    if not version_files:
        logger.warning(f"Skip cleanup for run_uuid={run_uuid}, scenario_type={scenario_type or '*'}")
        return

    # Remove all version files in the directory
    logger.info(f"Cleaning up rollback version files for run_uuid={run_uuid}, scenario_type={scenario_type}")
    for version_file in version_files:
        try:
            os.remove(version_file)
            logger.info(f"Removed {version_file} successfully.")
        except Exception as e:
            logger.error(f"Failed to remove rollback version file {version_file}: {e}")
            raise


class RollbackHandler:
    def __init__(
        self,
        scenario_type: str,
        serializer: "Serializer",
    ):
        self.scenario_type = scenario_type
        self.serializer = serializer
        self.rollback_context: RollbackContext | None = (
            None  # will be set when `set_context` is called
        )

    def set_context(self, run_uuid: str):
        """
        Set the context for the rollback handler.
        :param run_uuid: Unique identifier for the run.
        """
        self.rollback_context = RollbackContext(run_uuid)
        logger.info(
            f"Set rollback_context: {self.rollback_context} for scenario_type: {self.scenario_type} RollbackHandler"
        )

    def clear_context(self):
        """
        Clear the run_uuid context for the rollback handler.
        """
        logger.debug(
            f"Clear rollback_context {self.rollback_context} for scenario type {self.scenario_type} RollbackHandler"
        )
        self.rollback_context = None

    def set_rollback_callable(
        self,
        callable: "RollbackCallable",
        rollback_content: "RollbackContent",
    ):
        """
        Set the rollback callable to be executed after the scenario is finished.

        :param callable: The rollback callable to be set.
        :param rollback_content: The rollback content for the callable.
        """
        logger.debug(
            f"Rollback callable set to {callable.__name__} for version directory {RollbackConfig.get_rollback_versions_directory(self.rollback_context)}"
        )

        version: Version = Version.new_version(
            scenario_type=self.scenario_type,
            rollback_context=self.rollback_context,
        )

        # Serialize the callable to a file
        try:
            version_file = self.serializer.serialize_callable(
                callable, rollback_content, version
            )
            logger.info(f"Rollback callable serialized to {version_file}")
        except Exception as e:
            logger.error(f"Failed to serialize rollback callable: {e}")
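For orientation, a minimal sketch of a rollback callable as _parse_rollback_module expects it: a module-level function whose name starts with rollback_ and whose two parameters are annotated with RollbackContent and KrknTelemetryOpenshift. The function body and resource names here are invented:

from krkn.rollback.config import RollbackContent
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

def rollback_delete_namespace(content: RollbackContent, telemetry: "KrknTelemetryOpenshift") -> None:
    # Hypothetical cleanup: remove the namespace the scenario created
    print(f"Deleting namespace {content.namespace} for {content.resource_identifier}")

# Inside a scenario plugin's run(), registration would look like:
# self.rollback_handler.set_rollback_callable(
#     rollback_delete_namespace,
#     RollbackContent(resource_identifier="nginx-deployment", namespace="chaos-test"),
# )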
krkn/rollback/serialization.py (new file, 123 lines)
@@ -0,0 +1,123 @@
import inspect
import os
import logging
from typing import TYPE_CHECKING

from jinja2 import Environment, FileSystemLoader

if TYPE_CHECKING:
    from krkn.rollback.config import RollbackCallable, RollbackContent, Version

logger = logging.getLogger(__name__)


class Serializer:
    def __init__(self, scenario_type: str):
        self.scenario_type = scenario_type
        # Set up Jinja2 environment to load templates from the rollback directory
        template_dir = os.path.join(os.path.dirname(__file__))
        env = Environment(loader=FileSystemLoader(template_dir))
        self.template = env.get_template("version_template.j2")

    def _parse_rollback_callable_code(
        self, rollback_callable: "RollbackCallable"
    ) -> tuple[str, str]:
        """
        Parse the rollback callable code to extract its implementation.
        :param rollback_callable: The callable function to parse (can be a staticmethod or a regular function).
        :return: A tuple containing (function_name, function_code).
        """
        # Get the implementation code of the rollback_callable
        rollback_callable_code = inspect.getsource(rollback_callable)

        # Split into lines for processing
        code_lines = rollback_callable_code.split("\n")
        cleaned_lines = []
        function_name = None

        # Find the function definition line and extract the function name
        def_line_index = None
        for i, line in enumerate(code_lines):
            # Skip decorators (including @staticmethod)
            if line.strip().startswith("@"):
                continue

            # Look for the function definition
            if line.strip().startswith("def "):
                def_line_index = i
                # Extract the function name from the def line
                def_line = line.strip()
                if "(" in def_line:
                    function_name = def_line.split("def ")[1].split("(")[0].strip()
                break

        if def_line_index is None or function_name is None:
            raise ValueError(
                "Could not find function definition in callable source code"
            )

        # Get the base indentation level from the def line
        def_line = code_lines[def_line_index]
        base_indent_level = len(def_line) - len(def_line.lstrip())

        # Process all lines starting from the def line
        for i in range(def_line_index, len(code_lines)):
            line = code_lines[i]

            # Handle empty lines
            if not line.strip():
                cleaned_lines.append("")
                continue

            # Calculate the current line's indentation
            current_indent = len(line) - len(line.lstrip())

            # Remove the base indentation to normalize to function level
            if current_indent >= base_indent_level:
                # Remove base indentation
                normalized_line = line[base_indent_level:]
                cleaned_lines.append(normalized_line)
            else:
                # This shouldn't happen in well-formed code, but handle it gracefully
                cleaned_lines.append(line.lstrip())

        # Reconstruct the code and clean up trailing whitespace
        function_code = "\n".join(cleaned_lines).rstrip()

        return function_name, function_code

    def serialize_callable(
        self,
        rollback_callable: "RollbackCallable",
        rollback_content: "RollbackContent",
        version: "Version",
    ) -> str:
        """
        Serialize a callable function to a file with its arguments and keyword arguments.
        :param rollback_callable: The callable to serialize.
        :param rollback_content: The rollback content for the callable.
        :param version: The version representing the rollback context and file path for the rollback.
        :return: Path to the serialized callable file.
        """
        rollback_callable_name, rollback_callable_code = (
            self._parse_rollback_callable_code(rollback_callable)
        )

        # Render the template with the required variables
        file_content = self.template.render(
            rollback_callable_name=rollback_callable_name,
            rollback_callable_code=rollback_callable_code,
            rollback_content=str(rollback_content),
        )

        # Write the file to the version directory
        os.makedirs(os.path.dirname(version.version_file_full_path), exist_ok=True)

        logger.debug("Creating version file at %s", version.version_file_full_path)
        logger.debug("Version file content:\n%s", file_content)
        with open(version.version_file_full_path, "w") as f:
            f.write(file_content)
        logger.info(f"Serialized callable written to {version.version_file_full_path}")

        return version.version_file_full_path
106
krkn/rollback/signal.py
Normal file
106
krkn/rollback/signal.py
Normal file
@@ -0,0 +1,106 @@
|
||||
from typing import Dict, Any, Optional
|
||||
import threading
|
||||
import signal
|
||||
import sys
|
||||
import logging
|
||||
from contextlib import contextmanager
|
||||
|
||||
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
|
||||
|
||||
from krkn.rollback.handler import execute_rollback_version_files
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class SignalHandler:
|
||||
# Class-level variables for signal handling (shared across all instances)
|
||||
_signal_handlers_installed = False # No need for thread-safe variable due to _signal_lock
|
||||
_original_handlers: Dict[int, Any] = {}
|
||||
_signal_lock = threading.Lock()
|
||||
|
||||
# Thread-local storage for context
|
||||
_local = threading.local()
|
||||
|
||||
@classmethod
|
||||
def _set_context(cls, run_uuid: str, scenario_type: str, telemetry_ocp: KrknTelemetryOpenshift):
|
||||
"""Set the current execution context for this thread."""
|
||||
cls._local.run_uuid = run_uuid
|
||||
cls._local.scenario_type = scenario_type
|
||||
cls._local.telemetry_ocp = telemetry_ocp
|
||||
logger.debug(f"Set signal context set for thread {threading.current_thread().name} - run_uuid={run_uuid}, scenario_type={scenario_type}")
|
||||
|
||||
    @classmethod
    def _get_context(cls) -> tuple[Optional[str], Optional[str], Optional[KrknTelemetryOpenshift]]:
        """Get the current execution context for this thread."""
        run_uuid = getattr(cls._local, 'run_uuid', None)
        scenario_type = getattr(cls._local, 'scenario_type', None)
        telemetry_ocp = getattr(cls._local, 'telemetry_ocp', None)
        return run_uuid, scenario_type, telemetry_ocp

    @classmethod
    def _signal_handler(cls, signum: int, frame):
        """Handle signals with the current thread's context information."""
        signal_name = signal.Signals(signum).name
        run_uuid, scenario_type, telemetry_ocp = cls._get_context()
        if not run_uuid or not scenario_type or not telemetry_ocp:
            logger.warning(f"Signal {signal_name} received without complete context, skipping rollback.")
            return

        # Clear the context before rolling back, as another signal may arrive
        # before the rollback completes. This ensures the rollback runs only once.
        cls._set_context(None, None, telemetry_ocp)

        # Perform rollback
        logger.info(f"Performing rollback for signal {signal_name} with run_uuid={run_uuid}, scenario_type={scenario_type}")
        execute_rollback_version_files(telemetry_ocp, run_uuid, scenario_type)

        # Call the original handler if one was registered
        if signum not in cls._original_handlers:
            logger.info(f"Signal {signal_name} has no registered handler, exiting...")
            return

        original_handler = cls._original_handlers[signum]
        if callable(original_handler):
            logger.info(f"Calling original handler for {signal_name}")
            original_handler(signum, frame)
        elif original_handler == signal.SIG_DFL:
            # Restore default behavior
            logger.info(f"Restoring default signal handler for {signal_name}")
            signal.signal(signum, signal.SIG_DFL)
            signal.raise_signal(signum)

    @classmethod
    def _register_signal_handler(cls):
        """Register signal handlers once (called by the first instance)."""
        with cls._signal_lock:  # the lock protects _signal_handlers_installed from race conditions
            if cls._signal_handlers_installed:
                return

            signals_to_handle = [signal.SIGINT, signal.SIGTERM]
            if hasattr(signal, 'SIGHUP'):
                signals_to_handle.append(signal.SIGHUP)

            for sig in signals_to_handle:
                try:
                    original_handler = signal.signal(sig, cls._signal_handler)
                    cls._original_handlers[sig] = original_handler
                    logger.debug(f"SignalHandler: Registered signal handler for {signal.Signals(sig).name}")
                except (OSError, ValueError) as e:
                    logger.warning(f"SignalHandler: Could not register handler for signal {sig}: {e}")

            cls._signal_handlers_installed = True
            logger.info("Signal handlers registered globally")

    @classmethod
    @contextmanager
    def signal_context(cls, run_uuid: str, scenario_type: str, telemetry_ocp: KrknTelemetryOpenshift):
        """Context manager to set the signal context for the current thread."""
        cls._set_context(run_uuid, scenario_type, telemetry_ocp)
        cls._register_signal_handler()
        try:
            yield
        finally:
            # Clear the context after exiting the context manager
            cls._set_context(None, None, telemetry_ocp)


signal_handler = SignalHandler()
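Taken together, SignalHandler gives each scenario thread its own rollback context: signal_context stores the run identifiers in thread-local state, installs the process-wide handlers once, and clears the context on exit. A minimal usage sketch (assuming only the signal_handler singleton created above; inject_chaos is a hypothetical stand-in for the scenario body):

# Sketch only: any SIGINT/SIGTERM/SIGHUP delivered while the block below runs
# is routed through SignalHandler._signal_handler, which reads the thread-local
# context set here and triggers execute_rollback_version_files() exactly once.
from krkn.rollback.signal import signal_handler

def run_with_rollback(run_uuid, scenario_type, telemetry_ocp, inject_chaos):
    with signal_handler.signal_context(
        run_uuid=run_uuid,
        scenario_type=scenario_type,
        telemetry_ocp=telemetry_ocp,
    ):
        inject_chaos()  # hypothetical chaos-injection callable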
55
krkn/rollback/version_template.j2
Normal file
@@ -0,0 +1,55 @@
# This file is auto-generated by krkn-lib.
# It contains the rollback callable and its arguments for the scenario plugin.

from dataclasses import dataclass
import os
import logging
from typing import Optional

from krkn_lib.utils import SafeLogger
from krkn_lib.ocp import KrknOpenshift
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

@dataclass(frozen=True)
class RollbackContent:
    resource_identifier: str
    namespace: Optional[str] = None

# Actual rollback callable
{{ rollback_callable_code }}

# Create necessary variables for execution
lib_openshift = None
lib_telemetry = None
rollback_content = {{ rollback_content }}


# Main entry point for execution
if __name__ == '__main__':
    # set up logging
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s [%(levelname)s] %(message)s",
        handlers=[
            logging.StreamHandler(),
        ]
    )

    # resolve the kubeconfig path and prepare the log directory
    kubeconfig_path = os.getenv("KUBECONFIG", "~/.kube/config")
    log_directory = os.path.dirname(os.path.abspath(__file__))
    os.makedirs(os.path.join(log_directory, 'logs'), exist_ok=True)
    # set up SafeLogger for telemetry
    telemetry_log_path = os.path.join(log_directory, 'logs', 'telemetry.log')
    safe_logger = SafeLogger(telemetry_log_path)
    # set up krkn-lib objects
    lib_openshift = KrknOpenshift(kubeconfig_path=kubeconfig_path)
    lib_telemetry = KrknTelemetryOpenshift(safe_logger=safe_logger, lib_openshift=lib_openshift)

    # execute
    logging.info('Executing rollback callable...')
    {{ rollback_callable_name }}(
        rollback_content,
        lib_telemetry
    )
    logging.info('Rollback completed.')
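For context, a sketch of how this template might be rendered into a standalone version file, assuming Jinja2 and an inspect-based source dump; render_version_file and its arguments are illustrative, and the actual Serializer in krkn.rollback.serialization may differ:

# Sketch only: render version_template.j2 into an executable rollback script.
import inspect
from jinja2 import Template

def render_version_file(template_text, rollback_callable, rollback_content_repr):
    return Template(template_text).render(
        rollback_callable_code=inspect.getsource(rollback_callable),
        rollback_callable_name=rollback_callable.__name__,
        # e.g. "RollbackContent(resource_identifier='krkn-deny-ab12c', namespace='default')"
        rollback_content=rollback_content_repr,
    )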
@@ -4,16 +4,32 @@ from abc import ABC, abstractmethod
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

-from krkn import utils
+from krkn import utils, cerberus
+from krkn.rollback.handler import (
+    RollbackHandler,
+    execute_rollback_version_files,
+    cleanup_rollback_version_files
+)
+from krkn.rollback.signal import signal_handler
+from krkn.rollback.serialization import Serializer

class AbstractScenarioPlugin(ABC):

+    def __init__(self, scenario_type: str = "placeholder_scenario_type"):
+        """Initializes the AbstractScenarioPlugin with the scenario type and rollback configuration.
+
+        :param scenario_type: the scenario type defined in the config.yaml
+        """
+        serializer = Serializer(
+            scenario_type=scenario_type,
+        )
+        self.rollback_handler = RollbackHandler(scenario_type, serializer)
+
    @abstractmethod
    def run(
        self,
        run_uuid: str,
        scenario: str,
        krkn_config: dict[str, any],
        lib_telemetry: KrknTelemetryOpenshift,
        scenario_telemetry: ScenarioTelemetry,
    ) -> int:
@@ -74,32 +90,48 @@ class AbstractScenarioPlugin(ABC):
                scenario_telemetry, scenario_config
            )

-            try:
-                logging.info(
-                    f"Running {self.__class__.__name__}: {self.get_scenario_types()} -> {scenario_config}"
-                )
-                return_value = self.run(
-                    run_uuid,
-                    scenario_config,
-                    krkn_config,
-                    telemetry,
-                    scenario_telemetry,
-                )
-            except Exception as e:
-                logging.error(
-                    f"uncaught exception on scenario `run()` method: {e} "
-                    f"please report an issue on https://github.com/krkn-chaos/krkn"
-                )
-                return_value = 1
+            with signal_handler.signal_context(
+                run_uuid=run_uuid,
+                scenario_type=scenario_telemetry.scenario_type,
+                telemetry_ocp=telemetry
+            ):
+                try:
+                    logging.info(
+                        f"Running {self.__class__.__name__}: {self.get_scenario_types()} -> {scenario_config}"
+                    )
+                    # pass all the parameters by kwargs so that `set_rollback_context_decorator` can read `run_uuid` and `scenario_type`
+                    return_value = self.run(
+                        run_uuid=run_uuid,
+                        scenario=scenario_config,
+                        lib_telemetry=telemetry,
+                        scenario_telemetry=scenario_telemetry,
+                    )
+                except Exception as e:
+                    logging.error(
+                        f"uncaught exception on scenario `run()` method: {e} "
+                        f"please report an issue on https://github.com/krkn-chaos/krkn"
+                    )
+                    return_value = 1

+            if return_value == 0:
+                cleanup_rollback_version_files(
+                    run_uuid, scenario_telemetry.scenario_type
+                )
+            else:
+                # execute rollback files based on the return value
+                execute_rollback_version_files(
+                    telemetry, run_uuid, scenario_telemetry.scenario_type
+                )
            scenario_telemetry.exit_status = return_value
            scenario_telemetry.end_timestamp = time.time()
+            start_time = int(scenario_telemetry.start_timestamp)
+            end_time = int(scenario_telemetry.end_timestamp)
            utils.collect_and_put_ocp_logs(
                telemetry,
                parsed_scenario_config,
                telemetry.get_telemetry_request_id(),
-                int(scenario_telemetry.start_timestamp),
-                int(scenario_telemetry.end_timestamp),
+                start_time,
+                end_time
            )

            if events_backup:
@@ -107,15 +139,17 @@ class AbstractScenarioPlugin(ABC):
                krkn_config,
                parsed_scenario_config,
                telemetry.get_lib_kubernetes(),
-                int(scenario_telemetry.start_timestamp),
-                int(scenario_telemetry.end_timestamp),
+                start_time,
+                end_time
            )

            if scenario_telemetry.exit_status != 0:
                failed_scenarios.append(scenario_config)
            scenario_telemetries.append(scenario_telemetry)
+            cerberus.publish_kraken_status(start_time, end_time)
+            logging.info(f"waiting {wait_duration} before running the next scenario")
            time.sleep(wait_duration)

        return failed_scenarios, scenario_telemetries
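The control flow above reduces to a simple contract: persisted rollback steps are discarded when a scenario exits cleanly and replayed when it does not. A condensed sketch (handler names as imported above; finish_scenario itself is illustrative):

# Sketch only: the success/failure branch around rollback version files.
from krkn.rollback.handler import (
    cleanup_rollback_version_files,
    execute_rollback_version_files,
)

def finish_scenario(return_value, telemetry, run_uuid, scenario_type):
    if return_value == 0:
        # clean exit: the scenario tore down its own resources, so the
        # persisted rollback steps are stale and can be removed
        cleanup_rollback_version_files(run_uuid, scenario_type)
    else:
        # failure: replay the persisted rollback steps against the cluster
        execute_rollback_version_files(telemetry, run_uuid, scenario_type)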
@@ -5,20 +5,20 @@ from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import get_yaml_item_value, get_random_string
from jinja2 import Template
from krkn import cerberus
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
+from krkn.rollback.config import RollbackContent
+from krkn.rollback.handler import set_rollback_context_decorator


class ApplicationOutageScenarioPlugin(AbstractScenarioPlugin):
+    @set_rollback_context_decorator
    def run(
        self,
        run_uuid: str,
        scenario: str,
        krkn_config: dict[str, any],
        lib_telemetry: KrknTelemetryOpenshift,
        scenario_telemetry: ScenarioTelemetry,
    ) -> int:
        wait_duration = krkn_config["tunings"]["wait_duration"]
        try:
            with open(scenario, "r") as f:
                app_outage_config_yaml = yaml.full_load(f)
@@ -31,6 +31,21 @@ class ApplicationOutageScenarioPlugin(AbstractScenarioPlugin):
            )
            namespace = get_yaml_item_value(scenario_config, "namespace", "")
            duration = get_yaml_item_value(scenario_config, "duration", 60)
+            exclude_label = get_yaml_item_value(
+                scenario_config, "exclude_label", None
+            )
+            match_expressions = self._build_exclude_expressions(exclude_label)
+            if match_expressions:
+                # Log the format being used for better clarity
+                format_type = "dict" if isinstance(exclude_label, dict) else "string"
+                logging.info(
+                    "Excluding pods with labels (%s format): %s",
+                    format_type,
+                    ", ".join(
+                        f"{expr['key']} NOT IN {expr['values']}"
+                        for expr in match_expressions
+                    ),
+                )

            start_time = int(time.time())
            policy_name = f"krkn-deny-{get_random_string(5)}"
@@ -40,23 +55,42 @@ class ApplicationOutageScenarioPlugin(AbstractScenarioPlugin):
                apiVersion: networking.k8s.io/v1
                kind: NetworkPolicy
                metadata:
-                  name: """
-                + policy_name
-                + """
+                  name: {{ policy_name }}
                spec:
                  podSelector:
                    matchLabels: {{ pod_selector }}
+                {% if match_expressions %}
+                    matchExpressions:
+                {% for expression in match_expressions %}
+                    - key: {{ expression["key"] }}
+                      operator: NotIn
+                      values:
+                {% for value in expression["values"] %}
+                      - {{ value }}
+                {% endfor %}
+                {% endfor %}
+                {% endif %}
                  policyTypes: {{ traffic_type }}
                """
            )
            t = Template(network_policy_template)
            rendered_spec = t.render(
-                pod_selector=pod_selector, traffic_type=traffic_type
+                pod_selector=pod_selector,
+                traffic_type=traffic_type,
+                match_expressions=match_expressions,
+                policy_name=policy_name,
            )
            yaml_spec = yaml.safe_load(rendered_spec)
            # Block the traffic by creating network policy
            logging.info("Creating the network policy")

+            self.rollback_handler.set_rollback_callable(
+                self.rollback_network_policy,
+                RollbackContent(
+                    namespace=namespace,
+                    resource_identifier=policy_name,
+                ),
+            )
            lib_telemetry.get_lib_kubernetes().create_net_policy(
                yaml_spec, namespace
            )
@@ -73,14 +107,8 @@ class ApplicationOutageScenarioPlugin(AbstractScenarioPlugin):
                policy_name, namespace
            )

-            logging.info(
-                "End of scenario. Waiting for the specified duration: %s"
-                % wait_duration
-            )
-            time.sleep(wait_duration)
-
            end_time = int(time.time())
            cerberus.publish_kraken_status(krkn_config, [], start_time, end_time)

        except Exception as e:
            logging.error(
                "ApplicationOutageScenarioPlugin exiting due to Exception %s" % e
@@ -89,5 +117,86 @@ class ApplicationOutageScenarioPlugin(AbstractScenarioPlugin):
        else:
            return 0

+    @staticmethod
+    def rollback_network_policy(
+        rollback_content: RollbackContent,
+        lib_telemetry: KrknTelemetryOpenshift,
+    ):
+        """Rollback function to delete the network policy created during the scenario.
+
+        :param rollback_content: Rollback content containing namespace and resource_identifier.
+        :param lib_telemetry: Instance of KrknTelemetryOpenshift for Kubernetes operations.
+        """
+        try:
+            namespace = rollback_content.namespace
+            policy_name = rollback_content.resource_identifier
+            logging.info(
+                f"Rolling back network policy: {policy_name} in namespace: {namespace}"
+            )
+            lib_telemetry.get_lib_kubernetes().delete_net_policy(policy_name, namespace)
+            logging.info("Network policy rollback completed successfully.")
+        except Exception as e:
+            logging.error(f"Failed to rollback network policy: {e}")

    def get_scenario_types(self) -> list[str]:
        return ["application_outages_scenarios"]

+    @staticmethod
+    def _build_exclude_expressions(exclude_label) -> list[dict]:
+        """
+        Build match expressions for NetworkPolicy from exclude_label.
+
+        Supports multiple formats:
+        - Dict format (preferred, similar to pod_selector): {key1: value1, key2: [value2, value3]}
+          Example: {tier: "gold", env: ["prod", "staging"]}
+        - String format: "key1=value1,key2=value2" or "key1=value1|value2"
+          Example: "tier=gold,env=prod" or "tier=gold|platinum"
+        - List format (list of strings): ["key1=value1", "key2=value2"]
+          Example: ["tier=gold", "env=prod"]
+          Note: List elements must be strings in "key=value" format.
+
+        :param exclude_label: Can be dict, string, list of strings, or None
+        :return: List of match expression dictionaries
+        """
+        expressions: list[dict] = []
+
+        if not exclude_label:
+            return expressions
+
+        def _append_expr(key: str, values):
+            if not key or values is None:
+                return
+            if not isinstance(values, list):
+                values = [values]
+            cleaned_values = [str(v).strip() for v in values if str(v).strip()]
+            if cleaned_values:
+                expressions.append({"key": key.strip(), "values": cleaned_values})
+
+        if isinstance(exclude_label, dict):
+            for k, v in exclude_label.items():
+                _append_expr(str(k), v)
+            return expressions
+
+        if isinstance(exclude_label, list):
+            selectors = exclude_label
+        else:
+            selectors = [sel.strip() for sel in str(exclude_label).split(",")]
+
+        for selector in selectors:
+            if not selector:
+                continue
+            if "=" not in selector:
+                logging.warning(
+                    "exclude_label entry '%s' is invalid, expected key=value format",
+                    selector,
+                )
+                continue
+            key, value = selector.split("=", 1)
+            value_items = (
+                [item.strip() for item in value.split("|") if item.strip()]
+                if value
+                else []
+            )
+            _append_expr(key, value_items or value)
+
+        return expressions
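Given the parser above, all three accepted exclude_label shapes normalize to the same NotIn expressions. A quick illustration with values taken from the docstring examples (pure data, so it runs standalone):

# Sketch only: each form below is expected to normalize to `expected`
# when passed through _build_exclude_expressions.
dict_form = {"tier": "gold", "env": ["prod", "staging"]}
string_form = "tier=gold,env=prod|staging"
list_form = ["tier=gold", "env=prod|staging"]

expected = [
    {"key": "tier", "values": ["gold"]},
    {"key": "env", "values": ["prod", "staging"]},
]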
@@ -1,15 +1,15 @@
import logging
import random
import time
+import traceback
+from asyncio import Future
import yaml
from krkn_lib.k8s import KrknKubernetes
-from krkn_lib.k8s.pods_monitor_pool import PodsMonitorPool
+from krkn_lib.k8s.pod_monitor import select_and_monitor_by_namespace_pattern_and_label
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import get_yaml_item_value


from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin


@@ -18,34 +18,30 @@ class ContainerScenarioPlugin(AbstractScenarioPlugin):
        self,
        run_uuid: str,
        scenario: str,
        krkn_config: dict[str, any],
        lib_telemetry: KrknTelemetryOpenshift,
        scenario_telemetry: ScenarioTelemetry,
    ) -> int:
-        pool = PodsMonitorPool(lib_telemetry.get_lib_kubernetes())
        try:
            with open(scenario, "r") as f:
                cont_scenario_config = yaml.full_load(f)

            for kill_scenario in cont_scenario_config["scenarios"]:
-                self.start_monitoring(
-                    kill_scenario, pool
+                future_snapshot = self.start_monitoring(
+                    kill_scenario,
+                    lib_telemetry
                )
-                killed_containers = self.container_killing_in_pod(
+                self.container_killing_in_pod(
                    kill_scenario, lib_telemetry.get_lib_kubernetes()
                )
-                result = pool.join()
-                if result.error:
-                    logging.error(
-                        f"ContainerScenarioPlugin pods failed to recovery: {result.error}"
-                    )
-                    return 1
-                scenario_telemetry.affected_pods = result
-
-        except (RuntimeError, Exception):
-            logging.error("ContainerScenarioPlugin exiting due to Exception %s")
+                snapshot = future_snapshot.result()
+                result = snapshot.get_pods_status()
+                scenario_telemetry.affected_pods = result
+                if len(result.unrecovered) > 0:
+                    logging.info("ContainerScenarioPlugin failed with unrecovered containers")
+                    return 1
+        except (RuntimeError, Exception) as e:
+            logging.error("Stack trace:\n%s", traceback.format_exc())
+            logging.error("ContainerScenarioPlugin exiting due to Exception %s" % e)
            return 1
        else:
            return 0
@@ -53,17 +49,17 @@ class ContainerScenarioPlugin(AbstractScenarioPlugin):
    def get_scenario_types(self) -> list[str]:
        return ["container_scenarios"]

-    def start_monitoring(self, kill_scenario: dict, pool: PodsMonitorPool):
+    def start_monitoring(self, kill_scenario: dict, lib_telemetry: KrknTelemetryOpenshift) -> Future:
        namespace_pattern = f"^{kill_scenario['namespace']}$"
        label_selector = kill_scenario["label_selector"]
        recovery_time = kill_scenario["expected_recovery_time"]
-        pool.select_and_monitor_by_namespace_pattern_and_label(
+        future_snapshot = select_and_monitor_by_namespace_pattern_and_label(
            namespace_pattern=namespace_pattern,
            label_selector=label_selector,
            max_timeout=recovery_time,
-            field_selector="status.phase=Running"
+            v1_client=lib_telemetry.get_lib_kubernetes().cli
        )
+        return future_snapshot

    def container_killing_in_pod(self, cont_scenario, kubecli: KrknKubernetes):
        scenario_name = get_yaml_item_value(cont_scenario, "name", "")
@@ -73,6 +69,7 @@ class ContainerScenarioPlugin(AbstractScenarioPlugin):
        container_name = get_yaml_item_value(cont_scenario, "container_name", "")
        kill_action = get_yaml_item_value(cont_scenario, "action", 1)
        kill_count = get_yaml_item_value(cont_scenario, "count", 1)
+        exclude_label = get_yaml_item_value(cont_scenario, "exclude_label", "")
        if not isinstance(kill_action, int):
            logging.error(
                "Please make sure the action parameter defined in the "
@@ -94,7 +91,19 @@ class ContainerScenarioPlugin(AbstractScenarioPlugin):
                pods = kubecli.get_all_pods(label_selector)
            else:
                # Only returns pod names
-                pods = kubecli.list_pods(namespace, label_selector)
+                # Use list_pods with exclude_label parameter to exclude pods
+                if exclude_label:
+                    logging.info(
+                        "Using exclude_label '%s' to exclude pods from container scenario %s in namespace %s",
+                        exclude_label,
+                        scenario_name,
+                        namespace,
+                    )
+                pods = kubecli.list_pods(
+                    namespace=namespace,
+                    label_selector=label_selector,
+                    exclude_label=exclude_label if exclude_label else None
+                )
        else:
            if namespace == "*":
                logging.error(
@@ -105,6 +114,7 @@ class ContainerScenarioPlugin(AbstractScenarioPlugin):
                # sys.exit(1)
                raise RuntimeError()
            pods = pod_names
+
        # get container and pod name
        container_pod_list = []
        for pod in pods:
@@ -221,4 +231,5 @@ class ContainerScenarioPlugin(AbstractScenarioPlugin):
            timer += 5
            logging.info("Waiting 5 seconds for containers to become ready")
            time.sleep(5)
+
        return killed_container_list
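The change above replaces the shared PodsMonitorPool with a per-scenario Future: monitoring starts before the kill and is joined afterwards. A condensed sketch of that flow (call signatures taken from the diff; kill_fn is a hypothetical stand-in for container_killing_in_pod):

# Sketch only: start monitoring, inject the failure, then block on the future.
from krkn_lib.k8s.pod_monitor import select_and_monitor_by_namespace_pattern_and_label

def monitor_and_kill(kill_scenario, lib_telemetry, kill_fn):
    future_snapshot = select_and_monitor_by_namespace_pattern_and_label(
        namespace_pattern=f"^{kill_scenario['namespace']}$",
        label_selector=kill_scenario["label_selector"],
        max_timeout=kill_scenario["expected_recovery_time"],
        v1_client=lib_telemetry.get_lib_kubernetes().cli,
    )
    kill_fn(kill_scenario)                # chaos injection happens here
    snapshot = future_snapshot.result()   # blocks until monitoring completes
    result = snapshot.get_pods_status()
    return len(result.unrecovered) == 0   # True when every pod recovered in time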
@@ -16,15 +16,23 @@ from krkn_lib.k8s import KrknKubernetes
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.utils import get_random_string

from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
+from krkn.rollback.config import RollbackContent
+from krkn.rollback.handler import set_rollback_context_decorator


class HogsScenarioPlugin(AbstractScenarioPlugin):
-    def run(self, run_uuid: str, scenario: str, krkn_config: dict[str, any], lib_telemetry: KrknTelemetryOpenshift,
+
+    @set_rollback_context_decorator
+    def run(self, run_uuid: str, scenario: str, lib_telemetry: KrknTelemetryOpenshift,
            scenario_telemetry: ScenarioTelemetry) -> int:
        try:
            with open(scenario, "r") as f:
                scenario = yaml.full_load(f)
            scenario_config = HogConfig.from_yaml_dict(scenario)

+            # Get node-name if provided
+            node_name = scenario.get('node-name')
+
            has_selector = True
            if not scenario_config.node_selector or not re.match("^.+=.*$", scenario_config.node_selector):
                if scenario_config.node_selector:
@@ -33,13 +41,19 @@ class HogsScenarioPlugin(AbstractScenarioPlugin):
            else:
                node_selector = scenario_config.node_selector

-            available_nodes = lib_telemetry.get_lib_kubernetes().list_nodes(node_selector)
-            if len(available_nodes) == 0:
-                raise Exception("no available nodes to schedule workload")
+            if node_name:
+                logging.info(f"Using specific node: {node_name}")
+                all_nodes = lib_telemetry.get_lib_kubernetes().list_nodes("")
+                if node_name not in all_nodes:
+                    raise Exception(f"Specified node {node_name} not found or not available")
+                available_nodes = [node_name]
+            else:
+                available_nodes = lib_telemetry.get_lib_kubernetes().list_nodes(node_selector)
+                if len(available_nodes) == 0:
+                    raise Exception("no available nodes to schedule workload")

-            if not has_selector:
-                # if selector not specified picks a random node between the available
-                available_nodes = [available_nodes[random.randint(0, len(available_nodes))]]
+            if not has_selector:
+                available_nodes = [available_nodes[random.randint(0, len(available_nodes) - 1)]]

            if scenario_config.number_of_nodes and len(available_nodes) > scenario_config.number_of_nodes:
                available_nodes = random.sample(available_nodes, scenario_config.number_of_nodes)
@@ -69,6 +83,13 @@ class HogsScenarioPlugin(AbstractScenarioPlugin):
                config.node_selector = f"kubernetes.io/hostname={node}"
                pod_name = f"{config.type.value}-hog-{get_random_string(5)}"
                node_resources_start = lib_k8s.get_node_resources_info(node)
+                self.rollback_handler.set_rollback_callable(
+                    self.rollback_hog_pod,
+                    RollbackContent(
+                        namespace=config.namespace,
+                        resource_identifier=pod_name,
+                    ),
+                )
                lib_k8s.deploy_hog(pod_name, config)
                start = time.time()
                # waiting 3 seconds before starting sample collection
@@ -140,3 +161,22 @@ class HogsScenarioPlugin(AbstractScenarioPlugin):
                raise exception
        except queue.Empty:
            pass
+
+    @staticmethod
+    def rollback_hog_pod(rollback_content: RollbackContent, lib_telemetry: KrknTelemetryOpenshift):
+        """
+        Rollback function to delete hog pod.
+
+        :param rollback_content: Rollback content containing namespace and resource_identifier.
+        :param lib_telemetry: Instance of KrknTelemetryOpenshift for Kubernetes operations
+        """
+        try:
+            namespace = rollback_content.namespace
+            pod_name = rollback_content.resource_identifier
+            logging.info(
+                f"Rolling back hog pod: {pod_name} in namespace: {namespace}"
+            )
+            lib_telemetry.get_lib_kubernetes().delete_pod(pod_name, namespace)
+            logging.info("Rollback of hog pod completed successfully.")
+        except Exception as e:
+            logging.error(f"Failed to rollback hog pod: {e}")
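The random-pick change above is worth a note: randint's upper bound is inclusive, so the old len(available_nodes) bound could index past the end of the list. The fixed form is equivalent to the stdlib idiom:

# Sketch only: randint(0, len(nodes)) can return len(nodes) and raise
# IndexError; the corrected bound (or random.choice) avoids that.
import random

nodes = ["node-a", "node-b", "node-c"]
picked = nodes[random.randint(0, len(nodes) - 1)]  # the fixed form from the diff
picked_idiomatic = random.choice(nodes)            # equivalent and clearer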
@@ -20,9 +20,12 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
    This plugin simulates a VM crash or outage scenario and supports automated or manual recovery.
    """

-    def __init__(self):
+    def __init__(self, scenario_type: str = None):
+        scenario_type = self.get_scenario_types()[0]
+        super().__init__(scenario_type)
        self.k8s_client = None
        self.original_vmi = None
+        self.vmis_list = []

    # Scenario type is handled directly in execute_scenario
    def get_scenario_types(self) -> list[str]:
@@ -43,7 +46,7 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
        try:
            with open(scenario, "r") as f:
                scenario_config = yaml.full_load(f)
-

            self.init_clients(lib_telemetry.get_lib_kubernetes())
            pods_status = PodsStatus()
            for config in scenario_config["scenarios"]:
@@ -52,7 +55,8 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
                pods_status.merge(single_pods_status)

            scenario_telemetry.affected_pods = pods_status
+
            if len(scenario_telemetry.affected_pods.unrecovered) > 0:
                return 1
            return 0
        except Exception as e:
            logging.error(f"KubeVirt VM Outage scenario failed: {e}")
@@ -67,76 +71,16 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
        self.custom_object_client = k8s_client.custom_object_client
        logging.info("Successfully initialized Kubernetes client for KubeVirt operations")

-    def get_vmi(self, name: str, namespace: str) -> Optional[Dict]:
-        """
-        Get a Virtual Machine Instance by name and namespace.
-
-        :param name: Name of the VMI to retrieve
-        :param namespace: Namespace of the VMI
-        :return: The VMI object if found, None otherwise
-        """
-        try:
-            vmi = self.custom_object_client.get_namespaced_custom_object(
-                group="kubevirt.io",
-                version="v1",
-                namespace=namespace,
-                plural="virtualmachineinstances",
-                name=name
-            )
-            return vmi
-        except ApiException as e:
-            if e.status == 404:
-                logging.warning(f"VMI {name} not found in namespace {namespace}")
-                return None
-            else:
-                logging.error(f"Error getting VMI {name}: {e}")
-                raise
-        except Exception as e:
-            logging.error(f"Unexpected error getting VMI {name}: {e}")
-            raise
-
-    def get_vmis(self, regex_name: str, namespace: str) -> Optional[Dict]:
-        """
-        Get a Virtual Machine Instance by name and namespace.
-
-        :param name: Name of the VMI to retrieve
-        :param namespace: Namespace of the VMI
-        :return: The VMI object if found, None otherwise
-        """
-        try:
-            vmis = self.custom_object_client.list_namespaced_custom_object(
-                group="kubevirt.io",
-                version="v1",
-                namespace=namespace,
-                plural="virtualmachineinstances",
-            )
-
-            vmi_list = []
-            for vmi in vmis.get("items"):
-                vmi_name = vmi.get("metadata",{}).get("name")
-                match = re.match(regex_name, vmi_name)
-                if match:
-                    vmi_list.append(vmi)
-            return vmi_list
-        except ApiException as e:
-            if e.status == 404:
-                logging.warning(f"VMI {regex_name} not found in namespace {namespace}")
-                return None
-            else:
-                logging.error(f"Error getting VMI {regex_name}: {e}")
-                raise
-        except Exception as e:
-            logging.error(f"Unexpected error getting VMI {regex_name}: {e}")
-            raise
-
-    def execute_scenario(self, config: Dict[str, Any], scenario_telemetry: ScenarioTelemetry) -> int:
+    def execute_scenario(self, config: Dict[str, Any], scenario_telemetry: ScenarioTelemetry) -> PodsStatus:
        """
        Execute a KubeVirt VM outage scenario based on the provided configuration.

        :param config: The scenario configuration
        :param scenario_telemetry: The telemetry object for recording metrics
-        :return: 0 for success, 1 for failure
+        :return: PodsStatus object containing recovered and unrecovered pods
        """
+        self.pods_status = PodsStatus()
        try:
            params = config.get("parameters", {})
            vm_name = params.get("vm_name")
@@ -144,55 +88,63 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
            timeout = params.get("timeout", 60)
            kill_count = params.get("kill_count", 1)
            disable_auto_restart = params.get("disable_auto_restart", False)
-            self.pods_status = PodsStatus()

            if not vm_name:
                logging.error("vm_name parameter is required")
                return 1
-            vmis_list = self.get_vmis(vm_name,namespace)
-            rand_int = random.randint(0, len(vmis_list) - 1)
-            vmi = vmis_list[rand_int]
-
-            logging.info(f"Starting KubeVirt VM outage scenario for VM: {vm_name} in namespace: {namespace}")
-            vmi_name = vmi.get("metadata").get("name")
-            if not self.validate_environment(vmi_name, namespace):
-                return 1
-
-            vmi = self.get_vmi(vmi_name, namespace)
-            self.affected_pod = AffectedPod(
-                pod_name=vmi_name,
-                namespace=namespace,
-            )
-            if not vmi:
-                logging.error(f"VMI {vm_name} not found in namespace {namespace}")
-                return 1
-
-            self.original_vmi = vmi
-            logging.info(f"Captured initial state of VMI: {vm_name}")
-            result = self.delete_vmi(vmi_name, namespace, disable_auto_restart)
-            if result != 0:
-                return self.pods_status
-
-            result = self.wait_for_running(vmi_name,namespace, timeout)
-            if result != 0:
-                self.recover(vmi_name, namespace)
-                self.pods_status.unrecovered = self.affected_pod
-                return self.pods_status
-
-            self.affected_pod.total_recovery_time = (
-                self.affected_pod.pod_readiness_time
-                + self.affected_pod.pod_rescheduling_time
-            )
-
-            self.pods_status.recovered.append(self.affected_pod)
-            logging.info(f"Successfully completed KubeVirt VM outage scenario for VM: {vm_name}")
+            self.pods_status = PodsStatus()
+            self.vmis_list = self.k8s_client.get_vmis(vm_name,namespace)
+            for _ in range(kill_count):
+
+                rand_int = random.randint(0, len(self.vmis_list) - 1)
+                vmi = self.vmis_list[rand_int]
+
+                logging.info(f"Starting KubeVirt VM outage scenario for VM: {vm_name} in namespace: {namespace}")
+                vmi_name = vmi.get("metadata").get("name")
+                vmi_namespace = vmi.get("metadata").get("namespace")
+
+                # Create affected_pod early so we can track failures
+                self.affected_pod = AffectedPod(
+                    pod_name=vmi_name,
+                    namespace=vmi_namespace,
+                )
+
+                if not self.validate_environment(vmi_name, vmi_namespace):
+                    self.pods_status.unrecovered.append(self.affected_pod)
+                    continue
+
+                vmi = self.k8s_client.get_vmi(vmi_name, vmi_namespace)
+                if not vmi:
+                    logging.error(f"VMI {vm_name} not found in namespace {namespace}")
+                    self.pods_status.unrecovered.append(self.affected_pod)
+                    continue
+
+                self.original_vmi = vmi
+                logging.info(f"Captured initial state of VMI: {vm_name}")
+                result = self.delete_vmi(vmi_name, vmi_namespace, disable_auto_restart)
+                if result != 0:
+                    self.pods_status.unrecovered.append(self.affected_pod)
+                    continue
+
+                result = self.wait_for_running(vmi_name,vmi_namespace, timeout)
+                if result != 0:
+                    self.pods_status.unrecovered.append(self.affected_pod)
+                    continue
+
+                self.affected_pod.total_recovery_time = (
+                    self.affected_pod.pod_readiness_time
+                    + self.affected_pod.pod_rescheduling_time
+                )
+
+                self.pods_status.recovered.append(self.affected_pod)
+                logging.info(f"Successfully completed KubeVirt VM outage scenario for VM: {vm_name}")

            return self.pods_status

        except Exception as e:
            logging.error(f"Error executing KubeVirt VM outage scenario: {e}")
            log_exception(e)
-            return 1
+            return self.pods_status
@@ -204,15 +156,13 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
        """
        try:
            # Check if KubeVirt CRDs exist
-            crd_list = self.custom_object_client.list_namespaced_custom_object("kubevirt.io","v1",namespace,"virtualmachines")
-            kubevirt_crds = [crd for crd in crd_list.items() ]
-
+            kubevirt_crds = self.k8s_client.get_vms(vm_name, namespace)
            if not kubevirt_crds:
                logging.error("KubeVirt CRDs not found. Ensure KubeVirt/CNV is installed in the cluster")
                return False

            # Check if VMI exists
-            vmi = self.get_vmi(vm_name, namespace)
+            vmi = self.k8s_client.get_vmi(vm_name, namespace)
            if not vmi:
                logging.error(f"VMI {vm_name} not found in namespace {namespace}")
                return False
@@ -235,13 +185,7 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
        """
        try:
            # Get the VM object first to get its current spec
-            vm = self.custom_object_client.get_namespaced_custom_object(
-                group="kubevirt.io",
-                version="v1",
-                namespace=namespace,
-                plural="virtualmachines",
-                name=vm_name
-            )
+            vm = self.k8s_client.get_vm(vm_name, namespace)

            # Update the running state
            if 'spec' not in vm:
@@ -249,14 +193,7 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
            vm['spec']['running'] = running

            # Apply the patch
-            self.custom_object_client.patch_namespaced_custom_object(
-                group="kubevirt.io",
-                version="v1",
-                namespace=namespace,
-                plural="virtualmachines",
-                name=vm_name,
-                body=vm
-            )
+            self.k8s_client.patch_vm(vm_name,namespace,vm)
            return True

        except ApiException as e:
@@ -285,26 +222,12 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
                            " - proceeding with deletion but VM may auto-restart")
        start_creation_time = self.original_vmi.get('metadata', {}).get('creationTimestamp')
        start_time = time.time()
        try:
-            self.custom_object_client.delete_namespaced_custom_object(
-                group="kubevirt.io",
-                version="v1",
-                namespace=namespace,
-                plural="virtualmachineinstances",
-                name=vm_name
-            )
-        except ApiException as e:
-            if e.status == 404:
-                logging.warning(f"VMI {vm_name} not found during deletion")
-                return 1
-            else:
-                logging.error(f"API error during VMI deletion: {e}")
-                return 1
+            self.k8s_client.delete_vmi(vm_name, namespace)

            # Wait for the VMI to be deleted
            while time.time() - start_time < timeout:
-                deleted_vmi = self.get_vmi(vm_name, namespace)
+                deleted_vmi = self.k8s_client.get_vmi(vm_name, namespace)
                if deleted_vmi:
                    if start_creation_time != deleted_vmi.get('metadata', {}).get('creationTimestamp'):
                        logging.info(f"VMI {vm_name} successfully recreated")
@@ -315,13 +238,13 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
                time.sleep(1)

            logging.error(f"Timed out waiting for VMI {vm_name} to be deleted")
-            self.pods_status.unrecovered = self.affected_pod
+            self.pods_status.unrecovered.append(self.affected_pod)
            return 1

        except Exception as e:
            logging.error(f"Error deleting VMI {vm_name}: {e}")
            log_exception(e)
-            self.pods_status.unrecovered = self.affected_pod
+            self.pods_status.unrecovered.append(self.affected_pod)
            return 1

    def wait_for_running(self, vm_name: str, namespace: str, timeout: int = 120) -> int:
@@ -329,7 +252,7 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
        while time.time() - start_time < timeout:

            # Check current state once since we've already waited for the duration
-            vmi = self.get_vmi(vm_name, namespace)
+            vmi = self.k8s_client.get_vmi(vm_name, namespace)

            if vmi:
                if vmi.get('status', {}).get('phase') == "Running":
@@ -370,13 +293,7 @@ class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
                del metadata[field]

            # Create the VMI
-            self.custom_object_client.create_namespaced_custom_object(
-                group="kubevirt.io",
-                version="v1",
-                namespace=namespace,
-                plural="virtualmachineinstances",
-                body=vmi_dict
-            )
+            self.k8s_client.create_vmi(vm_name, namespace, vmi_dict)
            logging.info(f"Successfully recreated VMI {vm_name}")

            # Wait for VMI to start running
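The refactor above moves every raw custom-objects call behind helper methods on the shared k8s client (get_vm, get_vmi, get_vmis, patch_vm, delete_vmi, create_vmi). A hedged sketch of one such helper, mirroring the inline get_vmi body the plugin previously carried; the real krkn-lib wrapper may differ:

# Sketch only: a get_vmi helper over the kubevirt.io/v1 custom objects API.
from typing import Dict, Optional
from kubernetes.client.rest import ApiException

def get_vmi(custom_object_client, name: str, namespace: str) -> Optional[Dict]:
    try:
        return custom_object_client.get_namespaced_custom_object(
            group="kubevirt.io",
            version="v1",
            namespace=namespace,
            plural="virtualmachineinstances",
            name=name,
        )
    except ApiException as e:
        if e.status == 404:
            return None  # VMI not found: callers treat None as "already gone"
        raise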
@@ -7,7 +7,6 @@ from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import get_yaml_item_value

-from krkn import cerberus, utils
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
from krkn.scenario_plugins.managed_cluster.common_functions import get_managedcluster
from krkn.scenario_plugins.managed_cluster.scenarios import Scenarios
@@ -18,7 +17,6 @@ class ManagedClusterScenarioPlugin(AbstractScenarioPlugin):
        self,
        run_uuid: str,
        scenario: str,
-        krkn_config: dict[str, any],
        lib_telemetry: KrknTelemetryOpenshift,
        scenario_telemetry: ScenarioTelemetry,
    ) -> int:
@@ -38,8 +36,6 @@ class ManagedClusterScenarioPlugin(AbstractScenarioPlugin):
                    managedcluster_scenario_object,
                    lib_telemetry.get_lib_kubernetes(),
                )
-            end_time = int(time.time())
-            cerberus.get_status(krkn_config, start_time, end_time)
        except Exception as e:
            logging.error(
                "ManagedClusterScenarioPlugin exiting due to Exception %s"
@@ -1,6 +1,5 @@
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
from krkn.scenario_plugins.native.plugins import PLUGINS
-from krkn_lib.k8s.pods_monitor_pool import PodsMonitorPool
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from typing import Any
@@ -13,7 +12,6 @@ class NativeScenarioPlugin(AbstractScenarioPlugin):
        self,
        run_uuid: str,
        scenario: str,
-        krkn_config: dict[str, any],
        lib_telemetry: KrknTelemetryOpenshift,
        scenario_telemetry: ScenarioTelemetry,
    ) -> int:
@@ -22,13 +20,11 @@ class NativeScenarioPlugin(AbstractScenarioPlugin):
            PLUGINS.run(
                scenario,
                lib_telemetry.get_lib_kubernetes().get_kubeconfig_path(),
-                krkn_config,
                run_uuid,
            )

        except Exception as e:
            logging.error("NativeScenarioPlugin exiting due to Exception %s" % e)
-            pool.cancel()
            return 1
        else:
            return 0
@@ -1,141 +0,0 @@
import logging
import requests
import sys
import json


def get_status(config, start_time, end_time):
    """
    Function to get Cerberus status

    Args:
        config
            - Kraken config dictionary

        start_time
            - The time when chaos is injected

        end_time
            - The time when chaos is removed

    Returns:
        Cerberus status
    """

    cerberus_status = True
    check_application_routes = False
    application_routes_status = True
    if config["cerberus"]["cerberus_enabled"]:
        cerberus_url = config["cerberus"]["cerberus_url"]
        check_application_routes = config["cerberus"]["check_applicaton_routes"]
        if not cerberus_url:
            logging.error("url where Cerberus publishes True/False signal is not provided.")
            sys.exit(1)
        cerberus_status = requests.get(cerberus_url, timeout=60).content
        cerberus_status = True if cerberus_status == b"True" else False

        # Fail if the application routes monitored by cerberus experience downtime during the chaos
        if check_application_routes:
            application_routes_status, unavailable_routes = application_status(cerberus_url, start_time, end_time)
            if not application_routes_status:
                logging.error(
                    "Application routes: %s monitored by cerberus encountered downtime during the run, failing"
                    % unavailable_routes
                )
            else:
                logging.info("Application routes being monitored didn't encounter any downtime during the run!")

        if not cerberus_status:
            logging.error(
                "Received a no-go signal from Cerberus, looks like "
                "the cluster is unhealthy. Please check the Cerberus "
                "report for more details. Test failed."
            )

        if not application_routes_status or not cerberus_status:
            sys.exit(1)
        else:
            logging.info("Received a go signal from Ceberus, the cluster is healthy. " "Test passed.")
    return cerberus_status


def publish_kraken_status(config, failed_post_scenarios, start_time, end_time):
    """
    Function to publish Kraken status to Cerberus

    Args:
        config
            - Kraken config dictionary

        failed_post_scenarios
            - String containing the failed post scenarios

        start_time
            - The time when chaos is injected

        end_time
            - The time when chaos is removed
    """

    cerberus_status = get_status(config, start_time, end_time)
    if not cerberus_status:
        if failed_post_scenarios:
            if config["kraken"]["exit_on_failure"]:
                logging.info(
                    "Cerberus status is not healthy and post action scenarios " "are still failing, exiting kraken run"
                )
                sys.exit(1)
            else:
                logging.info("Cerberus status is not healthy and post action scenarios " "are still failing")
    else:
        if failed_post_scenarios:
            if config["kraken"]["exit_on_failure"]:
                logging.info(
                    "Cerberus status is healthy but post action scenarios " "are still failing, exiting kraken run"
                )
                sys.exit(1)
            else:
                logging.info("Cerberus status is healthy but post action scenarios " "are still failing")


def application_status(cerberus_url, start_time, end_time):
    """
    Function to check application availability

    Args:
        cerberus_url
            - url where Cerberus publishes True/False signal

        start_time
            - The time when chaos is injected

        end_time
            - The time when chaos is removed

    Returns:
        Application status and failed routes
    """

    if not cerberus_url:
        logging.error("url where Cerberus publishes True/False signal is not provided.")
        sys.exit(1)
    else:
        duration = (end_time - start_time) / 60
        url = cerberus_url + "/" + "history" + "?" + "loopback=" + str(duration)
        logging.info("Scraping the metrics for the test duration from cerberus url: %s" % url)
        try:
            failed_routes = []
            status = True
            metrics = requests.get(url, timeout=60).content
            metrics_json = json.loads(metrics)
            for entry in metrics_json["history"]["failures"]:
                if entry["component"] == "route":
                    name = entry["name"]
                    failed_routes.append(name)
                    status = False
                else:
                    continue
        except Exception as e:
            logging.error("Failed to scrape metrics from cerberus API at %s: %s" % (url, e))
            sys.exit(1)
    return status, set(failed_routes)
@@ -5,10 +5,10 @@ import time
import sys
import os
import re
+import random
from traceback import format_exc
from jinja2 import Environment, FileSystemLoader
from . import kubernetes_functions as kube_helper
-from . import cerberus
import typing
from arcaflow_plugin_sdk import validation, plugin
from kubernetes.client.api.core_v1_api import CoreV1Api as CoreV1Api
@@ -28,6 +28,14 @@ class NetworkScenarioConfig:
        },
    )

+    image: typing.Annotated[str, validation.min(1)] = field(
+        default="quay.io/krkn-chaos/krkn:tools",
+        metadata={
+            "name": "Image",
+            "description": "Image of krkn tools to run"
+        }
+    )
+
    label_selector: typing.Annotated[
        typing.Optional[str], validation.required_if_not("node_interface_name")
    ] = field(
@@ -91,13 +99,13 @@ class NetworkScenarioConfig:
        default=None,
        metadata={
            "name": "Network Parameters",
-            "description": "The network filters that are applied on the interface. "
-            "The currently supported filters are latency, "
-            "loss and bandwidth",
-        },
+            "description":
+                "The network filters that are applied on the interface. "
+                "The currently supported filters are latency, "
+                "loss and bandwidth"
+        }
    )


@dataclass
class NetworkScenarioSuccessOutput:
    filter_direction: str = field(
@@ -142,7 +150,7 @@ class NetworkScenarioErrorOutput:
    )


-def get_default_interface(node: str, pod_template, cli: CoreV1Api) -> str:
+def get_default_interface(node: str, pod_template, cli: CoreV1Api, image: str) -> str:
    """
    Function that returns a random interface from a node
@@ -160,14 +168,14 @@ def get_default_interface(node: str, pod_template, cli: CoreV1Api) -> str:
    Returns:
        Default interface (string) belonging to the node
    """

-    pod_body = yaml.safe_load(pod_template.render(nodename=node))
+    pod_name_regex = str(random.randint(0, 10000))
+    pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
    logging.info("Creating pod to query interface on node %s" % node)
    kube_helper.create_pod(cli, pod_body, "default", 300)

+    pod_name = f"fedtools-{pod_name_regex}"
    try:
        cmd = ["ip", "r"]
-        output = kube_helper.exec_cmd_in_pod(cli, cmd, "fedtools", "default")
+        output = kube_helper.exec_cmd_in_pod(cli, cmd, pod_name, "default")

        if not output:
            logging.error("Exception occurred while executing command in pod")
@@ -183,13 +191,13 @@ def get_default_interface(node: str, pod_template, cli: CoreV1Api) -> str:

    finally:
        logging.info("Deleting pod to query interface on node")
-        kube_helper.delete_pod(cli, "fedtools", "default")
+        kube_helper.delete_pod(cli, pod_name, "default")

    return interfaces


def verify_interface(
-    input_interface_list: typing.List[str], node: str, pod_template, cli: CoreV1Api
+    input_interface_list: typing.List[str], node: str, pod_template, cli: CoreV1Api, image: str
) -> typing.List[str]:
    """
    Function that verifies whether a list of interfaces is present in the node.
@@ -212,13 +220,15 @@ def verify_interface(
    Returns:
        The interface list for the node
    """
-    pod_body = yaml.safe_load(pod_template.render(nodename=node))
+    pod_name_regex = str(random.randint(0, 10000))
+    pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
    logging.info("Creating pod to query interface on node %s" % node)
    kube_helper.create_pod(cli, pod_body, "default", 300)
+    pod_name = f"fedtools-{pod_name_regex}"
    try:
        if input_interface_list == []:
            cmd = ["ip", "r"]
-            output = kube_helper.exec_cmd_in_pod(cli, cmd, "fedtools", "default")
+            output = kube_helper.exec_cmd_in_pod(cli, cmd, pod_name, "default")

            if not output:
                logging.error("Exception occurred while executing command in pod")
@@ -234,7 +244,7 @@ def verify_interface(

        else:
            cmd = ["ip", "-br", "addr", "show"]
-            output = kube_helper.exec_cmd_in_pod(cli, cmd, "fedtools", "default")
+            output = kube_helper.exec_cmd_in_pod(cli, cmd, pod_name, "default")

            if not output:
                logging.error("Exception occurred while executing command in pod")
@@ -257,7 +267,7 @@ def verify_interface(
        )
    finally:
        logging.info("Deleting pod to query interface on node")
-        kube_helper.delete_pod(cli, "fedtools", "default")
+        kube_helper.delete_pod(cli, pod_name, "default")

    return input_interface_list
@@ -268,6 +278,7 @@ def get_node_interfaces(
    instance_count: int,
    pod_template,
    cli: CoreV1Api,
+    image: str
) -> typing.Dict[str, typing.List[str]]:
    """
    Function that is used to process the input dictionary with the nodes and
@@ -309,7 +320,7 @@ def get_node_interfaces(
        nodes = kube_helper.get_node(None, label_selector, instance_count, cli)
        node_interface_dict = {}
        for node in nodes:
-            node_interface_dict[node] = get_default_interface(node, pod_template, cli)
+            node_interface_dict[node] = get_default_interface(node, pod_template, cli, image)
    else:
        node_name_list = node_interface_dict.keys()
        filtered_node_list = []
@@ -321,7 +332,7 @@ def get_node_interfaces(

        for node in filtered_node_list:
            node_interface_dict[node] = verify_interface(
-                node_interface_dict[node], node, pod_template, cli
+                node_interface_dict[node], node, pod_template, cli, image
            )

    return node_interface_dict
@@ -337,6 +348,7 @@ def apply_ingress_filter(
    cli: CoreV1Api,
    create_interfaces: bool = True,
    param_selector: str = "all",
+    image: str = "quay.io/krkn-chaos/krkn:tools",
) -> str:
    """
    Function that applies the filters to shape incoming traffic to
@@ -382,14 +394,14 @@ def apply_ingress_filter(
        network_params = {param_selector: cfg.network_params[param_selector]}

    if create_interfaces:
-        create_virtual_interfaces(cli, interface_list, node, pod_template)
+        create_virtual_interfaces(cli, interface_list, node, pod_template, image)

    exec_cmd = get_ingress_cmd(
        interface_list, network_params, duration=cfg.test_duration
    )
    logging.info("Executing %s on node %s" % (exec_cmd, node))
    job_body = yaml.safe_load(
-        job_template.render(jobname=str(hash(node))[:5], nodename=node, cmd=exec_cmd)
+        job_template.render(jobname=str(hash(node))[:5], nodename=node, image=image, cmd=exec_cmd)
    )
    api_response = kube_helper.create_job(batch_cli, job_body)
@@ -400,7 +412,7 @@ def apply_ingress_filter(


def create_virtual_interfaces(
-    cli: CoreV1Api, interface_list: typing.List[str], node: str, pod_template
+    cli: CoreV1Api, interface_list: typing.List[str], node: str, pod_template, image: str
) -> None:
    """
    Function that creates a privileged pod and uses it to create
@@ -421,20 +433,22 @@ def create_virtual_interfaces(
        - The YAML template used to instantiate a pod to create
          virtual interfaces on the node
    """
-    pod_body = yaml.safe_load(pod_template.render(nodename=node))
+    pod_name_regex = str(random.randint(0, 10000))
+    pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
    kube_helper.create_pod(cli, pod_body, "default", 300)
    logging.info(
        "Creating {0} virtual interfaces on node {1} using a pod".format(
            len(interface_list), node
        )
    )
-    create_ifb(cli, len(interface_list), "modtools")
+    pod_name = f"modtools-{pod_name_regex}"
+    create_ifb(cli, len(interface_list), pod_name)
    logging.info("Deleting pod used to create virtual interfaces")
-    kube_helper.delete_pod(cli, "modtools", "default")
+    kube_helper.delete_pod(cli, pod_name, "default")


def delete_virtual_interfaces(
-    cli: CoreV1Api, node_list: typing.List[str], pod_template
+    cli: CoreV1Api, node_list: typing.List[str], pod_template, image: str
):
    """
    Function that creates a privileged pod and uses it to delete all
@@ -457,11 +471,13 @@ def delete_virtual_interfaces(
    """

    for node in node_list:
-        pod_body = yaml.safe_load(pod_template.render(nodename=node))
+        pod_name_regex = str(random.randint(0, 10000))
+        pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
        kube_helper.create_pod(cli, pod_body, "default", 300)
        logging.info("Deleting all virtual interfaces on node {0}".format(node))
-        delete_ifb(cli, "modtools")
-        kube_helper.delete_pod(cli, "modtools", "default")
+        pod_name = f"modtools-{pod_name_regex}"
+        delete_ifb(cli, pod_name)
+        kube_helper.delete_pod(cli, pod_name, "default")


def create_ifb(cli: CoreV1Api, number: int, pod_name: str):
@@ -700,7 +716,7 @@ def network_chaos(
    pod_interface_template = env.get_template("pod_interface.j2")
    pod_module_template = env.get_template("pod_module.j2")
    cli, batch_cli = kube_helper.setup_kubernetes(cfg.kubeconfig_path)
-
+    test_image = cfg.image
    logging.info("Starting Ingress Network Chaos")
    try:
        node_interface_dict = get_node_interfaces(
@@ -709,6 +725,7 @@ def network_chaos(
            cfg.instance_count,
            pod_interface_template,
            cli,
+            test_image
        )
    except Exception:
        return "error", NetworkScenarioErrorOutput(format_exc())
@@ -726,6 +743,7 @@ def network_chaos(
                        job_template,
                        batch_cli,
                        cli,
+                        test_image
                    )
                )
            logging.info("Waiting for parallel job to finish")
@@ -746,6 +764,7 @@ def network_chaos(
                            cli,
                            create_interfaces=create_interfaces,
                            param_selector=param,
+                            image=test_image
                        )
                    )
                logging.info("Waiting for serial job to finish")
@@ -753,8 +772,7 @@ def network_chaos(
                logging.info("Deleting jobs")
                delete_jobs(cli, batch_cli, job_list[:])
                job_list = []
                logging.info("Waiting for wait_duration : %ss" % cfg.wait_duration)
                time.sleep(cfg.wait_duration)
-
                create_interfaces = False
            else:
@@ -772,6 +790,6 @@ def network_chaos(
        logging.error("Ingress Network Chaos exiting due to Exception - %s" % e)
        return "error", NetworkScenarioErrorOutput(format_exc())
    finally:
-        delete_virtual_interfaces(cli, node_interface_dict.keys(), pod_module_template)
+        delete_virtual_interfaces(cli, node_interface_dict.keys(), pod_module_template, test_image)
        logging.info("Deleting jobs(if any)")
        delete_jobs(cli, batch_cli, job_list[:])
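The random-suffix threading above exists so concurrent chaos runs no longer collide on fixed helper-pod names like "fedtools". A condensed sketch of the render-then-address pattern (template variables as in pod_interface.j2 below; the loader path is an assumption):

# Sketch only: render the helper pod with a random suffix and a configurable
# image, then address it by the same derived name.
import random
import yaml
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"))  # assumed template dir
pod_template = env.get_template("pod_interface.j2")

suffix = str(random.randint(0, 10000))
pod_body = yaml.safe_load(
    pod_template.render(regex_name=suffix, nodename="worker-0",
                        image="quay.io/krkn-chaos/krkn:tools")
)
pod_name = f"fedtools-{suffix}"  # must match metadata.name in the template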
@@ -9,7 +9,7 @@ spec:
      hostNetwork: true
      containers:
      - name: networkchaos
-        image: docker.io/fedora/tools
+        image: {{image}}
        command: ["/bin/sh", "-c", "{{cmd}}"]
        securityContext:
          privileged: true
@@ -22,4 +22,4 @@ spec:
          hostPath:
            path: /lib/modules
      restartPolicy: Never
-  backoffLimit: 0
\ No newline at end of file
+  backoffLimit: 0
@@ -1,13 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
-  name: fedtools
+  name: fedtools-{{regex_name}}
spec:
  hostNetwork: true
  nodeName: {{nodename}}
  containers:
  - name: fedtools
-    image: docker.io/fedora/tools
+    image: {{image}}
    command:
    - /bin/sh
    - -c
||||
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user