Mirror of https://github.com/krkn-chaos/krkn.git (synced 2026-02-18 20:09:55 +00:00)

Compare commits (60 commits)
| SHA1 |
|---|
| 6629c7ec33 |
| fb6af04b09 |
| dc1215a61b |
| f74aef18f8 |
| 166204e3c5 |
| fc7667aef1 |
| 3eea42770f |
| 77a46e3869 |
| b801308d4a |
| 97f4c1fd9c |
| c54390d8b1 |
| 543729b18a |
| a0ea4dc749 |
| a5459792ef |
| d434bb26fa |
| fee41d404e |
| 8663ee8893 |
| a072f0306a |
| 8221392356 |
| 671fc581dd |
| 11508ce017 |
| 0d78139fb6 |
| a3baffe8ee |
| 438b08fcd5 |
| 9b930a02a5 |
| 194e3b87ee |
| 8c05e44c23 |
| 88f8cf49f1 |
| 015ba4d90d |
| 26fdbef144 |
| d77e6dc79c |
| 2885645e77 |
| 84169e2d4e |
| 05bc404d32 |
| e8fd432fc5 |
| ec05675e3a |
| c91648d35c |
| 24aa9036b0 |
| 816363d151 |
| 90c52f907f |
| 4f250c9601 |
| 6480adc00a |
| 5002f210ae |
| 62c5afa9a2 |
| c109fc0b17 |
| fff675f3dd |
| c125e5acf7 |
| ca6995a1a1 |
| 50cf91ac9e |
| 11069c6982 |
| 106d9bf1ae |
| 17f832637c |
| 0e5c8c55a4 |
| 9d9a6f9b80 |
| f8fe2ae5b7 |
| 77b1dd32c7 |
| 9df727ccf5 |
| 70c8fec705 |
| 0731144a6b |
| 9337052e7b |
.github/CODEOWNERS (vendored, new file, 1 line)

@@ -0,0 +1 @@
* @paigerube14 @tsebastiani @chaitanyaenr
.github/ISSUE_TEMPLATE/bug_report.md (vendored, new file, 43 lines)

@@ -0,0 +1,43 @@
---
name: Bug report
about: Create a report for an issue
title: "[BUG]"
labels: bug
---

# Bug Description

## **Describe the bug**

A clear and concise description of what the bug is.

## **To Reproduce**

Any specific steps used to reproduce the behavior.

### Scenario File
Scenario file(s) that were specified in your config file (confidential information can be starred out with *)
```yaml
<config>
```

### Config File
Config file you used when the error was seen (the default is config/config.yaml)
```yaml
<config>
```

## **Expected behavior**

A clear and concise description of what you expected to happen.

## **Krkn Output**

Krkn output to help show your problem.

## **Additional context**

Add any other context about the problem.
.github/ISSUE_TEMPLATE/feature.md (vendored, new file, 16 lines)

@@ -0,0 +1,16 @@
---
name: New Feature Request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to see added/changed. Ex. new parameter in [xxx] scenario, new scenario that does [xxx]

**Additional context**
Add any other context about the feature request here.
.github/PULL_REQUEST_TEMPLATE.md (vendored)

@@ -1,10 +1,27 @@
## Type of change

- [ ] Refactor
- [ ] New feature
- [ ] Bug fix
- [ ] Optimization

## Description
<!-- Provide a brief description of the changes made in this PR. -->

## Related Tickets & Documents

- Related Issue #
- Closes #

## Documentation
- [ ] **Is documentation needed for this update?**

If checked, a documentation PR must be created and merged in the [website repository](https://github.com/krkn-chaos/website/).

## Related Documentation PR (if applicable)
<!-- Add the link to the corresponding documentation PR in the website repository -->

## Checklist before requesting a review

- [ ] I have performed a self-review of my code.
- [ ] If it is a core feature, I have added thorough tests.
.github/workflows/release.yml (vendored)

@@ -16,6 +16,7 @@ jobs:
          PREVIOUS_TAG=$(git tag --sort=-creatordate | sed -n '2 p')
          echo $PREVIOUS_TAG
          echo "PREVIOUS_TAG=$PREVIOUS_TAG" >> "$GITHUB_ENV"

      - name: generate release notes from template
        id: release-notes
        env:
@@ -45,3 +46,15 @@ jobs:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh release create ${{ github.ref_name }} --title "${{ github.ref_name }}" -F release-notes.md

      - name: Install Syft
        run: |
          curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sudo sh -s -- -b /usr/local/bin

      - name: Generate SBOM
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          syft . --scope all-layers --output cyclonedx-json > sbom.json
          echo "SBOM generated successfully!"
          gh release upload ${{ github.ref_name }} sbom.json
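The SBOM steps above can be reproduced locally; a minimal sketch, assuming Syft and jq are installed and you are in a checkout of the repo:

```bash
# Generate a CycloneDX SBOM for the working tree, as the workflow does
syft . --scope all-layers --output cyclonedx-json > sbom.json

# Sanity check: count the components Syft recorded
jq '.components | length' sbom.json
```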
.github/workflows/tests.yml (vendored)

@@ -14,34 +14,21 @@ jobs:
        uses: actions/checkout@v3
      - name: Create multi-node KinD cluster
        uses: redhat-chaos/actions/kind@main
      - name: Install Helm & add repos
        run: |
          curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
          helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
          helm repo add stable https://charts.helm.sh/stable
          helm repo update
      - name: Deploy prometheus & Port Forwarding
        uses: redhat-chaos/actions/prometheus@main
      - name: Deploy Elasticsearch
        with:
          ELASTIC_PORT: ${{ env.ELASTIC_PORT }}
          RUN_ID: ${{ github.run_id }}
        uses: redhat-chaos/actions/elastic@main
      - name: Download elastic password
        uses: actions/download-artifact@v4
        with:
          name: elastic_password_${{ github.run_id }}
      - name: Set elastic password on env
        run: |
          kubectl create namespace prometheus-k8s
          helm install \
            --wait --timeout 360s \
            kind-prometheus \
            prometheus-community/kube-prometheus-stack \
            --namespace prometheus-k8s \
            --set prometheus.service.nodePort=30000 \
            --set prometheus.service.type=NodePort \
            --set grafana.service.nodePort=31000 \
            --set grafana.service.type=NodePort \
            --set alertmanager.service.nodePort=32000 \
            --set alertmanager.service.type=NodePort \
            --set prometheus-node-exporter.service.nodePort=32001 \
            --set prometheus-node-exporter.service.type=NodePort \
            --set prometheus.prometheusSpec.maximumStartupDurationSeconds=300

          SELECTOR=`kubectl -n prometheus-k8s get service kind-prometheus-kube-prome-prometheus -o wide --no-headers=true | awk '{ print $7 }'`
          POD_NAME=`kubectl -n prometheus-k8s get pods --selector="$SELECTOR" --no-headers=true | awk '{ print $1 }'`
          kubectl -n prometheus-k8s port-forward $POD_NAME 9090:9090 &
          sleep 5
          ELASTIC_PASSWORD=$(cat elastic_password.txt)
          echo "ELASTIC_PASSWORD=$ELASTIC_PASSWORD" >> "$GITHUB_ENV"
      - name: Install Python
        uses: actions/setup-python@v4
        with:
@@ -55,6 +42,11 @@ jobs:

      - name: Deploy test workloads
        run: |
          es_pod_name=$(kubectl get pods -l "app=elasticsearch-master" -o name)
          echo "POD_NAME: $es_pod_name"
          kubectl --namespace default port-forward $es_pod_name 9200 &
          prom_name=$(kubectl get pods -n monitoring -l "app.kubernetes.io/name=prometheus" -o name)
          kubectl --namespace monitoring port-forward $prom_name 9090 &
          kubectl apply -f CI/templates/outage_pod.yaml
          kubectl wait --for=condition=ready pod -l scenario=outage --timeout=300s
          kubectl apply -f CI/templates/container_scenario_pod.yaml
@@ -79,16 +71,23 @@
          yq -i '.kraken.port="8081"' CI/config/common_test_config.yaml
          yq -i '.kraken.signal_address="0.0.0.0"' CI/config/common_test_config.yaml
          yq -i '.kraken.performance_monitoring="localhost:9090"' CI/config/common_test_config.yaml
          yq -i '.elastic.elastic_port=9200' CI/config/common_test_config.yaml
          yq -i '.elastic.elastic_url="https://localhost"' CI/config/common_test_config.yaml
          yq -i '.elastic.enable_elastic=True' CI/config/common_test_config.yaml
          yq -i '.elastic.password="${{env.ELASTIC_PASSWORD}}"' CI/config/common_test_config.yaml
          yq -i '.performance_monitoring.prometheus_url="http://localhost:9090"' CI/config/common_test_config.yaml
          echo "test_service_hijacking" > ./CI/tests/functional_tests
          echo "test_app_outages" >> ./CI/tests/functional_tests
          echo "test_container" >> ./CI/tests/functional_tests
          echo "test_pod" >> ./CI/tests/functional_tests
          echo "test_customapp_pod" >> ./CI/tests/functional_tests
          echo "test_namespace" >> ./CI/tests/functional_tests
          echo "test_net_chaos" >> ./CI/tests/functional_tests
          echo "test_time" >> ./CI/tests/functional_tests
          echo "test_cpu_hog" >> ./CI/tests/functional_tests
          echo "test_memory_hog" >> ./CI/tests/functional_tests
          echo "test_io_hog" >> ./CI/tests/functional_tests
          echo "test_pod_network_filter" >> ./CI/tests/functional_tests
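Those yq invocations all follow one in-place edit pattern (yq v4 syntax); a standalone sketch against a hypothetical scratch file:

```bash
# Scratch config to demonstrate the edits the workflow performs in place
cat > /tmp/demo.yaml <<'EOF'
kraken:
  port: "8080"
  signal_address: 127.0.0.1
EOF

# -i edits the file in place; the quoted value keeps it a YAML string
yq -i '.kraken.port="8081"' /tmp/demo.yaml
yq -i '.kraken.signal_address="0.0.0.0"' /tmp/demo.yaml
yq /tmp/demo.yaml   # print the edited document
```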

      # Push on main only steps + all other functional to collect coverage
@@ -106,6 +105,11 @@
          yq -i '.kraken.port="8081"' CI/config/common_test_config.yaml
          yq -i '.kraken.signal_address="0.0.0.0"' CI/config/common_test_config.yaml
          yq -i '.kraken.performance_monitoring="localhost:9090"' CI/config/common_test_config.yaml
          yq -i '.elastic.enable_elastic=True' CI/config/common_test_config.yaml
          yq -i '.elastic.password="${{env.ELASTIC_PASSWORD}}"' CI/config/common_test_config.yaml
          yq -i '.elastic.elastic_port=9200' CI/config/common_test_config.yaml
          yq -i '.elastic.elastic_url="https://localhost"' CI/config/common_test_config.yaml
          yq -i '.performance_monitoring.prometheus_url="http://localhost:9090"' CI/config/common_test_config.yaml
          yq -i '.telemetry.username="${{secrets.TELEMETRY_USERNAME}}"' CI/config/common_test_config.yaml
          yq -i '.telemetry.password="${{secrets.TELEMETRY_PASSWORD}}"' CI/config/common_test_config.yaml
          echo "test_telemetry" > ./CI/tests/functional_tests
@@ -113,12 +117,14 @@
          echo "test_app_outages" >> ./CI/tests/functional_tests
          echo "test_container" >> ./CI/tests/functional_tests
          echo "test_pod" >> ./CI/tests/functional_tests
          echo "test_customapp_pod" >> ./CI/tests/functional_tests
          echo "test_namespace" >> ./CI/tests/functional_tests
          echo "test_net_chaos" >> ./CI/tests/functional_tests
          echo "test_time" >> ./CI/tests/functional_tests
          echo "test_cpu_hog" >> ./CI/tests/functional_tests
          echo "test_memory_hog" >> ./CI/tests/functional_tests
          echo "test_io_hog" >> ./CI/tests/functional_tests
          echo "test_pod_network_filter" >> ./CI/tests/functional_tests

      # Final common steps
      - name: Run Functional tests
@@ -129,32 +135,38 @@
          cat ./CI/results.markdown >> $GITHUB_STEP_SUMMARY
          echo >> $GITHUB_STEP_SUMMARY
      - name: Upload CI logs
        if: ${{ success() || failure() }}
        uses: actions/upload-artifact@v4
        with:
          name: ci-logs
          path: CI/out
          if-no-files-found: error
      - name: Collect coverage report
        if: ${{ success() || failure() }}
        run: |
          python -m coverage html
          python -m coverage json
      - name: Publish coverage report to job summary
        if: ${{ success() || failure() }}
        run: |
          pip install html2text
          html2text --ignore-images --ignore-links -b 0 htmlcov/index.html >> $GITHUB_STEP_SUMMARY
      - name: Upload coverage data
        if: ${{ success() || failure() }}
        uses: actions/upload-artifact@v4
        with:
          name: coverage
          path: htmlcov
          if-no-files-found: error
      - name: Upload json coverage
        if: ${{ success() || failure() }}
        uses: actions/upload-artifact@v4
        with:
          name: coverage.json
          path: coverage.json
          if-no-files-found: error
      - name: Check CI results
        if: ${{ success() || failure() }}
        run: "! grep Fail CI/results.markdown"

  badge:
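The coverage steps reduce to a short local sequence; a sketch, assuming coverage.py and html2text are available:

```bash
# Each functional test appends (-a) into the same .coverage data file
python -m coverage run -a run_kraken.py -c CI/config/common_test_config.yaml

# Produce the two artifacts the workflow uploads
python -m coverage html   # writes htmlcov/
python -m coverage json   # writes coverage.json

# Render the HTML index as plain text, as the job-summary step does
html2text --ignore-images --ignore-links -b 0 htmlcov/index.html
```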
ADOPTERS.md

@@ -6,3 +6,4 @@ This is a list of organizations that have publicly acknowledged usage of Krkn an
|:-|:-|:-|:-|
| MarketAxess | 2024 | https://www.marketaxess.com/ | Kraken enables us to achieve our goal of increasing the reliability of our cloud products on Kubernetes. The tool allows us to automatically run various chaos scenarios, identify resilience and performance bottlenecks, and seamlessly restore the system to its original state once scenarios finish. These chaos scenarios include pod disruptions, node (EC2) outages, simulating availability zone (AZ) outages, and filling up storage spaces like EBS and EFS. The community is highly responsive to requests and works on expanding the tool's capabilities. MarketAxess actively contributes to the project, adding features such as the ability to leverage existing network ACLs and proposing several feature improvements to enhance test coverage. |
| Red Hat OpenShift | 2020 | https://www.redhat.com/ | Kraken is a highly reliable chaos testing tool used to ensure the quality and resiliency of Red Hat OpenShift. The engineering team runs all the test scenarios under Kraken on different cloud platforms on both self-managed and cloud services environments prior to the release of a new version of the product. The team also contributes to the Kraken project consistently which helps the test scenarios to keep up with the new features introduced to the product. Inclusion of this test coverage has contributed to gaining the trust of new and existing customers of the product. |
| IBM | 2023 | https://www.ibm.com/ | While working on AI for Chaos Testing at IBM Research, we closely collaborated with the Kraken (Krkn) team to advance intelligent chaos engineering. Our contributions included developing AI-enabled chaos injection strategies and integrating reinforcement learning (RL)-based fault search techniques into the Krkn tool, enabling it to identify and explore system vulnerabilities more efficiently. Kraken stands out as one of the most user-friendly and effective tools for chaos engineering, and the Kraken team’s deep technical involvement played a crucial role in the success of this collaboration—helping bridge cutting-edge AI research with practical, real-world system reliability testing. |
@@ -2,6 +2,8 @@ kraken:
    distribution: kubernetes # Distribution can be kubernetes or openshift.
    kubeconfig_path: ~/.kube/config # Path to kubeconfig.
    exit_on_failure: False # Exit when a post action scenario fails.
    auto_rollback: True # Enable auto rollback for scenarios.
    rollback_versions_directory: /tmp/kraken-rollback # Directory to store rollback version files.
    chaos_scenarios: # List of policies/chaos scenarios to load.
      - $scenario_type: # List of chaos pod scenarios to load.
        - $scenario_file
@@ -10,15 +12,16 @@ cerberus:
    cerberus_url: # When cerberus_enabled is set to True, provide the url where cerberus publishes go/no-go signal.

performance_monitoring:
    deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift.
    repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
    capture_metrics: False
    metrics_profile_path: config/metrics-aggregated.yaml
    prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
    uuid: # uuid for the run is generated by default if not set.
    enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error.
    alert_profile: config/alerts.yaml # Path to alert profile with the prometheus queries.
    enable_alerts: True # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
    enable_metrics: True
    alert_profile: config/alerts.yaml # Path or URL to alert profile with the prometheus queries
    metrics_profile: config/metrics-report.yaml
    check_critical_alerts: True # Checks prometheus for critical alerts firing post chaos and fails the run when found.

tunings:
    wait_duration: 6 # Duration to wait between each chaos scenario.
@@ -29,7 +32,7 @@ telemetry:
    api_url: https://yvnn4rfoi7.execute-api.us-west-2.amazonaws.com/test # telemetry service endpoint
    username: $TELEMETRY_USERNAME # telemetry service username
    password: $TELEMETRY_PASSWORD # telemetry service password
    prometheus_namespace: 'prometheus-k8s' # prometheus namespace
    prometheus_namespace: 'monitoring' # prometheus namespace
    prometheus_pod_name: 'prometheus-kind-prometheus-kube-prome-prometheus-0' # prometheus pod_name
    prometheus_container_name: 'prometheus'
    prometheus_backup: True # enables/disables prometheus data collection
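enable_alerts and check_critical_alerts both hinge on querying Prometheus; as a rough manual equivalent (an assumption about the mechanics, not krkn's exact code path), with Prometheus port-forwarded to localhost:9090:

```bash
# List firing critical alerts via the Prometheus HTTP API
curl -s http://localhost:9090/api/v1/alerts |
  jq -r '.data.alerts[]
         | select(.labels.severity == "critical" and .state == "firing")
         | .labels.alertname'
```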
@@ -8,9 +8,9 @@ spec:
  hostNetwork: true
  containers:
    - name: fedtools
      image: docker.io/fedora/tools
      image: quay.io/krkn-chaos/krkn:tools
      command:
        - /bin/sh
        - -c
        - |
          sleep infinity
          sleep infinity
CI/templates/pod_network_filter.yaml (new file, 29 lines)

@@ -0,0 +1,29 @@
apiVersion: v1
kind: Pod
metadata:
  name: pod-network-filter-test
  labels:
    app.kubernetes.io/name: pod-network-filter
spec:
  containers:
    - name: nginx
      image: quay.io/krkn-chaos/krkn-funtests:pod-network-filter
      ports:
        - containerPort: 5000
          name: pod-network-prt

---
apiVersion: v1
kind: Service
metadata:
  name: pod-network-filter-service
spec:
  selector:
    app.kubernetes.io/name: pod-network-filter
  type: NodePort
  ports:
    - name: pod-network-filter-svc
      protocol: TCP
      port: 80
      targetPort: pod-network-prt
      nodePort: 30037
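To exercise the manifest by hand on a KinD cluster (the CI job reaches it through a host port mapping; hitting node port 30037 directly, as shown here, assumes your cluster exposes it, e.g. via a KinD extraPortMapping):

```bash
kubectl apply -f CI/templates/pod_network_filter.yaml
kubectl wait --for=condition=ready pod/pod-network-filter-test --timeout=120s

# NodePort 30037 must be reachable from the host for this to return 200
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:30037/
```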
@@ -8,9 +8,9 @@ spec:
  hostNetwork: true
  containers:
    - name: fedtools
      image: docker.io/fedora/tools
      image: quay.io/krkn-chaos/krkn:tools
      command:
        - /bin/sh
        - -c
        - |
          sleep infinity
          sleep infinity
@@ -13,7 +13,12 @@ function functional_test_app_outage {
  export scenario_type="application_outages_scenarios"
  export scenario_file="scenarios/openshift/app_outage.yaml"
  export post_config=""

  kubectl get services -A

  kubectl get pods
  envsubst < CI/config/common_test_config.yaml > CI/config/app_outage.yaml
  cat $scenario_file
  python3 -m coverage run -a run_kraken.py -c CI/config/app_outage.yaml
  echo "App outage scenario test: Success"
}
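Every functional test stamps its own config from the shared template with envsubst; the pattern in isolation:

```bash
export scenario_type="application_outages_scenarios"
export scenario_file="scenarios/openshift/app_outage.yaml"
export post_config=""

# envsubst substitutes the $scenario_type / $scenario_file / $post_config
# placeholders in the template, yielding a per-test config
envsubst < CI/config/common_test_config.yaml > CI/config/app_outage.yaml
```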
@@ -7,7 +7,7 @@ trap finish EXIT


function functional_test_cpu_hog {
  yq -i '.node_selector="kubernetes.io/hostname=kind-worker2"' scenarios/kube/cpu-hog.yml
  yq -i '."node-selector"="kubernetes.io/hostname=kind-worker2"' scenarios/kube/cpu-hog.yml

  export scenario_type="hog_scenarios"
  export scenario_file="scenarios/kube/cpu-hog.yml"
CI/tests/test_customapp_pod.sh (new executable file, 18 lines)

@@ -0,0 +1,18 @@
set -xeEo pipefail

source CI/tests/common.sh

trap error ERR
trap finish EXIT

function functional_test_customapp_pod_node_selector {
  export scenario_type="pod_disruption_scenarios"
  export scenario_file="scenarios/openshift/customapp_pod.yaml"
  export post_config=""
  envsubst < CI/config/common_test_config.yaml > CI/config/customapp_pod_config.yaml

  python3 -m coverage run -a run_kraken.py -c CI/config/customapp_pod_config.yaml
  echo "Pod disruption with node_label_selector test: Success"
}

functional_test_customapp_pod_node_selector
@@ -5,12 +5,13 @@ source CI/tests/common.sh

trap error ERR
trap finish EXIT


function functional_test_io_hog {
  yq -i '.node_selector="kubernetes.io/hostname=kind-worker2"' scenarios/kube/io-hog.yml
  yq -i '."node-selector"="kubernetes.io/hostname=kind-worker2"' scenarios/kube/io-hog.yml
  export scenario_type="hog_scenarios"
  export scenario_file="scenarios/kube/io-hog.yml"
  export post_config=""

  cat $scenario_file
  envsubst < CI/config/common_test_config.yaml > CI/config/io_hog.yaml
  python3 -m coverage run -a run_kraken.py -c CI/config/io_hog.yaml
  echo "IO Hog: Success"
@@ -7,7 +7,7 @@ trap finish EXIT


function functional_test_memory_hog {
  yq -i '.node_selector="kubernetes.io/hostname=kind-worker2"' scenarios/kube/memory-hog.yml
  yq -i '."node-selector"="kubernetes.io/hostname=kind-worker2"' scenarios/kube/memory-hog.yml
  export scenario_type="hog_scenarios"
  export scenario_file="scenarios/kube/memory-hog.yml"
  export post_config=""
CI/tests/test_pod_network_filter.sh (new executable file, 62 lines)

@@ -0,0 +1,62 @@
function functional_pod_network_filter {
  export SERVICE_URL="http://localhost:8889"
  export scenario_type="network_chaos_ng_scenarios"
  export scenario_file="scenarios/kube/pod-network-filter.yml"
  export post_config=""
  envsubst < CI/config/common_test_config.yaml > CI/config/pod_network_filter.yaml
  yq -i '.[0].test_duration=10' scenarios/kube/pod-network-filter.yml
  yq -i '.[0].label_selector=""' scenarios/kube/pod-network-filter.yml
  yq -i '.[0].ingress=false' scenarios/kube/pod-network-filter.yml
  yq -i '.[0].egress=true' scenarios/kube/pod-network-filter.yml
  yq -i '.[0].target="pod-network-filter-test"' scenarios/kube/pod-network-filter.yml
  yq -i '.[0].protocols=["tcp"]' scenarios/kube/pod-network-filter.yml
  yq -i '.[0].ports=[443]' scenarios/kube/pod-network-filter.yml
  yq -i '.performance_monitoring.check_critical_alerts=False' CI/config/pod_network_filter.yaml

  ## Test webservice deployment
  kubectl apply -f ./CI/templates/pod_network_filter.yaml
  COUNTER=0
  while true
  do
    curl $SERVICE_URL
    EXITSTATUS=$?
    if [ "$EXITSTATUS" -eq "0" ]
    then
      break
    fi
    sleep 1
    COUNTER=$((COUNTER+1))
    [ $COUNTER -eq "100" ] && echo "maximum number of retries reached, test failed" && exit 1
  done

  cat scenarios/kube/pod-network-filter.yml

  python3 -m coverage run -a run_kraken.py -c CI/config/pod_network_filter.yaml > krkn_pod_network.out 2>&1 &
  PID=$!

  # wait until the dns resolution starts failing and the service returns 404
  DNS_FAILURE_STATUS=0
  while true
  do
    OUT_STATUS_CODE=$(curl -X GET -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL)
    if [ "$OUT_STATUS_CODE" -eq "404" ]
    then
      DNS_FAILURE_STATUS=404
    fi

    if [ "$DNS_FAILURE_STATUS" -eq "404" ] && [ "$OUT_STATUS_CODE" -eq "200" ]
    then
      echo "service restored"
      break
    fi
    COUNTER=$((COUNTER+1))
    [ $COUNTER -eq "100" ] && echo "maximum number of retries reached, test failed" && exit 1
    sleep 2
  done

  wait $PID
}

functional_pod_network_filter
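Both polling loops above share one shape: curl until the expected HTTP status appears, bounded by a retry budget. A reusable sketch of that shape:

```bash
# Poll a URL until it returns the expected HTTP status code, or give up.
# Usage: wait_for_status <url> <expected_code> <max_tries>
wait_for_status() {
  local url=$1 expected=$2 max=$3 code i
  for ((i = 0; i < max; i++)); do
    code=$(curl -X GET -s -o /dev/null -I -w "%{http_code}" "$url")
    [ "$code" -eq "$expected" ] && return 0
    sleep 2
  done
  echo "timed out waiting for HTTP $expected from $url" >&2
  return 1
}
```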
@@ -39,7 +39,7 @@ function functional_test_service_hijacking {
  export scenario_file="scenarios/kube/service_hijacking.yaml"
  export post_config=""
  envsubst < CI/config/common_test_config.yaml > CI/config/service_hijacking.yaml
  python3 -m coverage run -a run_kraken.py -c CI/config/service_hijacking.yaml > /dev/null 2>&1 &
  python3 -m coverage run -a run_kraken.py -c CI/config/service_hijacking.yaml > /tmp/krkn.log 2>&1 &
  PID=$!
  # Waiting for the hijacking to take effect
  COUNTER=0
@@ -100,8 +100,13 @@ function functional_test_service_hijacking
  [ "${PAYLOAD_PATCH_2//[$'\t\r\n ']}" == "${OUT_PATCH//[$'\t\r\n ']}" ] && echo "Step 2 PATCH Payload OK" || (echo "Step 2 PATCH Payload did not match. Test failed." && exit 1)
  [ "$OUT_STATUS_CODE" == "$STATUS_CODE_PATCH_2" ] && echo "Step 2 PATCH Status Code OK" || (echo "Step 2 PATCH status code did not match. Test failed." && exit 1)
  [ "$OUT_CONTENT" == "$TEXT_MIME" ] && echo "Step 2 PATCH MIME OK" || (echo "Step 2 PATCH MIME did not match. Test failed." && exit 1)

  wait $PID

  cat /tmp/krkn.log

  # now checking if the service has been restored correctly and nginx responds correctly
  curl -s $SERVICE_URL | grep nginx! && echo "BODY: Service restored!" || (echo "BODY: failed to restore service" && exit 1)
  OUT_STATUS_CODE=`curl -X GET -s -o /dev/null -I -w "%{http_code}" $SERVICE_URL`
GOVERNANCE.md (new file, 83 lines)

@@ -0,0 +1,83 @@

The governance model adopted here is heavily influenced by a set of CNCF projects, especially drawing reference from [Kubernetes governance](https://github.com/kubernetes/community/blob/master/governance.md). *For similar structures, some of the same wording from Kubernetes governance is borrowed to adhere to the originally construed meaning.*

## Principles

- **Open**: Krkn is an open source community.
- **Welcoming and respectful**: See [Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).
- **Transparent and accessible**: Work and collaboration should be done in public. Changes to the Krkn organization, Krkn code repositories, and CNCF-related activities (e.g. level, involvement, etc.) are done in public.
- **Merit**: Ideas and contributions are accepted according to their technical merit and alignment with project objectives, scope and design principles.

## Code of Conduct

Krkn follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md). Here is an excerpt:

> As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities.

## Maintainer Levels

### Contributor
Contributors contribute to the community. Anyone can become a contributor by participating in discussions, reporting bugs, or contributing code or documentation.

#### Responsibilities:

Be active in the community and adhere to the Code of Conduct.

Report bugs and suggest new features.

Contribute high-quality code and documentation.

### Member
Members are active contributors to the community. Members have demonstrated a strong understanding of the project's codebase and conventions.

#### Responsibilities:

Review pull requests for correctness, quality, and adherence to project standards.

Provide constructive and timely feedback to contributors.

Ensure that all contributions are well-tested and documented.

Work with maintainers to ensure a smooth and efficient release process.

### Maintainer
Maintainers are responsible for the overall health and direction of the project. They are long-standing contributors who have shown a deep commitment to the project's success.

#### Responsibilities:

Set the technical direction and vision for the project.

Manage releases and ensure the stability of the main branch.

Make decisions on feature inclusion and project priorities.

Mentor other contributors and help grow the community.

Resolve disputes and make final decisions when consensus cannot be reached.

### Owner
Owners have administrative access to the project and are the final decision-makers.

#### Responsibilities:

Manage the core team of maintainers and approvers.

Set the overall vision and strategy for the project.

Handle administrative tasks, such as managing the project's repository and other resources.

Represent the project in the broader open-source community.

# Credits
Sections of this document have been borrowed from [Kubernetes governance](https://github.com/kubernetes/community/blob/master/governance.md)
@@ -1,12 +1,34 @@
## Overview

This document contains a list of maintainers in this repo.
This file lists the maintainers and committers of the Krkn project.

In short, maintainers are people who are in charge of the maintenance of the Krkn project. Committers are active community members who have shown that they are committed to the continuous development of the project through ongoing engagement with the community.

For a detailed description of the roles, see the [Governance](./GOVERNANCE.md) page.

## Current Maintainers

| Maintainer | GitHub ID | Email |
|---|---|---|
| Ravi Elluri | [chaitanyaenr](https://github.com/chaitanyaenr) | nelluri@redhat.com |
| Pradeep Surisetty | [psuriset](https://github.com/psuriset) | psuriset@redhat.com |
| Paige Rubendall | [paigerube14](https://github.com/paigerube14) | prubenda@redhat.com |
| Tullio Sebastiani | [tsebastiani](https://github.com/tsebastiani) | tsebasti@redhat.com |

| Maintainer | GitHub ID | Email | Contribution Level |
|---|---|---|---|
| Ravi Elluri | [chaitanyaenr](https://github.com/chaitanyaenr) | nelluri@redhat.com | Owner |
| Pradeep Surisetty | [psuriset](https://github.com/psuriset) | psuriset@redhat.com | Owner |
| Paige Patton | [paigerube14](https://github.com/paigerube14) | prubenda@redhat.com | Maintainer |
| Tullio Sebastiani | [tsebastiani](https://github.com/tsebastiani) | tsebasti@redhat.com | Maintainer |
| Yogananth Subramanian | [yogananth-subramanian](https://github.com/yogananth-subramanian) | ysubrama@redhat.com | Maintainer |
| Sahil Shah | [shahsahil264](https://github.com/shahsahil264) | sahshah@redhat.com | Member |

Note: It is mandatory for all Krkn community members to follow our [Code of Conduct](./CODE_OF_CONDUCT.md)

## Contributor Ladder
This project follows a contributor ladder model, where contributors can take on more responsibilities as they gain experience and demonstrate their commitment to the project.
The roles are:
* Contributor: A contributor to the community, whether it be with code, docs or issues

* Member: A contributor who is active in the community and reviews pull requests.

* Maintainer: A contributor who is responsible for the overall health and direction of the project.

* Owner: A contributor who has administrative ownership of the project.
README.md

@@ -22,14 +22,8 @@ Kraken injects deliberate failures into Kubernetes clusters to check if it is re
Instructions on how to set up, configure and run Kraken can be found in the [documentation](https://krkn-chaos.dev/docs/).


### Blogs and other useful resources
- Blog post on introduction to Kraken: https://www.openshift.com/blog/introduction-to-kraken-a-chaos-tool-for-openshift/kubernetes
- Discussion and demo on how Kraken can be leveraged to ensure OpenShift is reliable, performant and scalable: https://www.youtube.com/watch?v=s1PvupI5sD0&ab_channel=OpenShift
- Blog post emphasizing the importance of making Chaos part of Performance and Scale runs to mimic the production environments: https://www.openshift.com/blog/making-chaos-part-of-kubernetes/openshift-performance-and-scalability-tests
- Blog post on findings from Chaos test runs: https://cloud.redhat.com/blog/openshift/kubernetes-chaos-stories
- Discussion with CNCF TAG App Delivery on Krkn workflow, features and addition to CNCF sandbox: [Github](https://github.com/cncf/sandbox/issues/44), [Tracker](https://github.com/cncf/tag-app-delivery/issues/465), [recording](https://www.youtube.com/watch?v=nXQkBFK_MWc&t=722s)
- Blog post on supercharging chaos testing using AI integration in Krkn: https://www.redhat.com/en/blog/supercharging-chaos-testing-using-ai
- Blog post announcing Krkn joining CNCF Sandbox: https://www.redhat.com/en/blog/krknchaos-joining-cncf-sandbox
### Blogs, podcasts and interviews
Additional resources, including blog posts, podcasts, and community interviews, can be found on the [website](https://krkn-chaos.dev/blog)


### Roadmap
RELEASE.md (new file, 55 lines)

@@ -0,0 +1,55 @@
### Release Protocol: The Community-First Cycle

This document outlines the project's release protocol, a methodology designed to ensure a responsive and transparent development process that is closely aligned with the needs of our users and contributors. This protocol is tailored for projects in their early stages, prioritizing agility and community feedback over a rigid, time-boxed schedule.

#### 1. Key Principles

* **Community as the Compass:** The primary driver for all development is feedback from our user and contributor community.
* **Prioritization by Impact:** Tasks are prioritized based on their impact on user experience, the urgency of bug fixes, and the value of community-contributed features.
* **Event-Driven Releases:** Releases are not bound by a fixed calendar. New versions are published when a significant body of work is complete, a critical issue is resolved, or a new feature is ready for adoption.
* **Transparency and Communication:** All development decisions, progress, and plans are communicated openly through our issue tracker, pull requests, and community channels.

#### 2. The Release Lifecycle

The release cycle is a continuous flow of activities rather than a series of sequential phases.

**2.1. Discovery & Prioritization**
* New features and bug fixes are identified through user feedback on our issue tracker, community discussions, and direct contributions.
* The core maintainers, in collaboration with the community, continuously evaluate and tag issues to create an open and dynamic backlog.

**2.2. Development & Code Review**
* Work is initiated based on the highest-priority items in the backlog.
* All code contributions are made via pull requests (PRs).
* PRs are reviewed by maintainers and other contributors to ensure code quality, adherence to project standards, and overall stability.

**2.3. Release Readiness**
A new release is considered ready when one of the following conditions is met:
* A major new feature has been completed and thoroughly tested.
* A critical security vulnerability or bug has been addressed.
* A sufficient number of smaller improvements and fixes have been merged, providing meaningful value to users.

**2.4. Versioning**
We adhere to [**Semantic Versioning 2.0.0**](https://semver.org/).
* **Major version (`X.y.z`)**: Reserved for releases that introduce breaking changes.
* **Minor version (`x.Y.z`)**: Used for new features or significant non-breaking changes.
* **Patch version (`x.y.Z`)**: Used for bug fixes and small, non-functional improvements.

#### 3. Roles and Responsibilities

* **Members:** The [core team](https://github.com/krkn-chaos/krkn/blob/main/MAINTAINERS.md) responsible for the project's health. Their duties include:
    * Reviewing pull requests.
    * Contributing code and documentation via pull requests.
    * Engaging in discussions and providing feedback.
* **Maintainers and Owners:** The [core team](https://github.com/krkn-chaos/krkn/blob/main/MAINTAINERS.md) responsible for the project's health. Their duties include:
    * Facilitating community discussions and prioritization.
    * Reviewing and merging pull requests.
    * Cutting and announcing official releases.
* **Contributors:** The community. Their duties include:
    * Reporting bugs and suggesting new features.
    * Contributing code and documentation via pull requests.
    * Engaging in discussions and providing feedback.

#### 4. Adoption and Future Evolution

This protocol is designed for the current stage of the project. As the project matures and the contributor base grows, the maintainers will evaluate the need for a more structured methodology to ensure continued scalability and stability.
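In practice, cutting a release under this scheme amounts to pushing a semver tag, which the release workflow shown earlier then acts on (assuming the workflow is tag-triggered); a sketch with a hypothetical version number:

```bash
# Hypothetical minor bump: 1.4.2 -> 1.5.0 for a non-breaking new feature
git tag -a v1.5.0 -m "krkn v1.5.0"
git push origin v1.5.0
```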
ROADMAP.md

@@ -2,11 +2,11 @@

Following is a list of enhancements that we are planning to work on adding support for in Krkn. Of course, any help/contributions are greatly appreciated.

- [ ] [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/krkn-chaos/krkn/issues/424)
- [x] [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/krkn-chaos/krkn/issues/424)
- [x] [Centralized storage for chaos experiments artifacts](https://github.com/krkn-chaos/krkn/issues/423)
- [ ] [Support for causing DNS outages](https://github.com/krkn-chaos/krkn/issues/394)
- [x] [Support for causing DNS outages](https://github.com/krkn-chaos/krkn/issues/394)
- [x] [Chaos recommender](https://github.com/krkn-chaos/krkn/tree/main/utils/chaos-recommender) to suggest scenarios having probability of impacting the service under test using profiling results
- [ ] Chaos AI integration to improve test coverage while reducing fault space to save costs and execution time
- [x] Chaos AI integration to improve test coverage while reducing fault space to save costs and execution time [krkn-chaos-ai](https://github.com/krkn-chaos/krkn-chaos-ai)
- [x] [Support for pod level network traffic shaping](https://github.com/krkn-chaos/krkn/issues/393)
- [ ] [Ability to visualize the metrics that are being captured by Kraken and stored in Elasticsearch](https://github.com/krkn-chaos/krkn/issues/124)
- [x] Support for running all the scenarios of Kraken on Kubernetes distribution - see https://github.com/krkn-chaos/krkn/issues/185, https://github.com/redhat-chaos/krkn/issues/186
@@ -14,3 +14,7 @@ Following are a list of enhancements that we are planning to work on adding supp
- [x] [Switch documentation references to Kubernetes](https://github.com/krkn-chaos/krkn/issues/495)
- [x] [OCP and Kubernetes functionalities segregation](https://github.com/krkn-chaos/krkn/issues/497)
- [x] [Krknctl - client for running Krkn scenarios with ease](https://github.com/krkn-chaos/krknctl)
- [x] [AI Chat bot to help get started with Krkn and commands](https://github.com/krkn-chaos/krkn-lightspeed)
- [ ] [Ability to roll back cluster to original state if chaos fails](https://github.com/krkn-chaos/krkn/issues/804)
- [ ] Add recovery time metrics to each scenario for better regression analysis
- [ ] [Add resiliency scoring to chaos scenarios run on cluster](https://github.com/krkn-chaos/krkn/issues/125)
@@ -1,52 +1,57 @@
kraken:
    kubeconfig_path: ~/.kube/config # Path to kubeconfig
    exit_on_failure: False # Exit when a post action scenario fails
    auto_rollback: True # Enable auto rollback for scenarios.
    rollback_versions_directory: /tmp/kraken-rollback # Directory to store rollback version files.
    publish_kraken_status: True # Can be accessed at http://0.0.0.0:8081
    signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
    signal_address: 0.0.0.0 # Signal listening address
    port: 8081 # Signal port
    chaos_scenarios:
        # List of policies/chaos scenarios to load
        - hog_scenarios:
            - scenarios/kube/cpu-hog.yml
            - scenarios/kube/memory-hog.yml
            - scenarios/kube/io-hog.yml
        - application_outages_scenarios:
            - scenarios/openshift/app_outage.yaml
        - container_scenarios: # List of chaos pod scenarios to load
            - scenarios/openshift/container_etcd.yml
        - pod_network_scenarios:
            - scenarios/openshift/network_chaos_ingress.yml
            - scenarios/openshift/pod_network_outage.yml
        - pod_disruption_scenarios:
            - scenarios/openshift/etcd.yml
            - scenarios/openshift/regex_openshift_pod_kill.yml
            - scenarios/openshift/prom_kill.yml
            - scenarios/openshift/openshift-apiserver.yml
            - scenarios/openshift/openshift-kube-apiserver.yml
        - node_scenarios: # List of chaos node scenarios to load
            - scenarios/openshift/aws_node_scenarios.yml
            - scenarios/openshift/vmware_node_scenarios.yml
            - scenarios/openshift/ibmcloud_node_scenarios.yml
        - time_scenarios: # List of chaos time scenarios to load
            - scenarios/openshift/time_scenarios_example.yml
        - cluster_shut_down_scenarios:
            - scenarios/openshift/cluster_shut_down_scenario.yml
        - service_disruption_scenarios:
            - scenarios/openshift/regex_namespace.yaml
            - scenarios/openshift/ingress_namespace.yaml
        - zone_outages_scenarios:
            - scenarios/openshift/zone_outage.yaml
        - pvc_scenarios:
            - scenarios/openshift/pvc_scenario.yaml
        - network_chaos_scenarios:
            - scenarios/openshift/network_chaos.yaml
        - service_hijacking_scenarios:
            - scenarios/kube/service_hijacking.yaml
        - syn_flood_scenarios:
            - scenarios/kube/syn_flood.yaml
        - network_chaos_ng_scenarios:
            - scenarios/kube/network-filter.yml
        # List of policies/chaos scenarios to load
        - hog_scenarios:
            - scenarios/kube/cpu-hog.yml
            - scenarios/kube/memory-hog.yml
            - scenarios/kube/io-hog.yml
        - application_outages_scenarios:
            - scenarios/openshift/app_outage.yaml
        - container_scenarios: # List of chaos pod scenarios to load
            - scenarios/openshift/container_etcd.yml
        - pod_network_scenarios:
            - scenarios/openshift/network_chaos_ingress.yml
            - scenarios/openshift/pod_network_outage.yml
        - pod_disruption_scenarios:
            - scenarios/openshift/etcd.yml
            - scenarios/openshift/regex_openshift_pod_kill.yml
            - scenarios/openshift/prom_kill.yml
            - scenarios/openshift/openshift-apiserver.yml
            - scenarios/openshift/openshift-kube-apiserver.yml
        - node_scenarios: # List of chaos node scenarios to load
            - scenarios/openshift/aws_node_scenarios.yml
            - scenarios/openshift/vmware_node_scenarios.yml
            - scenarios/openshift/ibmcloud_node_scenarios.yml
        - time_scenarios: # List of chaos time scenarios to load
            - scenarios/openshift/time_scenarios_example.yml
        - cluster_shut_down_scenarios:
            - scenarios/openshift/cluster_shut_down_scenario.yml
        - service_disruption_scenarios:
            - scenarios/openshift/regex_namespace.yaml
            - scenarios/openshift/ingress_namespace.yaml
        - zone_outages_scenarios:
            - scenarios/openshift/zone_outage.yaml
        - pvc_scenarios:
            - scenarios/openshift/pvc_scenario.yaml
        - network_chaos_scenarios:
            - scenarios/openshift/network_chaos.yaml
        - service_hijacking_scenarios:
            - scenarios/kube/service_hijacking.yaml
        - syn_flood_scenarios:
            - scenarios/kube/syn_flood.yaml
        - network_chaos_ng_scenarios:
            - scenarios/kube/pod-network-filter.yml
            - scenarios/kube/node-network-filter.yml
        - kubevirt_vm_outage:
            - scenarios/kubevirt/kubevirt-vm-outage.yaml

cerberus:
    cerberus_enabled: False # Enable it when cerberus is previously installed
@@ -54,9 +59,7 @@ cerberus:
    check_applicaton_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run

performance_monitoring:
    deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
    repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
    prometheus_url: '' # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
    uuid: # uuid for the run is generated by default if not set
    enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
@@ -76,7 +79,7 @@ elastic:
    telemetry_index: "krkn-telemetry"

tunings:
    wait_duration: 60 # Duration to wait between each chaos scenario
    wait_duration: 1 # Duration to wait between each chaos scenario
    iterations: 1 # Number of times to execute the scenarios
    daemon_mode: False # Iterations are set to infinity which means that the kraken will cause chaos forever
telemetry:
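Pointing a run at this config mirrors what the container entrypoint does; a minimal local sketch, assuming Python 3.9+ and a cluster reachable via ~/.kube/config:

```bash
git clone https://github.com/krkn-chaos/krkn.git && cd krkn
pip install -r requirements.txt

# run_kraken.py takes the config path via -c/--config
python3 run_kraken.py -c config/config.yaml
```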
@@ -116,3 +119,11 @@ health_checks: # Utilizing health c
    bearer_token: # Bearer token for authentication if any
    auth: # Provide authentication credentials (username, password) in tuple format if any, ex: ("admin","secretpassword")
    exit_on_failure: # If value is True, exits when a health check fails for the application; values can be True/False

kubevirt_checks: # Utilizing virt check endpoints to observe ssh ability to VMIs during chaos injection.
    interval: 2 # Interval in seconds to perform virt checks, default value is 2 seconds
    namespace: # Namespace where to find VMIs
    name: # Regex-style name of VMIs to watch; optional, will watch all VMI names in the namespace if left blank
    only_failures: False # Boolean of whether to show all VMI failures and successful ssh connections (False), or only failure statuses (True)
    disconnected: False # Boolean of how to try to connect to the VMIs; if True will use the ip_address to try ssh from within a node, if False will use the name and virtctl to try to connect; default is False
    ssh_node: "" # If set, will be a backup way to ssh to a node. Set it to a node that isn't targeted in the chaos
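A health check boils down to a periodic authenticated GET whose status code decides pass/fail; a hand-rolled equivalent, with a hypothetical endpoint and token:

```bash
# Hypothetical application endpoint and bearer token
URL="https://myapp.example.com/healthz"
while true; do
  code=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Authorization: Bearer $TOKEN" "$URL")
  echo "$(date -u +%T) $URL -> $code"
  sleep 2   # poll interval; the config exposes a comparable knob
done
```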
@@ -7,10 +7,8 @@ kraken:
    signal_state: RUN # Will wait for the RUN signal when set to PAUSE before running the scenarios, refer docs/signal.md for more details
    signal_address: 0.0.0.0 # Signal listening address
    chaos_scenarios: # List of policies/chaos scenarios to load
        - plugin_scenarios:
            - scenarios/kind/scheduler.yml
        - node_scenarios:
            - scenarios/kind/node_scenarios_example.yml
        - pod_disruption_scenarios:
            - scenarios/kube/pod.yml

cerberus:
    cerberus_enabled: False # Enable it when cerberus is previously installed
@@ -18,15 +16,24 @@ cerberus:
    check_applicaton_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run

performance_monitoring:
    deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
    repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
    prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
    uuid: # uuid for the run is generated by default if not set
    enable_alerts: False # Runs the queries specified in the alert profile and displays the info or exits 1 when severity=error
    alert_profile: config/alerts.yaml # Path to alert profile with the prometheus queries

elastic:
    enable_elastic: False

tunings:
    wait_duration: 60 # Duration to wait between each chaos scenario
    iterations: 1 # Number of times to execute the scenarios
    daemon_mode: False # Iterations are set to infinity which means that the kraken will cause chaos forever

telemetry:
    enabled: False # enables/disables the telemetry collection feature
    archive_path: /tmp # local path where the archive files will be temporarily stored
    events_backup: False # enables/disables cluster events collection
    logs_backup: False

health_checks: # Utilizing health check endpoints to observe application behavior during chaos injection.
@@ -17,8 +17,6 @@
    check_applicaton_routes: False # When enabled will look for application unavailability using the routes specified in the cerberus config and fails the run

performance_monitoring:
    deploy_dashboards: False # Install a mutable grafana and load the performance dashboards. Enable this only when running on OpenShift
    repo: "https://github.com/cloud-bulldozer/performance-dashboards.git"
    prometheus_url: # The prometheus url/route is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes.
    prometheus_bearer_token: # The bearer token is automatically obtained in case of OpenShift, please set it when the distribution is Kubernetes. This is needed to authenticate with prometheus.
    uuid: # uuid for the run is generated by default if not set
@@ -10,7 +10,7 @@ RUN go mod edit -go 1.23.1 &&\
    go get github.com/docker/docker@v25.0.6&&\
    go get github.com/opencontainers/runc@v1.1.14&&\
    go get github.com/go-git/go-git/v5@v5.13.0&&\
    go get golang.org/x/net@v0.36.0&&\
    go get golang.org/x/net@v0.38.0&&\
    go get github.com/containerd/containerd@v1.7.27&&\
    go get golang.org/x/oauth2@v0.27.0&&\
    go get golang.org/x/crypto@v0.35.0&&\
@@ -28,9 +28,14 @@ ENV KUBECONFIG /home/krkn/.kube/config

# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
RUN dnf update && dnf install -y --setopt=install_weak_deps=False \
    git python39 jq yq gettext wget which &&\
    git python39 jq yq gettext wget which ipmitool openssh-server &&\
    dnf clean all

# Virtctl
RUN export VERSION=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt) && \
    wget https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-linux-amd64 && \
    chmod +x virtctl-${VERSION}-linux-amd64 && sudo mv virtctl-${VERSION}-linux-amd64 /usr/local/bin/virtctl

# copy oc client binary from oc-build image
COPY --from=oc-build /tmp/oc/oc /usr/bin/oc

@@ -38,6 +43,9 @@ COPY --from=oc-build /tmp/oc/oc /usr/bin/oc
RUN git clone https://github.com/krkn-chaos/krkn.git /home/krkn/kraken && \
    mkdir -p /home/krkn/.kube

RUN mkdir -p /home/krkn/.ssh && \
    chmod 700 /home/krkn/.ssh

WORKDIR /home/krkn/kraken

# default behaviour will be to build main
@@ -47,7 +55,7 @@ RUN if [ -n "$PR_NUMBER" ]; then git fetch origin pull/${PR_NUMBER}/head:pr-${PR
RUN if [ -n "$TAG" ]; then git checkout "$TAG";fi

RUN python3.9 -m ensurepip --upgrade --default-pip
RUN python3.9 -m pip install --upgrade pip setuptools==70.0.0
RUN python3.9 -m pip install --upgrade pip setuptools==78.1.1
RUN pip3.9 install -r requirements.txt
RUN pip3.9 install jsonschema

@@ -55,8 +63,14 @@ LABEL krknctl.title.global="Krkn Base Image"
LABEL krknctl.description.global="This is the krkn base image."
LABEL krknctl.input_fields.global='$KRKNCTL_INPUT'

# SSH setup script
RUN chmod +x /home/krkn/kraken/containers/setup-ssh.sh

# Main entrypoint script
RUN chmod +x /home/krkn/kraken/containers/entrypoint.sh

RUN chown -R krkn:krkn /home/krkn && chmod 755 /home/krkn
USER krkn
ENTRYPOINT ["python3.9", "run_kraken.py"]

ENTRYPOINT ["/bin/bash", "/home/krkn/kraken/containers/entrypoint.sh"]
CMD ["--config=config/config.yaml"]
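With the new entrypoint, anything after the image name flows through to run_kraken.py (the default CMD supplies --config=config/config.yaml); a sketch with an illustrative image tag and mount paths:

```bash
# Illustrative tag and paths; mount a kubeconfig and a custom config,
# then override the default CMD with the trailing argument
podman run --rm \
  -v "$HOME/.kube/config:/home/krkn/.kube/config:Z" \
  -v "$PWD/my-config.yaml:/home/krkn/kraken/config/my-config.yaml:Z" \
  quay.io/krkn-chaos/krkn:latest \
  --config=config/my-config.yaml
```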
containers/entrypoint.sh (new file, 7 lines)

@@ -0,0 +1,7 @@
#!/bin/bash
# Run SSH setup
./containers/setup-ssh.sh
# Change to kraken directory

# Execute the main command
exec python3.9 run_kraken.py "$@"
@@ -31,6 +31,24 @@
    "separator": ",",
    "required": "false"
},
{
    "name": "ssh-public-key",
    "short_description": "Krkn ssh public key path",
    "description": "Sets the path where krkn will search for the ssh public key (in container)",
    "variable": "KRKN_SSH_PUBLIC",
    "type": "string",
    "default": "",
    "required": "false"
},
{
    "name": "ssh-private-key",
    "short_description": "Krkn ssh private key path",
    "description": "Sets the path where krkn will search for the ssh private key (in container)",
    "variable": "KRKN_SSH_PRIVATE",
    "type": "string",
    "default": "",
    "required": "false"
},
{
    "name": "krkn-kubeconfig",
    "short_description": "Krkn kubeconfig path",
@@ -425,6 +443,64 @@
    "default": "False",
    "required": "false"
},
{
    "name": "kubevirt-check-interval",
    "short_description": "KubeVirt check interval",
    "description": "How often to check the ssh status of the KubeVirt VMs",
    "variable": "KUBE_VIRT_CHECK_INTERVAL",
    "type": "number",
    "default": "2",
    "required": "false"
},
{
    "name": "kubevirt-namespace",
    "short_description": "KubeVirt namespace to check",
    "description": "KubeVirt namespace to check the health of",
    "variable": "KUBE_VIRT_NAMESPACE",
    "type": "string",
    "default": "",
    "required": "false"
},
{
    "name": "kubevirt-name",
    "short_description": "KubeVirt regex names to watch",
    "description": "KubeVirt regex names to check VMs",
    "variable": "KUBE_VIRT_NAME",
    "type": "string",
    "default": "",
    "required": "false"
},
{
    "name": "kubevirt-only-failures",
    "short_description": "KubeVirt checks only report if failure occurs",
    "description": "KubeVirt checks only report if a failure occurs",
    "variable": "KUBE_VIRT_FAILURES",
    "type": "enum",
    "allowed_values": "True,False,true,false",
    "separator": ",",
    "default": "False",
    "required": "false"
},
{
    "name": "kubevirt-disconnected",
    "short_description": "KubeVirt checks in disconnected mode",
    "description": "KubeVirt checks in disconnected mode, bypassing the cluster's API",
    "variable": "KUBE_VIRT_DISCONNECTED",
    "type": "enum",
    "allowed_values": "True,False,true,false",
    "separator": ",",
    "default": "False",
    "required": "false"
},
{
    "name": "kubevirt-ssh-node",
    "short_description": "KubeVirt node to ssh from",
    "description": "KubeVirt node to ssh from; should be available for the whole chaos run",
    "variable": "KUBE_VIRT_SSH_NODE",
    "type": "string",
    "default": "",
    "required": "false"
},
{
    "name": "krkn-debug",
    "short_description": "Krkn debug mode",
73
containers/setup-ssh.sh
Normal file
@@ -0,0 +1,73 @@
#!/bin/bash
# Setup SSH key if mounted
# Support multiple mount locations
MOUNTED_PRIVATE_KEY_ALT="/secrets/id_rsa"
MOUNTED_PRIVATE_KEY="/home/krkn/.ssh/id_rsa"
MOUNTED_PUBLIC_KEY="/home/krkn/.ssh/id_rsa.pub"
WORKING_KEY="/home/krkn/.ssh/id_rsa.key"

# Determine which source to use
SOURCE_KEY=""
if [ -f "$MOUNTED_PRIVATE_KEY_ALT" ]; then
    SOURCE_KEY="$MOUNTED_PRIVATE_KEY_ALT"
    echo "Found SSH key at alternative location: $SOURCE_KEY"
elif [ -f "$MOUNTED_PRIVATE_KEY" ]; then
    SOURCE_KEY="$MOUNTED_PRIVATE_KEY"
    echo "Found SSH key at default location: $SOURCE_KEY"
fi

# Setup SSH private key and create config for outbound connections
if [ -n "$SOURCE_KEY" ]; then
    echo "Setting up SSH private key from: $SOURCE_KEY"

    # Check current permissions and ownership
    ls -la "$SOURCE_KEY"

    # Since the mounted key might be owned by root and we run as krkn user,
    # we cannot modify it directly. Copy to a new location we can control.
    echo "Copying SSH key to working location: $WORKING_KEY"

    # Try to copy - if readable by anyone, this will work
    if cp "$SOURCE_KEY" "$WORKING_KEY" 2>/dev/null || cat "$SOURCE_KEY" > "$WORKING_KEY" 2>/dev/null; then
        chmod 600 "$WORKING_KEY"
        echo "SSH key copied successfully"
        ls -la "$WORKING_KEY"

        # Verify the key is readable
        if ssh-keygen -y -f "$WORKING_KEY" > /dev/null 2>&1; then
            echo "SSH private key verified successfully"
        else
            echo "Warning: SSH key verification failed, but continuing anyway"
        fi

        # Create SSH config to use the working key
        cat > /home/krkn/.ssh/config <<EOF
Host *
    IdentityFile $WORKING_KEY
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
        chmod 600 /home/krkn/.ssh/config
        echo "SSH config created with default identity: $WORKING_KEY"
    else
        echo "ERROR: Cannot read SSH key at $SOURCE_KEY"
        echo "Key is owned by: $(stat -c '%U:%G' "$SOURCE_KEY" 2>/dev/null || stat -f '%Su:%Sg' "$SOURCE_KEY" 2>/dev/null)"
        echo ""
        echo "Solutions:"
        echo "1. Mount with world-readable permissions (less secure): chmod 644 /path/to/key"
        echo "2. Mount to /secrets/id_rsa instead of /home/krkn/.ssh/id_rsa"
        echo "3. Change ownership on host: chown \$(id -u):\$(id -g) /path/to/key"
        exit 1
    fi
fi

# Setup SSH public key if mounted (for inbound server access)
if [ -f "$MOUNTED_PUBLIC_KEY" ]; then
    echo "SSH public key already present at $MOUNTED_PUBLIC_KEY"
    # Try to fix permissions (will fail silently if file is mounted read-only or owned by another user)
    chmod 600 "$MOUNTED_PUBLIC_KEY" 2>/dev/null
    if [ ! -f "/home/krkn/.ssh/authorized_keys" ]; then
        cp "$MOUNTED_PUBLIC_KEY" /home/krkn/.ssh/authorized_keys
        chmod 600 /home/krkn/.ssh/authorized_keys
    fi
fi
@@ -5,6 +5,8 @@ nodes:
    extraPortMappings:
      - containerPort: 30036
        hostPort: 8888
      - containerPort: 30037
        hostPort: 8889
  - role: control-plane
  - role: control-plane
  - role: worker
@@ -18,10 +18,8 @@ def invoke(command, timeout=None):
def invoke_no_exit(command, timeout=None):
    output = ""
    try:
        output = subprocess.check_output(command, shell=True, universal_newlines=True, timeout=timeout)
        logging.info("output " + str(output))
        output = subprocess.check_output(command, shell=True, universal_newlines=True, timeout=timeout, stderr=subprocess.DEVNULL)
    except Exception as e:
        logging.error("Failed to run %s, error: %s" % (command, e))
        return str(e)
    return output
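The net effect of this hunk: probe output is no longer echoed into the krkn log at info level, and the command's stderr is discarded. A standalone sketch of the new behavior, not part of this diff (the command string is illustrative):

import logging
import subprocess

def probe(command, timeout=None):
    # capture stdout, silence stderr, and return the exception text on failure
    try:
        return subprocess.check_output(
            command, shell=True, universal_newlines=True,
            timeout=timeout, stderr=subprocess.DEVNULL,
        )
    except Exception as e:
        logging.error("Failed to run %s, error: %s" % (command, e))
        return str(e)

print(probe("echo hello"))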
@@ -1,28 +0,0 @@
import subprocess
import logging
import git
import sys


# Installs a mutable grafana on the Kubernetes/OpenShift cluster and loads the performance dashboards
def setup(repo, distribution):
    if distribution == "kubernetes":
        command = "cd performance-dashboards/dittybopper && ./k8s-deploy.sh"
    elif distribution == "openshift":
        command = "cd performance-dashboards/dittybopper && ./deploy.sh"
    else:
        logging.error("Provided distribution: %s is not supported" % (distribution))
        sys.exit(1)
    delete_repo = "rm -rf performance-dashboards || exit 0"
    logging.info(
        "Cloning, installing mutable grafana on the cluster and loading the dashboards"
    )
    try:
        # delete repo to clone the latest copy if exists
        subprocess.run(delete_repo, shell=True, universal_newlines=True, timeout=45)
        # clone the repo
        git.Repo.clone_from(repo, "performance-dashboards")
        # deploy performance dashboards
        subprocess.run(command, shell=True, universal_newlines=True)
    except Exception as e:
        logging.error("Failed to install performance-dashboards, error: %s" % (e))
@@ -9,6 +9,7 @@ import logging
import urllib3
import sys
import json
import tempfile

import yaml
from krkn_lib.elastic.krkn_elastic import KrknElastic
@@ -74,10 +75,12 @@ def alerts(
def critical_alerts(
    prom_cli: KrknPrometheus,
    summary: ChaosRunAlertSummary,
    elastic: KrknElastic,
    run_id,
    scenario,
    start_time,
    end_time,
    elastic_alerts_index
):
    summary.scenario = scenario
    summary.run_id = run_id
@@ -112,7 +115,6 @@ def critical_alerts(
            summary.chaos_alerts.append(alert)

    post_critical_alerts = prom_cli.process_query(query)

    for alert in post_critical_alerts:
        if "metric" in alert:
            alertname = (
@@ -135,6 +137,21 @@ def critical_alerts(
            )
            alert = ChaosRunAlert(alertname, alertstate, namespace, severity)
            summary.post_chaos_alerts.append(alert)
            if elastic:
                elastic_alert = ElasticAlert(
                    run_uuid=run_id,
                    severity=severity,
                    alert=alertname,
                    created_at=end_time,
                    namespace=namespace,
                    alertstate=alertstate,
                    phase="post_chaos"
                )
                result = elastic.push_alert(elastic_alert, elastic_alerts_index)
                if result == -1:
                    logging.error("failed to save alert on ElasticSearch")
                    pass

    during_critical_alerts_count = len(during_critical_alerts)
    post_critical_alerts_count = len(post_critical_alerts)
@@ -148,8 +165,8 @@ def critical_alerts(

    if not firing_alerts:
        logging.info("No critical alerts are firing!!")


def metrics(
    prom_cli: KrknPrometheus,
    elastic: KrknElastic,
@@ -251,11 +268,37 @@ def metrics(
                metric[k] = v
            metric['timestamp'] = str(datetime.datetime.now())
            metrics_list.append(metric.copy())
    if elastic:
    if telemetry_json['virt_checks']:
        for virt_check in telemetry_json["virt_checks"]:
            metric_name = "virt_check_recovery"
            metric = {"metricName": metric_name}
            for k, v in virt_check.items():
                metric[k] = v
            metric['timestamp'] = str(datetime.datetime.now())
            metrics_list.append(metric.copy())

    save_metrics = False
    if elastic is not None and elastic_metrics_index is not None:
        result = elastic.upload_metrics_to_elasticsearch(
            run_uuid=run_uuid, index=elastic_metrics_index, raw_data=metrics_list
        )
        if result == -1:
            logging.error("failed to save metrics on ElasticSearch")
            save_metrics = True
    else:
        save_metrics = True
    if save_metrics:
        local_dir = os.path.join(tempfile.gettempdir(), "krkn_metrics")
        os.makedirs(local_dir, exist_ok=True)
        local_file = os.path.join(local_dir, f"{elastic_metrics_index}_{run_uuid}.json")

        try:
            with open(local_file, "w") as f:
                json.dump({
                    "run_uuid": run_uuid,
                    "metrics": metrics_list
                }, f, indent=2)
            logging.info(f"Metrics saved to {local_file}")
        except Exception as e:
            logging.error(f"Failed to save metrics to {local_file}: {e}")
    return metrics_list
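For reference, a sketch of reading back the fallback file this hunk writes when pushing to Elasticsearch fails; not part of this diff, and the index name and run uuid in the file name are illustrative:

import json
import os
import tempfile

# file name follows f"{elastic_metrics_index}_{run_uuid}.json"
path = os.path.join(tempfile.gettempdir(), "krkn_metrics", "krkn-metrics_1234abcd.json")
with open(path) as f:
    payload = json.load(f)

# top-level keys written by the hunk above
assert set(payload) == {"run_uuid", "metrics"}
# each metrics entry carries at least "metricName" plus the "timestamp" added at collection time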
121
krkn/rollback/command.py
Normal file
@@ -0,0 +1,121 @@
import os
import logging
from typing import Optional, TYPE_CHECKING

from krkn.rollback.config import RollbackConfig
from krkn.rollback.handler import execute_rollback_version_files, cleanup_rollback_version_files


if TYPE_CHECKING:
    from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift


def list_rollback(run_uuid: Optional[str] = None, scenario_type: Optional[str] = None):
    """
    List rollback version files in a tree-like format.

    :param run_uuid: Optional run UUID to filter by
    :param scenario_type: Optional scenario type to filter by
    :return: Exit code (0 for success, 1 for error)
    """
    logging.info("Listing rollback version files")

    versions_directory = RollbackConfig().versions_directory

    logging.info(f"Rollback versions directory: {versions_directory}")

    # Check if the directory exists first
    if not os.path.exists(versions_directory):
        logging.info(f"Rollback versions directory does not exist: {versions_directory}")
        return 0

    # List all directories and files
    try:
        # Get all run directories
        run_dirs = []
        for item in os.listdir(versions_directory):
            item_path = os.path.join(versions_directory, item)
            if os.path.isdir(item_path):
                # Apply run_uuid filter if specified
                if run_uuid is None or run_uuid in item:
                    run_dirs.append(item)

        if not run_dirs:
            if run_uuid:
                logging.info(f"No rollback directories found for run_uuid: {run_uuid}")
            else:
                logging.info("No rollback directories found")
            return 0

        # Sort directories for consistent output
        run_dirs.sort()

        print(f"\n{versions_directory}/")
        for i, run_dir in enumerate(run_dirs):
            is_last_dir = (i == len(run_dirs) - 1)
            dir_prefix = "└── " if is_last_dir else "├── "
            print(f"{dir_prefix}{run_dir}/")

            # List files in this directory
            run_dir_path = os.path.join(versions_directory, run_dir)
            try:
                files = []
                for file in os.listdir(run_dir_path):
                    file_path = os.path.join(run_dir_path, file)
                    if os.path.isfile(file_path):
                        # Apply scenario_type filter if specified
                        if scenario_type is None or file.startswith(scenario_type):
                            files.append(file)

                files.sort()
                for j, file in enumerate(files):
                    is_last_file = (j == len(files) - 1)
                    if is_last_dir:
                        file_prefix = "    └── " if is_last_file else "    ├── "
                    else:
                        file_prefix = "│   └── " if is_last_file else "│   ├── "
                    print(f"{file_prefix}{file}")

            except PermissionError:
                file_prefix = "    └── " if is_last_dir else "│   └── "
                print(f"{file_prefix}[Permission Denied]")

    except Exception as e:
        logging.error(f"Error listing rollback directory: {e}")
        return 1

    return 0
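For reference, an illustrative rendering of the tree this prints; every directory and file name below is invented:

/tmp/krkn/rollback/
├── 1718000000000000000-abc123/
│   ├── application_outages_scenarios_1718000000000000100_ab12cd34.py
│   └── hogs_scenarios_1718000000000000200_ef56gh78.py
└── 1718000001000000000-def456/
    └── application_outages_scenarios_1718000001000000100_ij90kl12.py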
def execute_rollback(telemetry_ocp: "KrknTelemetryOpenshift", run_uuid: Optional[str] = None, scenario_type: Optional[str] = None):
    """
    Execute rollback version files and cleanup if successful.

    :param telemetry_ocp: Instance of KrknTelemetryOpenshift
    :param run_uuid: Optional run UUID to filter by
    :param scenario_type: Optional scenario type to filter by
    :return: Exit code (0 for success, 1 for error)
    """
    logging.info("Executing rollback version files")

    if not run_uuid:
        logging.error("run_uuid is required for execute-rollback command")
        return 1

    if not scenario_type:
        logging.warning("scenario_type is not specified, executing all scenarios in rollback directory")

    try:
        # Execute rollback version files
        logging.info(f"Executing rollback for run_uuid={run_uuid}, scenario_type={scenario_type or '*'}")
        execute_rollback_version_files(telemetry_ocp, run_uuid, scenario_type)

        # If execution was successful, cleanup the version files
        logging.info("Rollback execution completed successfully, cleaning up version files")
        cleanup_rollback_version_files(run_uuid, scenario_type)

        logging.info("Rollback execution and cleanup completed successfully")
        return 0

    except Exception as e:
        logging.error(f"Error during rollback execution: {e}")
        return 1
189
krkn/rollback/config.py
Normal file
@@ -0,0 +1,189 @@
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Callable, TYPE_CHECKING, Optional
from typing_extensions import TypeAlias
import time
import os
import logging

from krkn_lib.utils import get_random_string

logger = logging.getLogger(__name__)

if TYPE_CHECKING:
    from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

RollbackCallable: TypeAlias = Callable[
    ["RollbackContent", "KrknTelemetryOpenshift"], None
]


class SingletonMeta(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


@dataclass(frozen=True)
class RollbackContent:
    """
    RollbackContent is a dataclass that defines the necessary fields for rollback operations.
    """

    resource_identifier: str
    namespace: Optional[str] = None

    def __str__(self):
        namespace = f'"{self.namespace}"' if self.namespace else "None"
        resource_identifier = f'"{self.resource_identifier}"'
        return f"RollbackContent(namespace={namespace}, resource_identifier={resource_identifier})"


class RollbackContext(str):
    """
    RollbackContext is a string formatted as '<timestamp (ns)>-<run_uuid>'.
    It represents the context for rollback operations, uniquely identifying a run.
    """

    def __new__(cls, run_uuid: str):
        return super().__new__(cls, f"{time.time_ns()}-{run_uuid}")


class RollbackConfig(metaclass=SingletonMeta):
    """Configuration for the rollback scenarios."""

    def __init__(self):
        self._auto = False
        self._versions_directory = ""
        self._registered = False

    @property
    def auto(self):
        return self._auto

    @auto.setter
    def auto(self, value):
        if self._registered:
            raise AttributeError("Can't modify 'auto' after registration")
        self._auto = value

    @property
    def versions_directory(self):
        return self._versions_directory

    @versions_directory.setter
    def versions_directory(self, value):
        if self._registered:
            raise AttributeError("Can't modify 'versions_directory' after registration")
        self._versions_directory = value

    @classmethod
    def register(cls, auto=False, versions_directory=""):
        """Initialize and return the singleton instance with the given configuration."""
        instance = cls()
        instance.auto = auto
        instance.versions_directory = versions_directory
        instance._registered = True
        return instance

    @classmethod
    def get_rollback_versions_directory(cls, rollback_context: RollbackContext) -> str:
        """
        Get the rollback context directory for a given rollback context.

        :param rollback_context: The rollback context string.
        :return: The path to the rollback context directory.
        """
        return f"{cls().versions_directory}/{rollback_context}"

    @classmethod
    def search_rollback_version_files(cls, run_uuid: str, scenario_type: str | None = None) -> list[str]:
        """
        Search for rollback version files based on run_uuid and scenario_type.

        1. Search directories with "run_uuid" in the name under "cls.versions_directory".
        2. Search files that start with "scenario_type" in the directories matched in step 1.

        :param run_uuid: Unique identifier for the run.
        :param scenario_type: Type of the scenario.
        :return: List of version file paths.
        """

        if not os.path.exists(cls().versions_directory):
            return []

        rollback_context_directories = [
            dirname for dirname in os.listdir(cls().versions_directory) if run_uuid in dirname
        ]
        if not rollback_context_directories:
            logger.warning(f"No rollback context directories found for run UUID {run_uuid}")
            return []

        if len(rollback_context_directories) > 1:
            logger.warning(
                f"Expected one directory for run UUID {run_uuid}, found: {rollback_context_directories}"
            )

        rollback_context_directory = rollback_context_directories[0]

        version_files = []
        scenario_rollback_versions_directory = os.path.join(
            cls().versions_directory, rollback_context_directory
        )
        for file in os.listdir(scenario_rollback_versions_directory):
            # all files are expected to start with scenario_type and end with .py
            if file.endswith(".py") and (scenario_type is None or file.startswith(scenario_type)):
                version_files.append(
                    os.path.join(scenario_rollback_versions_directory, file)
                )
            else:
                logger.warning(
                    f"File {file} does not match expected pattern for scenario type {scenario_type}"
                )
        return version_files


@dataclass(frozen=True)
class Version:
    scenario_type: str
    rollback_context: RollbackContext
    # default_factory so each Version gets its own timestamp and suffix
    # (a plain default would be evaluated once, at class-definition time)
    timestamp: int = field(default_factory=time.time_ns)  # current timestamp in nanoseconds
    hash_suffix: str = field(default_factory=lambda: get_random_string(8))  # random string of 8 characters

    @property
    def version_file_name(self) -> str:
        """
        Generate a version file name based on the timestamp and hash suffix.
        :return: The generated version file name.
        """
        return f"{self.scenario_type}_{self.timestamp}_{self.hash_suffix}.py"

    @property
    def version_file_full_path(self) -> str:
        """
        Get the full path for the version file based on the version object and current context.

        :return: The generated version file full path.
        """
        return f"{RollbackConfig.get_rollback_versions_directory(self.rollback_context)}/{self.version_file_name}"

    @staticmethod
    def new_version(scenario_type: str, rollback_context: RollbackContext) -> "Version":
        """
        Create a new Version for the given scenario type and rollback context.
        :return: An instance of Version with a fresh timestamp and hash suffix.
        """
        return Version(
            scenario_type=scenario_type,
            rollback_context=rollback_context,
        )
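Taken together, a minimal sketch of how these pieces compose; not part of this diff, and the directory, uuid, and scenario type are illustrative:

from krkn.rollback.config import RollbackConfig, RollbackContext, Version

RollbackConfig.register(auto=True, versions_directory="/tmp/krkn/rollback")
assert RollbackConfig() is RollbackConfig()   # SingletonMeta: every call yields the same instance

ctx = RollbackContext("abc123")               # e.g. "1718000000000000000-abc123"
version = Version.new_version("application_outages_scenarios", ctx)
print(version.version_file_full_path)
# /tmp/krkn/rollback/1718000000000000000-abc123/application_outages_scenarios_<timestamp>_<hash>.py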
238
krkn/rollback/handler.py
Normal file
@@ -0,0 +1,238 @@
from __future__ import annotations

import logging
from typing import cast, TYPE_CHECKING
import os
import importlib.util
import inspect

from krkn.rollback.config import RollbackConfig, RollbackContext, Version


logger = logging.getLogger(__name__)


if TYPE_CHECKING:
    from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

    from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
    from krkn.rollback.config import RollbackContent, RollbackCallable
    from krkn.rollback.serialization import Serializer


def set_rollback_context_decorator(func):
    """
    Decorator to automatically set and clear rollback context.
    It extracts run_uuid from the function arguments and sets the context in rollback_handler
    before executing the function, and clears it after execution.

    Usage:

    .. code-block:: python
        from krkn.rollback.handler import set_rollback_context_decorator

        # for any scenario plugin that inherits from AbstractScenarioPlugin
        @set_rollback_context_decorator
        def run(
            self,
            run_uuid: str,
            scenario: str,
            krkn_config: dict[str, any],
            lib_telemetry: KrknTelemetryOpenshift,
            scenario_telemetry: ScenarioTelemetry,
        ):
            # Your scenario logic here
            pass
    """

    def wrapper(self, *args, **kwargs):
        self = cast("AbstractScenarioPlugin", self)
        # Since `AbstractScenarioPlugin.run_scenarios` will call `self.run` and pass all parameters as `kwargs`
        logger.debug(f"kwargs of ScenarioPlugin.run: {kwargs}")
        run_uuid = kwargs.get("run_uuid", None)
        # we can safely assume that `run_uuid` will be present in `kwargs`
        assert run_uuid is not None, "run_uuid must be provided in kwargs"

        # Set context if run_uuid is available and rollback_handler exists
        if run_uuid and hasattr(self, "rollback_handler"):
            self.rollback_handler = cast("RollbackHandler", self.rollback_handler)
            self.rollback_handler.set_context(run_uuid)

        try:
            # Execute the `run` method with the original arguments
            result = func(self, *args, **kwargs)
            return result
        finally:
            # Clear context after function execution, regardless of success or failure
            if hasattr(self, "rollback_handler"):
                self.rollback_handler = cast("RollbackHandler", self.rollback_handler)
                self.rollback_handler.clear_context()

    return wrapper


def _parse_rollback_module(version_file_path: str) -> tuple[RollbackCallable, RollbackContent]:
    """
    Parse a rollback module to extract the rollback function and RollbackContent.

    :param version_file_path: Path to the rollback version file
    :return: Tuple of (rollback_callable, rollback_content)
    """

    # Create a unique module name based on the file path
    module_name = f"rollback_module_{os.path.basename(version_file_path).replace('.py', '').replace('-', '_')}"

    # Load the module using importlib
    spec = importlib.util.spec_from_file_location(module_name, version_file_path)
    if spec is None or spec.loader is None:
        raise ImportError(f"Could not load module from {version_file_path}")

    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)

    # Find the rollback function
    rollback_callable = None
    for name, obj in inspect.getmembers(module):
        if inspect.isfunction(obj) and name.startswith('rollback_'):
            # Check function signature
            sig = inspect.signature(obj)
            params = list(sig.parameters.values())
            if (len(params) == 2 and
                    'RollbackContent' in str(params[0].annotation) and
                    'KrknTelemetryOpenshift' in str(params[1].annotation)):
                rollback_callable = obj
                logger.debug(f"Found rollback function: {name}")
                break

    if rollback_callable is None:
        raise ValueError(f"No valid rollback function found in {version_file_path}")

    # Find the rollback_content variable
    if not hasattr(module, 'rollback_content'):
        raise ValueError("Could not find variable named 'rollback_content' in the module")

    rollback_content = getattr(module, 'rollback_content', None)
    if rollback_content is None:
        raise ValueError("Variable 'rollback_content' is None")

    logger.debug(f"Found rollback_content variable in module: {rollback_content}")
    return rollback_callable, rollback_content
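For reference, a minimal module that would satisfy this parser; all names are illustrative, and the real files are generated from version_template.j2, shown later in this diff:

from krkn.rollback.config import RollbackContent
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

def rollback_example(rollback_content: RollbackContent,
                     lib_telemetry: KrknTelemetryOpenshift):
    # the name starts with "rollback_" and the two annotations match the expected types
    print(f"would clean up {rollback_content.resource_identifier}")

# module-level variable the parser looks up by name
rollback_content = RollbackContent(resource_identifier="example-resource", namespace="default")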
def execute_rollback_version_files(telemetry_ocp: "KrknTelemetryOpenshift", run_uuid: str, scenario_type: str | None = None):
    """
    Execute rollback version files for the given run_uuid and scenario_type.
    This function is called when a signal is received to perform rollback operations.

    :param telemetry_ocp: Instance of KrknTelemetryOpenshift passed to the rollback callables.
    :param run_uuid: Unique identifier for the run.
    :param scenario_type: Type of the scenario being rolled back.
    """

    # Get the rollback versions directory
    version_files = RollbackConfig.search_rollback_version_files(run_uuid, scenario_type)
    if not version_files:
        logger.warning(f"Skip execution for run_uuid={run_uuid}, scenario_type={scenario_type or '*'}")
        return

    # Execute all version files in the directory
    logger.info(f"Executing rollback version files for run_uuid={run_uuid}, scenario_type={scenario_type or '*'}")
    for version_file in version_files:
        try:
            logger.info(f"Executing rollback version file: {version_file}")

            # Parse the rollback module to get function and content
            rollback_callable, rollback_content = _parse_rollback_module(version_file)
            # Execute the rollback function
            logger.info('Executing rollback callable...')
            rollback_callable(rollback_content, telemetry_ocp)
            logger.info('Rollback completed.')

            logger.info(f"Executed {version_file} successfully.")
        except Exception as e:
            logger.error(f"Failed to execute rollback version file {version_file}: {e}")
            raise


def cleanup_rollback_version_files(run_uuid: str, scenario_type: str):
    """
    Cleanup rollback version files for the given run_uuid and scenario_type.
    This function is called to remove the rollback version files after execution.

    :param run_uuid: Unique identifier for the run.
    :param scenario_type: Type of the scenario being rolled back.
    """

    # Get the rollback versions directory
    version_files = RollbackConfig.search_rollback_version_files(run_uuid, scenario_type)
    if not version_files:
        logger.warning(f"Skip cleanup for run_uuid={run_uuid}, scenario_type={scenario_type or '*'}")
        return

    # Remove all version files in the directory
    logger.info(f"Cleaning up rollback version files for run_uuid={run_uuid}, scenario_type={scenario_type}")
    for version_file in version_files:
        try:
            os.remove(version_file)
            logger.info(f"Removed {version_file} successfully.")
        except Exception as e:
            logger.error(f"Failed to remove rollback version file {version_file}: {e}")
            raise


class RollbackHandler:
    def __init__(
        self,
        scenario_type: str,
        serializer: "Serializer",
    ):
        self.scenario_type = scenario_type
        self.serializer = serializer
        self.rollback_context: RollbackContext | None = (
            None  # will be set when `set_context` is called
        )

    def set_context(self, run_uuid: str):
        """
        Set the context for the rollback handler.
        :param run_uuid: Unique identifier for the run.
        """
        self.rollback_context = RollbackContext(run_uuid)
        logger.info(
            f"Set rollback_context: {self.rollback_context} for scenario_type: {self.scenario_type} RollbackHandler"
        )

    def clear_context(self):
        """
        Clear the run_uuid context for the rollback handler.
        """
        logger.debug(
            f"Clear rollback_context {self.rollback_context} for scenario type {self.scenario_type} RollbackHandler"
        )
        self.rollback_context = None

    def set_rollback_callable(
        self,
        callable: "RollbackCallable",
        rollback_content: "RollbackContent",
    ):
        """
        Set the rollback callable to be executed after the scenario is finished.

        :param callable: The rollback callable to be set.
        :param rollback_content: The rollback content for the callable.
        """
        logger.debug(
            f"Rollback callable set to {callable.__name__} for version directory {RollbackConfig.get_rollback_versions_directory(self.rollback_context)}"
        )

        version: Version = Version.new_version(
            scenario_type=self.scenario_type,
            rollback_context=self.rollback_context,
        )

        # Serialize the callable to a file
        try:
            version_file = self.serializer.serialize_callable(
                callable, rollback_content, version
            )
            logger.info(f"Rollback callable serialized to {version_file}")
        except Exception as e:
            logger.error(f"Failed to serialize rollback callable: {e}")
123
krkn/rollback/serialization.py
Normal file
@@ -0,0 +1,123 @@
import inspect
import os
import logging
from typing import TYPE_CHECKING

from jinja2 import Environment, FileSystemLoader

if TYPE_CHECKING:
    from krkn.rollback.config import RollbackCallable, RollbackContent, Version

logger = logging.getLogger(__name__)


class Serializer:
    def __init__(self, scenario_type: str):
        self.scenario_type = scenario_type
        # Set up Jinja2 environment to load templates from the rollback directory
        template_dir = os.path.join(os.path.dirname(__file__))
        env = Environment(loader=FileSystemLoader(template_dir))
        self.template = env.get_template("version_template.j2")

    def _parse_rollback_callable_code(
        self, rollback_callable: "RollbackCallable"
    ) -> tuple[str, str]:
        """
        Parse the rollback callable code to extract its implementation.
        :param rollback_callable: The callable function to parse (can be staticmethod or regular function).
        :return: A tuple containing (function_name, function_code).
        """
        # Get the implementation code of the rollback_callable
        rollback_callable_code = inspect.getsource(rollback_callable)

        # Split into lines for processing
        code_lines = rollback_callable_code.split("\n")
        cleaned_lines = []
        function_name = None

        # Find the function definition line and extract function name
        def_line_index = None
        for i, line in enumerate(code_lines):
            # Skip decorators (including @staticmethod)
            if line.strip().startswith("@"):
                continue

            # Look for function definition
            if line.strip().startswith("def "):
                def_line_index = i
                # Extract function name from the def line
                def_line = line.strip()
                if "(" in def_line:
                    function_name = def_line.split("def ")[1].split("(")[0].strip()
                break

        if def_line_index is None or function_name is None:
            raise ValueError(
                "Could not find function definition in callable source code"
            )

        # Get the base indentation level from the def line
        def_line = code_lines[def_line_index]
        base_indent_level = len(def_line) - len(def_line.lstrip())

        # Process all lines starting from the def line
        for i in range(def_line_index, len(code_lines)):
            line = code_lines[i]

            # Handle empty lines
            if not line.strip():
                cleaned_lines.append("")
                continue

            # Calculate current line's indentation
            current_indent = len(line) - len(line.lstrip())

            # Remove the base indentation to normalize to function level
            if current_indent >= base_indent_level:
                # Remove base indentation
                normalized_line = line[base_indent_level:]
                cleaned_lines.append(normalized_line)
            else:
                # This shouldn't happen in well-formed code, but handle it gracefully
                cleaned_lines.append(line.lstrip())

        # Reconstruct the code and clean up trailing whitespace
        function_code = "\n".join(cleaned_lines).rstrip()

        return function_name, function_code

    def serialize_callable(
        self,
        rollback_callable: "RollbackCallable",
        rollback_content: "RollbackContent",
        version: "Version",
    ) -> str:
        """
        Serialize a callable function to a file with its arguments and keyword arguments.
        :param rollback_callable: The callable to serialize.
        :param rollback_content: The rollback content for the callable.
        :param version: The version representing the rollback context and file path for the rollback.
        :return: Path to the serialized callable file.
        """

        rollback_callable_name, rollback_callable_code = (
            self._parse_rollback_callable_code(rollback_callable)
        )

        # Render the template with the required variables
        file_content = self.template.render(
            rollback_callable_name=rollback_callable_name,
            rollback_callable_code=rollback_callable_code,
            rollback_content=str(rollback_content),
        )

        # Write the file to the version directory
        os.makedirs(os.path.dirname(version.version_file_full_path), exist_ok=True)

        logger.debug("Creating version file at %s", version.version_file_full_path)
        logger.debug("Version file content:\n%s", file_content)
        with open(version.version_file_full_path, "w") as f:
            f.write(file_content)
        logger.info(f"Serialized callable written to {version.version_file_full_path}")

        return version.version_file_full_path
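To make the de-indentation step concrete, a small self-contained demo, not part of this diff, of the raw material it starts from:

import inspect

class Demo:
    @staticmethod
    def rollback_demo(content, telemetry):
        return content

src = inspect.getsource(Demo.rollback_demo)
# `src` still carries the @staticmethod decorator and the class-level indentation;
# _parse_rollback_callable_code drops the decorator and strips the base indent so
# the function can be written out at module level in the generated version file.
print(src)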
106
krkn/rollback/signal.py
Normal file
@@ -0,0 +1,106 @@
from typing import Dict, Any, Optional
import threading
import signal
import sys
import logging
from contextlib import contextmanager

from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

from krkn.rollback.handler import execute_rollback_version_files

logger = logging.getLogger(__name__)

class SignalHandler:
    # Class-level variables for signal handling (shared across all instances)
    _signal_handlers_installed = False  # No need for a thread-safe variable due to _signal_lock
    _original_handlers: Dict[int, Any] = {}
    _signal_lock = threading.Lock()

    # Thread-local storage for context
    _local = threading.local()

    @classmethod
    def _set_context(cls, run_uuid: str, scenario_type: str, telemetry_ocp: KrknTelemetryOpenshift):
        """Set the current execution context for this thread."""
        cls._local.run_uuid = run_uuid
        cls._local.scenario_type = scenario_type
        cls._local.telemetry_ocp = telemetry_ocp
        logger.debug(f"Signal context set for thread {threading.current_thread().name} - run_uuid={run_uuid}, scenario_type={scenario_type}")

    @classmethod
    def _get_context(cls) -> tuple[Optional[str], Optional[str], Optional[KrknTelemetryOpenshift]]:
        """Get the current execution context for this thread."""
        run_uuid = getattr(cls._local, 'run_uuid', None)
        scenario_type = getattr(cls._local, 'scenario_type', None)
        telemetry_ocp = getattr(cls._local, 'telemetry_ocp', None)
        return run_uuid, scenario_type, telemetry_ocp

    @classmethod
    def _signal_handler(cls, signum: int, frame):
        """Handle signals with current thread context information."""
        signal_name = signal.Signals(signum).name
        run_uuid, scenario_type, telemetry_ocp = cls._get_context()
        if not run_uuid or not scenario_type or not telemetry_ocp:
            logger.warning(f"Signal {signal_name} received without complete context, skipping rollback.")
            return

        # Clear the context for the next signal, as another signal may arrive before the rollback completes.
        # This ensures that the rollback is performed only once.
        cls._set_context(None, None, telemetry_ocp)

        # Perform rollback
        logger.info(f"Performing rollback for signal {signal_name} with run_uuid={run_uuid}, scenario_type={scenario_type}")
        execute_rollback_version_files(telemetry_ocp, run_uuid, scenario_type)

        # Call original handler if it exists
        if signum not in cls._original_handlers:
            logger.info(f"Signal {signal_name} has no registered handler, exiting...")
            return

        original_handler = cls._original_handlers[signum]
        if callable(original_handler):
            logger.info(f"Calling original handler for {signal_name}")
            original_handler(signum, frame)
        elif original_handler == signal.SIG_DFL:
            # Restore default behavior
            logger.info(f"Restoring default signal handler for {signal_name}")
            signal.signal(signum, signal.SIG_DFL)
            signal.raise_signal(signum)

    @classmethod
    def _register_signal_handler(cls):
        """Register signal handlers once (called by the first instance)."""
        with cls._signal_lock:  # Lock protects _signal_handlers_installed from race conditions
            if cls._signal_handlers_installed:
                return

            signals_to_handle = [signal.SIGINT, signal.SIGTERM]
            if hasattr(signal, 'SIGHUP'):
                signals_to_handle.append(signal.SIGHUP)

            for sig in signals_to_handle:
                try:
                    original_handler = signal.signal(sig, cls._signal_handler)
                    cls._original_handlers[sig] = original_handler
                    logger.debug(f"SignalHandler: Registered signal handler for {signal.Signals(sig).name}")
                except (OSError, ValueError) as e:
                    logger.warning(f"SignalHandler: Could not register handler for signal {sig}: {e}")

            cls._signal_handlers_installed = True
            logger.info("Signal handlers registered globally")

    @classmethod
    @contextmanager
    def signal_context(cls, run_uuid: str, scenario_type: str, telemetry_ocp: KrknTelemetryOpenshift):
        """Context manager to set the signal context for the current thread."""
        cls._set_context(run_uuid, scenario_type, telemetry_ocp)
        cls._register_signal_handler()
        try:
            yield
        finally:
            # Clear context after exiting the context manager
            cls._set_context(None, None, telemetry_ocp)


signal_handler = SignalHandler()
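A sketch of the intended call pattern; not part of this diff, and the uuid, scenario type, and telemetry object are placeholders for what run_scenarios actually passes (see the abstract_scenario_plugin hunk further down):

import time
from krkn.rollback.signal import signal_handler

telemetry_ocp = None  # real callers pass a KrknTelemetryOpenshift instance

with signal_handler.signal_context("abc123", "hogs_scenarios", telemetry_ocp):
    time.sleep(5)  # scenario work; a SIGINT/SIGTERM/SIGHUP here runs the rollback files first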
55
krkn/rollback/version_template.j2
Normal file
@@ -0,0 +1,55 @@
# This file is auto-generated by krkn-lib.
# It contains the rollback callable and its arguments for the scenario plugin.

from dataclasses import dataclass
import os
import logging
from typing import Optional

from krkn_lib.utils import SafeLogger
from krkn_lib.ocp import KrknOpenshift
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

@dataclass(frozen=True)
class RollbackContent:
    resource_identifier: str
    namespace: Optional[str] = None

# Actual rollback callable
{{ rollback_callable_code }}

# Create necessary variables for execution
lib_openshift = None
lib_telemetry = None
rollback_content = {{ rollback_content }}


# Main entry point for execution
if __name__ == '__main__':
    # setup logging
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s [%(levelname)s] %(message)s",
        handlers=[
            logging.StreamHandler(),
        ]
    )

    # setup logging and get kubeconfig path
    kubeconfig_path = os.getenv("KUBECONFIG", "~/.kube/config")
    log_directory = os.path.dirname(os.path.abspath(__file__))
    os.makedirs(os.path.join(log_directory, 'logs'), exist_ok=True)
    # setup SafeLogger for telemetry
    telemetry_log_path = os.path.join(log_directory, 'logs', 'telemetry.log')
    safe_logger = SafeLogger(telemetry_log_path)
    # setup krkn-lib objects
    lib_openshift = KrknOpenshift(kubeconfig_path=kubeconfig_path)
    lib_telemetry = KrknTelemetryOpenshift(safe_logger=safe_logger, lib_openshift=lib_openshift)

    # execute
    logging.info('Executing rollback callable...')
    {{ rollback_callable_name }}(
        rollback_content,
        lib_telemetry
    )
    logging.info('Rollback completed.')
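For reference, a sketch of how the two template variables are filled in a rendered file; not part of this diff, and the values are invented:

# {{ rollback_callable_code }} expands to the de-indented function source, e.g.:
def rollback_network_policy(rollback_content, lib_telemetry):
    ...

# {{ rollback_content }} expands to the repr built by RollbackContent.__str__, e.g.:
rollback_content = RollbackContent(namespace="default", resource_identifier="krkn-deny-ab12c")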
@@ -5,9 +5,26 @@ from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift

from krkn import utils

from krkn.rollback.handler import (
    RollbackHandler,
    execute_rollback_version_files,
    cleanup_rollback_version_files
)
from krkn.rollback.signal import signal_handler
from krkn.rollback.serialization import Serializer

class AbstractScenarioPlugin(ABC):

    def __init__(self, scenario_type: str = "placeholder_scenario_type"):
        """Initializes the AbstractScenarioPlugin with the scenario type and rollback configuration.

        :param scenario_type: the scenario type defined in the config.yaml
        """
        serializer = Serializer(
            scenario_type=scenario_type,
        )
        self.rollback_handler = RollbackHandler(scenario_type, serializer)

    @abstractmethod
    def run(
        self,
@@ -74,24 +91,38 @@ class AbstractScenarioPlugin(ABC):
                    scenario_telemetry, scenario_config
                )

                try:
                    logging.info(
                        f"Running {self.__class__.__name__}: {self.get_scenario_types()} -> {scenario_config}"
                    )
                    return_value = self.run(
                        run_uuid,
                        scenario_config,
                        krkn_config,
                        telemetry,
                        scenario_telemetry,
                    )
                except Exception as e:
                    logging.error(
                        f"uncaught exception on scenario `run()` method: {e} "
                        f"please report an issue on https://github.com/krkn-chaos/krkn"
                    )
                    return_value = 1
                with signal_handler.signal_context(
                    run_uuid=run_uuid,
                    scenario_type=scenario_telemetry.scenario_type,
                    telemetry_ocp=telemetry
                ):
                    try:
                        logging.info(
                            f"Running {self.__class__.__name__}: {self.get_scenario_types()} -> {scenario_config}"
                        )
                        # pass all the parameters by kwargs to make `set_rollback_context_decorator` get the `run_uuid` and `scenario_type`
                        return_value = self.run(
                            run_uuid=run_uuid,
                            scenario=scenario_config,
                            krkn_config=krkn_config,
                            lib_telemetry=telemetry,
                            scenario_telemetry=scenario_telemetry,
                        )
                    except Exception as e:
                        logging.error(
                            f"uncaught exception on scenario `run()` method: {e} "
                            f"please report an issue on https://github.com/krkn-chaos/krkn"
                        )
                        return_value = 1

                    # execute rollback files based on the return value
                    if return_value != 0:
                        execute_rollback_version_files(
                            telemetry, run_uuid, scenario_telemetry.scenario_type
                        )
                        cleanup_rollback_version_files(
                            run_uuid, scenario_telemetry.scenario_type
                        )
                scenario_telemetry.exit_status = return_value
                scenario_telemetry.end_timestamp = time.time()
                utils.collect_and_put_ocp_logs(
@@ -118,4 +149,4 @@ class AbstractScenarioPlugin(ABC):
                time.sleep(wait_duration)
        return failed_scenarios, scenario_telemetries
@@ -7,9 +7,12 @@ from krkn_lib.utils import get_yaml_item_value, get_random_string
from jinja2 import Template
from krkn import cerberus
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
from krkn.rollback.config import RollbackContent
from krkn.rollback.handler import set_rollback_context_decorator


class ApplicationOutageScenarioPlugin(AbstractScenarioPlugin):
    @set_rollback_context_decorator
    def run(
        self,
        run_uuid: str,
@@ -57,6 +60,13 @@ class ApplicationOutageScenarioPlugin(AbstractScenarioPlugin):
                # Block the traffic by creating network policy
                logging.info("Creating the network policy")

                self.rollback_handler.set_rollback_callable(
                    self.rollback_network_policy,
                    RollbackContent(
                        namespace=namespace,
                        resource_identifier=policy_name,
                    ),
                )
                lib_telemetry.get_lib_kubernetes().create_net_policy(
                    yaml_spec, namespace
                )
@@ -89,5 +99,26 @@ class ApplicationOutageScenarioPlugin(AbstractScenarioPlugin):
        else:
            return 0

    @staticmethod
    def rollback_network_policy(
        rollback_content: RollbackContent,
        lib_telemetry: KrknTelemetryOpenshift,
    ):
        """Rollback function to delete the network policy created during the scenario.

        :param rollback_content: Rollback content containing namespace and resource_identifier.
        :param lib_telemetry: Instance of KrknTelemetryOpenshift for Kubernetes operations.
        """
        try:
            namespace = rollback_content.namespace
            policy_name = rollback_content.resource_identifier
            logging.info(
                f"Rolling back network policy: {policy_name} in namespace: {namespace}"
            )
            lib_telemetry.get_lib_kubernetes().delete_net_policy(policy_name, namespace)
            logging.info("Network policy rollback completed successfully.")
        except Exception as e:
            logging.error(f"Failed to rollback network policy: {e}")

    def get_scenario_types(self) -> list[str]:
        return ["application_outages_scenarios"]
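The ordering above matters and is not specific to this plugin: the undo callable is registered immediately before the mutating API call, so a crash or signal between the two still finds a serialized rollback file on disk. A condensed sketch of the idiom, not part of this diff (the helper and its arguments are invented):

from krkn.rollback.config import RollbackContent

def create_with_rollback(plugin, kubecli, namespace, name, yaml_spec):
    # hypothetical helper: register the undo step first, then mutate the cluster
    plugin.rollback_handler.set_rollback_callable(
        plugin.rollback_network_policy,        # @staticmethod with the two-argument signature
        RollbackContent(namespace=namespace, resource_identifier=name),
    )
    kubecli.create_net_policy(yaml_spec, namespace)  # mutating call happens only after registration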
@@ -1,10 +1,10 @@
import logging
import random
import time

from asyncio import Future
import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.k8s.pods_monitor_pool import PodsMonitorPool
from krkn_lib.k8s.pod_monitor import select_and_monitor_by_namespace_pattern_and_label
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import get_yaml_item_value
@@ -22,30 +22,26 @@ class ContainerScenarioPlugin(AbstractScenarioPlugin):
        lib_telemetry: KrknTelemetryOpenshift,
        scenario_telemetry: ScenarioTelemetry,
    ) -> int:
        pool = PodsMonitorPool(lib_telemetry.get_lib_kubernetes())
        try:
            with open(scenario, "r") as f:
                cont_scenario_config = yaml.full_load(f)

            for kill_scenario in cont_scenario_config["scenarios"]:
                self.start_monitoring(
                    kill_scenario, pool
                future_snapshot = self.start_monitoring(
                    kill_scenario,
                    lib_telemetry
                )
                killed_containers = self.container_killing_in_pod(
                self.container_killing_in_pod(
                    kill_scenario, lib_telemetry.get_lib_kubernetes()
                )
            result = pool.join()
            if result.error:
                logging.error(
                logging.error(
                    f"ContainerScenarioPlugin pods failed to recovery: {result.error}"
                )
                )
                return 1
            scenario_telemetry.affected_pods = result

        except (RuntimeError, Exception):
            logging.error("ContainerScenarioPlugin exiting due to Exception %s")
            snapshot = future_snapshot.result()
            result = snapshot.get_pods_status()
            scenario_telemetry.affected_pods = result
            if len(result.unrecovered) > 0:
                logging.info("ContainerScenarioPlugin failed with unrecovered containers")
                return 1
        except (RuntimeError, Exception) as e:
            logging.error("ContainerScenarioPlugin exiting due to Exception %s" % e)
            return 1
        else:
            return 0
@@ -53,17 +49,18 @@ class ContainerScenarioPlugin(AbstractScenarioPlugin):
    def get_scenario_types(self) -> list[str]:
        return ["container_scenarios"]

    def start_monitoring(self, kill_scenario: dict, pool: PodsMonitorPool):
    def start_monitoring(self, kill_scenario: dict, lib_telemetry: KrknTelemetryOpenshift) -> Future:

        namespace_pattern = f"^{kill_scenario['namespace']}$"
        label_selector = kill_scenario["label_selector"]
        recovery_time = kill_scenario["expected_recovery_time"]
        pool.select_and_monitor_by_namespace_pattern_and_label(
        future_snapshot = select_and_monitor_by_namespace_pattern_and_label(
            namespace_pattern=namespace_pattern,
            label_selector=label_selector,
            max_timeout=recovery_time,
            field_selector="status.phase=Running"
            v1_client=lib_telemetry.get_lib_kubernetes().cli
        )
        return future_snapshot

    def container_killing_in_pod(self, cont_scenario, kubecli: KrknKubernetes):
        scenario_name = get_yaml_item_value(cont_scenario, "name", "")
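A condensed sketch of the new monitoring flow, not part of this diff; the monitor function and its keyword arguments are used as in the hunk above, while the chaos step and the surrounding objects are placeholders:

from krkn_lib.k8s.pod_monitor import select_and_monitor_by_namespace_pattern_and_label

future_snapshot = select_and_monitor_by_namespace_pattern_and_label(
    namespace_pattern="^default$",
    label_selector="app=web",
    max_timeout=120,
    v1_client=lib_telemetry.get_lib_kubernetes().cli,
)
# ... kill containers here (placeholder chaos action) ...
snapshot = future_snapshot.result()   # blocks until the monitoring window closes
result = snapshot.get_pods_status()
if result.unrecovered:
    print("some pods never recovered")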
@@ -16,15 +16,23 @@ from krkn_lib.k8s import KrknKubernetes
from krkn_lib.utils import get_random_string

from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
from krkn.rollback.config import RollbackContent
from krkn.rollback.handler import set_rollback_context_decorator


class HogsScenarioPlugin(AbstractScenarioPlugin):

    @set_rollback_context_decorator
    def run(self, run_uuid: str, scenario: str, krkn_config: dict[str, any], lib_telemetry: KrknTelemetryOpenshift,
            scenario_telemetry: ScenarioTelemetry) -> int:
        try:
            with open(scenario, "r") as f:
                scenario = yaml.full_load(f)
            scenario_config = HogConfig.from_yaml_dict(scenario)

            # Get node-name if provided
            node_name = scenario.get('node-name')

            has_selector = True
            if not scenario_config.node_selector or not re.match("^.+=.*$", scenario_config.node_selector):
                if scenario_config.node_selector:
@@ -33,13 +41,19 @@ class HogsScenarioPlugin(AbstractScenarioPlugin):
            else:
                node_selector = scenario_config.node_selector

            available_nodes = lib_telemetry.get_lib_kubernetes().list_nodes(node_selector)
            if len(available_nodes) == 0:
                raise Exception("no available nodes to schedule workload")
            if node_name:
                logging.info(f"Using specific node: {node_name}")
                all_nodes = lib_telemetry.get_lib_kubernetes().list_nodes("")
                if node_name not in all_nodes:
                    raise Exception(f"Specified node {node_name} not found or not available")
                available_nodes = [node_name]
            else:
                available_nodes = lib_telemetry.get_lib_kubernetes().list_nodes(node_selector)
                if len(available_nodes) == 0:
                    raise Exception("no available nodes to schedule workload")

            if not has_selector:
                # if selector not specified picks a random node between the available
                available_nodes = [available_nodes[random.randint(0, len(available_nodes))]]
            if not has_selector:
                available_nodes = [available_nodes[random.randint(0, len(available_nodes))]]

            if scenario_config.number_of_nodes and len(available_nodes) > scenario_config.number_of_nodes:
                available_nodes = random.sample(available_nodes, scenario_config.number_of_nodes)
@@ -69,6 +83,9 @@ class HogsScenarioPlugin(AbstractScenarioPlugin):
            config.node_selector = f"kubernetes.io/hostname={node}"
            pod_name = f"{config.type.value}-hog-{get_random_string(5)}"
            node_resources_start = lib_k8s.get_node_resources_info(node)
            self.rollback_handler.set_rollback_callable(
                self.rollback_hog_pod,
                RollbackContent(
                    namespace=config.namespace,
                    resource_identifier=pod_name,
                ),
            )
            lib_k8s.deploy_hog(pod_name, config)
            start = time.time()
            # waiting 3 seconds before starting sample collection
@@ -140,3 +161,22 @@ class HogsScenarioPlugin(AbstractScenarioPlugin):
                raise exception
        except queue.Empty:
            pass

    @staticmethod
    def rollback_hog_pod(rollback_content: RollbackContent, lib_telemetry: KrknTelemetryOpenshift):
        """
        Rollback function to delete hog pod.

        :param rollback_content: Rollback content containing namespace and resource_identifier.
        :param lib_telemetry: Instance of KrknTelemetryOpenshift for Kubernetes operations
        """
        try:
            namespace = rollback_content.namespace
            pod_name = rollback_content.resource_identifier
            logging.info(
                f"Rolling back hog pod: {pod_name} in namespace: {namespace}"
            )
            lib_telemetry.get_lib_kubernetes().delete_pod(pod_name, namespace)
            logging.info("Rollback of hog pod completed successfully.")
        except Exception as e:
            logging.error(f"Failed to rollback hog pod: {e}")
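One caveat in the random-node pick above, present in both the old and new lines of that hunk: random.randint is inclusive on both ends, so available_nodes[random.randint(0, len(available_nodes))] can raise IndexError when the upper bound is drawn. A safe equivalent, not part of this diff:

import random

available_nodes = ["node-a", "node-b", "node-c"]  # illustrative
picked = [random.choice(available_nodes)]          # same intent, never out of range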
@@ -0,0 +1,404 @@
import logging
import time
from typing import Dict, Any, Optional
import random
import re
import yaml
from kubernetes.client.rest import ApiException
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import log_exception
from krkn_lib.models.k8s import AffectedPod, PodsStatus

from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin


class KubevirtVmOutageScenarioPlugin(AbstractScenarioPlugin):
    """
    A scenario plugin that injects chaos by deleting a KubeVirt Virtual Machine Instance (VMI).
    This plugin simulates a VM crash or outage scenario and supports automated or manual recovery.
    """

    def __init__(self, scenario_type: str = None):
        scenario_type = self.get_scenario_types()[0]
        super().__init__(scenario_type)
        self.k8s_client = None
        self.original_vmi = None

    # Scenario type is handled directly in execute_scenario
    def get_scenario_types(self) -> list[str]:
        return ["kubevirt_vm_outage"]

    def run(
        self,
        run_uuid: str,
        scenario: str,
        krkn_config: dict[str, any],
        lib_telemetry: KrknTelemetryOpenshift,
        scenario_telemetry: ScenarioTelemetry,
    ) -> int:
        """
        Main entry point for the plugin.
        Parses the scenario configuration and executes the chaos scenario.
        """
        try:
            with open(scenario, "r") as f:
                scenario_config = yaml.full_load(f)

            self.init_clients(lib_telemetry.get_lib_kubernetes())
            pods_status = PodsStatus()
            for config in scenario_config["scenarios"]:
                if config.get("scenario") == "kubevirt_vm_outage":
                    single_pods_status = self.execute_scenario(config, scenario_telemetry)
                    pods_status.merge(single_pods_status)

            scenario_telemetry.affected_pods = pods_status

            return 0
        except Exception as e:
            logging.error(f"KubeVirt VM Outage scenario failed: {e}")
            log_exception(e)
            return 1

    def init_clients(self, k8s_client: KrknKubernetes):
        """
        Initialize the Kubernetes client for KubeVirt operations.
        """
        self.k8s_client = k8s_client
        self.custom_object_client = k8s_client.custom_object_client
        logging.info("Successfully initialized Kubernetes client for KubeVirt operations")

    def get_vmi(self, name: str, namespace: str) -> Optional[Dict]:
        """
        Get a Virtual Machine Instance by name and namespace.

        :param name: Name of the VMI to retrieve
        :param namespace: Namespace of the VMI
        :return: The VMI object if found, None otherwise
        """
        try:
            vmi = self.custom_object_client.get_namespaced_custom_object(
                group="kubevirt.io",
                version="v1",
                namespace=namespace,
                plural="virtualmachineinstances",
                name=name
            )
            return vmi
        except ApiException as e:
            if e.status == 404:
                logging.warning(f"VMI {name} not found in namespace {namespace}")
                return None
            else:
                logging.error(f"Error getting VMI {name}: {e}")
                raise
        except Exception as e:
            logging.error(f"Unexpected error getting VMI {name}: {e}")
            raise

    def get_vmis(self, regex_name: str, namespace: str) -> list:
        """
        List Virtual Machine Instances in a namespace whose names match a regex.

        :param regex_name: Regex the VMI names are matched against
        :param namespace: Namespace to list VMIs in
        :return: List of matching VMI objects (possibly empty)
        """
        try:
            vmis = self.custom_object_client.list_namespaced_custom_object(
                group="kubevirt.io",
                version="v1",
                namespace=namespace,
                plural="virtualmachineinstances",
            )

            vmi_list = []
            for vmi in vmis.get("items"):
                vmi_name = vmi.get("metadata", {}).get("name")
                match = re.match(regex_name, vmi_name)
                if match:
                    vmi_list.append(vmi)
            return vmi_list
        except ApiException as e:
            if e.status == 404:
                logging.warning(f"VMI {regex_name} not found in namespace {namespace}")
                return []
            else:
                logging.error(f"Error getting VMI {regex_name}: {e}")
                raise
        except Exception as e:
            logging.error(f"Unexpected error getting VMI {regex_name}: {e}")
            raise
def execute_scenario(self, config: Dict[str, Any], scenario_telemetry: ScenarioTelemetry) -> int:
|
||||
"""
|
||||
Execute a KubeVirt VM outage scenario based on the provided configuration.
|
||||
|
||||
:param config: The scenario configuration
|
||||
:param scenario_telemetry: The telemetry object for recording metrics
|
||||
:return: 0 for success, 1 for failure
|
||||
"""
|
||||
self.pods_status = PodsStatus()
|
||||
try:
|
||||
params = config.get("parameters", {})
|
||||
vm_name = params.get("vm_name")
|
||||
namespace = params.get("namespace", "default")
|
||||
timeout = params.get("timeout", 60)
|
||||
kill_count = params.get("kill_count", 1)
|
||||
disable_auto_restart = params.get("disable_auto_restart", False)
|
||||
|
||||
if not vm_name:
|
||||
logging.error("vm_name parameter is required")
|
||||
return 1
|
||||
self.pods_status = PodsStatus()
|
||||
vmis_list = self.get_vmis(vm_name,namespace)
|
||||
for _ in range(kill_count):
|
||||
|
||||
rand_int = random.randint(0, len(vmis_list) - 1)
|
||||
vmi = vmis_list[rand_int]
|
||||
|
||||
logging.info(f"Starting KubeVirt VM outage scenario for VM: {vm_name} in namespace: {namespace}")
|
||||
vmi_name = vmi.get("metadata").get("name")
|
||||
if not self.validate_environment(vmi_name, namespace):
|
||||
return 1
|
||||
|
||||
vmi = self.get_vmi(vmi_name, namespace)
|
||||
self.affected_pod = AffectedPod(
|
||||
pod_name=vmi_name,
|
||||
namespace=namespace,
|
||||
)
|
||||
if not vmi:
|
||||
logging.error(f"VMI {vm_name} not found in namespace {namespace}")
|
||||
return 1
|
||||
|
||||
self.original_vmi = vmi
|
||||
logging.info(f"Captured initial state of VMI: {vm_name}")
|
||||
result = self.delete_vmi(vmi_name, namespace, disable_auto_restart)
|
||||
if result != 0:
|
||||
self.pods_status.unrecovered.append(self.affected_pod)
|
||||
continue
|
||||
|
||||
result = self.wait_for_running(vmi_name,namespace, timeout)
|
||||
if result != 0:
|
||||
self.pods_status.unrecovered.append(self.affected_pod)
|
||||
continue
|
||||
|
||||
self.affected_pod.total_recovery_time = (
|
||||
self.affected_pod.pod_readiness_time
|
||||
+ self.affected_pod.pod_rescheduling_time
|
||||
)
|
||||
|
||||
self.pods_status.recovered.append(self.affected_pod)
|
||||
logging.info(f"Successfully completed KubeVirt VM outage scenario for VM: {vm_name}")
|
||||
|
||||
return self.pods_status
|
||||
|
||||
except Exception as e:
|
||||
logging.error(f"Error executing KubeVirt VM outage scenario: {e}")
|
||||
log_exception(e)
|
||||
return self.pods_status
|
||||
|
||||
def validate_environment(self, vm_name: str, namespace: str) -> bool:
|
||||
"""
|
||||
Validate that KubeVirt is installed and the specified VM exists.
|
||||
|
||||
:param vm_name: Name of the VM to validate
|
||||
:param namespace: Namespace of the VM
|
||||
:return: True if environment is valid, False otherwise
|
||||
"""
|
||||
try:
|
||||
# Check if KubeVirt CRDs exist
|
||||
crd_list = self.custom_object_client.list_namespaced_custom_object("kubevirt.io","v1",namespace,"virtualmachines")
|
||||
kubevirt_crds = [crd for crd in crd_list.items() ]
|
||||
|
||||
if not kubevirt_crds:
|
||||
logging.error("KubeVirt CRDs not found. Ensure KubeVirt/CNV is installed in the cluster")
|
||||
return False
|
||||
|
||||
# Check if VMI exists
|
||||
vmi = self.get_vmi(vm_name, namespace)
|
||||
if not vmi:
|
||||
logging.error(f"VMI {vm_name} not found in namespace {namespace}")
|
||||
return False
|
||||
|
||||
logging.info(f"Validated environment: KubeVirt is installed and VMI {vm_name} exists")
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
logging.error(f"Error validating environment: {e}")
|
||||
return False
|
||||
|
||||
def patch_vm_spec(self, vm_name: str, namespace: str, running: bool) -> bool:
|
||||
"""
|
||||
Patch the VM spec to enable/disable auto-restart.
|
||||
|
||||
:param vm_name: Name of the VM to patch
|
||||
:param namespace: Namespace of the VM
|
||||
:param running: Whether the VM should be set to running state
|
||||
:return: True if patch was successful, False otherwise
|
||||
"""
|
||||
try:
|
||||
# Get the VM object first to get its current spec
|
||||
vm = self.custom_object_client.get_namespaced_custom_object(
|
||||
group="kubevirt.io",
|
||||
version="v1",
|
||||
namespace=namespace,
|
||||
plural="virtualmachines",
|
||||
name=vm_name
|
||||
)
|
||||
|
||||
# Update the running state
|
||||
if 'spec' not in vm:
|
||||
vm['spec'] = {}
|
||||
vm['spec']['running'] = running
|
||||
|
||||
# Apply the patch
|
||||
self.custom_object_client.patch_namespaced_custom_object(
|
||||
group="kubevirt.io",
|
||||
version="v1",
|
||||
namespace=namespace,
|
||||
plural="virtualmachines",
|
||||
name=vm_name,
|
||||
body=vm
|
||||
)
|
||||
return True
|
||||
|
||||
except ApiException as e:
|
||||
logging.error(f"Failed to patch VM {vm_name}: {e}")
|
||||
return False
|
||||
except Exception as e:
|
||||
logging.error(f"Unexpected error patching VM {vm_name}: {e}")
|
||||
return False
|
||||
|
||||
def delete_vmi(self, vm_name: str, namespace: str, disable_auto_restart: bool = False, timeout: int = 120) -> int:
|
||||
"""
|
||||
Delete a Virtual Machine Instance to simulate a VM outage.
|
||||
|
||||
:param vm_name: Name of the VMI to delete
|
||||
:param namespace: Namespace of the VMI
|
||||
:return: 0 for success, 1 for failure
|
||||
"""
|
||||
try:
|
||||
logging.info(f"Injecting chaos: Deleting VMI {vm_name} in namespace {namespace}")
|
||||
|
||||
# If auto-restart should be disabled, patch the VM spec first
|
||||
if disable_auto_restart:
|
||||
logging.info(f"Disabling auto-restart for VM {vm_name} by setting spec.running=False")
|
||||
if not self.patch_vm_spec(vm_name, namespace, running=False):
|
||||
logging.error("Failed to disable auto-restart for VM"
|
||||
" - proceeding with deletion but VM may auto-restart")
|
||||
start_creation_time = self.original_vmi.get('metadata', {}).get('creationTimestamp')
|
||||
start_time = time.time()
|
||||
try:
|
||||
self.custom_object_client.delete_namespaced_custom_object(
|
||||
group="kubevirt.io",
|
||||
version="v1",
|
||||
namespace=namespace,
|
||||
plural="virtualmachineinstances",
|
||||
name=vm_name
|
||||
)
|
||||
except ApiException as e:
|
||||
if e.status == 404:
|
||||
logging.warning(f"VMI {vm_name} not found during deletion")
|
||||
return 1
|
||||
else:
|
||||
logging.error(f"API error during VMI deletion: {e}")
|
||||
return 1
|
||||
|
||||
# Wait for the VMI to be deleted
|
||||
|
||||
while time.time() - start_time < timeout:
|
||||
deleted_vmi = self.get_vmi(vm_name, namespace)
|
||||
if deleted_vmi:
|
||||
if start_creation_time != deleted_vmi.get('metadata', {}).get('creationTimestamp'):
|
||||
logging.info(f"VMI {vm_name} successfully recreated")
|
||||
self.affected_pod.pod_rescheduling_time = time.time() - start_time
|
||||
return 0
|
||||
else:
|
||||
logging.info(f"VMI {vm_name} successfully deleted")
|
||||
time.sleep(1)
|
||||
|
||||
logging.error(f"Timed out waiting for VMI {vm_name} to be deleted")
|
||||
self.pods_status.unrecovered.append(self.affected_pod)
|
||||
return 1
|
||||
|
||||
except Exception as e:
|
||||
logging.error(f"Error deleting VMI {vm_name}: {e}")
|
||||
log_exception(e)
|
||||
self.pods_status.unrecovered.append(self.affected_pod)
|
||||
return 1
|
||||
|
||||
def wait_for_running(self, vm_name: str, namespace: str, timeout: int = 120) -> int:
|
||||
start_time = time.time()
|
||||
while time.time() - start_time < timeout:
|
||||
|
||||
# Check current state once since we've already waited for the duration
|
||||
vmi = self.get_vmi(vm_name, namespace)
|
||||
|
||||
if vmi:
|
||||
if vmi.get('status', {}).get('phase') == "Running":
|
||||
end_time = time.time()
|
||||
self.affected_pod.pod_readiness_time = end_time - start_time
|
||||
|
||||
logging.info(f"VMI {vm_name} is already running")
|
||||
return 0
|
||||
logging.info(f"VMI {vm_name} exists but is not in Running state. Current state: {vmi.get('status', {}).get('phase')}")
|
||||
else:
|
||||
logging.info(f"VMI {vm_name} not yet recreated")
|
||||
time.sleep(1)
|
||||
return 1
|
||||
|
||||
|
||||
def recover(self, vm_name: str, namespace: str, disable_auto_restart: bool = False) -> int:
|
||||
"""
|
||||
Recover a deleted VMI, either by waiting for auto-recovery or manually recreating it.
|
||||
|
||||
:param vm_name: Name of the VMI to recover
|
||||
:param namespace: Namespace of the VMI
|
||||
:param disable_auto_restart: Whether auto-restart was disabled during injection
|
||||
:return: 0 for success, 1 for failure
|
||||
"""
|
||||
try:
|
||||
logging.info(f"Attempting to recover VMI {vm_name} in namespace {namespace}")
|
||||
|
||||
if self.original_vmi:
|
||||
logging.info(f"Auto-recovery didn't occur for VMI {vm_name}. Attempting manual recreation")
|
||||
|
||||
try:
|
||||
# Clean up server-generated fields
|
||||
vmi_dict = self.original_vmi.copy()
|
||||
if 'metadata' in vmi_dict:
|
||||
metadata = vmi_dict['metadata']
|
||||
for field in ['resourceVersion', 'uid', 'creationTimestamp', 'generation']:
|
||||
if field in metadata:
|
||||
del metadata[field]
|
||||
|
||||
# Create the VMI
|
||||
self.custom_object_client.create_namespaced_custom_object(
|
||||
group="kubevirt.io",
|
||||
version="v1",
|
||||
namespace=namespace,
|
||||
plural="virtualmachineinstances",
|
||||
body=vmi_dict
|
||||
)
|
||||
logging.info(f"Successfully recreated VMI {vm_name}")
|
||||
|
||||
# Wait for VMI to start running
|
||||
self.wait_for_running(vm_name,namespace)
|
||||
|
||||
logging.warning(f"VMI {vm_name} was recreated but didn't reach Running state in time")
|
||||
return 0 # Still consider it a success as the VMI was recreated
|
||||
|
||||
except Exception as e:
|
||||
logging.error(f"Error recreating VMI {vm_name}: {e}")
|
||||
log_exception(e)
|
||||
return 1
|
||||
else:
|
||||
logging.error(f"Failed to recover VMI {vm_name}: No original state captured and auto-recovery did not occur")
|
||||
return 1
|
||||
|
||||
except Exception as e:
|
||||
logging.error(f"Unexpected error recovering VMI {vm_name}: {e}")
|
||||
log_exception(e)
|
||||
return 1
|
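To make the new plugin's configuration surface concrete, the sketch below shows a scenario document shaped the way run() and execute_scenario() consume it. All values are illustrative; note that vm_name is treated as a regular expression and checked with re.match, so it is anchored at the start of the VMI name.

# Illustrative kubevirt_vm_outage scenario config; values are examples only.
scenario_config = {
    "scenarios": [
        {
            "scenario": "kubevirt_vm_outage",
            "parameters": {
                "vm_name": "test-vm-.*",        # regex, matched with re.match
                "namespace": "default",         # falls back to "default"
                "timeout": 60,                  # seconds to wait for recovery
                "kill_count": 1,                # number of VMIs to delete
                "disable_auto_restart": False,  # patch spec.running=False first
            },
        }
    ]
}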
@@ -1,6 +1,5 @@
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
from krkn.scenario_plugins.native.plugins import PLUGINS
from krkn_lib.k8s.pods_monitor_pool import PodsMonitorPool
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from typing import Any

@@ -28,7 +27,6 @@ class NativeScenarioPlugin(AbstractScenarioPlugin):

        except Exception as e:
            logging.error("NativeScenarioPlugin exiting due to Exception %s" % e)
            pool.cancel()
            return 1
        else:
            return 0
@@ -5,6 +5,7 @@ import time
 import sys
 import os
 import re
+import random
 from traceback import format_exc
 from jinja2 import Environment, FileSystemLoader
 from . import kubernetes_functions as kube_helper

@@ -28,6 +29,14 @@ class NetworkScenarioConfig:
        },
    )

+    image: typing.Annotated[str, validation.min(1)] = field(
+        default="quay.io/krkn-chaos/krkn:tools",
+        metadata={
+            "name": "Image",
+            "description": "Image of krkn tools to run"
+        }
+    )
+
    label_selector: typing.Annotated[
        typing.Optional[str], validation.required_if_not("node_interface_name")
    ] = field(

@@ -142,7 +151,7 @@ class NetworkScenarioErrorOutput:
    )


-def get_default_interface(node: str, pod_template, cli: CoreV1Api) -> str:
+def get_default_interface(node: str, pod_template, cli: CoreV1Api, image: str) -> str:
    """
    Function that returns a random interface from a node

@@ -160,14 +169,14 @@ def get_default_interface(node: str, pod_template, cli: CoreV1Api) -> str:
    Returns:
        Default interface (string) belonging to the node
    """

-    pod_body = yaml.safe_load(pod_template.render(nodename=node))
+    pod_name_regex = str(random.randint(0, 10000))
+    pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
    logging.info("Creating pod to query interface on node %s" % node)
    kube_helper.create_pod(cli, pod_body, "default", 300)

+    pod_name = f"fedtools-{pod_name_regex}"
    try:
        cmd = ["ip", "r"]
-        output = kube_helper.exec_cmd_in_pod(cli, cmd, "fedtools", "default")
+        output = kube_helper.exec_cmd_in_pod(cli, cmd, pod_name, "default")

        if not output:
            logging.error("Exception occurred while executing command in pod")

@@ -183,13 +192,13 @@ def get_default_interface(node: str, pod_template, cli: CoreV1Api) -> str:

    finally:
        logging.info("Deleting pod to query interface on node")
-        kube_helper.delete_pod(cli, "fedtools", "default")
+        kube_helper.delete_pod(cli, pod_name, "default")

    return interfaces


def verify_interface(
-    input_interface_list: typing.List[str], node: str, pod_template, cli: CoreV1Api
+    input_interface_list: typing.List[str], node: str, pod_template, cli: CoreV1Api, image: str
) -> typing.List[str]:
    """
    Function that verifies whether a list of interfaces is present in the node.

@@ -212,13 +221,15 @@ def verify_interface(
    Returns:
        The interface list for the node
    """
-    pod_body = yaml.safe_load(pod_template.render(nodename=node))
+    pod_name_regex = str(random.randint(0, 10000))
+    pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
    logging.info("Creating pod to query interface on node %s" % node)
    kube_helper.create_pod(cli, pod_body, "default", 300)
+    pod_name = f"fedtools-{pod_name_regex}"
    try:
        if input_interface_list == []:
            cmd = ["ip", "r"]
-            output = kube_helper.exec_cmd_in_pod(cli, cmd, "fedtools", "default")
+            output = kube_helper.exec_cmd_in_pod(cli, cmd, pod_name, "default")

            if not output:
                logging.error("Exception occurred while executing command in pod")

@@ -234,7 +245,7 @@ def verify_interface(

        else:
            cmd = ["ip", "-br", "addr", "show"]
-            output = kube_helper.exec_cmd_in_pod(cli, cmd, "fedtools", "default")
+            output = kube_helper.exec_cmd_in_pod(cli, cmd, pod_name, "default")

            if not output:
                logging.error("Exception occurred while executing command in pod")

@@ -257,7 +268,7 @@ def verify_interface(
        )
    finally:
        logging.info("Deleting pod to query interface on node")
-        kube_helper.delete_pod(cli, "fedtools", "default")
+        kube_helper.delete_pod(cli, pod_name, "default")

    return input_interface_list

@@ -268,6 +279,7 @@ def get_node_interfaces(
    instance_count: int,
    pod_template,
    cli: CoreV1Api,
+    image: str
) -> typing.Dict[str, typing.List[str]]:
    """
    Function that is used to process the input dictionary with the nodes and

@@ -309,7 +321,7 @@ def get_node_interfaces(
        nodes = kube_helper.get_node(None, label_selector, instance_count, cli)
        node_interface_dict = {}
        for node in nodes:
-            node_interface_dict[node] = get_default_interface(node, pod_template, cli)
+            node_interface_dict[node] = get_default_interface(node, pod_template, cli, image)
    else:
        node_name_list = node_interface_dict.keys()
        filtered_node_list = []

@@ -321,7 +333,7 @@ def get_node_interfaces(

    for node in filtered_node_list:
        node_interface_dict[node] = verify_interface(
-            node_interface_dict[node], node, pod_template, cli
+            node_interface_dict[node], node, pod_template, cli, image
        )

    return node_interface_dict

@@ -337,6 +349,7 @@ def apply_ingress_filter(
    cli: CoreV1Api,
    create_interfaces: bool = True,
    param_selector: str = "all",
+    image: str = "quay.io/krkn-chaos/krkn:tools",
) -> str:
    """
    Function that applies the filters to shape incoming traffic to

@@ -382,14 +395,14 @@ def apply_ingress_filter(
    network_params = {param_selector: cfg.network_params[param_selector]}

    if create_interfaces:
-        create_virtual_interfaces(cli, interface_list, node, pod_template)
+        create_virtual_interfaces(cli, interface_list, node, pod_template, image)

    exec_cmd = get_ingress_cmd(
        interface_list, network_params, duration=cfg.test_duration
    )
    logging.info("Executing %s on node %s" % (exec_cmd, node))
    job_body = yaml.safe_load(
-        job_template.render(jobname=str(hash(node))[:5], nodename=node, cmd=exec_cmd)
+        job_template.render(jobname=str(hash(node))[:5], nodename=node, image=image, cmd=exec_cmd)
    )
    api_response = kube_helper.create_job(batch_cli, job_body)

@@ -400,7 +413,7 @@ def apply_ingress_filter(


def create_virtual_interfaces(
-    cli: CoreV1Api, interface_list: typing.List[str], node: str, pod_template
+    cli: CoreV1Api, interface_list: typing.List[str], node: str, pod_template, image: str
) -> None:
    """
    Function that creates a privileged pod and uses it to create

@@ -421,20 +434,22 @@ def create_virtual_interfaces(
        - The YAML template used to instantiate a pod to create
          virtual interfaces on the node
    """
-    pod_body = yaml.safe_load(pod_template.render(nodename=node))
+    pod_name_regex = str(random.randint(0, 10000))
+    pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
    kube_helper.create_pod(cli, pod_body, "default", 300)
    logging.info(
        "Creating {0} virtual interfaces on node {1} using a pod".format(
            len(interface_list), node
        )
    )
-    create_ifb(cli, len(interface_list), "modtools")
+    pod_name = f"modtools-{pod_name_regex}"
+    create_ifb(cli, len(interface_list), pod_name)
    logging.info("Deleting pod used to create virtual interfaces")
-    kube_helper.delete_pod(cli, "modtools", "default")
+    kube_helper.delete_pod(cli, pod_name, "default")


def delete_virtual_interfaces(
-    cli: CoreV1Api, node_list: typing.List[str], pod_template
+    cli: CoreV1Api, node_list: typing.List[str], pod_template, image: str
):
    """
    Function that creates a privileged pod and uses it to delete all

@@ -457,11 +472,13 @@ def delete_virtual_interfaces(
    """

    for node in node_list:
-        pod_body = yaml.safe_load(pod_template.render(nodename=node))
+        pod_name_regex = str(random.randint(0, 10000))
+        pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
        kube_helper.create_pod(cli, pod_body, "default", 300)
        logging.info("Deleting all virtual interfaces on node {0}".format(node))
-        delete_ifb(cli, "modtools")
-        kube_helper.delete_pod(cli, "modtools", "default")
+        pod_name = f"modtools-{pod_name_regex}"
+        delete_ifb(cli, pod_name)
+        kube_helper.delete_pod(cli, pod_name, "default")


def create_ifb(cli: CoreV1Api, number: int, pod_name: str):

@@ -700,7 +717,7 @@ def network_chaos(
    pod_interface_template = env.get_template("pod_interface.j2")
    pod_module_template = env.get_template("pod_module.j2")
    cli, batch_cli = kube_helper.setup_kubernetes(cfg.kubeconfig_path)
+    test_image = cfg.image
    logging.info("Starting Ingress Network Chaos")
    try:
        node_interface_dict = get_node_interfaces(

@@ -709,6 +726,7 @@ def network_chaos(
            cfg.instance_count,
            pod_interface_template,
            cli,
+            test_image
        )
    except Exception:
        return "error", NetworkScenarioErrorOutput(format_exc())

@@ -726,6 +744,7 @@ def network_chaos(
                        job_template,
                        batch_cli,
                        cli,
+                        test_image
                    )
                )
            logging.info("Waiting for parallel job to finish")

@@ -746,6 +765,7 @@ def network_chaos(
                            cli,
                            create_interfaces=create_interfaces,
                            param_selector=param,
+                            image=test_image
                        )
                    )
                logging.info("Waiting for serial job to finish")

@@ -772,6 +792,6 @@ def network_chaos(
        logging.error("Ingress Network Chaos exiting due to Exception - %s" % e)
        return "error", NetworkScenarioErrorOutput(format_exc())
    finally:
-        delete_virtual_interfaces(cli, node_interface_dict.keys(), pod_module_template)
+        delete_virtual_interfaces(cli, node_interface_dict.keys(), pod_module_template, test_image)
        logging.info("Deleting jobs(if any)")
        delete_jobs(cli, batch_cli, job_list[:])
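The per-run pod names above come from rendering the Jinja templates with a random regex_name suffix and a configurable image, which avoids name collisions between concurrent runs. A small, runnable sketch of that rendering step, using a trimmed-down stand-in for pod_interface.j2:

# Sketch of the template rendering used above; the inline template string is
# a trimmed stand-in for the real pod_interface.j2 file.
import random
from jinja2 import Template

pod_template = Template(
    "apiVersion: v1\n"
    "kind: Pod\n"
    "metadata:\n"
    "  name: fedtools-{{regex_name}}\n"
    "spec:\n"
    "  nodeName: {{nodename}}\n"
    "  containers:\n"
    "  - name: fedtools\n"
    "    image: {{image}}\n"
)

pod_name_regex = str(random.randint(0, 10000))
manifest = pod_template.render(
    regex_name=pod_name_regex,
    nodename="worker-0",
    image="quay.io/krkn-chaos/krkn:tools",
)
pod_name = f"fedtools-{pod_name_regex}"  # the name later passed to exec/delete
print(manifest)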
@@ -9,7 +9,7 @@ spec:
  hostNetwork: true
  containers:
  - name: networkchaos
-    image: docker.io/fedora/tools
+    image: {{image}}
    command: ["/bin/sh", "-c", "{{cmd}}"]
    securityContext:
      privileged: true

@@ -22,4 +22,4 @@ spec:
    hostPath:
      path: /lib/modules
  restartPolicy: Never
  backoffLimit: 0
@@ -1,13 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
-  name: fedtools
+  name: fedtools-{{regex_name}}
spec:
  hostNetwork: true
  nodeName: {{nodename}}
  containers:
  - name: fedtools
-    image: docker.io/fedora/tools
+    image: {{image}}
    command:
    - /bin/sh
    - -c
@@ -1,12 +1,12 @@
apiVersion: v1
kind: Pod
metadata:
-  name: modtools
+  name: modtools-{{regex_name}}
spec:
  nodeName: {{nodename}}
  containers:
  - name: modtools
-    image: docker.io/fedora/tools
+    image: {{image}}
    imagePullPolicy: IfNotPresent
    command:
    - /bin/sh

@@ -27,4 +27,4 @@ spec:
  hostNetwork: true
  hostIPC: true
  hostPID: true
  restartPolicy: Never
@@ -9,7 +9,7 @@ spec:
  hostNetwork: true
  containers:
  - name: networkchaos
-    image: docker.io/fedora/tools
+    image: {{image}}
    command: ["chroot", "/host", "/bin/sh", "-c", "{{cmd}}"]
    securityContext:
      privileged: true
@@ -23,8 +23,7 @@ def create_job(batch_cli, body, namespace="default"):
    """

    try:
-        api_response = batch_cli.create_namespaced_job(
-            body=body, namespace=namespace)
+        api_response = batch_cli.create_namespaced_job(body=body, namespace=namespace)
        return api_response
    except ApiException as api:
        logging.warning(

@@ -71,7 +70,8 @@ def create_pod(cli, body, namespace, timeout=120):
        end_time = time.time() + timeout
        while True:
            pod_stat = cli.read_namespaced_pod(
-                name=body["metadata"]["name"], namespace=namespace)
+                name=body["metadata"]["name"], namespace=namespace
+            )
            if pod_stat.status.phase == "Running":
                break
            if time.time() > end_time:

@@ -121,16 +121,18 @@ def exec_cmd_in_pod(cli, command, pod_name, namespace, container=None):
    return ret


-def list_pods(cli, namespace, label_selector=None):
+def list_pods(cli, namespace, label_selector=None, exclude_label=None):
    """
-    Function used to list pods in a given namespace and having a certain label
+    Function used to list pods in a given namespace and having a certain label
+    and excluding pods with exclude_label
    """

    pods = []
    try:
        if label_selector:
            ret = cli.list_namespaced_pod(
-                namespace, pretty=True, label_selector=label_selector)
+                namespace, pretty=True, label_selector=label_selector
+            )
        else:
            ret = cli.list_namespaced_pod(namespace, pretty=True)
    except ApiException as e:

@@ -140,7 +142,16 @@ def list_pods(cli, namespace, label_selector=None):
            % e
        )
        raise e

    for pod in ret.items:
+        # Skip pods with the exclude label if specified
+        if exclude_label and pod.metadata.labels:
+            exclude_key, exclude_value = exclude_label.split("=", 1)
+            if (
+                exclude_key in pod.metadata.labels
+                and pod.metadata.labels[exclude_key] == exclude_value
+            ):
+                continue
        pods.append(pod.metadata.name)

    return pods

@@ -152,8 +163,7 @@ def get_job_status(batch_cli, name, namespace="default"):
    """

    try:
-        return batch_cli.read_namespaced_job_status(
-            name=name, namespace=namespace)
+        return batch_cli.read_namespaced_job_status(name=name, namespace=namespace)
    except Exception as e:
        logging.error(
            "Exception when calling \

@@ -169,7 +179,10 @@ def get_pod_log(cli, name, namespace="default"):
    """

    return cli.read_namespaced_pod_log(
-        name=name, namespace=namespace, _return_http_data_only=True, _preload_content=False
+        name=name,
+        namespace=namespace,
+        _return_http_data_only=True,
+        _preload_content=False,
    )


@@ -191,7 +204,8 @@ def delete_job(batch_cli, name, namespace="default"):
            name=name,
            namespace=namespace,
            body=client.V1DeleteOptions(
-                propagation_policy="Foreground", grace_period_seconds=0),
+                propagation_policy="Foreground", grace_period_seconds=0
+            ),
        )
        logging.debug("Job deleted. status='%s'" % str(api_response.status))
        return api_response

@@ -247,11 +261,8 @@ def get_node(node_name, label_selector, instance_kill_count, cli):
    )
    nodes = list_ready_nodes(cli, label_selector)
    if not nodes:
-        raise Exception(
-            "Ready nodes with the provided label selector do not exist")
-    logging.info(
-        "Ready nodes with the label selector %s: %s" % (label_selector, nodes)
-    )
+        raise Exception("Ready nodes with the provided label selector do not exist")
+    logging.info("Ready nodes with the label selector %s: %s" % (label_selector, nodes))
    number_of_nodes = len(nodes)
    if instance_kill_count == number_of_nodes:
        return nodes
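The exclude_label check above splits the selector once on "=" and compares the value against the pod's labels, so values containing "=" still work. A standalone sketch of the same filtering logic, on fabricated pod data (the krkn.dev/exclude label is just an example):

# Standalone sketch of the exclude_label semantics added to list_pods above.
def is_excluded(labels: dict, exclude_label: str) -> bool:
    if not exclude_label or not labels:
        return False
    key, value = exclude_label.split("=", 1)
    return labels.get(key) == value


pods = [
    {"name": "app-1", "labels": {"app": "web"}},
    {"name": "canary-1", "labels": {"app": "web", "krkn.dev/exclude": "true"}},
]
survivors = [p["name"] for p in pods if not is_excluded(p["labels"], "krkn.dev/exclude=true")]
print(survivors)  # ['app-1']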
@@ -1,12 +1,12 @@
apiVersion: v1
kind: Pod
metadata:
-  name: modtools
+  name: modtools-{{regex_name}}
spec:
  nodeName: {{nodename}}
  containers:
  - name: modtools
-    image: docker.io/fedora/tools
+    image: {{image}}
    imagePullPolicy: IfNotPresent
    command:
    - /bin/sh

@@ -27,4 +27,4 @@ spec:
  hostNetwork: true
  hostIPC: true
  hostPID: true
  restartPolicy: Never
@@ -19,7 +19,11 @@ from . import cerberus


def get_test_pods(
-    pod_name: str, pod_label: str, namespace: str, kubecli: KrknKubernetes
+    pod_name: str,
+    pod_label: str,
+    namespace: str,
+    kubecli: KrknKubernetes,
+    exclude_label: str = None,
) -> typing.List[str]:
    """
    Function that returns a list of pods to apply network policy

@@ -38,11 +42,16 @@ def get_test_pods(
        kubecli (KrknKubernetes)
            - Object to interact with Kubernetes Python client

+        exclude_label (string)
+            - pods matching this label will be excluded from the outage
+
    Returns:
        pod names (string) in the namespace
    """
-    pods_list = []
-    pods_list = kubecli.list_pods(label_selector=pod_label, namespace=namespace)
+    pods_list = kubecli.list_pods(
+        label_selector=pod_label, namespace=namespace, exclude_label=exclude_label
+    )
    if pod_name and pod_name not in pods_list:
        raise Exception("pod name not found in namespace ")
    elif pod_name and pod_name in pods_list:

@@ -192,6 +201,7 @@ def apply_outage_policy(
    duration: str,
    bridge_name: str,
    kubecli: KrknKubernetes,
+    image: str
) -> typing.List[str]:
    """
    Function that applies filters(ingress or egress) to block traffic.

@@ -223,6 +233,12 @@ def apply_outage_policy(
        batch_cli (BatchV1Api)
            - Object to interact with Kubernetes Python client's BatchV1Api API

+        image (string)
+            - Image of network chaos tool
+
    Returns:
        The name of the job created that executes the commands on a node
        for ingress chaos scenario

@@ -239,7 +255,7 @@ def apply_outage_policy(
    br = "br-int"
    table = 8
    for node, ips in node_dict.items():
-        while len(check_cookie(node, pod_template, br, cookie, kubecli)) > 2 or cookie in cookie_list:
+        while len(check_cookie(node, pod_template, br, cookie, kubecli, image)) > 2 or cookie in cookie_list:
            cookie = random.randint(100, 10000)
        exec_cmd = ""
        for ip in ips:

@@ -257,6 +273,7 @@ def apply_outage_policy(
            job_template.render(
                jobname=str(hash(node))[:5] + str(random.randint(0, 10000)),
                nodename=node,
+                image=image,
                cmd=exec_cmd,
            )
        )

@@ -281,6 +298,7 @@ def apply_ingress_policy(
    bridge_name: str,
    kubecli: KrknKubernetes,
    test_execution: str,
+    image: str,
) -> typing.List[str]:
    """
    Function that applies ingress traffic shaping to pod interface.

@@ -319,6 +337,9 @@ def apply_ingress_policy(
        test_execution (String)
            - The order in which the filters are applied

+        image (string)
+            - Image of network chaos tool
+
    Returns:
        The name of the job created that executes the traffic shaping
        filter

@@ -327,22 +348,23 @@ def apply_ingress_policy(
    job_list = []
    yml_list = []

-    create_virtual_interfaces(kubecli, len(ips), node, pod_template)
+    create_virtual_interfaces(kubecli, len(ips), node, pod_template, image)

    for count, pod_ip in enumerate(set(ips)):
-        pod_inf = get_pod_interface(node, pod_ip, pod_template, bridge_name, kubecli)
+        pod_inf = get_pod_interface(node, pod_ip, pod_template, bridge_name, kubecli, image)
        exec_cmd = get_ingress_cmd(
            test_execution, pod_inf, mod, count, network_params, duration
        )
        logging.info("Executing %s on pod %s in node %s" % (exec_cmd, pod_ip, node))
        job_body = yaml.safe_load(
-            job_template.render(jobname=mod + str(pod_ip), nodename=node, cmd=exec_cmd)
+            job_template.render(jobname=mod + str(pod_ip), nodename=node, image=image, cmd=exec_cmd)
        )
        yml_list.append(job_body)
        if pod_ip == node:
            break

    for job_body in yml_list:
+        logging.debug("job body: %s" % job_body)
        api_response = kubecli.create_job(job_body)
        if api_response is None:
            raise Exception("Error creating job")

@@ -362,6 +384,7 @@ def apply_net_policy(
    bridge_name: str,
    kubecli: KrknKubernetes,
    test_execution: str,
+    image: str,
) -> typing.List[str]:
    """
    Function that applies egress traffic shaping to pod interface.

@@ -400,6 +423,9 @@ def apply_net_policy(
        test_execution (String)
            - The order in which the filters are applied

+        image (string)
+            - Image of network chaos tool
+
    Returns:
        The name of the job created that executes the traffic shaping
        filter

@@ -415,7 +441,7 @@ def apply_net_policy(
    )
    logging.info("Executing %s on pod %s in node %s" % (exec_cmd, pod_ip, node))
    job_body = yaml.safe_load(
-        job_template.render(jobname=mod + str(pod_ip), nodename=node, cmd=exec_cmd)
+        job_template.render(jobname=mod + str(pod_ip), nodename=node, image=image, cmd=exec_cmd)
    )
    yml_list.append(job_body)

@@ -530,7 +562,7 @@ def get_egress_cmd(


def create_virtual_interfaces(
-    kubecli: KrknKubernetes, nummber: int, node: str, pod_template
+    kubecli: KrknKubernetes, number: int, node: str, pod_template, image: str,
) -> None:
    """
    Function that creates a privileged pod and uses it to create

@@ -550,19 +582,24 @@ def create_virtual_interfaces(
    pod_template (jinja2.environment.Template))
        - The YAML template used to instantiate a pod to create
          virtual interfaces on the node
+
+    image (string)
+        - Image of network chaos tool
    """
-    pod_body = yaml.safe_load(pod_template.render(nodename=node))
+    pod_name_regex = str(random.randint(0, 10000))
+    pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
    kubecli.create_pod(pod_body, "default", 300)
+    pod_name = f"modtools-{pod_name_regex}"
    logging.info(
-        "Creating {0} virtual interfaces on node {1} using a pod".format(nummber, node)
+        "Creating {0} virtual interfaces on node {1} using a pod".format(number, node)
    )
-    create_ifb(kubecli, nummber, "modtools")
+    create_ifb(kubecli, number, pod_name)
    logging.info("Deleting pod used to create virtual interfaces")
-    kubecli.delete_pod("modtools", "default")
+    kubecli.delete_pod(pod_name, "default")


def delete_virtual_interfaces(
-    kubecli: KrknKubernetes, node_list: typing.List[str], pod_template
+    kubecli: KrknKubernetes, node_list: typing.List[str], pod_template, image: str,
):
    """
    Function that creates a privileged pod and uses it to delete all

@@ -582,14 +619,18 @@ def delete_virtual_interfaces(
    pod_template (jinja2.environment.Template))
        - The YAML template used to instantiate a pod to delete
          virtual interfaces on the node
+
+    image (string)
+        - Image of network chaos tool
    """

    for node in node_list:
-        pod_body = yaml.safe_load(pod_template.render(nodename=node))
+        pod_name_regex = str(random.randint(0, 10000))
+        pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
        kubecli.create_pod(pod_body, "default", 300)
        logging.info("Deleting all virtual interfaces on node {0}".format(node))
-        delete_ifb(kubecli, "modtools")
-        kubecli.delete_pod("modtools", "default")
+        delete_ifb(kubecli, "modtools-" + pod_name_regex)
+        kubecli.delete_pod("modtools-" + pod_name_regex, "default")


def create_ifb(kubecli: KrknKubernetes, number: int, pod_name: str):

@@ -619,7 +660,7 @@ def delete_ifb(kubecli: KrknKubernetes, pod_name: str):
    kubecli.exec_cmd_in_pod(exec_command, pod_name, "default", base_command="chroot")


-def list_bridges(node: str, pod_template, kubecli: KrknKubernetes) -> typing.List[str]:
+def list_bridges(node: str, pod_template, kubecli: KrknKubernetes, image: str) -> typing.List[str]:
    """
    Function that returns a list of bridges on the node

@@ -634,18 +675,24 @@ def list_bridges(node: str, pod_template, kubecli: KrknKubernetes) -> typing.Lis
    kubecli (KrknKubernetes)
        - Object to interact with Kubernetes Python client

+    image (string)
+        - Image of network chaos tool
+
    Returns:
        List of bridges on the node.
    """

-    pod_body = yaml.safe_load(pod_template.render(nodename=node))
+    pod_name_regex = str(random.randint(0, 10000))
+    pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
    logging.info("Creating pod to query bridge on node %s" % node)
    kubecli.create_pod(pod_body, "default", 300)

+    pod_name = f"modtools-{pod_name_regex}"
    try:
        cmd = ["/host", "ovs-vsctl", "list-br"]
        output = kubecli.exec_cmd_in_pod(
-            cmd, "modtools", "default", base_command="chroot"
+            cmd, pod_name, "default", base_command="chroot"
        )

        if not output:

@@ -656,13 +703,13 @@ def list_bridges(node: str, pod_template, kubecli: KrknKubernetes, image: str) -> typing.List[str]:

    finally:
        logging.info("Deleting pod to query interface on node")
-        kubecli.delete_pod("modtools", "default")
+        kubecli.delete_pod(pod_name, "default")

    return bridges


def check_cookie(
-    node: str, pod_template, br_name, cookie, kubecli: KrknKubernetes
+    node: str, pod_template, br_name, cookie, kubecli: KrknKubernetes, image: str
) -> str:
    """
    Function to check for matching flow rules

@@ -684,14 +731,16 @@ def check_cookie(
    cli (CoreV1Api)
        - Object to interact with Kubernetes Python client's CoreV1 API

+    image (string)
+        - Image of network chaos tool
+
    Returns
        Returns the matching flow rules
    """

-    pod_body = yaml.safe_load(pod_template.render(nodename=node))
+    pod_name_regex = str(random.randint(0, 10000))
+    pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
    logging.info("Creating pod to query duplicate rules on node %s" % node)
    kubecli.create_pod(pod_body, "default", 300)

+    pod_name = f"modtools-{pod_name_regex}"
    try:
        cmd = [
            "chroot",

@@ -704,7 +753,7 @@ def check_cookie(
            f"cookie={cookie}/-1",
        ]
        output = kubecli.exec_cmd_in_pod(
-            cmd, "modtools", "default", base_command="chroot"
+            cmd, pod_name, "default", base_command="chroot"
        )

        if not output:

@@ -715,13 +764,13 @@ def check_cookie(

    finally:
        logging.info("Deleting pod to query interface on node")
-        kubecli.delete_pod("modtools", "default")
+        kubecli.delete_pod(pod_name, "default")

    return flow_list


def get_pod_interface(
-    node: str, ip: str, pod_template, br_name, kubecli: KrknKubernetes
+    node: str, ip: str, pod_template, br_name, kubecli: KrknKubernetes, image: str = "quay.io/krkn-chaos/krkn:tools"
) -> str:
    """
    Function to query the pod interface on a node

@@ -746,12 +795,12 @@ def get_pod_interface(
    Returns
        Returns the pod interface name
    """

-    pod_body = yaml.safe_load(pod_template.render(nodename=node))
+    pod_name_regex = str(random.randint(0, 10000))
+    pod_body = yaml.safe_load(pod_template.render(regex_name=pod_name_regex, nodename=node, image=image))
    logging.info("Creating pod to query pod interface on node %s" % node)
    kubecli.create_pod(pod_body, "default", 300)
    inf = ""

+    pod_name = f"modtools-{pod_name_regex}"
    try:
        if br_name == "br-int":
            find_ip = f"external-ids:ip_addresses={ip}/23"

@@ -769,12 +818,12 @@ def get_pod_interface(
        ]

        output = kubecli.exec_cmd_in_pod(
-            cmd, "modtools", "default", base_command="chroot"
+            cmd, pod_name, "default", base_command="chroot"
        )
        if not output:
            cmd = ["/host", "ip", "addr", "show"]
            output = kubecli.exec_cmd_in_pod(
-                cmd, "modtools", "default", base_command="chroot"
+                cmd, pod_name, "default", base_command="chroot"
            )
        for if_str in output.split("\n"):
            if re.search(ip, if_str):

@@ -783,12 +832,13 @@ def get_pod_interface(
        inf = output
    finally:
        logging.info("Deleting pod to query interface on node")
-        kubecli.delete_pod("modtools", "default")
+        kubecli.delete_pod(pod_name, "default")
    return inf


def check_bridge_interface(
-    node_name: str, pod_template, bridge_name: str, kubecli: KrknKubernetes
+    node_name: str, pod_template, bridge_name: str, kubecli: KrknKubernetes,
+    image: str = "quay.io/krkn-chaos/krkn:tools"
) -> bool:
    """
    Function is used to check if the required OVS or OVN bridge is found in

@@ -808,13 +858,16 @@ def check_bridge_interface(
    kubecli (KrknKubernetes)
        - Object to interact with Kubernetes Python client

+    image (string)
+        - Image of network chaos tool
+
    Returns:
        Returns True if the bridge is found in the node.
    """
    nodes = kubecli.get_node(node_name, None, 1)
    node_bridge = []
    for node in nodes:
-        node_bridge = list_bridges(node, pod_template, kubecli)
+        node_bridge = list_bridges(node, pod_template, kubecli, image=image)
    if bridge_name not in node_bridge:
        raise Exception(f"OVS bridge {bridge_name} not found on the node ")

@@ -835,6 +888,14 @@ class InputParams:
        }
    )

+    image: typing.Annotated[str, validation.min(1)] = field(
+        default="quay.io/krkn-chaos/krkn:tools",
+        metadata={
+            "name": "Image",
+            "description": "Image of krkn tools to run"
+        }
+    )
+
    direction: typing.List[str] = field(
        default_factory=lambda: ["ingress", "egress"],
        metadata={

@@ -893,6 +954,15 @@ class InputParams:
        },
    )

+    exclude_label: typing.Optional[str] = field(
+        default=None,
+        metadata={
+            "name": "Exclude label",
+            "description": "Kubernetes label selector for pods to exclude from the chaos. "
+            "Pods matching this label will be excluded even if they match the label_selector",
+        },
+    )
+
    kraken_config: typing.Dict[str, typing.Any] = field(
        default=None,
        metadata={

@@ -1004,6 +1074,7 @@ def pod_outage(
    test_namespace = params.namespace
    test_label_selector = params.label_selector
    test_pod_name = params.pod_name
+    test_image = params.image
    filter_dict = {}
    job_list = []
    publish = False

@@ -1025,7 +1096,11 @@ def pod_outage(

    br_name = get_bridge_name(api_ext, custom_obj)
    pods_list = get_test_pods(
-        test_pod_name, test_label_selector, test_namespace, kubecli
+        test_pod_name,
+        test_label_selector,
+        test_namespace,
+        kubecli,
+        params.exclude_label,
    )

    while not len(pods_list) <= params.instance_count:

@@ -1040,7 +1115,7 @@ def pod_outage(
        label_set.add("%s=%s" % (key, value))

    check_bridge_interface(
-        list(node_dict.keys())[0], pod_module_template, br_name, kubecli
+        list(node_dict.keys())[0], pod_module_template, br_name, kubecli, test_image
    )

    for direction, ports in filter_dict.items():

@@ -1055,6 +1130,7 @@ def pod_outage(
                params.test_duration,
                br_name,
                kubecli,
+                test_image
            )
        )

@@ -1095,7 +1171,16 @@ class EgressParams:
        }
    )

+    image: typing.Annotated[str, validation.min(1)] = field(
+        default="quay.io/krkn-chaos/krkn:tools",
+        metadata={
+            "name": "Image",
+            "description": "Image of krkn tools to run"
+        }
+    )
+
    network_params: typing.Dict[str, str] = field(
        default=None,
        metadata={
            "name": "Network Parameters",
            "description": "The network filters that are applied on the interface. "

@@ -1136,6 +1221,15 @@ class EgressParams:
        },
    )

+    exclude_label: typing.Optional[str] = field(
+        default=None,
+        metadata={
+            "name": "Exclude label",
+            "description": "Kubernetes label selector for pods to exclude from the chaos. "
+            "Pods matching this label will be excluded even if they match the label_selector",
+        },
+    )
+
    kraken_config: typing.Dict[str, typing.Any] = field(
        default=None,
        metadata={

@@ -1254,6 +1348,7 @@ def pod_egress_shaping(
    test_namespace = params.namespace
    test_label_selector = params.label_selector
    test_pod_name = params.pod_name
+    test_image = params.image
    job_list = []
    publish = False

@@ -1273,7 +1368,11 @@ def pod_egress_shaping(

    br_name = get_bridge_name(api_ext, custom_obj)
    pods_list = get_test_pods(
-        test_pod_name, test_label_selector, test_namespace, kubecli
+        test_pod_name,
+        test_label_selector,
+        test_namespace,
+        kubecli,
+        params.exclude_label,
    )

    while not len(pods_list) <= params.instance_count:

@@ -1287,7 +1386,7 @@ def pod_egress_shaping(
        label_set.add("%s=%s" % (key, value))

    check_bridge_interface(
-        list(node_dict.keys())[0], pod_module_template, br_name, kubecli
+        list(node_dict.keys())[0], pod_module_template, br_name, kubecli, test_image
    )

    for mod in mod_lst:

@@ -1304,6 +1403,7 @@ def pod_egress_shaping(
                br_name,
                kubecli,
                params.execution_type,
+                test_image
            )
        )
    if params.execution_type == "serial":

@@ -1357,8 +1457,17 @@ class IngressParams:
            "for details.",
        }
    )

+    image: typing.Annotated[str, validation.min(1)] = field(
+        default="quay.io/krkn-chaos/krkn:tools",
+        metadata={
+            "name": "Image",
+            "description": "Image to use for injecting network chaos",
+        }
+    )
+
    network_params: typing.Dict[str, str] = field(
        default=None,
        metadata={
            "name": "Network Parameters",
            "description": "The network filters that are applied on the interface. "

@@ -1399,6 +1508,15 @@ class IngressParams:
        },
    )

+    exclude_label: typing.Optional[str] = field(
+        default=None,
+        metadata={
+            "name": "Exclude label",
+            "description": "Kubernetes label selector for pods to exclude from the chaos. "
+            "Pods matching this label will be excluded even if they match the label_selector",
+        },
+    )
+
    kraken_config: typing.Dict[str, typing.Any] = field(
        default=None,
        metadata={

@@ -1518,6 +1636,7 @@ def pod_ingress_shaping(
    test_namespace = params.namespace
    test_label_selector = params.label_selector
    test_pod_name = params.pod_name
+    test_image = params.image
    job_list = []
    publish = False

@@ -1537,7 +1656,11 @@ def pod_ingress_shaping(

    br_name = get_bridge_name(api_ext, custom_obj)
    pods_list = get_test_pods(
-        test_pod_name, test_label_selector, test_namespace, kubecli
+        test_pod_name,
+        test_label_selector,
+        test_namespace,
+        kubecli,
+        params.exclude_label,
    )

    while not len(pods_list) <= params.instance_count:

@@ -1551,7 +1674,7 @@ def pod_ingress_shaping(
        label_set.add("%s=%s" % (key, value))

    check_bridge_interface(
-        list(node_dict.keys())[0], pod_module_template, br_name, kubecli
+        list(node_dict.keys())[0], pod_module_template, br_name, kubecli, test_image
    )

    for mod in mod_lst:

@@ -1568,6 +1691,7 @@ def pod_ingress_shaping(
                br_name,
                kubecli,
                params.execution_type,
+                image=test_image
            )
        )
    if params.execution_type == "serial":

@@ -1604,6 +1728,6 @@ def pod_ingress_shaping(
        logging.error("Pod network Shaping scenario exiting due to Exception - %s" % e)
        return "error", PodIngressNetShapingErrorOutput(format_exc())
    finally:
-        delete_virtual_interfaces(kubecli, node_dict.keys(), pod_module_template)
+        delete_virtual_interfaces(kubecli, node_dict.keys(), pod_module_template, test_image)
        logging.info("Deleting jobs(if any)")
        delete_jobs(kubecli, job_list[:])
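Putting the new parameters together, the selection flow in get_test_pods() now honors both the targeting label and the exclusion label before optionally pinning a single pod name. The following self-contained sketch reproduces that flow on fabricated data; the krkn.dev/exclude label and the pod list are illustrative only.

# Standalone sketch of the pod selection flow introduced above.
def get_test_pods_sketch(pod_name, pod_label, pods, exclude_label=None):
    def labels_match(labels, selector):
        key, value = selector.split("=", 1)
        return labels.get(key) == value

    selected = [
        p["name"]
        for p in pods
        if labels_match(p["labels"], pod_label)
        and not (exclude_label and labels_match(p["labels"], exclude_label))
    ]
    if pod_name and pod_name not in selected:
        raise Exception("pod name not found in namespace")
    return [pod_name] if pod_name else selected


pods = [
    {"name": "payments-1", "labels": {"app": "payments"}},
    {"name": "payments-2", "labels": {"app": "payments", "krkn.dev/exclude": "true"}},
]
print(get_test_pods_sketch(None, "app=payments", pods, "krkn.dev/exclude=true"))
# ['payments-1']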
@@ -9,7 +9,7 @@ spec:
  hostNetwork: true
  containers:
  - name: networkchaos
-    image: docker.io/fedora/tools
+    image: {{image}}
    command: ["/bin/sh", "-c", "{{cmd}}"]
    securityContext:
      privileged: true
@@ -42,7 +42,9 @@ class NetworkChaosScenarioPlugin(AbstractScenarioPlugin):
            test_egress = get_yaml_item_value(
                test_dict, "egress", {"bandwidth": "100mbit"}
            )
+            test_image = get_yaml_item_value(
+                test_dict, "image", "quay.io/krkn-chaos/krkn:tools"
+            )
            if test_node:
                node_name_list = test_node.split(",")
                nodelst = common_node_functions.get_node_by_name(node_name_list, lib_telemetry.get_lib_kubernetes())

@@ -60,6 +62,7 @@ class NetworkChaosScenarioPlugin(AbstractScenarioPlugin):
                nodelst,
                pod_template,
                lib_telemetry.get_lib_kubernetes(),
+                image=test_image
            )
            joblst = []
            egress_lst = [i for i in param_lst if i in test_egress]

@@ -71,6 +74,7 @@ class NetworkChaosScenarioPlugin(AbstractScenarioPlugin):
                    "execution": test_execution,
                    "instance_count": test_instance_count,
                    "egress": test_egress,
+                    "image": test_image
                }
            }
            logging.info(

@@ -94,6 +98,7 @@ class NetworkChaosScenarioPlugin(AbstractScenarioPlugin):
                            jobname=i + str(hash(node))[:5],
                            nodename=node,
                            cmd=exec_cmd,
+                            image=test_image
                        )
                    )
                    joblst.append(job_body["metadata"]["name"])

@@ -153,20 +158,22 @@ class NetworkChaosScenarioPlugin(AbstractScenarioPlugin):
        return 0

    def verify_interface(
-        self, test_interface, nodelst, template, kubecli: KrknKubernetes
+        self, test_interface, nodelst, template, kubecli: KrknKubernetes, image: str
    ):
        pod_index = random.randint(0, len(nodelst) - 1)
-        pod_body = yaml.safe_load(template.render(nodename=nodelst[pod_index]))
+        pod_name_regex = str(random.randint(0, 10000))
+        pod_body = yaml.safe_load(template.render(regex_name=pod_name_regex, nodename=nodelst[pod_index], image=image))
        logging.info("Creating pod to query interface on node %s" % nodelst[pod_index])
        kubecli.create_pod(pod_body, "default", 300)
+        pod_name = f"fedtools-{pod_name_regex}"
        try:
            if test_interface == []:
                cmd = "ip r | grep default | awk '/default/ {print $5}'"
-                output = kubecli.exec_cmd_in_pod(cmd, "fedtools", "default")
+                output = kubecli.exec_cmd_in_pod(cmd, pod_name, "default")
                test_interface = [output.replace("\n", "")]
            else:
                cmd = "ip -br addr show|awk -v ORS=',' '{print $1}'"
-                output = kubecli.exec_cmd_in_pod(cmd, "fedtools", "default")
+                output = kubecli.exec_cmd_in_pod(cmd, pod_name, "default")
                interface_lst = output[:-1].split(",")
                for interface in test_interface:
                    if interface not in interface_lst:

@@ -177,8 +184,8 @@ class NetworkChaosScenarioPlugin(AbstractScenarioPlugin):
                        raise RuntimeError()
            return test_interface
        finally:
-            logging.info("Deleteing pod to query interface on node")
-            kubecli.delete_pod("fedtools", "default")
+            logging.info("Deleting pod to query interface on node")
+            kubecli.delete_pod(pod_name, "default")

    # krkn_lib
    def get_job_pods(self, api_response, kubecli: KrknKubernetes):
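For reference, a node network chaos scenario that exercises the new image key would carry parameters like the following; the keys mirror the get_yaml_item_value() reads above and all values are illustrative.

# Illustrative network_chaos scenario parameters; values are examples only.
network_chaos_params = {
    "node_name": "worker-0,worker-1",          # comma separated node names
    "execution": "serial",
    "instance_count": 1,
    "egress": {"bandwidth": "100mbit"},
    "image": "quay.io/krkn-chaos/krkn:tools",  # default used when omitted
}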
@@ -1,13 +1,13 @@
apiVersion: v1
kind: Pod
metadata:
-  name: fedtools
+  name: fedtools-{{regex_name}}
spec:
  hostNetwork: true
  nodeName: {{nodename}}
  containers:
  - name: fedtools
-    image: docker.io/fedora/tools
+    image: {{image}}
    command:
    - /bin/sh
    - -c
@@ -14,9 +14,11 @@ class BaseNetworkChaosConfig:
    wait_duration: int
    test_duration: int
    label_selector: str
+    service_account: str
    instance_count: int
    execution: str
    namespace: str
+    taints: list[str]

    def validate(self) -> list[str]:
        errors = []
@@ -1,3 +1,4 @@
 import logging
+import queue
 import time

@@ -12,10 +13,9 @@ from krkn.scenario_plugins.network_chaos_ng.models import (
from krkn.scenario_plugins.network_chaos_ng.modules.abstract_network_chaos_module import (
    AbstractNetworkChaosModule,
)
-from krkn.scenario_plugins.network_chaos_ng.modules.utils import log_info
+from krkn.scenario_plugins.network_chaos_ng.modules.utils import log_info, log_error
from krkn.scenario_plugins.network_chaos_ng.modules.utils_network_filter import (
    deploy_network_filter_pod,
-    get_default_interface,
    generate_namespaced_rules,
    apply_network_rules,
    clean_network_rules_namespaced,

@@ -56,23 +56,28 @@ class PodNetworkFilterModule(AbstractNetworkChaosModule):
            pod_name,
            self.kubecli.get_lib_kubernetes(),
            container_name,
+            host_network=False,
        )

        if len(self.config.interfaces) == 0:
-            interfaces = [
-                get_default_interface(
-                    pod_name,
-                    self.config.namespace,
-                    self.kubecli.get_lib_kubernetes(),
-                )
-            ]
+            interfaces = (
+                self.kubecli.get_lib_kubernetes().list_pod_network_interfaces(
+                    target, self.config.namespace
+                )
+            )
+
+            if len(interfaces) == 0:
+                log_error(
+                    "no network interface found in pod, impossible to execute the network filter scenario",
+                    parallel,
+                    pod_name,
+                )
+                return
            log_info(
-                f"detected default interface {interfaces[0]}",
+                f"detected network interfaces: {','.join(interfaces)}",
                parallel,
                pod_name,
            )

        else:
            interfaces = self.config.interfaces
||||
@@ -4,16 +4,26 @@ metadata:
name: {{pod_name}}
namespace: {{namespace}}
spec:
{% if service_account %}
serviceAccountName: {{ service_account }}
{%endif%}
{% if host_network %}
hostNetwork: true
{%endif%}
{% if taints %}
tolerations:
{% for toleration in taints %}
- key: "{{ toleration.key }}"
operator: "{{ toleration.operator }}"
{% if toleration.value %}
value: "{{ toleration.value }}"
{% endif %}
effect: "{{ toleration.effect }}"
{% endfor %}
{% endif %}
hostPID: true
nodeSelector:
kubernetes.io/hostname: {{target}}
tolerations:
- key: "node-role.kubernetes.io/master"
operator: "Exists"
effect: "NoSchedule"
containers:
- name: {{container_name}}
imagePullPolicy: Always

@@ -11,7 +11,7 @@ def log_info(message: str, parallel: bool = False, node_name: str = ""):
logging.info(message)


def log_error(self, message: str, parallel: bool = False, node_name: str = ""):
def log_error(message: str, parallel: bool = False, node_name: str = ""):
"""
log helper method for ERROR severity to be used in the scenarios
"""
@@ -21,7 +21,7 @@ def log_error(self, message: str, parallel: bool = False, node_name: str = ""):
logging.error(message)


def log_warning(self, message: str, parallel: bool = False, node_name: str = ""):
def log_warning(message: str, parallel: bool = False, node_name: str = ""):
"""
log helper method for WARNING severity to be used in the scenarios
"""

@@ -54,18 +54,41 @@ def deploy_network_filter_pod(
pod_name: str,
kubecli: KrknKubernetes,
container_name: str = "fedora",
host_network: bool = True,
):
file_loader = FileSystemLoader(os.path.abspath(os.path.dirname(__file__)))
env = Environment(loader=file_loader, autoescape=True)
pod_template = env.get_template("templates/network-chaos.j2")
tolerations = []

for taint in config.taints:
key_value_part, effect = taint.split(":", 1)
if "=" in key_value_part:
key, value = key_value_part.split("=", 1)
operator = "Equal"
else:
key = key_value_part
value = None
operator = "Exists"
toleration = {
"key": key,
"operator": operator,
"effect": effect,
}
if value is not None:
toleration["value"] = value
tolerations.append(toleration)

pod_body = yaml.safe_load(
pod_template.render(
pod_name=pod_name,
namespace=config.namespace,
host_network=True,
host_network=host_network,
target=target_node,
container_name=container_name,
workload_image=config.image,
taints=tolerations,
service_account=config.service_account,
)
)

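The parsing loop above accepts taints in the same `key[=value]:effect` shorthand that `kubectl taint` uses. A standalone sketch of the rule it implements:

```python
def parse_taint(taint: str) -> dict:
    # "key=value:Effect" -> Equal toleration; "key:Effect" -> Exists toleration
    key_value_part, effect = taint.split(":", 1)
    if "=" in key_value_part:
        key, value = key_value_part.split("=", 1)
        return {"key": key, "operator": "Equal", "value": value, "effect": effect}
    return {"key": key_value_part, "operator": "Exists", "effect": effect}

print(parse_taint("dedicated=chaos:NoSchedule"))
# {'key': 'dedicated', 'operator': 'Equal', 'value': 'chaos', 'effect': 'NoSchedule'}
print(parse_taint("node-role.kubernetes.io/master:NoSchedule"))
# {'key': 'node-role.kubernetes.io/master', 'operator': 'Exists', 'effect': 'NoSchedule'}
```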
@@ -60,7 +60,7 @@ class abstract_node_scenarios:
pass

# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
pass

# Node scenario to stop the kubelet
@@ -84,7 +84,7 @@ class abstract_node_scenarios:
)
logging.error("stop_kubelet_scenario injection failed!")
raise e
self.add_affected_node(affected_node)
self.affected_nodes_status.affected_nodes.append(affected_node)

# Node scenario to stop and start the kubelet
def stop_start_kubelet_scenario(self, instance_kill_count, node, timeout):
@@ -106,7 +106,6 @@ class abstract_node_scenarios:
+ node
+ " -- chroot /host systemctl restart kubelet &"
)
nodeaction.wait_for_not_ready_status(node, timeout, self.kubecli, affected_node)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli,affected_node)
logging.info("The kubelet of the node %s has been restarted" % (node))
logging.info("restart_kubelet_scenario has been successfully injected!")
@@ -117,7 +116,7 @@ class abstract_node_scenarios:
)
logging.error("restart_kubelet_scenario injection failed!")
raise e
self.add_affected_node(affected_node)
self.affected_nodes_status.affected_nodes.append(affected_node)

# Node scenario to crash the node
def node_crash_scenario(self, instance_kill_count, node, timeout):
@@ -125,7 +124,7 @@ class abstract_node_scenarios:
try:
logging.info("Starting node_crash_scenario injection")
logging.info("Crashing the node %s" % (node))
runcommand.invoke(
runcommand.run(
"oc debug node/" + node + " -- chroot /host "
"dd if=/dev/urandom of=/proc/sysrq-trigger"
)
@@ -136,7 +135,7 @@ class abstract_node_scenarios:
"Test Failed" % (e)
)
logging.error("node_crash_scenario injection failed!")
raise e
return 1

# Node scenario to check service status on helper node
def node_service_status(self, node, service, ssh_private_key, timeout):

@@ -316,7 +316,7 @@ class alibaba_node_scenarios(abstract_node_scenarios):
self.affected_nodes_status.affected_nodes.append(affected_node)

# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
for _ in range(instance_kill_count):
affected_node = AffectedNode(node)
try:

@@ -358,7 +358,7 @@ class aws_node_scenarios(abstract_node_scenarios):
self.affected_nodes_status.affected_nodes.append(affected_node)

# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
for _ in range(instance_kill_count):
affected_node = AffectedNode(node)
try:

@@ -308,7 +308,7 @@ class azure_node_scenarios(abstract_node_scenarios):


# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
for _ in range(instance_kill_count):
affected_node = AffectedNode(node)
try:

@@ -214,7 +214,7 @@ class bm_node_scenarios(abstract_node_scenarios):
logging.info("Node termination scenario is not supported on baremetal")

# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
for _ in range(instance_kill_count):
affected_node = AffectedNode(node)
try:
@@ -274,7 +274,7 @@ done'''
logging.info("Disk response: %s" % (disk_response))
node_disks = [disk for disk in disk_response.split("\n") if disk]
logging.info("Node disks: %s" % (node_disks))
offline_disks = [disk for disk in node_disks if disk not in user_disks]
offline_disks = [disk for disk in user_disks if disk in node_disks]
return offline_disks if offline_disks else node_disks
except Exception as e:
logging.error(

@@ -1,16 +1,10 @@
import datetime
import time
import random
import logging
import paramiko
from krkn_lib.models.k8s import AffectedNode
import krkn.invoke.command as runcommand
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.models.k8s import AffectedNode, AffectedNodeStatus
from krkn_lib.models.k8s import AffectedNode

node_general = False


def get_node_by_name(node_name_list, kubecli: KrknKubernetes):
killable_nodes = kubecli.list_killable_nodes()
@@ -22,20 +16,22 @@ def get_node_by_name(node_name_list, kubecli: KrknKubernetes):
)
return
return node_name_list



# Pick a random node with specified label selector
def get_node(label_selector, instance_kill_count, kubecli: KrknKubernetes):

label_selector_list = label_selector.split(",")
label_selector_list = label_selector.split(",")
nodes = []
for label_selector in label_selector_list:
for label_selector in label_selector_list:
nodes.extend(kubecli.list_killable_nodes(label_selector))
if not nodes:
raise Exception("Ready nodes with the provided label selector do not exist")
logging.info("Ready nodes with the label selector %s: %s" % (label_selector_list, nodes))
logging.info(
"Ready nodes with the label selector %s: %s" % (label_selector_list, nodes)
)
number_of_nodes = len(nodes)
if instance_kill_count == number_of_nodes:
if instance_kill_count == number_of_nodes or instance_kill_count == 0:
return nodes
nodes_to_return = []
for i in range(instance_kill_count):
@@ -44,35 +40,34 @@ def get_node(label_selector, instance_kill_count, kubecli: KrknKubernetes):
nodes.remove(node_to_add)
return nodes_to_return


# krkn_lib
# Wait until the node status becomes Ready
def wait_for_ready_status(node, timeout, kubecli: KrknKubernetes, affected_node: AffectedNode = None):
affected_node = kubecli.watch_node_status(node, "True", timeout, affected_node)
def wait_for_ready_status(
node, timeout, kubecli: KrknKubernetes, affected_node: AffectedNode = None
):
affected_node = kubecli.watch_node_status(node, "True", timeout, affected_node)
return affected_node


# krkn_lib
# Wait until the node status becomes Not Ready
def wait_for_not_ready_status(node, timeout, kubecli: KrknKubernetes, affected_node: AffectedNode = None):
def wait_for_not_ready_status(
node, timeout, kubecli: KrknKubernetes, affected_node: AffectedNode = None
):
affected_node = kubecli.watch_node_status(node, "False", timeout, affected_node)
return affected_node


# krkn_lib
# Wait until the node status becomes Unknown
def wait_for_unknown_status(node, timeout, kubecli: KrknKubernetes, affected_node: AffectedNode = None):
def wait_for_unknown_status(
node, timeout, kubecli: KrknKubernetes, affected_node: AffectedNode = None
):
affected_node = kubecli.watch_node_status(node, "Unknown", timeout, affected_node)
return affected_node


# Get the ip of the cluster node
def get_node_ip(node):
return runcommand.invoke(
"kubectl get node %s -o "
"jsonpath='{.status.addresses[?(@.type==\"InternalIP\")].address}'" % (node)
)


def check_service_status(node, service, ssh_private_key, timeout):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())

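Note the new `or instance_kill_count == 0` branch in `get_node`: a count of zero now means "return every ready node matching the selector", which is exactly what the exclude-label lookup in the node-actions plugin further below relies on. A condensed re-implementation of the selection rule (the random pick itself sits outside the visible hunk, so this is a sketch of the semantics, not the exact code):

```python
import random

def pick_nodes(nodes: list[str], instance_kill_count: int) -> list[str]:
    # 0 (or "as many as exist") short-circuits to the full matching list.
    if instance_kill_count == len(nodes) or instance_kill_count == 0:
        return nodes
    pool, picked = nodes.copy(), []
    for _ in range(instance_kill_count):
        node = pool[random.randint(0, len(pool) - 1)]
        picked.append(node)
        pool.remove(node)
    return picked

print(pick_nodes(["w0", "w1", "w2"], 0))       # ['w0', 'w1', 'w2'] -- "all matching"
print(len(pick_nodes(["w0", "w1", "w2"], 2)))  # 2 distinct random nodes
```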
@@ -120,7 +120,7 @@ class docker_node_scenarios(abstract_node_scenarios):
raise e

# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
for _ in range(instance_kill_count):
affected_node = AffectedNode(node)
try:

@@ -321,7 +321,7 @@ class gcp_node_scenarios(abstract_node_scenarios):
self.affected_nodes_status.affected_nodes.append(affected_node)

# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
for _ in range(instance_kill_count):
affected_node = AffectedNode(node)
try:

@@ -39,7 +39,7 @@ class general_node_scenarios(abstract_node_scenarios):
)

# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
logging.info(
"Node reboot is not set up yet for this cloud type,"
" no action is going to be taken"

@@ -36,10 +36,25 @@ class IbmCloud:
self.service = VpcV1(authenticator=authenticator)

self.service.set_service_url(service_url)

except Exception as e:
logging.error("error authenticating" + str(e))


def configure_ssl_verification(self, disable_ssl_verification):
"""
Configure SSL verification for IBM Cloud VPC service.

Args:
disable_ssl_verification: If True, disables SSL verification.
"""
logging.info(f"Configuring SSL verification: disable_ssl_verification={disable_ssl_verification}")
if disable_ssl_verification:
self.service.set_disable_ssl_verification(True)
logging.info("SSL verification disabled for IBM Cloud VPC service")
else:
self.service.set_disable_ssl_verification(False)
logging.info("SSL verification enabled for IBM Cloud VPC service")

# Get the instance ID of the node
def get_instance_id(self, node_name):
node_list = self.list_instances()
@@ -260,9 +275,13 @@ class IbmCloud:

@dataclass
class ibm_node_scenarios(abstract_node_scenarios):
def __init__(self, kubecli: KrknKubernetes, node_action_kube_check: bool, affected_nodes_status: AffectedNodeStatus):
def __init__(self, kubecli: KrknKubernetes, node_action_kube_check: bool, affected_nodes_status: AffectedNodeStatus, disable_ssl_verification: bool):
super().__init__(kubecli, node_action_kube_check, affected_nodes_status)
self.ibmcloud = IbmCloud()

# Configure SSL verification
self.ibmcloud.configure_ssl_verification(disable_ssl_verification)

self.node_action_kube_check = node_action_kube_check

def node_start_scenario(self, instance_kill_count, node, timeout):
@@ -319,7 +338,7 @@ class ibm_node_scenarios(abstract_node_scenarios):
logging.error("node_stop_scenario injection failed!")


def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
try:
instance_id = self.ibmcloud.get_instance_id(node)
for _ in range(instance_kill_count):
@@ -327,7 +346,7 @@ class ibm_node_scenarios(abstract_node_scenarios):
logging.info("Starting node_reboot_scenario injection")
logging.info("Rebooting the node %s " % (node))
self.ibmcloud.reboot_instances(instance_id)
self.ibmcloud.wait_until_rebooted(instance_id, timeout)
self.ibmcloud.wait_until_rebooted(instance_id, timeout, affected_node)
if self.node_action_kube_check:
nodeaction.wait_for_unknown_status(
node, timeout, affected_node

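A sketch of how the new flag travels from a scenario file into the client, mirroring the wiring added to NodeActionsScenarioPlugin further below (the config dict is illustrative; the plugin defaults the flag to True):

```python
# Hypothetical node scenario entry:
node_scenario = {"cloud_type": "ibm", "disable_ssl_verification": False}

disable_ssl = get_yaml_item_value(node_scenario, "disable_ssl_verification", True)
scenarios = ibm_node_scenarios(
    kubecli,  # KrknKubernetes client from the run context (assumed)
    node_action_kube_check=True,
    affected_nodes_status=AffectedNodeStatus(),
    disable_ssl_verification=disable_ssl,
)
```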
@@ -0,0 +1,403 @@
#!/usr/bin/env python
import time
from os import environ
from dataclasses import dataclass
import logging

from krkn_lib.k8s import KrknKubernetes
import krkn.scenario_plugins.node_actions.common_node_functions as nodeaction
from krkn.scenario_plugins.node_actions.abstract_node_scenarios import (
abstract_node_scenarios,
)
import requests
import sys
import json


# -o, --operation string Operation to be done in a PVM server instance.
# Valid values are: hard-reboot, immediate-shutdown, soft-reboot, reset-state, start, stop.

from krkn_lib.models.k8s import AffectedNodeStatus, AffectedNode


class IbmCloudPower:
def __init__(self):
"""
Initialize the ibm cloud client by using the env variables:
'IBMC_APIKEY', 'IBMC_POWER_URL' and 'IBMC_POWER_CRN'
"""
self.api_key = environ.get("IBMC_APIKEY")
self.service_url = environ.get("IBMC_POWER_URL")
self.CRN = environ.get("IBMC_POWER_CRN")
self.cloud_instance_id = self.CRN.split(":")[-3]
print(self.cloud_instance_id)
self.headers = None
self.token = None
if not self.api_key:
raise Exception("Environmental variable 'IBMC_APIKEY' is not set")
if not self.service_url:
raise Exception("Environmental variable 'IBMC_POWER_URL' is not set")
if not self.CRN:
raise Exception("Environmental variable 'IBMC_POWER_CRN' is not set")
try:
self.authenticate()

except Exception as e:
logging.error("error authenticating" + str(e))

def authenticate(self):
url = "https://iam.cloud.ibm.com/identity/token"
iam_auth_headers = {
"content-type": "application/x-www-form-urlencoded",
"accept": "application/json",
}
data = {
"grant_type": "urn:ibm:params:oauth:grant-type:apikey",
"apikey": self.api_key,
}

response = self._make_request("POST", url, data=data, headers=iam_auth_headers)
if response.status_code == 200:
self.token = response.json()
self.headers = {
"Authorization": f"Bearer {self.token['access_token']}",
"Content-Type": "application/json",
"CRN": self.CRN,
}
else:
logging.error(f"Authentication Error: {response.status_code}")
return None, None


def _make_request(self, method, url, data=None, headers=None):
try:
response = requests.request(method, url, data=data, headers=headers)
response.raise_for_status()
return response
except Exception as e:
raise Exception(f"API Error: {e}")

# Get the instance ID of the node
def get_instance_id(self, node_name):

url = f"{self.service_url}/pcloud/v1/cloud-instances/{self.cloud_instance_id}/pvm-instances/"
response = self._make_request("GET", url, headers=self.headers)
for node in response.json()["pvmInstances"]:
if node_name == node["serverName"]:
return node["pvmInstanceID"]
logging.error("Couldn't find node with name " + str(node_name) + ", you could try another region")
sys.exit(1)

def delete_instance(self, instance_id):
"""
Deletes the Instance whose name is given by 'instance_id'
"""
try:
url = f"{self.service_url}/pcloud/v1/cloud-instances/{self.cloud_instance_id}/pvm-instances/{instance_id}/action"
self._make_request("POST", url, headers=self.headers, data=json.dumps({"action": "immediate-shutdown"}))
logging.info("Deleted Instance -- '{}'".format(instance_id))
except Exception as e:
logging.info("Instance '{}' could not be deleted. ".format(instance_id))
return False

def reboot_instances(self, instance_id, soft=False):
"""
Reboots the Instance whose name is given by 'instance_id'. Returns True if successful, or
returns False if the Instance is not powered on
"""

try:
if soft:
action = "soft-reboot"
else:
action = "hard-reboot"
url = f"{self.service_url}/pcloud/v1/cloud-instances/{self.cloud_instance_id}/pvm-instances/{instance_id}/action"
self._make_request("POST", url, headers=self.headers, data=json.dumps({"action": action}))
logging.info("Reset Instance -- '{}'".format(instance_id))
return True
except Exception as e:
logging.info("Instance '{}' could not be rebooted".format(instance_id))
return False

def stop_instances(self, instance_id):
"""
Stops the Instance whose name is given by 'instance_id'. Returns True if successful, or
returns False if the Instance is already stopped
"""

try:
url = f"{self.service_url}/pcloud/v1/cloud-instances/{self.cloud_instance_id}/pvm-instances/{instance_id}/action"
self._make_request("POST", url, headers=self.headers, data=json.dumps({"action": "stop"}))
logging.info("Stopped Instance -- '{}'".format(instance_id))
return True
except Exception as e:
logging.info("Instance '{}' could not be stopped".format(instance_id))
logging.info("error" + str(e))
return False

def start_instances(self, instance_id):
"""
Starts the Instance whose name is given by 'instance_id'. Returns True if successful, or
returns False if the Instance is already running
"""

try:
url = f"{self.service_url}/pcloud/v1/cloud-instances/{self.cloud_instance_id}/pvm-instances/{instance_id}/action"
self._make_request("POST", url, headers=self.headers, data=json.dumps({"action": "start"}))
logging.info("Started Instance -- '{}'".format(instance_id))
return True
except Exception as e:
logging.info("Instance '{}' could not start running".format(instance_id))
return False

def list_instances(self):
"""
Returns a list of Instances present in the datacenter
"""
instance_names = []
try:
url = f"{self.service_url}/pcloud/v1/cloud-instances/{self.cloud_instance_id}/pvm-instances/"
response = self._make_request("GET", url, headers=self.headers)
for pvm in response.json()["pvmInstances"]:
instance_names.append({"serverName": pvm["serverName"], "pvmInstanceID": pvm["pvmInstanceID"]})
except Exception as e:
logging.error("Error listing out instances: " + str(e))
sys.exit(1)
return instance_names

def find_id_in_list(self, name, vpc_list):
for vpc in vpc_list:
if vpc["vpc_name"] == name:
return vpc["vpc_id"]

def get_instance_status(self, instance_id):
"""
Returns the status of the Instance whose name is given by 'instance_id'
"""

try:
url = f"{self.service_url}/pcloud/v1/cloud-instances/{self.cloud_instance_id}/pvm-instances/{instance_id}"
response = self._make_request("GET", url, headers=self.headers)
state = response.json()["status"]
return state
except Exception as e:
logging.error(
"Failed to get node instance status %s. Encountered following "
"exception: %s." % (instance_id, e)
)
return None

def wait_until_deleted(self, instance_id, timeout, affected_node=None):
"""
Waits until the instance is deleted or until the timeout. Returns True if
the instance is successfully deleted, else returns False
"""
start_time = time.time()
time_counter = 0
vpc = self.get_instance_status(instance_id)
while vpc is not None:
vpc = self.get_instance_status(instance_id)
logging.info(
"Instance %s is still being deleted, sleeping for 5 seconds"
% instance_id
)
time.sleep(5)
time_counter += 5
if time_counter >= timeout:
logging.info(
"Instance %s is still not deleted in allotted time" % instance_id
)
return False
end_time = time.time()
if affected_node:
affected_node.set_affected_node_status("terminated", end_time - start_time)
return True

def wait_until_running(self, instance_id, timeout, affected_node=None):
"""
Waits until the Instance switches to running state or until the timeout.
Returns True if the Instance switches to running, else returns False
"""
start_time = time.time()
time_counter = 0
status = self.get_instance_status(instance_id)
while status != "ACTIVE":
status = self.get_instance_status(instance_id)
logging.info(
"Instance %s is still not running, sleeping for 5 seconds" % instance_id
)
time.sleep(5)
time_counter += 5
if time_counter >= timeout:
logging.info(
"Instance %s is still not ready in allotted time" % instance_id
)
return False
end_time = time.time()
if affected_node:
affected_node.set_affected_node_status("running", end_time - start_time)
return True

def wait_until_stopped(self, instance_id, timeout, affected_node):
"""
Waits until the Instance switches to stopped state or until the timeout.
Returns True if the Instance switches to stopped, else returns False
"""
start_time = time.time()
time_counter = 0
status = self.get_instance_status(instance_id)
while status != "STOPPED":
status = self.get_instance_status(instance_id)
logging.info(
"Instance %s is still not stopped, sleeping for 5 seconds" % instance_id
)
time.sleep(5)
time_counter += 5
if time_counter >= timeout:
logging.info(
"Instance %s is still not stopped in allotted time" % instance_id
)
return False
end_time = time.time()
print('affected_node' + str(affected_node))
if affected_node:
affected_node.set_affected_node_status("stopped", end_time - start_time)
return True


def wait_until_rebooted(self, instance_id, timeout, affected_node):
"""
Waits until the Instance switches to restarting state and then running state or until the timeout.
Returns True if the Instance switches back to running, else returns False
"""

time_counter = 0
status = self.get_instance_status(instance_id)
while status == "HARD_REBOOT" or status == "SOFT_REBOOT":
status = self.get_instance_status(instance_id)
logging.info(
"Instance %s is still restarting, sleeping for 5 seconds" % instance_id
)
time.sleep(5)
time_counter += 5
if time_counter >= timeout:
logging.info(
"Instance %s is still restarting after allotted time" % instance_id
)
return False
self.wait_until_running(instance_id, timeout, affected_node)
print('affected_node' + str(affected_node))
return True


@dataclass
class ibmcloud_power_node_scenarios(abstract_node_scenarios):
def __init__(self, kubecli: KrknKubernetes, node_action_kube_check: bool, affected_nodes_status: AffectedNodeStatus, disable_ssl_verification: bool):
super().__init__(kubecli, node_action_kube_check, affected_nodes_status)
self.ibmcloud_power = IbmCloudPower()

self.node_action_kube_check = node_action_kube_check

def node_start_scenario(self, instance_kill_count, node, timeout):
try:
instance_id = self.ibmcloud_power.get_instance_id(node)
affected_node = AffectedNode(node, node_id=instance_id)
for _ in range(instance_kill_count):
logging.info("Starting node_start_scenario injection")
logging.info("Starting the node %s " % (node))

if instance_id:
vm_started = self.ibmcloud_power.start_instances(instance_id)
if vm_started:
self.ibmcloud_power.wait_until_running(instance_id, timeout, affected_node)
if self.node_action_kube_check:
nodeaction.wait_for_ready_status(
node, timeout, self.kubecli, affected_node
)
logging.info(
"Node with instance ID: %s is in running state" % node
)
logging.info(
"node_start_scenario has been successfully injected!"
)
else:
logging.error(
"Failed to find node that matched instances on ibm cloud in region"
)

except Exception as e:
logging.error("Failed to start node instance. Test Failed")
logging.error("node_start_scenario injection failed!")
self.affected_nodes_status.affected_nodes.append(affected_node)


def node_stop_scenario(self, instance_kill_count, node, timeout):
try:
instance_id = self.ibmcloud_power.get_instance_id(node)
for _ in range(instance_kill_count):
affected_node = AffectedNode(node, instance_id)
logging.info("Starting node_stop_scenario injection")
logging.info("Stopping the node %s " % (node))
vm_stopped = self.ibmcloud_power.stop_instances(instance_id)
if vm_stopped:
self.ibmcloud_power.wait_until_stopped(instance_id, timeout, affected_node)
logging.info(
"Node with instance ID: %s is in stopped state" % node
)
logging.info(
"node_stop_scenario has been successfully injected!"
)
except Exception as e:
logging.error("Failed to stop node instance. Test Failed")
logging.error("node_stop_scenario injection failed!")


def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
try:
instance_id = self.ibmcloud_power.get_instance_id(node)
for _ in range(instance_kill_count):
affected_node = AffectedNode(node, node_id=instance_id)
logging.info("Starting node_reboot_scenario injection")
logging.info("Rebooting the node %s " % (node))
self.ibmcloud_power.reboot_instances(instance_id, soft_reboot)
self.ibmcloud_power.wait_until_rebooted(instance_id, timeout, affected_node)
if self.node_action_kube_check:
nodeaction.wait_for_unknown_status(
node, timeout, affected_node
)
nodeaction.wait_for_ready_status(
node, timeout, affected_node
)
logging.info(
"Node with instance ID: %s has rebooted successfully" % node
)
logging.info(
"node_reboot_scenario has been successfully injected!"
)

except Exception as e:
logging.error("Failed to reboot node instance. Test Failed")
logging.error("node_reboot_scenario injection failed!")


def node_terminate_scenario(self, instance_kill_count, node, timeout):
try:
instance_id = self.ibmcloud_power.get_instance_id(node)
for _ in range(instance_kill_count):
affected_node = AffectedNode(node, node_id=instance_id)
logging.info(
"Starting node_termination_scenario injection by first stopping the node"
)
logging.info("Deleting the node with instance ID: %s " % (node))
self.ibmcloud_power.delete_instance(instance_id)
self.ibmcloud_power.wait_until_deleted(instance_id, timeout, affected_node)
logging.info(
"Node with instance ID: %s has been released" % node
)
logging.info(
"node_terminate_scenario has been successfully injected!"
)
except Exception as e:
logging.error("Failed to terminate node instance. Test Failed")
logging.error("node_terminate_scenario injection failed!")

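The new `IbmCloudPower` client is configured entirely through environment variables, so a run against Power Virtual Server needs something like the following before krkn starts (the endpoint and CRN values are illustrative):

```python
import os

# Required by IbmCloudPower.__init__; it raises if any of these is missing.
os.environ["IBMC_APIKEY"] = "<ibm-cloud-api-key>"
os.environ["IBMC_POWER_URL"] = "https://us-south.power-iaas.cloud.ibm.com"
os.environ["IBMC_POWER_CRN"] = (
    "crn:v1:bluemix:public:power-iaas:us-south:a/<account>:<cloud-instance-id>::"
)

# The workspace GUID is derived from the CRN, third field from the end:
crn = os.environ["IBMC_POWER_CRN"]
print(crn.split(":")[-3])  # -> "<cloud-instance-id>"
```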
@@ -22,8 +22,16 @@ from krkn.scenario_plugins.node_actions.gcp_node_scenarios import gcp_node_scena
from krkn.scenario_plugins.node_actions.general_cloud_node_scenarios import (
general_node_scenarios,
)
from krkn.scenario_plugins.node_actions.vmware_node_scenarios import vmware_node_scenarios
from krkn.scenario_plugins.node_actions.ibmcloud_node_scenarios import ibm_node_scenarios
from krkn.scenario_plugins.node_actions.vmware_node_scenarios import (
vmware_node_scenarios,
)
from krkn.scenario_plugins.node_actions.ibmcloud_node_scenarios import (
ibm_node_scenarios,
)

from krkn.scenario_plugins.node_actions.ibmcloud_power_node_scenarios import (
ibmcloud_power_node_scenarios,
)
node_general = False

@@ -63,29 +71,39 @@ class NodeActionsScenarioPlugin(AbstractScenarioPlugin):
def get_node_scenario_object(self, node_scenario, kubecli: KrknKubernetes):
affected_nodes_status = AffectedNodeStatus()

node_action_kube_check = get_yaml_item_value(node_scenario,"kube_check",True)
node_action_kube_check = get_yaml_item_value(node_scenario, "kube_check", True)
if (
"cloud_type" not in node_scenario.keys()
or node_scenario["cloud_type"] == "generic"
):
global node_general
node_general = True
return general_node_scenarios(kubecli, node_action_kube_check, affected_nodes_status)
return general_node_scenarios(
kubecli, node_action_kube_check, affected_nodes_status
)
if node_scenario["cloud_type"].lower() == "aws":
return aws_node_scenarios(kubecli, node_action_kube_check, affected_nodes_status)
return aws_node_scenarios(
kubecli, node_action_kube_check, affected_nodes_status
)
elif node_scenario["cloud_type"].lower() == "gcp":
return gcp_node_scenarios(kubecli, node_action_kube_check, affected_nodes_status)
return gcp_node_scenarios(
kubecli, node_action_kube_check, affected_nodes_status
)
elif node_scenario["cloud_type"].lower() == "openstack":
from krkn.scenario_plugins.node_actions.openstack_node_scenarios import (
openstack_node_scenarios,
)

return openstack_node_scenarios(kubecli, node_action_kube_check, affected_nodes_status)
return openstack_node_scenarios(
kubecli, node_action_kube_check, affected_nodes_status
)
elif (
node_scenario["cloud_type"].lower() == "azure"
or node_scenario["cloud_type"].lower() == "az"
):
return azure_node_scenarios(kubecli, node_action_kube_check, affected_nodes_status)
return azure_node_scenarios(
kubecli, node_action_kube_check, affected_nodes_status
)
elif (
node_scenario["cloud_type"].lower() == "alibaba"
or node_scenario["cloud_type"].lower() == "alicloud"
@@ -94,7 +112,9 @@ class NodeActionsScenarioPlugin(AbstractScenarioPlugin):
alibaba_node_scenarios,
)

return alibaba_node_scenarios(kubecli, node_action_kube_check, affected_nodes_status)
return alibaba_node_scenarios(
kubecli, node_action_kube_check, affected_nodes_status
)
elif node_scenario["cloud_type"].lower() == "bm":
from krkn.scenario_plugins.node_actions.bm_node_scenarios import (
bm_node_scenarios,
@@ -106,21 +126,31 @@ class NodeActionsScenarioPlugin(AbstractScenarioPlugin):
node_scenario.get("bmc_password", None),
kubecli,
node_action_kube_check,
affected_nodes_status
affected_nodes_status,
)
elif node_scenario["cloud_type"].lower() == "docker":
return docker_node_scenarios(kubecli,node_action_kube_check,
affected_nodes_status)
return docker_node_scenarios(
kubecli, node_action_kube_check, affected_nodes_status
)
elif (
node_scenario["cloud_type"].lower() == "vsphere"
or node_scenario["cloud_type"].lower() == "vmware"
):
return vmware_node_scenarios(kubecli, node_action_kube_check,affected_nodes_status)
return vmware_node_scenarios(
kubecli, node_action_kube_check, affected_nodes_status
)
elif (
node_scenario["cloud_type"].lower() == "ibm"
or node_scenario["cloud_type"].lower() == "ibmcloud"
):
return ibm_node_scenarios(kubecli, node_action_kube_check, affected_nodes_status)
disable_ssl_verification = get_yaml_item_value(node_scenario, "disable_ssl_verification", True)
return ibm_node_scenarios(kubecli, node_action_kube_check, affected_nodes_status, disable_ssl_verification)
elif (
node_scenario["cloud_type"].lower() == "ibmpower"
or node_scenario["cloud_type"].lower() == "ibmcloudpower"
):
disable_ssl_verification = get_yaml_item_value(node_scenario, "disable_ssl_verification", True)
return ibmcloud_power_node_scenarios(kubecli, node_action_kube_check, affected_nodes_status, disable_ssl_verification)
else:
logging.error(
"Cloud type "
@@ -138,16 +168,22 @@ class NodeActionsScenarioPlugin(AbstractScenarioPlugin):
)

def inject_node_scenario(
self, action, node_scenario, node_scenario_object, kubecli: KrknKubernetes, scenario_telemetry: ScenarioTelemetry
self,
action,
node_scenario,
node_scenario_object,
kubecli: KrknKubernetes,
scenario_telemetry: ScenarioTelemetry,
):

# Get the node scenario configurations for setting nodes

instance_kill_count = get_yaml_item_value(node_scenario, "instance_count", 1)
node_name = get_yaml_item_value(node_scenario, "node_name", "")
label_selector = get_yaml_item_value(node_scenario, "label_selector", "")
exclude_label = get_yaml_item_value(node_scenario, "exclude_label", "")
parallel_nodes = get_yaml_item_value(node_scenario, "parallel", False)

# Get the node to apply the scenario
if node_name:
node_name_list = node_name.split(",")
@@ -156,11 +192,22 @@ class NodeActionsScenarioPlugin(AbstractScenarioPlugin):
nodes = common_node_functions.get_node(
label_selector, instance_kill_count, kubecli
)

if exclude_label:
exclude_nodes = common_node_functions.get_node(
exclude_label, 0, kubecli
)

for node in nodes:
if node in exclude_nodes:
logging.info(
f"excluding node {node} with exclude label {exclude_nodes}"
)
nodes.remove(node)

# GCP api doesn't support multiprocessing calls, will only actually run 1
if parallel_nodes:
self.multiprocess_nodes(nodes, node_scenario_object, action, node_scenario)
else:
else:
for single_node in nodes:
self.run_node(single_node, node_scenario_object, action, node_scenario)
affected_nodes_status = node_scenario_object.affected_nodes_status
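One caveat in the exclusion loop above: calling `nodes.remove(node)` while iterating over `nodes` skips the element that follows each removal, so two adjacent excluded nodes can leave one of them in the target list. A filtering sketch that avoids mutating the list being iterated (names illustrative):

```python
nodes = ["worker-0", "worker-1", "worker-2", "worker-3"]
exclude_nodes = ["worker-1", "worker-2"]

# Same intent, without the mutate-while-iterating pitfall:
nodes = [node for node in nodes if node not in exclude_nodes]
print(nodes)  # ['worker-0', 'worker-3'] -- both excluded nodes dropped
```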
@@ -170,14 +217,21 @@ class NodeActionsScenarioPlugin(AbstractScenarioPlugin):
try:
# pool object with number of element
pool = ThreadPool(processes=len(nodes))

pool.starmap(self.run_node,zip(nodes, repeat(node_scenario_object), repeat(action), repeat(node_scenario)))

pool.starmap(
self.run_node,
zip(
nodes,
repeat(node_scenario_object),
repeat(action),
repeat(node_scenario),
),
)

pool.close()
except Exception as e:
logging.info("Error on pool multiprocessing: " + str(e))


def run_node(self, single_node, node_scenario_object, action, node_scenario):
# Get the scenario specifics for running action nodes
run_kill_count = get_yaml_item_value(node_scenario, "runs", 1)
@@ -185,6 +239,7 @@ class NodeActionsScenarioPlugin(AbstractScenarioPlugin):

timeout = get_yaml_item_value(node_scenario, "timeout", 120)
service = get_yaml_item_value(node_scenario, "service", "")
soft_reboot = get_yaml_item_value(node_scenario, "soft_reboot", False)
ssh_private_key = get_yaml_item_value(
node_scenario, "ssh_private_key", "~/.ssh/id_rsa"
)
@@ -215,11 +270,12 @@ class NodeActionsScenarioPlugin(AbstractScenarioPlugin):
)
elif action == "node_reboot_scenario":
node_scenario_object.node_reboot_scenario(
run_kill_count, single_node, timeout
run_kill_count, single_node, timeout, soft_reboot
)
elif action == "node_disk_detach_attach_scenario":
node_scenario_object.node_disk_detach_attach_scenario(
run_kill_count, single_node, timeout, duration)
run_kill_count, single_node, timeout, duration
)
elif action == "stop_start_kubelet_scenario":
node_scenario_object.stop_start_kubelet_scenario(
run_kill_count, single_node, timeout
@@ -247,9 +303,7 @@ class NodeActionsScenarioPlugin(AbstractScenarioPlugin):
else:
if not node_scenario["helper_node_ip"]:
logging.error("Helper node IP address is not provided")
raise Exception(
"Helper node IP address is not provided"
)
raise Exception("Helper node IP address is not provided")
node_scenario_object.helper_node_stop_start_scenario(
run_kill_count, node_scenario["helper_node_ip"], timeout
)
@@ -269,6 +323,5 @@ class NodeActionsScenarioPlugin(AbstractScenarioPlugin):
% action
)


def get_scenario_types(self) -> list[str]:
return ["node_scenarios"]

@@ -171,7 +171,7 @@ class openstack_node_scenarios(abstract_node_scenarios):
self.affected_nodes_status.affected_nodes.append(affected_node)

# Node scenario to reboot the node
def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
for _ in range(instance_kill_count):
affected_node = AffectedNode(node)
try:

@@ -73,7 +73,7 @@ class vSphere:
vms = self.client.vcenter.VM.list(VM.FilterSpec(names=names))

if len(vms) == 0:
logging.info("VM with name ({}) not found", instance_id)
logging.info("VM with name ({}) not found".format(instance_id))
return None
vm = vms[0].vm

@@ -97,7 +97,7 @@ class vSphere:
self.client.vcenter.vm.Power.start(vm)
self.client.vcenter.vm.Power.stop(vm)
self.client.vcenter.VM.delete(vm)
logging.info("Deleted VM -- '{}-({})'", instance_id, vm)
logging.info("Deleted VM -- '{}-({})'".format(instance_id, vm))

def reboot_instances(self, instance_id):
"""
@@ -108,11 +108,11 @@ class vSphere:
vm = self.get_vm(instance_id)
try:
self.client.vcenter.vm.Power.reset(vm)
logging.info("Reset VM -- '{}-({})'", instance_id, vm)
logging.info("Reset VM -- '{}-({})'".format(instance_id, vm))
return True
except NotAllowedInCurrentState:
logging.info(
"VM '{}'-'({})' is not Powered On. Cannot reset it", instance_id, vm
"VM '{}'-'({})' is not Powered On. Cannot reset it".format(instance_id, vm)
)
return False

@@ -158,7 +158,7 @@ class vSphere:
try:
datacenter_id = datacenter_summaries[0].datacenter
except IndexError:
logging.error("Datacenter '{}' doesn't exist", datacenter)
logging.error("Datacenter '{}' doesn't exist".format(datacenter))
sys.exit(1)

vm_filter = self.client.vcenter.VM.FilterSpec(datacenters={datacenter_id})
@@ -432,7 +432,7 @@ class vmware_node_scenarios(abstract_node_scenarios):
)


def node_reboot_scenario(self, instance_kill_count, node, timeout):
def node_reboot_scenario(self, instance_kill_count, node, timeout, soft_reboot=False):
try:
for _ in range(instance_kill_count):
affected_node = AffectedNode(node)

@@ -11,6 +11,9 @@ class InputParams:
self.label_selector = config["label_selector"] if "label_selector" in config else ""
self.namespace_pattern = config["namespace_pattern"] if "namespace_pattern" in config else ""
self.name_pattern = config["name_pattern"] if "name_pattern" in config else ""
self.node_label_selector = config["node_label_selector"] if "node_label_selector" in config else ""
self.node_names = config["node_names"] if "node_names" in config else []
self.exclude_label = config["exclude_label"] if "exclude_label" in config else ""

namespace_pattern: str
krkn_pod_recovery_time: int
@@ -18,4 +21,7 @@ class InputParams:
duration: int
kill: int
label_selector: str
name_pattern: str
name_pattern: str
node_label_selector: str
node_names: list
exclude_label: str
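Putting the new fields together, a pod disruption config can now narrow targets by node and carve out protected pods. A sketch of the dict `InputParams` consumes (keys as parsed above plus the pre-existing selectors; the values, including the exclude label, are illustrative):

```python
config = {
    "namespace_pattern": "^openshift-etcd$",
    "label_selector": "app=etcd",
    # New node-targeting knobs, combinable with the selectors above:
    "node_label_selector": "node-role.kubernetes.io/master=",
    "node_names": ["master-0", "master-1"],
    # New opt-out: pods matching this label are never deleted.
    "exclude_label": "krkn.dev/protected=true",
}
params = InputParams(config)
print(params.node_names, params.exclude_label)
```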
@@ -1,14 +1,16 @@
import logging
import random
import time
from asyncio import Future

import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.k8s.pods_monitor_pool import PodsMonitorPool
from krkn_lib.k8s.pod_monitor import select_and_monitor_by_namespace_pattern_and_label, \
select_and_monitor_by_name_pattern_and_namespace_pattern

from krkn.scenario_plugins.pod_disruption.models.models import InputParams
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn_lib.utils import get_yaml_item_value
from datetime import datetime
from dataclasses import dataclass

@@ -29,31 +31,25 @@ class PodDisruptionScenarioPlugin(AbstractScenarioPlugin):
lib_telemetry: KrknTelemetryOpenshift,
scenario_telemetry: ScenarioTelemetry,
) -> int:
pool = PodsMonitorPool(lib_telemetry.get_lib_kubernetes())
try:
with open(scenario, "r") as f:
cont_scenario_config = yaml.full_load(f)
for kill_scenario in cont_scenario_config:
kill_scenario_config = InputParams(kill_scenario["config"])
self.start_monitoring(
kill_scenario_config, pool
future_snapshot=self.start_monitoring(
kill_scenario_config,
lib_telemetry
)
return_status = self.killing_pods(
self.killing_pods(
kill_scenario_config, lib_telemetry.get_lib_kubernetes()
)
if return_status != 0:
result = pool.cancel()
else:
result = pool.join()
if result.error:
logging.error(
logging.error(
f"PodDisruptionScenariosPlugin pods failed to recover: {result.error}"
)
)
return 1

scenario_telemetry.affected_pods = result

snapshot = future_snapshot.result()
result = snapshot.get_pods_status()
scenario_telemetry.affected_pods = result
if len(result.unrecovered) > 0:
logging.info("PodDisruptionScenarioPlugin failed with unrecovered pods")
return 1

except (RuntimeError, Exception) as e:
logging.error("PodDisruptionScenariosPlugin exiting due to Exception %s" % e)
@@ -64,7 +60,7 @@ class PodDisruptionScenarioPlugin(AbstractScenarioPlugin):
def get_scenario_types(self) -> list[str]:
return ["pod_disruption_scenarios"]

def start_monitoring(self, kill_scenario: InputParams, pool: PodsMonitorPool):
def start_monitoring(self, kill_scenario: InputParams, lib_telemetry: KrknTelemetryOpenshift) -> Future:

recovery_time = kill_scenario.krkn_pod_recovery_time
if (
@@ -73,16 +69,17 @@ class PodDisruptionScenarioPlugin(AbstractScenarioPlugin):
):
namespace_pattern = kill_scenario.namespace_pattern
label_selector = kill_scenario.label_selector
pool.select_and_monitor_by_namespace_pattern_and_label(
future_snapshot = select_and_monitor_by_namespace_pattern_and_label(
namespace_pattern=namespace_pattern,
label_selector=label_selector,
max_timeout=recovery_time,
field_selector="status.phase=Running"
v1_client=lib_telemetry.get_lib_kubernetes().cli
)
logging.info(
f"waiting up to {recovery_time} seconds for pod recovery, "
f"pod label pattern: {label_selector} namespace pattern: {namespace_pattern}"
)
return future_snapshot

elif (
kill_scenario.namespace_pattern
@@ -90,32 +87,101 @@ class PodDisruptionScenarioPlugin(AbstractScenarioPlugin):
):
namespace_pattern = kill_scenario.namespace_pattern
name_pattern = kill_scenario.name_pattern
pool.select_and_monitor_by_name_pattern_and_namespace_pattern(
future_snapshot = select_and_monitor_by_name_pattern_and_namespace_pattern(
pod_name_pattern=name_pattern,
namespace_pattern=namespace_pattern,
max_timeout=recovery_time,
field_selector="status.phase=Running"
v1_client=lib_telemetry.get_lib_kubernetes().cli
)
logging.info(
f"waiting up to {recovery_time} seconds for pod recovery, "
f"pod name pattern: {name_pattern} namespace pattern: {namespace_pattern}"
)
return future_snapshot
else:
raise Exception(
f"impossible to determine monitor parameters, check {kill_scenario} configuration"
)

def _select_pods_with_field_selector(self, name_pattern, label_selector, namespace, kubecli: KrknKubernetes, field_selector: str, node_name: str = None):
"""Helper function to select pods using either label_selector or name_pattern with field_selector, optionally filtered by node"""
# Combine field selectors if node targeting is specified
if node_name:
node_field_selector = f"spec.nodeName={node_name}"
if field_selector:
combined_field_selector = f"{field_selector},{node_field_selector}"
else:
combined_field_selector = node_field_selector
else:
combined_field_selector = field_selector

if label_selector:
return kubecli.select_pods_by_namespace_pattern_and_label(
label_selector=label_selector,
namespace_pattern=namespace,
field_selector=combined_field_selector
)
else: # name_pattern
return kubecli.select_pods_by_name_pattern_and_namespace_pattern(
pod_name_pattern=name_pattern,
namespace_pattern=namespace,
field_selector=combined_field_selector
)

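The helper's real work is string-level composition of Kubernetes field selectors; a comma joins clauses that the API server then ANDs together:

```python
def combine_field_selectors(field_selector: str, node_name: str | None) -> str:
    # Same composition rule as _select_pods_with_field_selector above.
    if not node_name:
        return field_selector
    node_clause = f"spec.nodeName={node_name}"
    return f"{field_selector},{node_clause}" if field_selector else node_clause

print(combine_field_selectors("status.phase=Running", "worker-0"))
# status.phase=Running,spec.nodeName=worker-0
```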
def get_pods(self, name_pattern, label_selector,namespace, kubecli: KrknKubernetes, field_selector: str =None):
|
||||
def get_pods(self, name_pattern, label_selector, namespace, kubecli: KrknKubernetes, field_selector: str = None, node_label_selector: str = None, node_names: list = None, quiet: bool = False):
|
||||
if label_selector and name_pattern:
|
||||
logging.error('Only, one of name pattern or label pattern can be specified')
|
||||
elif label_selector:
|
||||
pods = kubecli.select_pods_by_namespace_pattern_and_label(label_selector=label_selector,namespace_pattern=namespace, field_selector=field_selector)
|
||||
elif name_pattern:
|
||||
pods = kubecli.select_pods_by_name_pattern_and_namespace_pattern(pod_name_pattern=name_pattern, namespace_pattern=namespace, field_selector=field_selector)
|
||||
else:
|
||||
return []
|
||||
|
||||
if not label_selector and not name_pattern:
|
||||
logging.error('Name pattern or label pattern must be specified ')
|
||||
return pods
|
||||
return []
|
||||
|
||||
# If specific node names are provided, make multiple calls with field selector
|
||||
if node_names:
|
||||
if not quiet:
|
||||
logging.info(f"Targeting pods on {len(node_names)} specific nodes")
|
||||
all_pods = []
|
||||
for node_name in node_names:
|
||||
pods = self._select_pods_with_field_selector(
|
||||
                    name_pattern, label_selector, namespace, kubecli, field_selector, node_name
                )

                if pods:
                    all_pods.extend(pods)

            if not quiet:
                logging.info(f"Found {len(all_pods)} target pods across {len(node_names)} nodes")
            return all_pods

        # Node label selector approach - use field selectors
        if node_label_selector:
            # Get nodes matching the label selector first
            nodes_with_label = kubecli.list_nodes(label_selector=node_label_selector)
            if not nodes_with_label:
                logging.info(f"No nodes found with label selector: {node_label_selector}")
                return []

            if not quiet:
                logging.info(f"Targeting pods on {len(nodes_with_label)} nodes with label: {node_label_selector}")
            # Use field selector for each node
            all_pods = []
            for node_name in nodes_with_label:
                pods = self._select_pods_with_field_selector(
                    name_pattern, label_selector, namespace, kubecli, field_selector, node_name
                )

                if pods:
                    all_pods.extend(pods)

            if not quiet:
                logging.info(f"Found {len(all_pods)} target pods across {len(nodes_with_label)} nodes")
            return all_pods

        # Standard pod selection (no node targeting)
        return self._select_pods_with_field_selector(
            name_pattern, label_selector, namespace, kubecli, field_selector
        )

    def killing_pods(self, config: InputParams, kubecli: KrknKubernetes):
        # region Select target pods
@@ -124,7 +190,14 @@ class PodDisruptionScenarioPlugin(AbstractScenarioPlugin):
        if not namespace:
            logging.error('Namespace pattern must be specified')

-       pods = self.get_pods(config.name_pattern, config.label_selector, config.namespace_pattern, kubecli, field_selector="status.phase=Running")
+       pods = self.get_pods(config.name_pattern, config.label_selector, config.namespace_pattern, kubecli, field_selector="status.phase=Running", node_label_selector=config.node_label_selector, node_names=config.node_names)
+       exclude_pods = set()
+       if config.exclude_label:
+           _exclude_pods = self.get_pods("", config.exclude_label, config.namespace_pattern, kubecli, field_selector="status.phase=Running", node_label_selector=config.node_label_selector, node_names=config.node_names)
+           for pod in _exclude_pods:
+               exclude_pods.add(pod[0])

        pods_count = len(pods)
        if len(pods) < config.kill:
            logging.error("Not enough pods match the criteria, expected {} but found only {} pods".format(
@@ -133,23 +206,25 @@ class PodDisruptionScenarioPlugin(AbstractScenarioPlugin):

        random.shuffle(pods)
        for i in range(config.kill):

            pod = pods[i]
-           logging.info(pod)
-           logging.info(f'Deleting pod {pod[0]}')
-           kubecli.delete_pod(pod[0], pod[1])
+           if pod[0] in exclude_pods:
+               logging.info(f"Excluding {pod[0]} from chaos")
+           else:
+               logging.info(f'Deleting pod {pod[0]}')
+               kubecli.delete_pod(pod[0], pod[1])

-       self.wait_for_pods(config.label_selector, config.name_pattern, config.namespace_pattern, pods_count, config.duration, config.timeout, kubecli)
+       self.wait_for_pods(config.label_selector, config.name_pattern, config.namespace_pattern, pods_count, config.duration, config.timeout, kubecli, config.node_label_selector, config.node_names)
        return 0

    def wait_for_pods(
-       self, label_selector, pod_name, namespace, pod_count, duration, wait_timeout, kubecli: KrknKubernetes
+       self, label_selector, pod_name, namespace, pod_count, duration, wait_timeout, kubecli: KrknKubernetes, node_label_selector, node_names
    ):
        timeout = False
        start_time = datetime.now()

        while not timeout:
-           pods = self.get_pods(name_pattern=pod_name, label_selector=label_selector, namespace=namespace, field_selector="status.phase=Running", kubecli=kubecli)
+           pods = self.get_pods(name_pattern=pod_name, label_selector=label_selector, namespace=namespace, field_selector="status.phase=Running", kubecli=kubecli, node_label_selector=node_label_selector, node_names=node_names, quiet=True)
            if pod_count == len(pods):
                return

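The exclude-label flow above selects the candidate pods first, builds a set of protected pod names, and then skips those names at kill time, which means fewer than `kill` pods may actually be deleted. A minimal standalone sketch of that select-then-exclude pattern (function and data names here are illustrative, not the plugin's real API):

```python
import random

def pick_victims(pods, protected, kill_count):
    """Shuffle candidate (name, namespace) pairs and skip protected pod names."""
    random.shuffle(pods)
    victims = []
    for name, namespace in pods:
        if name in protected:
            continue  # excluded from chaos via exclude_label
        victims.append((name, namespace))
        if len(victims) == kill_count:
            break
    return victims

pods = [("etcd-0", "openshift-etcd"), ("etcd-1", "openshift-etcd")]
print(pick_victims(pods, protected={"etcd-0"}, kill_count=1))
```

Note one difference: unlike the plugin, which only walks the first `kill` shuffled entries, this sketch keeps drawing until it collects `kill_count` victims.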
@@ -33,7 +33,7 @@ class ScenarioPluginFactory:
        inherits from the AbstractScenarioPlugin abstract class
        """
        if scenario_type in self.loaded_plugins:
-           return self.loaded_plugins[scenario_type]()
+           return self.loaded_plugins[scenario_type](scenario_type)
        else:
            raise ScenarioPluginNotFound(
                f"Failed to load the {scenario_type} scenario plugin. "
@@ -61,7 +61,10 @@ class ScenarioPluginFactory:
                continue

            cls = getattr(module, name)
-           instance = cls()
+           # The AbstractScenarioPlugin constructor requires a scenario_type.
+           # However, since we only need to call `get_scenario_types()` here,
+           # it is acceptable to use a placeholder value.
+           instance = cls("placeholder_scenario_type")
            get_scenario_type = getattr(instance, "get_scenario_types")
            scenario_types = get_scenario_type()
            has_duplicates = False

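The factory change threads the requested scenario type into the plugin constructor, so every plugin instance knows which type it was created for. A condensed sketch of the pattern (class names here are illustrative, not krkn's actual base class):

```python
from abc import ABC, abstractmethod

class BasePlugin(ABC):
    def __init__(self, scenario_type: str):
        self.scenario_type = scenario_type  # now available to every plugin instance

    @abstractmethod
    def get_scenario_types(self) -> list[str]: ...

class PodPlugin(BasePlugin):
    def get_scenario_types(self) -> list[str]:
        return ["pod_disruption_scenarios"]

# discovery uses a placeholder, real instantiation passes the requested type
registry = {t: PodPlugin for t in PodPlugin("placeholder_scenario_type").get_scenario_types()}
plugin = registry["pod_disruption_scenarios"]("pod_disruption_scenarios")
print(plugin.scenario_type)
```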
@@ -5,7 +5,7 @@ import yaml
from krkn_lib.models.telemetry import ScenarioTelemetry
from krkn_lib.telemetry.ocp import KrknTelemetryOpenshift
from krkn.scenario_plugins.abstract_scenario_plugin import AbstractScenarioPlugin
+from krkn_lib.utils import get_yaml_item_value

class ServiceHijackingScenarioPlugin(AbstractScenarioPlugin):
    def run(
@@ -25,6 +25,8 @@ class ServiceHijackingScenarioPlugin(AbstractScenarioPlugin):
        image = scenario_config["image"]
        target_port = scenario_config["service_target_port"]
        chaos_duration = scenario_config["chaos_duration"]
+       privileged = get_yaml_item_value(scenario_config, "privileged", True)

        logging.info(
            f"checking service {service_name} in namespace: {service_namespace}"
@@ -46,14 +48,14 @@ class ServiceHijackingScenarioPlugin(AbstractScenarioPlugin):
            logging.info(f"webservice will listen on port {target_port}")
            webservice = (
                lib_telemetry.get_lib_kubernetes().deploy_service_hijacking(
-                   service_namespace, plan, image, port_number=target_port
+                   service_namespace, plan, image, port_number=target_port, privileged=privileged
                )
            )
        else:
            logging.info(f"traffic will be redirected to named port: {target_port}")
            webservice = (
                lib_telemetry.get_lib_kubernetes().deploy_service_hijacking(
-                   service_namespace, plan, image, port_name=target_port
+                   service_namespace, plan, image, port_name=target_port, privileged=privileged
                )
            )
        logging.info(

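The new `privileged` key is read with a default of `True`, so existing scenario files keep their old behavior while new ones can opt out. A plausible minimal version of that optional-key-with-default semantics (`get_item_value` is a stand-in; krkn-lib's `get_yaml_item_value` may differ in detail):

```python
def get_item_value(config: dict, key: str, default):
    """Return config[key] if present and not None, else the default."""
    value = config.get(key)
    return default if value is None else value

scenario_config = {"image": "quay.io/krkn-chaos/krkn-service-hijacking:v0.1.3"}
print(get_item_value(scenario_config, "privileged", True))   # -> True (falls back)
print(get_item_value({"privileged": False}, "privileged", True))  # -> False (explicit)
```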
184  krkn/utils/VirtChecker.py  Normal file
@@ -0,0 +1,184 @@
import time
import logging
import queue
from datetime import datetime
from krkn_lib.models.telemetry.models import VirtCheck
from krkn.invoke.command import invoke_no_exit
from krkn.scenario_plugins.kubevirt_vm_outage.kubevirt_vm_outage_scenario_plugin import KubevirtVmOutageScenarioPlugin
from krkn_lib.k8s import KrknKubernetes
import threading
from krkn_lib.utils.functions import get_yaml_item_value


class VirtChecker:
    current_iterations: int = 0
    ret_value = 0

    def __init__(self, kubevirt_check_config, iterations, krkn_lib: KrknKubernetes, threads_limit=20):
        self.iterations = iterations
        self.namespace = get_yaml_item_value(kubevirt_check_config, "namespace", "")
        self.vm_list = []
        self.threads = []
        self.threads_limit = threads_limit
        if self.namespace == "":
            logging.info("kubevirt checks config is not defined, skipping them")
            return
        vmi_name_match = get_yaml_item_value(kubevirt_check_config, "name", ".*")
        self.krkn_lib = krkn_lib
        self.disconnected = get_yaml_item_value(kubevirt_check_config, "disconnected", False)
        self.only_failures = get_yaml_item_value(kubevirt_check_config, "only_failures", False)
        self.interval = get_yaml_item_value(kubevirt_check_config, "interval", 2)
        self.ssh_node = get_yaml_item_value(kubevirt_check_config, "ssh_node", "")
        try:
            self.kube_vm_plugin = KubevirtVmOutageScenarioPlugin()
            self.kube_vm_plugin.init_clients(k8s_client=krkn_lib)
            vmis = self.kube_vm_plugin.get_vmis(vmi_name_match, self.namespace)
        except Exception as e:
            logging.error('Virt Check init exception: ' + str(e))
            return

        for vmi in vmis:
            node_name = vmi.get("status", {}).get("nodeName")
            vmi_name = vmi.get("metadata", {}).get("name")
            ip_address = vmi.get("status", {}).get("interfaces", [])[0].get("ipAddress")
            self.vm_list.append(VirtCheck({'vm_name': vmi_name, 'ip_address': ip_address, 'namespace': self.namespace, 'node_name': node_name, "new_ip_address": ""}))

    def check_disconnected_access(self, ip_address: str, worker_name: str = '', vmi_name: str = ''):

        virtctl_vm_cmd = f"ssh core@{worker_name} 'ssh -o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@{ip_address}'"

        all_out = invoke_no_exit(virtctl_vm_cmd)
        logging.debug(f"Checking disconnected access for {ip_address} on {worker_name} output: {all_out}")
        virtctl_vm_cmd = f"ssh core@{worker_name} 'ssh -o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@{ip_address} 2>&1 | grep Permission' && echo 'True' || echo 'False'"
        logging.debug(f"Checking disconnected access for {ip_address} on {worker_name} with command: {virtctl_vm_cmd}")
        output = invoke_no_exit(virtctl_vm_cmd)
        if 'True' in output:
            logging.debug(f"Disconnected access for {ip_address} on {worker_name} is successful: {output}")
            return True, None, None
        else:
            logging.debug(f"Disconnected access for {ip_address} on {worker_name} failed: {output}")
            vmi = self.kube_vm_plugin.get_vmi(vmi_name, self.namespace)
            new_ip_address = vmi.get("status", {}).get("interfaces", [])[0].get("ipAddress")
            new_node_name = vmi.get("status", {}).get("nodeName")
            # if the vm gets deleted, it'll start up with a new ip address
            if new_ip_address != ip_address:
                virtctl_vm_cmd = f"ssh core@{worker_name} 'ssh -o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@{new_ip_address} 2>&1 | grep Permission' && echo 'True' || echo 'False'"
                logging.debug(f"Checking disconnected access for {new_ip_address} on {worker_name} with command: {virtctl_vm_cmd}")
                new_output = invoke_no_exit(virtctl_vm_cmd)
                logging.debug(f"Disconnected access for {ip_address} on {worker_name}: {new_output}")
                if 'True' in new_output:
                    return True, new_ip_address, None
            # if the node gets stopped, vmis will start up on a new node (and with a new ip)
            if new_node_name != worker_name:
                virtctl_vm_cmd = f"ssh core@{new_node_name} 'ssh -o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@{new_ip_address} 2>&1 | grep Permission' && echo 'True' || echo 'False'"
                logging.debug(f"Checking disconnected access for {new_ip_address} on {new_node_name} with command: {virtctl_vm_cmd}")
                new_output = invoke_no_exit(virtctl_vm_cmd)
                logging.debug(f"Disconnected access for {ip_address} on {new_node_name}: {new_output}")
                if 'True' in new_output:
                    return True, new_ip_address, new_node_name
            # try to connect through a known "up" node as a last resort
            if self.ssh_node:
                # using new_ip_address here since, if it hasn't changed, it'll match ip_address
                virtctl_vm_cmd = f"ssh core@{self.ssh_node} 'ssh -o BatchMode=yes -o ConnectTimeout=5 -o StrictHostKeyChecking=no root@{new_ip_address} 2>&1 | grep Permission' && echo 'True' || echo 'False'"
                logging.debug(f"Checking disconnected access for {new_ip_address} on {self.ssh_node} with command: {virtctl_vm_cmd}")
                new_output = invoke_no_exit(virtctl_vm_cmd)
                logging.debug(f"Disconnected access for {new_ip_address} on {self.ssh_node}: {new_output}")
                if 'True' in new_output:
                    return True, new_ip_address, None
            return False, None, None

    def get_vm_access(self, vm_name: str = '', namespace: str = ''):
        """
        Returns True when the VM is reachable over SSH via virtctl and False when it is not.
        An authentication denial or host key verification failure still proves the guest answered,
        which is why the probe greps for those messages.

        :param vm_name: name of the VM/VMI to probe
        :param namespace: namespace the VM lives in
        :return: True if the SSH probe reached the VM, False otherwise.
        """
        virtctl_vm_cmd = f"virtctl ssh --local-ssh-opts='-o BatchMode=yes' --local-ssh-opts='-o PasswordAuthentication=no' --local-ssh-opts='-o ConnectTimeout=5' root@vmi/{vm_name} -n {namespace} 2>&1 |egrep 'denied|verification failed' && echo 'True' || echo 'False'"
        check_virtctl_vm_cmd = f"virtctl ssh --local-ssh-opts='-o BatchMode=yes' --local-ssh-opts='-o PasswordAuthentication=no' --local-ssh-opts='-o ConnectTimeout=5' root@{vm_name} -n {namespace} 2>&1 |egrep 'denied|verification failed' && echo 'True' || echo 'False'"
        if 'True' in invoke_no_exit(check_virtctl_vm_cmd):
            return True
        else:
            second_invoke = invoke_no_exit(virtctl_vm_cmd)
            if 'True' in second_invoke:
                return True
        return False

    def thread_join(self):
        for thread in self.threads:
            thread.join()

    def batch_list(self, queue: queue.Queue, batch_size=20):
        # Split the VM list into batches and start one checker thread per batch.
        for i in range(0, len(self.vm_list), batch_size):
            sub_list = self.vm_list[i: i + batch_size]
            index = i
            t = threading.Thread(target=self.run_virt_check, name=str(index), args=(sub_list, queue))
            self.threads.append(t)
            t.start()

    def run_virt_check(self, vm_list_batch, virt_check_telemetry_queue: queue.Queue):

        virt_check_telemetry = []
        virt_check_tracker = {}
        while self.current_iterations < self.iterations:
            for vm in vm_list_batch:
                try:
                    if not self.disconnected:
                        vm_status = self.get_vm_access(vm.vm_name, vm.namespace)
                    else:
                        # if a new ip address exists, use it
                        if vm.new_ip_address:
                            vm_status, new_ip_address, new_node_name = self.check_disconnected_access(vm.new_ip_address, vm.node_name, vm.vm_name)
                        # since we already set the new ip address, we don't want to reset it to none each time
                        else:
                            vm_status, new_ip_address, new_node_name = self.check_disconnected_access(vm.ip_address, vm.node_name, vm.vm_name)
                        if new_ip_address and vm.ip_address != new_ip_address:
                            vm.new_ip_address = new_ip_address
                        if new_node_name and vm.node_name != new_node_name:
                            vm.node_name = new_node_name
                except Exception:
                    vm_status = False

                if vm.vm_name not in virt_check_tracker:
                    start_timestamp = datetime.now()
                    virt_check_tracker[vm.vm_name] = {
                        "vm_name": vm.vm_name,
                        "ip_address": vm.ip_address,
                        "namespace": vm.namespace,
                        "node_name": vm.node_name,
                        "status": vm_status,
                        "start_timestamp": start_timestamp,
                        "new_ip_address": vm.new_ip_address
                    }
                else:
                    if vm_status != virt_check_tracker[vm.vm_name]["status"]:
                        end_timestamp = datetime.now()
                        start_timestamp = virt_check_tracker[vm.vm_name]["start_timestamp"]
                        duration = (end_timestamp - start_timestamp).total_seconds()
                        virt_check_tracker[vm.vm_name]["end_timestamp"] = end_timestamp.isoformat()
                        virt_check_tracker[vm.vm_name]["duration"] = duration
                        virt_check_tracker[vm.vm_name]["start_timestamp"] = start_timestamp.isoformat()
                        if vm.new_ip_address:
                            virt_check_tracker[vm.vm_name]["new_ip_address"] = vm.new_ip_address
                        if self.only_failures:
                            if not virt_check_tracker[vm.vm_name]["status"]:
                                virt_check_telemetry.append(VirtCheck(virt_check_tracker[vm.vm_name]))
                        else:
                            virt_check_telemetry.append(VirtCheck(virt_check_tracker[vm.vm_name]))
                        del virt_check_tracker[vm.vm_name]
            time.sleep(self.interval)
        virt_check_end_time_stamp = datetime.now()
        for vm in virt_check_tracker.keys():
            final_start_timestamp = virt_check_tracker[vm]["start_timestamp"]
            final_duration = (virt_check_end_time_stamp - final_start_timestamp).total_seconds()
            virt_check_tracker[vm]["end_timestamp"] = virt_check_end_time_stamp.isoformat()
            virt_check_tracker[vm]["duration"] = final_duration
            virt_check_tracker[vm]["start_timestamp"] = final_start_timestamp.isoformat()
            if self.only_failures:
                if not virt_check_tracker[vm]["status"]:
                    virt_check_telemetry.append(VirtCheck(virt_check_tracker[vm]))
            else:
                virt_check_telemetry.append(VirtCheck(virt_check_tracker[vm]))
        virt_check_telemetry_queue.put(virt_check_telemetry)

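VirtChecker fans VM probes out across worker threads: `batch_list` slices `vm_list` into `batch_size` chunks, starts one `run_virt_check` thread per chunk, and every thread pushes its telemetry list onto a shared queue that the caller drains after `thread_join`. The same fan-out/collect pattern in isolation (standalone sketch, not the krkn class):

```python
import queue
import threading

def check_batch(batch, out: queue.Queue):
    # stand-in for run_virt_check: probe each VM once, report results
    out.put([(vm, True) for vm in batch])

vm_list = [f"vm-{i}" for i in range(45)]
results: queue.Queue = queue.Queue()
threads = []
for i in range(0, len(vm_list), 20):          # batch_size=20, like the default
    t = threading.Thread(target=check_batch, args=(vm_list[i:i + 20], results))
    threads.append(t)
    t.start()
for t in threads:                             # thread_join equivalent
    t.join()
telemetry = []
while not results.empty():                    # drain, as run_kraken.py does
    telemetry.extend(results.get_nowait())
print(len(telemetry))  # 45
```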
@@ -16,9 +16,9 @@ google-cloud-compute==1.22.0
ibm_cloud_sdk_core==3.18.0
ibm_vpc==0.20.0
jinja2==3.1.6
-krkn-lib==5.1.0
+krkn-lib==5.1.11
lxml==5.1.0
-kubernetes==28.1.0
+kubernetes==34.1.0
numpy==1.26.4
pandas==2.2.0
openshift-client==1.0.21
@@ -38,3 +38,4 @@ zope.interface==5.4.0

git+https://github.com/vmware/vsphere-automation-sdk-python.git@v8.0.0.0
cryptography>=42.0.4 # not directly required, pinned by Snyk to avoid a vulnerability
+protobuf>=4.25.8 # not directly required, pinned by Snyk to avoid a vulnerability

109  run_kraken.py
@@ -11,12 +11,12 @@ import uuid
import time
import queue
import threading
+from typing import Optional

from krkn_lib.elastic.krkn_elastic import KrknElastic
from krkn_lib.models.elastic import ElasticChaosRunTelemetry
from krkn_lib.models.krkn import ChaosRunOutput, ChaosRunAlertSummary
from krkn_lib.prometheus.krkn_prometheus import KrknPrometheus
-import krkn.performance_dashboards.setup as performance_dashboards
import krkn.prometheus as prometheus_plugin
import server as server
from krkn_lib.k8s import KrknKubernetes
@@ -29,10 +29,16 @@ from krkn_lib.utils.functions import get_yaml_item_value, get_junit_test_case

from krkn.utils import TeeLogHandler
from krkn.utils.HealthChecker import HealthChecker
+from krkn.utils.VirtChecker import VirtChecker
from krkn.scenario_plugins.scenario_plugin_factory import (
    ScenarioPluginFactory,
    ScenarioPluginNotFound,
)
+from krkn.rollback.config import RollbackConfig
+from krkn.rollback.command import (
+    list_rollback as list_rollback_command,
+    execute_rollback as execute_rollback_command,
+)

# removes TripleDES warning
import warnings
@@ -40,13 +46,13 @@ warnings.filterwarnings(action='ignore', module='.*paramiko.*')

report_file = ""


# Main function
-def main(cfg) -> int:
+def main(options, command: Optional[str]) -> int:
    # Start kraken
    print(pyfiglet.figlet_format("kraken"))
    logging.info("Starting kraken")

+   cfg = options.cfg
    # Parse and read the config
    if os.path.isfile(cfg):
        with open(cfg, "r") as f:

@@ -62,6 +68,18 @@ def main(cfg) -> int:
        config["kraken"], "publish_kraken_status", False
    )
    port = get_yaml_item_value(config["kraken"], "port", 8081)
+   RollbackConfig.register(
+       auto=get_yaml_item_value(
+           config["kraken"],
+           "auto_rollback",
+           False
+       ),
+       versions_directory=get_yaml_item_value(
+           config["kraken"],
+           "rollback_versions_directory",
+           "/tmp/kraken-rollback"
+       ),
+   )
    signal_address = get_yaml_item_value(
        config["kraken"], "signal_address", "0.0.0.0"
    )

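`RollbackConfig.register` is called once at startup with values read from the `kraken` section of the config, so rollback behavior is fixed for the whole run. A plausible minimal shape for such a registry (`RollbackRegistry` is an illustrative stand-in; krkn's real class may differ):

```python
class RollbackRegistry:
    """Process-wide rollback settings, registered once at startup (illustrative)."""
    auto: bool = False
    versions_directory: str = "/tmp/kraken-rollback"

    @classmethod
    def register(cls, auto: bool, versions_directory: str) -> None:
        cls.auto = auto
        cls.versions_directory = versions_directory

RollbackRegistry.register(auto=False, versions_directory="/tmp/kraken-rollback")
print(RollbackRegistry.versions_directory)
```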
@@ -69,14 +87,6 @@ def main(cfg) -> int:
    wait_duration = get_yaml_item_value(config["tunings"], "wait_duration", 60)
    iterations = get_yaml_item_value(config["tunings"], "iterations", 1)
    daemon_mode = get_yaml_item_value(config["tunings"], "daemon_mode", False)
-   deploy_performance_dashboards = get_yaml_item_value(
-       config["performance_monitoring"], "deploy_dashboards", False
-   )
-   dashboard_repo = get_yaml_item_value(
-       config["performance_monitoring"],
-       "repo",
-       "https://github.com/cloud-bulldozer/performance-dashboards.git",
-   )

    prometheus_url = config["performance_monitoring"].get("prometheus_url")
    prometheus_bearer_token = config["performance_monitoring"].get(
@@ -121,7 +131,8 @@ def main(cfg) -> int:
        config["performance_monitoring"], "check_critical_alerts", False
    )
    telemetry_api_url = config["telemetry"].get("api_url")
-   health_check_config = config["health_checks"]
+   health_check_config = get_yaml_item_value(config, "health_checks", {})
+   kubevirt_check_config = get_yaml_item_value(config, "kubevirt_checks", {})

    # Initialize clients
    if not os.path.isfile(kubeconfig_path) and not os.path.isfile(

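The switch from `config["health_checks"]` to a lookup with an empty-dict default matters: the old form raised `KeyError` on configs that omitted the section, while the new form degrades gracefully. The difference in plain Python (`dict.get` stands in for krkn-lib's `get_yaml_item_value` here):

```python
config = {"kraken": {}, "tunings": {}}  # note: no "health_checks" key

# Old behaviour: a missing section crashed the run.
try:
    health_check_config = config["health_checks"]
except KeyError:
    health_check_config = None  # krkn would have raised here

# New behaviour: absent sections fall back to an empty dict.
health_check_config = config.get("health_checks") or {}
kubevirt_check_config = config.get("kubevirt_checks") or {}
print(health_check_config, kubevirt_check_config)  # {} {}
```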
@@ -240,9 +251,18 @@ def main(cfg) -> int:

    logging.info("Server URL: %s" % kubecli.get_host())

-   # Deploy performance dashboards
-   if deploy_performance_dashboards:
-       performance_dashboards.setup(dashboard_repo, distribution)
+   if command == "list-rollback":
+       sys.exit(
+           list_rollback_command(
+               options.run_uuid, options.scenario_type
+           )
+       )
+   elif command == "execute-rollback":
+       sys.exit(
+           execute_rollback_command(
+               telemetry_ocp, options.run_uuid, options.scenario_type
+           )
+       )

    # Initialize the start iteration to 0
    iteration = 0

@@ -306,6 +326,10 @@ def main(cfg) -> int:
        args=(health_check_config, health_check_telemetry_queue))
    health_check_worker.start()

+   kubevirt_check_telemetry_queue = queue.Queue()
+   kubevirt_checker = VirtChecker(kubevirt_check_config, iterations=iterations, krkn_lib=kubecli)
+   kubevirt_checker.batch_list(kubevirt_check_telemetry_queue)

    # Loop to run the chaos starts here
    while int(iteration) < iterations and run_signal != "STOP":
        # Inject chaos scenarios specified in the config

@@ -351,10 +375,12 @@ def main(cfg) -> int:
                prometheus_plugin.critical_alerts(
                    prometheus,
                    summary,
+                   elastic_search,
                    run_uuid,
                    scenario_type,
                    start_time,
                    datetime.datetime.now(),
+                   elastic_alerts_index
                )

                chaos_output.critical_alerts = summary
@@ -367,6 +393,7 @@ def main(cfg) -> int:

        iteration += 1
        health_checker.current_iterations += 1
+       kubevirt_checker.current_iterations += 1

        # telemetry
        # in order to print decoded telemetry data even if telemetry collection

@@ -378,6 +405,17 @@ def main(cfg) -> int:
            chaos_telemetry.health_checks = health_check_telemetry_queue.get_nowait()
        except queue.Empty:
            chaos_telemetry.health_checks = None

+       kubevirt_checker.thread_join()
+       kubevirt_check_telem = []
+       i = 0
+       while i <= kubevirt_checker.threads_limit:
+           if not kubevirt_check_telemetry_queue.empty():
+               kubevirt_check_telem.extend(kubevirt_check_telemetry_queue.get_nowait())
+           else:
+               break
+           i += 1
+       chaos_telemetry.virt_checks = kubevirt_check_telem

        # if platform is openshift, cloud platform and network plugins metadata will be collected

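Each checker thread contributes exactly one telemetry list to the queue, so the drain loop above pops at most `threads_limit` entries and stops early once the queue is empty. The same non-blocking drain idiom in isolation, using the exception-based variant instead of the `.empty()` check:

```python
import queue

q: queue.Queue = queue.Queue()
q.put(["vm-a"])
q.put(["vm-b"])

collected = []
while True:
    try:
        collected.extend(q.get_nowait())  # raises queue.Empty once drained
    except queue.Empty:
        break
print(collected)  # ['vm-a', 'vm-b']
```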
@@ -532,7 +570,13 @@ def main(cfg) -> int:

if __name__ == "__main__":
    # Initialize the parser to read the config
-   parser = optparse.OptionParser()
+   parser = optparse.OptionParser(
+       usage="%prog [options] [command]\n\n"
+       "Commands:\n"
+       "  list-rollback      List rollback version files in a tree-like format\n"
+       "  execute-rollback   Execute rollback version files and cleanup if successful\n\n"
+       "If no command is specified, kraken will run chaos scenarios.",
+   )
    parser.add_option(
        "-c",
        "--config",
@@ -569,7 +613,34 @@ if __name__ == "__main__":
        default=None,
    )

+   # Add rollback command options
+   parser.add_option(
+       "-r",
+       "--run_uuid",
+       dest="run_uuid",
+       help="run UUID to filter rollback operations",
+       default=None,
+   )
+
+   parser.add_option(
+       "-s",
+       "--scenario_type",
+       dest="scenario_type",
+       help="scenario type to filter rollback operations",
+       default=None,
+   )
+
+   parser.add_option(
+       "-d",
+       "--debug",
+       dest="debug",
+       help="enable debug logging",
+       default=False,
+   )

    (options, args) = parser.parse_args()

    # If no command or regular execution, continue with existing logic
    report_file = options.output
    tee_handler = TeeLogHandler()
    handlers = [
@@ -579,7 +650,7 @@ if __name__ == "__main__":
    ]

    logging.basicConfig(
-       level=logging.INFO,
+       level=logging.DEBUG if options.debug else logging.INFO,
        format="%(asctime)s [%(levelname)s] %(message)s",
        handlers=handlers,
    )
@@ -638,7 +709,9 @@ if __name__ == "__main__":
    if option_error:
        retval = 1
    else:
-       retval = main(options.cfg)
+       # Check if command is provided as positional argument
+       command = args[0] if args else None
+       retval = main(options, command)

    junit_endtime = time.time()

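With the parser above, a bare positional word after the options selects a sub-command, while plain runs stay unchanged. A compact standalone sketch of the same optparse dispatch (the fake handlers just print; option names mirror the ones added above):

```python
import optparse

parser = optparse.OptionParser(usage="%prog [options] [command]")
parser.add_option("-r", "--run_uuid", dest="run_uuid", default=None)
parser.add_option("-s", "--scenario_type", dest="scenario_type", default=None)
(options, args) = parser.parse_args(["-r", "abc-123", "list-rollback"])

command = args[0] if args else None  # positional sub-command, may be None
if command == "list-rollback":
    print(f"listing rollbacks for run {options.run_uuid}")
elif command == "execute-rollback":
    print(f"executing rollbacks for run {options.run_uuid}")
else:
    print("running chaos scenarios")
```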
@@ -5,6 +5,7 @@ image: quay.io/krkn-chaos/krkn-hog
namespace: default
cpu-load-percentage: 90
cpu-method: all
+# node-name: "worker-0" # Uncomment to target a specific node by name
node-selector: "node-role.kubernetes.io/worker="
number-of-nodes: 2
taints: [] # example ["node-role.kubernetes.io/master:NoSchedule"]

@@ -6,10 +6,11 @@ namespace: default
io-block-size: 1m
io-write-bytes: 1g
io-target-pod-folder: /hog-data
+# node-name: "worker-0" # Uncomment to target a specific node by name
io-target-pod-volume:
  name: node-volume
  hostPath:
    path: /root # a path writable by kubelet in the root filesystem of the node
node-selector: "node-role.kubernetes.io/worker="
number-of-nodes: ''
-taints: [] # example ["node-role.kubernetes.io/master:NoSchedule"]
+taints: [] # example ["node-role.kubernetes.io/master:NoSchedule"]

@@ -4,6 +4,7 @@ hog-type: memory
image: quay.io/krkn-chaos/krkn-hog
namespace: default
memory-vm-bytes: 90%
+# node-name: "worker-0" # Uncomment to target a specific node by name
node-selector: "node-role.kubernetes.io/worker="
number-of-nodes: ''
taints: [] # example ["node-role.kubernetes.io/master:NoSchedule"]

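All three hog profiles gain an optional commented-out `node-name` key alongside `node-selector`. A quick way to see how a filled-in copy parses (PyYAML; the precedence comment is an assumption based on the option layout, not documented in this diff):

```python
import yaml

cfg = yaml.safe_load("""
hog-type: memory
namespace: default
memory-vm-bytes: 90%
node-name: "worker-0"
node-selector: "node-role.kubernetes.io/worker="
number-of-nodes: ''
taints: []
""")
# When node-name is set it presumably narrows the run to that single node;
# otherwise node-selector plus number-of-nodes choose the targets.
target = cfg.get("node-name") or cfg["node-selector"]
print(target)  # worker-0
```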
@@ -1,14 +1,18 @@
- id: node_network_filter
  image: "quay.io/krkn-chaos/krkn-network-chaos:latest"
- wait_duration: 300
- test_duration: 100
- label_selector: "kubernetes.io/hostname=minikube"
+ wait_duration: 1
+ test_duration: 10
+ label_selector: "<node_selector>"
+ service_account: ""
+ taints: [] # example ["node-role.kubernetes.io/master:NoSchedule"]
  namespace: 'default'
  instance_count: 1
  execution: parallel
  ingress: false
  egress: true
- target: ''
+ target: '<node_name>'
  interfaces: []
  ports:
    - 53
    - 2309
  protocols:
    - tcp

@@ -2,13 +2,15 @@
  image: "quay.io/krkn-chaos/krkn-network-chaos:latest"
  wait_duration: 1
  test_duration: 60
- label_selector: "app=network-attacked"
+ label_selector: "<pod_selector>"
+ service_account: ""
+ taints: [] # example ["node-role.kubernetes.io/master:NoSchedule"]
  namespace: 'default'
  instance_count: 1
  execution: parallel
  ingress: false
  egress: true
- target: ""
+ target: "<pod_name>"
  interfaces: []
  protocols:
    - tcp

@@ -5,6 +5,7 @@ service_name: nginx-service # name of the service to be hijacked
service_namespace: default # The namespace where the target service is located
image: quay.io/krkn-chaos/krkn-service-hijacking:v0.1.3 # Image of the krkn web service to be deployed to receive traffic.
chaos_duration: 30 # Total duration of the chaos scenario in seconds.
+privileged: True # Whether the hijacking webservice pod needs a privileged securityContext to run
plan:
  - resource: "/list/index.php" # Specifies the resource or path to respond to in the scenario. For paths, both the path and query parameters are captured but ignored.
                                # For resources, only query parameters are captured.

7  scenarios/kubevirt/kubevirt-vm-outage.yaml  Normal file
@@ -0,0 +1,7 @@
scenarios:
  - name: "kubevirt outage test"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: <vm-name>
      namespace: <namespace>
      timeout: 60

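The new scenario file follows the `scenarios` / `name` / `scenario` / `parameters` shape. A minimal check that a filled-in copy parses the way the plugin will read it (the concrete values below are illustrative placeholders):

```python
import yaml

doc = yaml.safe_load("""
scenarios:
  - name: "kubevirt outage test"
    scenario: kubevirt_vm_outage
    parameters:
      vm_name: my-vm        # replace with a real VM name
      namespace: my-ns      # replace with its namespace
      timeout: 60
""")
entry = doc["scenarios"][0]
assert entry["scenario"] == "kubevirt_vm_outage"
print(entry["parameters"]["vm_name"], entry["parameters"]["timeout"])
```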
@@ -1,6 +1,15 @@
# yaml-language-server: $schema=../plugin.schema.json
- id: kill-pods
  config:
-   namespace_pattern: ^acme-air$
+   namespace_pattern: "kube-system"
    name_pattern: .*
-   krkn_pod_recovery_time: 120
+   krkn_pod_recovery_time: 60
    kill: 1 # num of pods to kill
+   # Not needed by default, but can be used if you want to target pods on specific nodes
+   # Option 1: Target pods on nodes with specific labels [master/worker nodes]
+   node_label_selector: node-role.kubernetes.io/control-plane= # Target control-plane nodes (works on both k8s and openshift)
+   # Option 2: Target pods of specific nodes (testing mixed node types)
+   # node_names:
+   #   - ip-10-0-31-8.us-east-2.compute.internal # Worker node 1
+   #   - ip-10-0-48-188.us-east-2.compute.internal # Worker node 2
+   #   - ip-10-0-14-59.us-east-2.compute.internal # Master node 1

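In the plugin code earlier in this diff, an explicit `node_names` list is evaluated before `node_label_selector`, and with neither set the selection falls back to plain cluster-wide pod matching. That decision tree as a standalone sketch (the function name and the fake node lister are illustrative):

```python
def resolve_node_targets(node_names, node_label_selector, list_nodes):
    """Return the node names the pod killer should scope to, or None for cluster-wide."""
    if node_names:                      # Option 2: explicit node list wins
        return node_names
    if node_label_selector:             # Option 1: expand the label to matching nodes
        return list_nodes(node_label_selector)
    return None                         # no node targeting: match pods anywhere

fake_list_nodes = lambda sel: ["master-0", "master-1"]
print(resolve_node_targets([], "node-role.kubernetes.io/control-plane=", fake_list_nodes))
```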
20  scenarios/openshift/egress_ip_ovn.yml  Normal file
@@ -0,0 +1,20 @@
# EgressIP failover scenario - blocks the OVN healthcheck port 9107 so the EgressIP moves to another node

- id: node_network_filter
  image: "quay.io/krkn-chaos/krkn-network-chaos:latest"
  wait_duration: 60
  test_duration: 30
  label_selector: "k8s.ovn.org/egress-assignable="
  service_account: ""
  taints: []
  namespace: 'default'
  instance_count: 1
  execution: serial
  ingress: true
  egress: false
  target: ''
  interfaces: []
  ports:
    - 9107
  protocols:
    - tcp

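To confirm the failover, it helps to know which egress-assignable nodes the EgressIP can move between; listing the candidates with the kubernetes Python client is enough for a quick check (a working kubeconfig and the OVN label are assumed):

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
nodes = v1.list_node(label_selector="k8s.ovn.org/egress-assignable=")
for node in nodes.items:
    print(node.metadata.name)  # candidates the EgressIP can fail over to
```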
@@ -4,3 +4,4 @@
  namespace_pattern: ^openshift-etcd$
  label_selector: k8s-app=etcd
  krkn_pod_recovery_time: 120
+ exclude_label: "" # excludes pods marked with this label from chaos

@@ -7,10 +7,12 @@ node_scenarios:
    timeout: 360
    duration: 120
    cloud_type: ibm
+   disable_ssl_verification: true # Set to true for CI environments with certificate issues
  - actions:
      - node_reboot_scenario
    node_name:
    label_selector: node-role.kubernetes.io/worker
    instance_count: 1
    timeout: 120
-   cloud_type: ibm
+   cloud_type: ibm
+   disable_ssl_verification: true # Set to true for CI environments with certificate issues

@@ -4,4 +4,5 @@
  namespace_pattern: ^openshift-apiserver$
  label_selector: app=openshift-apiserver-a
  krkn_pod_recovery_time: 120
+ exclude_label: "" # excludes pods marked with this label from chaos

@@ -4,4 +4,5 @@
  namespace_pattern: ^openshift-kube-apiserver$
  label_selector: app=openshift-kube-apiserver
  krkn_pod_recovery_time: 120
+ exclude_label: "" # excludes pods marked with this label from chaos

@@ -2,4 +2,5 @@
  config:
    namespace_pattern: ^openshift-monitoring$
    label_selector: statefulset.kubernetes.io/pod-name=prometheus-k8s-0
-   krkn_pod_recovery_time: 120
+   krkn_pod_recovery_time: 120
+   exclude_label: "" # excludes pods marked with this label from chaos

@@ -5,3 +5,4 @@
  name_pattern: .*
  kill: 3
  krkn_pod_recovery_time: 120
+ exclude_label: "" # excludes pods marked with this label from chaos

Some files were not shown because too many files have changed in this diff.