Mirror of https://github.com/krkn-chaos/krkn.git (synced 2026-03-11 14:11:35 +00:00)
Compare commits
22 Commits
| SHA1 |
|---|
| fef77cfc0e |
| eb2eabe029 |
| f7f1b2dfb0 |
| 61356fd70b |
| 067969a81a |
| 972ac12921 |
| ea813748ae |
| 782d04c1b1 |
| 2fb58f9897 |
| 5712721410 |
| 5567c06cd0 |
| 0ad4c11356 |
| f6f686e8fe |
| 3a66f8a5a3 |
| 585d519687 |
| e40fedcd44 |
| 1bb5b8ad04 |
| 725d58c8ce |
| c6058da7a7 |
| 06a8ed220c |
| 2c6b50bcdc |
| ed97c8df2b |
.github/workflows/functional_tests.yaml (vendored, 2 lines changed)
@@ -34,7 +34,7 @@ jobs:
       - name: Check out Kraken
         uses: actions/checkout@v3
       - name: Checkout Pull Request
-        run: hub pr checkout ${{ github.event.issue.number }}
+        run: gh pr checkout ${{ github.event.issue.number }}
         env:
           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
       - name: Install OC CLI
README.md

@@ -1,5 +1,6 @@
 # Krkn aka Kraken
 [](https://quay.io/repository/redhat-chaos/krkn?tab=tags&tag=latest)
 (existing badge image)
+(added badge image)

@@ -65,7 +66,7 @@ Scenario type | Kubernetes | OpenShift
 [Time Scenarios](docs/time_scenarios.md) | :x: | :heavy_check_mark: |
 [Hog Scenarios: CPU, Memory](docs/arcaflow_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
 [Cluster Shut Down Scenarios](docs/cluster_shut_down_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
-[Namespace Scenarios](docs/namespace_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
+[Service Disruption Scenarios](docs/service_disruption_scenarios.md) | :heavy_check_mark: | :heavy_check_mark: |
 [Zone Outage Scenarios](docs/zone_outage.md) | :heavy_check_mark: | :heavy_check_mark: |
 [Application_outages](docs/application_outages.md) | :heavy_check_mark: | :heavy_check_mark: |
 [PVC scenario](docs/pvc_scenario.md) | :heavy_check_mark: | :heavy_check_mark: |

@@ -128,5 +129,5 @@ Please read [this file](CI/README.md#adding-a-test-case) for more information
 
 ### Community
 Key Members (slack_usernames/full name): paigerube14/Paige Rubendall, mffiedler/Mike Fiedler, ravielluri/Naga Ravi Chaitanya Elluri.
-* [**#sig-scalability on Kubernetes Slack**](https://kubernetes.slack.com)
+* [**#krkn on Kubernetes Slack**](https://kubernetes.slack.com)
 * [**#forum-chaos on CoreOS Slack internal to Red Hat**](https://coreos.slack.com)
ROADMAP.md (16 lines changed)
@@ -2,10 +2,12 @@
 
 The following is a list of enhancements that we are planning to add to Krkn. Of course, any help/contributions are greatly appreciated.
 
-- [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/redhat-chaos/krkn/issues/424)
-- [Centralized storage for chaos experiments artifacts](https://github.com/redhat-chaos/krkn/issues/423)
-- [Support for causing DNS outages](https://github.com/redhat-chaos/krkn/issues/394)
-- [Support for pod level network traffic shaping](https://github.com/redhat-chaos/krkn/issues/393)
-- [Ability to visualize the metrics that are being captured by Kraken and stored in Elasticsearch](https://github.com/redhat-chaos/krkn/issues/124)
-- Support for running all the scenarios of Kraken on Kubernetes distribution - see https://github.com/redhat-chaos/krkn/issues/185, https://github.com/redhat-chaos/krkn/issues/186
-- Continue to improve [Chaos Testing Guide](https://redhat-chaos.github.io/krkn) in terms of adding best practices, test environment recommendations and scenarios to make sure the OpenShift platform, as well as the applications running on top of it, are resilient and performant under chaotic conditions.
+- [ ] [Ability to run multiple chaos scenarios in parallel under load to mimic real world outages](https://github.com/redhat-chaos/krkn/issues/424)
+- [x] [Centralized storage for chaos experiments artifacts](https://github.com/redhat-chaos/krkn/issues/423)
+- [ ] [Support for causing DNS outages](https://github.com/redhat-chaos/krkn/issues/394)
+- [ ] [Support for pod level network traffic shaping](https://github.com/redhat-chaos/krkn/issues/393)
+- [ ] [Ability to visualize the metrics that are being captured by Kraken and stored in Elasticsearch](https://github.com/redhat-chaos/krkn/issues/124)
+- [ ] Support for running all the scenarios of Kraken on Kubernetes distribution - see https://github.com/redhat-chaos/krkn/issues/185, https://github.com/redhat-chaos/krkn/issues/186
+- [ ] Continue to improve [Chaos Testing Guide](https://redhat-chaos.github.io/krkn) in terms of adding best practices, test environment recommendations and scenarios to make sure the OpenShift platform, as well as the applications running on top of it, are resilient and performant under chaotic conditions.
+- [ ] [Switch documentation references to Kubernetes](https://github.com/redhat-chaos/krkn/issues/495)
+- [ ] [OCP and Kubernetes functionalities segregation](https://github.com/redhat-chaos/krkn/issues/497)
(alerts profile file, name not shown)

@@ -16,6 +16,42 @@
     description: etcd leader changes observed
     severity: warning
 
+  - expr: (last_over_time(etcd_mvcc_db_total_size_in_bytes[5m]) / last_over_time(etcd_server_quota_backend_bytes[5m]))*100 > 95
+    description: etcd cluster database is running full.
+    severity: critical
+
+  - expr: (last_over_time(etcd_mvcc_db_total_size_in_use_in_bytes[5m]) / last_over_time(etcd_mvcc_db_total_size_in_bytes[5m])) < 0.5
+    description: etcd database size in use is less than 50% of the actual allocated storage.
+    severity: warning
+
+  - expr: rate(etcd_server_proposals_failed_total{job=~".*etcd.*"}[15m]) > 5
+    description: etcd cluster has high number of proposal failures.
+    severity: warning
+
+  - expr: histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket{job=~".*etcd.*"}[5m])) > 0.15
+    description: etcd cluster member communication is slow.
+    severity: warning
+
+  - expr: histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job=~".*etcd.*", grpc_method!="Defragment", grpc_type="unary"}[5m])) without(grpc_type)) > 0.15
+    description: etcd grpc requests are slow.
+    severity: critical
+
+  - expr: 100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code=~"Unknown|FailedPrecondition|ResourceExhausted|Internal|Unavailable|DataLoss|DeadlineExceeded"}[5m])) without (grpc_type, grpc_code) / sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) without (grpc_type, grpc_code) > 5
+    description: etcd cluster has high number of failed grpc requests.
+    severity: critical
+
+  - expr: etcd_server_has_leader{job=~".*etcd.*"} == 0
+    description: etcd cluster has no leader.
+    severity: warning
+
+  - expr: sum(up{job=~".*etcd.*"} == bool 1) without (instance) < ((count(up{job=~".*etcd.*"}) without (instance) + 1) / 2)
+    description: etcd cluster has insufficient number of members.
+    severity: warning
+
+  - expr: max without (endpoint) ( sum without (instance) (up{job=~".*etcd.*"} == bool 0) or count without (To) ( sum without (instance) (rate(etcd_network_peer_sent_failures_total{job=~".*etcd.*"}[120s])) > 0.01 )) > 0
+    description: etcd cluster members are down.
+    severity: warning
+
 # API server
   - expr: avg_over_time(histogram_quantile(0.99, sum(irate(apiserver_request_duration_seconds_bucket{apiserver="kube-apiserver", verb=~"POST|PUT|DELETE|PATCH", subresource!~"log|exec|portforward|attach|proxy"}[2m])) by (le, resource, verb))[10m:]) > 1
     description: 10 minutes avg. 99th mutating API call latency for {{$labels.verb}}/{{$labels.resource}} higher than 1 second. {{$value}}s
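As a quick way to sanity-check one of these expressions outside of krkn, one can evaluate it directly against Prometheus with the `prometheus_api_client` package (already listed in `requirements.txt`). A minimal sketch; the Prometheus URL and token below are placeholders, not values from this repo:

```python
# Hypothetical sketch: evaluate one alert expression from the profile above.
# The endpoint URL and bearer token are placeholders.
from prometheus_api_client import PrometheusConnect

prom = PrometheusConnect(
    url="https://prometheus.example.com",              # placeholder endpoint
    headers={"Authorization": "Bearer <token>"},       # placeholder credentials
    disable_ssl=True,
)

# The "etcd cluster database is running full" expression from the alert profile.
query = (
    "(last_over_time(etcd_mvcc_db_total_size_in_bytes[5m]) / "
    "last_over_time(etcd_server_quota_backend_bytes[5m]))*100 > 95"
)

# custom_query returns the list of series matching the expression;
# an empty list means the alert condition is currently not firing.
result = prom.custom_query(query=query)
print("alert firing" if result else "alert not firing", result)
```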
(krkn config file, name not shown)

@@ -11,6 +11,7 @@ kraken:
     - arcaflow_scenarios:
         - scenarios/arcaflow/cpu-hog/input.yaml
         - scenarios/arcaflow/memory-hog/input.yaml
+        - scenarios/arcaflow/io-hog/input.yaml
     - application_outages:
         - scenarios/openshift/app_outage.yaml
     - container_scenarios:  # List of chaos pod scenarios to load

@@ -20,6 +21,7 @@ kraken:
         - scenarios/openshift/regex_openshift_pod_kill.yml
         - scenarios/openshift/vmware_node_scenarios.yml
         - scenarios/openshift/network_chaos_ingress.yml
+        - scenarios/openshift/prom_kill.yml
     - node_scenarios:  # List of chaos node scenarios to load
         - scenarios/openshift/node_scenarios_example.yml
     - plugin_scenarios:

@@ -37,7 +39,7 @@ kraken:
     - cluster_shut_down_scenarios:
         - - scenarios/openshift/cluster_shut_down_scenario.yml
           - scenarios/openshift/post_action_shut_down.py
-    - namespace_scenarios:
+    - service_disruption_scenarios:
         - - scenarios/openshift/regex_namespace.yaml
         - - scenarios/openshift/ingress_namespace.yaml
           - scenarios/openshift/post_action_namespace.py

@@ -87,3 +89,10 @@ telemetry:
     # For an unstable/slow connection it is better to keep this value low,
     # increasing the number of backup_threads; this way, on upload failure, the retry
     # will happen only on the failed chunk without affecting the whole upload.
+    logs_backup: True
+    logs_filter_patterns:
+        - "(\\w{3}\\s\\d{1,2}\\s\\d{2}:\\d{2}:\\d{2}\\.\\d+).+"  # Sep 9 11:20:36.123425532
+        - "kinit (\\d+/\\d+/\\d+\\s\\d{2}:\\d{2}:\\d{2})\\s+"  # kinit 2023/09/15 11:20:36 log
+        - "(\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}\\.\\d+Z).+"  # 2023-09-15T11:20:36.123425532Z log
+    oc_cli_path: /usr/bin/oc  # optional; if not specified it will be searched for in $PATH
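The `logs_filter_patterns` entries are ordinary Python regular expressions whose first capture group isolates a log line's timestamp, per the inline examples above. A standalone sanity check of the first pattern, using nothing krkn-specific:

```python
import re

# First pattern from logs_filter_patterns above; the capture group
# grabs the leading timestamp of a matching log line.
pattern = re.compile(r"(\w{3}\s\d{1,2}\s\d{2}:\d{2}:\d{2}\.\d+).+")

line = "Sep 9 11:20:36.123425532 some-component: something happened"
match = pattern.match(line)
if match:
    print(match.group(1))  # -> "Sep 9 11:20:36.123425532"
```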
(second config file, name not shown)

@@ -13,6 +13,7 @@ kraken:
     - plugin_scenarios:  # List of chaos pod scenarios to load
         - scenarios/openshift/etcd.yml
         - scenarios/openshift/regex_openshift_pod_kill.yml
+        - scenarios/openshift/prom_kill.yml
     - node_scenarios:  # List of chaos node scenarios to load
         - scenarios/openshift/node_scenarios_example.yml
     - plugin_scenarios:

@@ -26,7 +27,7 @@ kraken:
     - cluster_shut_down_scenarios:
         - - scenarios/openshift/cluster_shut_down_scenario.yml
           - scenarios/openshift/post_action_shut_down.py
-    - namespace_scenarios:
+    - service_disruption_scenarios:
         - scenarios/openshift/regex_namespace.yaml
         - scenarios/openshift/ingress_namespace.yaml
     - zone_outages:
(metrics profile file, name not shown)

@@ -139,6 +139,39 @@ metrics:
   - query: histogram_quantile(0.99,sum(rate(etcd_request_duration_seconds_bucket[2m])) by (le,operation,apiserver)) > 0
     metricName: P99APIEtcdRequestLatency
 
+  - query: sum(grpc_server_started_total{namespace="openshift-etcd",grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{namespace="openshift-etcd",grpc_service="etcdserverpb.Watch",grpc_type="bidi_stream"})
+    metricName: ActiveWatchStreams
+
+  - query: sum(grpc_server_started_total{namespace="openshift-etcd",grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"}) - sum(grpc_server_handled_total{namespace="openshift-etcd",grpc_service="etcdserverpb.Lease",grpc_type="bidi_stream"})
+    metricName: ActiveLeaseStreams
+
+  - query: sum(rate(etcd_debugging_snap_save_total_duration_seconds_sum{namespace="openshift-etcd"}[2m]))
+    metricName: snapshotSaveLatency
+
+  - query: sum(rate(etcd_server_heartbeat_send_failures_total{namespace="openshift-etcd"}[2m]))
+    metricName: HeartBeatFailures
+
+  - query: sum(rate(etcd_server_health_failures{namespace="openshift-etcd"}[2m]))
+    metricName: HealthFailures
+
+  - query: sum(rate(etcd_server_slow_apply_total{namespace="openshift-etcd"}[2m]))
+    metricName: SlowApplies
+
+  - query: sum(rate(etcd_server_slow_read_indexes_total{namespace="openshift-etcd"}[2m]))
+    metricName: SlowIndexRead
+
+  - query: sum(etcd_server_proposals_pending)
+    metricName: PendingProposals
+
+  - query: histogram_quantile(1.0, sum(rate(etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket[1m])) by (le, instance))
+    metricName: CompactionMaxPause
+
+  - query: sum by (instance) (apiserver_storage_objects)
+    metricName: etcdTotalObjectCount
+
+  - query: topk(500, max by(resource) (apiserver_storage_objects))
+    metricName: etcdTopObectCount
+
 # Cluster metrics
   - query: count(kube_namespace_created)
     metricName: namespaceCount
(Dockerfile)

@@ -14,7 +14,7 @@ COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
 # Install dependencies
 RUN yum install -y git python39 python3-pip jq gettext wget && \
     python3.9 -m pip install -U pip && \
-    git clone https://github.com/redhat-chaos/krkn.git --branch v1.4.2 /root/kraken && \
+    git clone https://github.com/redhat-chaos/krkn.git --branch v1.4.7 /root/kraken && \
     mkdir -p /root/.kube && cd /root/kraken && \
     pip3.9 install -r requirements.txt && \
     pip3.9 install virtualenv && \
(second Dockerfile with the same change, name not shown)

@@ -14,7 +14,7 @@ COPY --from=azure-cli /usr/local/bin/az /usr/bin/az
 # Install dependencies
 RUN yum install -y git python39 python3-pip jq gettext wget && \
     python3.9 -m pip install -U pip && \
-    git clone https://github.com/redhat-chaos/krkn.git --branch v1.4.2 /root/kraken && \
+    git clone https://github.com/redhat-chaos/krkn.git --branch v1.4.7 /root/kraken && \
     mkdir -p /root/.kube && cd /root/kraken && \
     pip3.9 install -r requirements.txt && \
     pip3.9 install virtualenv && \
(arcaflow scenarios doc)

@@ -7,6 +7,7 @@ The engine uses containers to execute plugins and runs them either locally in Do
 #### Hog scenarios:
 - [CPU Hog](arcaflow_scenarios/cpu_hog.md)
 - [Memory Hog](arcaflow_scenarios/memory_hog.md)
+- [I/O Hog](arcaflow_scenarios/io_hog.md)
 
 ### Prerequisites

@@ -64,4 +65,6 @@ Each step is represented by a container that will be executed from the deployer
 Note that we provide the scenarios as a template, but they can be manipulated to define more complex workflows.
 For more details on the Arcaflow workflow architecture and syntax, refer to the [Arcaflow Documentation](https://arcalot.io/arcaflow/).

> Review comment: This edit is no longer in the quay image.
> Working on a fix in ticket: https://issues.redhat.com/browse/CHAOS-494
> This will affect all versions 4.12 and higher of OpenShift.
docs/arcaflow_scenarios/io_hog.md (new file, 21 lines)
@@ -0,0 +1,21 @@
# I/O Hog
This scenario is based on the arcaflow [arcaflow-plugin-stressng](https://github.com/arcalot/arcaflow-plugin-stressng) plugin.
The purpose of this scenario is to create disk pressure on a particular node of the Kubernetes/OpenShift cluster for a given time span.
The scenario allows attaching a node path to the pod as a `hostPath` volume.
To enable this plugin, add a pointer to the scenario input file `scenarios/arcaflow/io-hog/input.yaml` as described in the Usage section.
This scenario takes a list of objects named `input_list` with the following properties:

- **kubeconfig :** *string* the kubeconfig needed by the deployer to deploy the stress-ng plugin in the target cluster
- **namespace :** *string* the namespace where the scenario container will be deployed
  **Note:** this parameter will be automatically filled by kraken if the `kubeconfig_path` property is correctly set
- **node_selector :** *key-value map* the node label that will be used as `nodeSelector` by the pod to target a specific cluster node
- **duration :** *string* stop the stress test after N seconds. One can also specify the unit of time in seconds, minutes, hours, days or years with the suffix s, m, h, d or y.
- **target_pod_folder :** *string* the path in the pod where the volume is mounted
- **target_pod_volume :** *object* the `hostPath` volume definition in the [Kubernetes/OpenShift](https://docs.openshift.com/container-platform/3.11/install_config/persistent_storage/using_hostpath.html) format, that will be attached to the pod as a volume
- **io_write_bytes :** *string* writes N bytes for each hdd process. The size can be expressed as % of free space on the file system or in units of Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g
- **io_block_size :** *string* size of each write in bytes. Size can be from 1 byte to 4m.

To perform several load tests in the same run simultaneously (e.g. stress two or more nodes in the same run), add another item
to the `input_list` with the same properties (and possibly different values, e.g. different node_selectors
to schedule the pod on different nodes), as in the sketch below. To reduce (or increase) the parallelism, change the `parallelism` value in the `workflow.yaml` file.
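A minimal sketch of an `input_list` with two items, adapted from the shipped `scenarios/arcaflow/io-hog/input.yaml`; the `kubernetes.io/hostname` values are placeholder node names:

```yaml
# Sketch: two io-hog items targeting two different nodes in one run.
# worker-0 and worker-1 are placeholders for real node names.
input_list:
  - duration: 30s
    io_block_size: 1m
    io_workers: 1
    io_write_bytes: 10m
    kubeconfig: ''          # left empty: auto-filled per the note above
    namespace: default
    node_selector:
      kubernetes.io/hostname: worker-0
    target_pod_folder: /hog-data
    target_pod_volume:
      hostPath:
        path: /tmp
      name: node-volume
  - duration: 30s
    io_block_size: 1m
    io_workers: 1
    io_write_bytes: 10m
    kubeconfig: ''
    namespace: default
    node_selector:
      kubernetes.io/hostname: worker-1
    target_pod_folder: /hog-data
    target_pod_volume:
      hostPath:
        path: /tmp
      name: node-volume
```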
docs/service_disruption_scenarios.md

@@ -1,6 +1,6 @@
-### Delete Namespace Scenarios
+### Service Disruption Scenarios (Previously Delete Namespace Scenario)
 
-Using this type of scenario configuration one is able to delete a specific namespace, or a namespace matching a certain regex string.
+Using this type of scenario configuration one is able to delete crucial objects in a specific namespace, or in a namespace matching a certain regex string.
 
 Configuration Options:
 
@@ -27,12 +27,20 @@ scenarios:
   sleep: 15
 ```
 
 **NOTE:** Many openshift namespaces have finalizers built in that protect the namespace from being fully deleted: see the documentation [here](https://kubernetes.io/blog/2021/05/14/using-finalizers-to-control-deletion/).
 The namespaces that do have finalizers enabled will be left in a terminating state, but all the pods running in that namespace will get deleted.
 
+### Steps
+
+This scenario will select a namespace (or multiple, depending on the configuration), kill all of the object types below in that namespace, and wait for them to be Running again in the post action:
+1. Services
+2. Daemonsets
+3. Statefulsets
+4. Replicasets
+5. Deployments
 
 #### Post Action
 
-In all scenarios we do a post chaos check to wait and verify the specific component.
+We do a post chaos check to wait and verify that the specific objects in each namespace are Ready.
 
 Here there are two options:
 
@@ -47,8 +55,8 @@ See [scenarios/post_action_namespace.py](https://github.com/cloud-bulldozer/krak
 ```
 
-2. Allow kraken to wait and check the killed namespaces become 'Active' again. Kraken keeps a list of the specific
-namespaces that were killed to verify all that were affected recover properly.
+1. Allow kraken to wait and check that all killed objects in the namespaces become 'Running' again. Kraken keeps a list of the specific
+objects in namespaces that were killed to verify all that were affected recover properly.
 
 ```
 wait_time: <seconds to wait for namespace to recover>
 ```
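For reference, a minimal sketch of a service disruption scenario file of the kind this doc describes; the field names mirror the `scenario.get(...)` calls in `kraken/service_disruption/common_service_disruption_functions.py`, and the namespace regex is only an example:

```yaml
# Sketch of a service disruption scenario file.
# ^openshift-monitoring$ is an example pattern, not a recommendation.
scenarios:
  - namespace: "^openshift-monitoring$"   # regex for the target namespace
    label_selector: ""                    # set either namespace or label_selector, not both
    delete_count: 1                       # namespaces to hit per run (default 1)
    runs: 1                               # how many times to repeat (default 1)
    sleep: 15                             # seconds between namespace deletions (default 10)
    wait_time: 30                         # seconds to wait for recovery (default 30)
```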
kraken/namespace_actions/common_namespace_functions.py (deleted, 132 lines)

@@ -1,132 +0,0 @@
import time
import random
import logging
import kraken.cerberus.setup as cerberus
import kraken.post_actions.actions as post_actions
import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry import KrknTelemetry
from krkn_lib.models.telemetry import ScenarioTelemetry


# krkn_lib
def run(
    scenarios_list,
    config,
    wait_duration,
    failed_post_scenarios,
    kubeconfig_path,
    kubecli: KrknKubernetes,
    telemetry: KrknTelemetry
) -> (list[str], list[ScenarioTelemetry]):
    scenario_telemetries: list[ScenarioTelemetry] = []
    failed_scenarios = []
    for scenario_config in scenarios_list:
        scenario_telemetry = ScenarioTelemetry()
        scenario_telemetry.scenario = scenario_config[0]
        scenario_telemetry.startTimeStamp = time.time()
        telemetry.set_parameters_base64(scenario_telemetry, scenario_config[0])
        try:
            if len(scenario_config) > 1:
                pre_action_output = post_actions.run(kubeconfig_path, scenario_config[1])
            else:
                pre_action_output = ""
            with open(scenario_config[0], "r") as f:
                scenario_config_yaml = yaml.full_load(f)
                for scenario in scenario_config_yaml["scenarios"]:
                    scenario_namespace = scenario.get("namespace", "")
                    scenario_label = scenario.get("label_selector", "")
                    if scenario_namespace is not None and scenario_namespace.strip() != "":
                        if scenario_label is not None and scenario_label.strip() != "":
                            logging.error("You can only have namespace or label set in your namespace scenario")
                            logging.error(
                                "Current scenario config has namespace '%s' and label selector '%s'"
                                % (scenario_namespace, scenario_label)
                            )
                            logging.error(
                                "Please set either namespace to blank ('') or label_selector to blank ('') to continue"
                            )
                            # removed_exit
                            # sys.exit(1)
                            raise RuntimeError()
                    delete_count = scenario.get("delete_count", 1)
                    run_count = scenario.get("runs", 1)
                    run_sleep = scenario.get("sleep", 10)
                    wait_time = scenario.get("wait_time", 30)
                    killed_namespaces = []
                    start_time = int(time.time())
                    for i in range(run_count):
                        namespaces = kubecli.check_namespaces([scenario_namespace], scenario_label)
                        for j in range(delete_count):
                            if len(namespaces) == 0:
                                logging.error(
                                    "Couldn't delete %s namespaces, not enough namespaces matching %s with label %s"
                                    % (str(run_count), scenario_namespace, str(scenario_label))
                                )
                                # removed_exit
                                # sys.exit(1)
                                raise RuntimeError()
                            selected_namespace = namespaces[random.randint(0, len(namespaces) - 1)]
                            killed_namespaces.append(selected_namespace)
                            try:
                                kubecli.delete_namespace(selected_namespace)
                                logging.info("Delete on namespace %s was successful" % str(selected_namespace))
                            except Exception as e:
                                logging.info("Delete on namespace %s was unsuccessful" % str(selected_namespace))
                                logging.info("Namespace action error: " + str(e))
                                # removed_exit
                                # sys.exit(1)
                                raise RuntimeError()
                            namespaces.remove(selected_namespace)
                            logging.info("Waiting %s seconds between namespace deletions" % str(run_sleep))
                            time.sleep(run_sleep)

                    logging.info("Waiting for the specified duration: %s" % wait_duration)
                    time.sleep(wait_duration)
                    if len(scenario_config) > 1:
                        try:
                            failed_post_scenarios = post_actions.check_recovery(
                                kubeconfig_path, scenario_config, failed_post_scenarios, pre_action_output
                            )
                        except Exception as e:
                            logging.error("Failed to run post action checks: %s" % e)
                            # removed_exit
                            # sys.exit(1)
                            raise RuntimeError()
                    else:
                        failed_post_scenarios = check_active_namespace(killed_namespaces, wait_time, kubecli)
                    end_time = int(time.time())
                    cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
        except (Exception, RuntimeError):
            scenario_telemetry.exitStatus = 1
            failed_scenarios.append(scenario_config[0])
            telemetry.log_exception(scenario_config[0])
        else:
            scenario_telemetry.exitStatus = 0
        scenario_telemetry.endTimeStamp = time.time()
        scenario_telemetries.append(scenario_telemetry)
    return failed_scenarios, scenario_telemetries


# krkn_lib
def check_active_namespace(killed_namespaces, wait_time, kubecli: KrknKubernetes):
    active_namespace = []
    timer = 0
    while timer < wait_time and killed_namespaces:
        for namespace_name in killed_namespaces:
            if namespace_name in kubecli.list_namespaces():
                response = kubecli.get_namespace_status(namespace_name).strip()
                if response != "Active":
                    continue
                else:
                    active_namespace.append(namespace_name)
        killed_namespaces = set(killed_namespaces) - set(active_namespace)
        if len(killed_namespaces) == 0:
            return []

        timer += 5
        time.sleep(5)
        logging.info("Waiting 5 seconds for namespaces to become active")

    logging.error("Namespaces are still not active after waiting " + str(wait_time) + " seconds")
    logging.error("Non active namespaces " + str(killed_namespaces))
    return killed_namespaces
kraken/service_disruption/common_service_disruption_functions.py (new file, 310 lines)
@@ -0,0 +1,310 @@
import time
import random
import logging
import kraken.cerberus.setup as cerberus
import kraken.post_actions.actions as post_actions
import yaml
from krkn_lib.k8s import KrknKubernetes
from krkn_lib.telemetry import KrknTelemetry
from krkn_lib.models.telemetry import ScenarioTelemetry


def delete_objects(kubecli, namespace):
    # Delete every service, daemonset, statefulset, replicaset and deployment
    # in the namespace and return what was deleted, keyed by object type.
    services = delete_all_services_namespace(kubecli, namespace)
    daemonsets = delete_all_daemonset_namespace(kubecli, namespace)
    statefulsets = delete_all_statefulsets_namespace(kubecli, namespace)
    replicasets = delete_all_replicaset_namespace(kubecli, namespace)
    deployments = delete_all_deployment_namespace(kubecli, namespace)

    objects = {
        "daemonsets": daemonsets,
        "deployments": deployments,
        "replicasets": replicasets,
        "statefulsets": statefulsets,
        "services": services,
    }

    return objects


def get_list_running_pods(kubecli: KrknKubernetes, namespace: str):
    running_pods = []
    pods = kubecli.list_pods(namespace)
    for pod in pods:
        pod_status = kubecli.get_pod_info(pod, namespace)
        if pod_status and pod_status.status == "Running":
            running_pods.append(pod)
    logging.info("all running pods " + str(running_pods))
    return running_pods


def delete_all_deployment_namespace(kubecli: KrknKubernetes, namespace: str):
    """
    Delete all the deployments in the specified namespace

    :param kubecli: krkn kubernetes python package
    :param namespace: namespace
    """
    try:
        deployments = kubecli.get_deployment_ns(namespace)
        for deployment in deployments:
            logging.info("Deleting deployment " + deployment)
            kubecli.delete_deployment(deployment, namespace)
    except Exception as e:
        logging.error(
            "Exception when calling delete_all_deployment_namespace: %s\n",
            str(e),
        )
        raise e

    return deployments


def delete_all_daemonset_namespace(kubecli: KrknKubernetes, namespace: str):
    """
    Delete all the daemonsets in the specified namespace

    :param kubecli: krkn kubernetes python package
    :param namespace: namespace
    """
    try:
        daemonsets = kubecli.get_daemonset(namespace)
        for daemonset in daemonsets:
            logging.info("Deleting daemonset " + daemonset)
            kubecli.delete_daemonset(daemonset, namespace)
    except Exception as e:
        logging.error(
            "Exception when calling delete_all_daemonset_namespace: %s\n",
            str(e),
        )
        raise e

    return daemonsets


def delete_all_statefulsets_namespace(kubecli: KrknKubernetes, namespace: str):
    """
    Delete all the statefulsets in the specified namespace

    :param kubecli: krkn kubernetes python package
    :param namespace: namespace
    """
    try:
        statefulsets = kubecli.get_all_statefulset(namespace)
        for statefulset in statefulsets:
            logging.info("Deleting statefulset " + statefulset)
            kubecli.delete_statefulset(statefulset, namespace)
    except Exception as e:
        logging.error(
            "Exception when calling delete_all_statefulsets_namespace: %s\n",
            str(e),
        )
        raise e

    return statefulsets


def delete_all_replicaset_namespace(kubecli: KrknKubernetes, namespace: str):
    """
    Delete all the replicasets in the specified namespace

    :param kubecli: krkn kubernetes python package
    :param namespace: namespace
    """
    try:
        replicasets = kubecli.get_all_replicasets(namespace)
        for replicaset in replicasets:
            logging.info("Deleting replicaset " + replicaset)
            kubecli.delete_replicaset(replicaset, namespace)
    except Exception as e:
        logging.error(
            "Exception when calling delete_all_replicaset_namespace: %s\n",
            str(e),
        )
        raise e

    return replicasets


def delete_all_services_namespace(kubecli: KrknKubernetes, namespace: str):
    """
    Delete all the services in the specified namespace

    :param kubecli: krkn kubernetes python package
    :param namespace: namespace
    """
    try:
        services = kubecli.get_all_services(namespace)
        for service in services:
            logging.info("Deleting service " + service)
            kubecli.delete_services(service, namespace)
    except Exception as e:
        logging.error(
            "Exception when calling delete_all_services_namespace: %s\n",
            str(e),
        )
        raise e

    return services


# krkn_lib
def run(
    scenarios_list,
    config,
    wait_duration,
    failed_post_scenarios,
    kubeconfig_path,
    kubecli: KrknKubernetes,
    telemetry: KrknTelemetry
) -> (list[str], list[ScenarioTelemetry]):
    scenario_telemetries: list[ScenarioTelemetry] = []
    failed_scenarios = []
    for scenario_config in scenarios_list:
        scenario_telemetry = ScenarioTelemetry()
        scenario_telemetry.scenario = scenario_config[0]
        scenario_telemetry.startTimeStamp = time.time()
        telemetry.set_parameters_base64(scenario_telemetry, scenario_config[0])
        try:
            if len(scenario_config) > 1:
                pre_action_output = post_actions.run(kubeconfig_path, scenario_config[1])
            else:
                pre_action_output = ""
            with open(scenario_config[0], "r") as f:
                scenario_config_yaml = yaml.full_load(f)
                for scenario in scenario_config_yaml["scenarios"]:
                    scenario_namespace = scenario.get("namespace", "")
                    scenario_label = scenario.get("label_selector", "")
                    if scenario_namespace is not None and scenario_namespace.strip() != "":
                        if scenario_label is not None and scenario_label.strip() != "":
                            logging.error("You can only have namespace or label set in your namespace scenario")
                            logging.error(
                                "Current scenario config has namespace '%s' and label selector '%s'"
                                % (scenario_namespace, scenario_label)
                            )
                            logging.error(
                                "Please set either namespace to blank ('') or label_selector to blank ('') to continue"
                            )
                            # removed_exit
                            # sys.exit(1)
                            raise RuntimeError()
                    delete_count = scenario.get("delete_count", 1)
                    run_count = scenario.get("runs", 1)
                    run_sleep = scenario.get("sleep", 10)
                    wait_time = scenario.get("wait_time", 30)

                    start_time = int(time.time())
                    for i in range(run_count):
                        killed_namespaces = {}
                        namespaces = kubecli.check_namespaces([scenario_namespace], scenario_label)
                        for j in range(delete_count):
                            if len(namespaces) == 0:
                                logging.error(
                                    "Couldn't delete %s namespaces, not enough namespaces matching %s with label %s"
                                    % (str(run_count), scenario_namespace, str(scenario_label))
                                )
                                # removed_exit
                                # sys.exit(1)
                                raise RuntimeError()
                            selected_namespace = namespaces[random.randint(0, len(namespaces) - 1)]
                            logging.info("Delete objects in selected namespace: " + selected_namespace)
                            try:
                                # delete all objects in the namespace
                                objects = delete_objects(kubecli, selected_namespace)
                                killed_namespaces[selected_namespace] = objects
                                logging.info("Delete of all objects in namespace %s was successful" % str(selected_namespace))
                            except Exception as e:
                                logging.info("Delete of all objects in namespace %s was unsuccessful" % str(selected_namespace))
                                logging.info("Namespace action error: " + str(e))
                                raise RuntimeError()
                            namespaces.remove(selected_namespace)
                            logging.info("Waiting %s seconds between namespace deletions" % str(run_sleep))
                            time.sleep(run_sleep)

                    logging.info("Waiting for the specified duration: %s" % wait_duration)
                    time.sleep(wait_duration)
                    if len(scenario_config) > 1:
                        try:
                            failed_post_scenarios = post_actions.check_recovery(
                                kubeconfig_path, scenario_config, failed_post_scenarios, pre_action_output
                            )
                        except Exception as e:
                            logging.error("Failed to run post action checks: %s" % e)
                            # removed_exit
                            # sys.exit(1)
                            raise RuntimeError()
                    else:
                        failed_post_scenarios = check_all_running_deployment(killed_namespaces, wait_time, kubecli)

                    end_time = int(time.time())
                    cerberus.publish_kraken_status(config, failed_post_scenarios, start_time, end_time)
        except (Exception, RuntimeError):
            scenario_telemetry.exitStatus = 1
            failed_scenarios.append(scenario_config[0])
            telemetry.log_exception(scenario_config[0])
        else:
            scenario_telemetry.exitStatus = 0
        scenario_telemetry.endTimeStamp = time.time()
        scenario_telemetries.append(scenario_telemetry)
    return failed_scenarios, scenario_telemetries


def check_all_running_pods(kubecli: KrknKubernetes, namespace_name, wait_time):
    # Poll until every pod in the namespace is Running or Succeeded,
    # or until wait_time is exhausted.
    timer = 0
    while timer < wait_time:
        pod_list = kubecli.list_pods(namespace_name)
        pods_running = 0
        for pod in pod_list:
            pod_info = kubecli.get_pod_info(pod, namespace_name)
            if pod_info.status != "Running" and pod_info.status != "Succeeded":
                logging.info("Pod %s still not running or completed" % pod_info.name)
                break
            pods_running += 1
        if len(pod_list) == pods_running:
            break
        timer += 5
        time.sleep(5)
        logging.info("Waiting 5 seconds for pods to become active")


# krkn_lib
def check_all_running_deployment(killed_namespaces, wait_time, kubecli: KrknKubernetes):
    # For each namespace, wait until the recreated object counts match what was
    # deleted, then wait for the pods in that namespace to come back up.
    timer = 0
    while timer < wait_time and killed_namespaces:
        still_missing_ns = killed_namespaces.copy()
        for namespace_name, objects in killed_namespaces.items():
            still_missing_obj = objects.copy()
            for obj_name, obj_list in objects.items():
                if "deployments" == obj_name:
                    deployments = kubecli.get_deployment_ns(namespace_name)
                    if len(obj_list) == len(deployments):
                        still_missing_obj.pop(obj_name)
                elif "replicasets" == obj_name:
                    replicasets = kubecli.get_all_replicasets(namespace_name)
                    if len(obj_list) == len(replicasets):
                        still_missing_obj.pop(obj_name)
                elif "statefulsets" == obj_name:
                    statefulsets = kubecli.get_all_statefulset(namespace_name)
                    if len(obj_list) == len(statefulsets):
                        still_missing_obj.pop(obj_name)
                elif "services" == obj_name:
                    services = kubecli.get_all_services(namespace_name)
                    if len(obj_list) == len(services):
                        still_missing_obj.pop(obj_name)
                elif "daemonsets" == obj_name:
                    daemonsets = kubecli.get_daemonset(namespace_name)
                    if len(obj_list) == len(daemonsets):
                        still_missing_obj.pop(obj_name)
            logging.info("Still missing objects " + str(still_missing_obj))
            killed_namespaces[namespace_name] = still_missing_obj.copy()
            if len(killed_namespaces[namespace_name].keys()) == 0:
                logging.info("Wait for pods to become running for namespace: " + namespace_name)
                check_all_running_pods(kubecli, namespace_name, wait_time)
                still_missing_ns.pop(namespace_name)
        killed_namespaces = still_missing_ns
        if len(killed_namespaces.keys()) == 0:
            return []

        timer += 10
        time.sleep(10)
        logging.info("Waiting 10 seconds for objects in namespaces to become active")

    logging.error("Objects are still not ready after waiting " + str(wait_time) + " seconds")
    logging.error("Non active namespaces " + str(killed_namespaces))
    return killed_namespaces
(time actions module)

@@ -65,7 +65,7 @@ def get_container_name(pod_name, namespace, kubecli:KrknKubernetes, container_name
 
 # krkn_lib
 def skew_time(scenario, kubecli:KrknKubernetes):
-    skew_command = "date --set "
+    skew_command = "date --date "
     if scenario["action"] == "skew_date":
         skewed_date = "00-01-01"
         skew_command += skewed_date
requirements.txt

@@ -1,40 +1,40 @@
-coverage
-datetime
-pyfiglet
-PyYAML>=5.1
-requests
-boto3
-google-api-python-client
-azure-mgmt-compute
-azure-keyvault
-azure-identity
-kubernetes
-oauth2client>=4.1.3
-python-openstackclient
-gitpython
-paramiko
-setuptools==65.5.1
-openshift-client
-python-ipmi
-podman-compose
-docker-compose
-docker
-jinja2==3.0.3
-itsdangerous==2.0.1
-werkzeug==2.2.3
-lxml >= 4.3.0
-pyVmomi >= 6.7
-zope.interface==5.4.0
-aliyun-python-sdk-core==2.13.36
-aliyun-python-sdk-ecs==4.24.25
-arcaflow-plugin-sdk>=0.9.0
-wheel
-service_identity
-git+https://github.com/vmware/vsphere-automation-sdk-python.git@v8.0.0.0
-git+https://github.com/redhat-chaos/arcaflow-plugin-kill-pod.git
-arcaflow >= 0.6.1
-prometheus_api_client
+arcaflow-plugin-sdk >= 0.10.0
+azure-identity
+azure-keyvault
+azure-mgmt-compute
+boto3==1.28.61
+coverage
+datetime
+docker
+docker-compose
+git+https://github.com/redhat-chaos/arcaflow-plugin-kill-pod.git
+git+https://github.com/vmware/vsphere-automation-sdk-python.git@v8.0.0.0
+gitpython
+google-api-python-client
+ibm_cloud_sdk_core
+ibm_vpc
+itsdangerous==2.0.1
+jinja2==3.0.3
+krkn-lib == 1.3.2
+kubernetes
+lxml >= 4.3.0
+oauth2client>=4.1.3
+openshift-client
+paramiko
+podman-compose
+prometheus_api_client
+pyVmomi >= 6.7
+pyfiglet
+pytest
+krkn-lib >= 1.0.0
+python-ipmi
+python-openstackclient
+requests
+service_identity
+setuptools==65.5.1
+werkzeug==2.2.3
+wheel
+zope.interface==5.4.0
run_kraken.py

@@ -1,5 +1,5 @@
 #!/usr/bin/env python
 
 import json
 import os
 import sys
 import yaml

@@ -12,7 +12,7 @@ import kraken.litmus.common_litmus as common_litmus
 import kraken.time_actions.common_time_functions as time_actions
 import kraken.performance_dashboards.setup as performance_dashboards
 import kraken.pod_scenarios.setup as pod_scenarios
-import kraken.namespace_actions.common_namespace_functions as namespace_actions
+import kraken.service_disruption.common_service_disruption_functions as service_disruption
 import kraken.shut_down.common_shut_down_func as shut_down
 import kraken.node_actions.run as nodeaction
 import kraken.managedcluster_scenarios.run as managedcluster_scenarios

@@ -92,7 +92,7 @@ def main(cfg):
     run_uuid = config["performance_monitoring"].get("uuid", "")
     enable_alerts = config["performance_monitoring"].get("enable_alerts", False)
     alert_profile = config["performance_monitoring"].get("alert_profile", "")
-    check_critical_alerts = config["performance_monitoring"].get("check_critical_alerts", False)
+    check_critical_alerts = config["performance_monitoring"].get("check_critical_alerts", False)
 
     # Initialize clients
     if (not os.path.isfile(kubeconfig_path) and

@@ -327,9 +327,9 @@ def main(cfg):
 
         # Inject namespace chaos scenarios
         # krkn_lib
-        elif scenario_type == "namespace_scenarios":
-            logging.info("Running namespace scenarios")
-            failed_post_scenarios, scenario_telemetries = namespace_actions.run(
+        elif scenario_type == "service_disruption_scenarios":
+            logging.info("Running service disruption scenarios")
+            failed_post_scenarios, scenario_telemetries = service_disruption.run(
                 scenarios_list,
                 config,
                 wait_duration,

@@ -383,22 +383,32 @@ def main(cfg):
     logging.info("")
 
     # telemetry
     # in order to print decoded telemetry data even if telemetry collection
     # is disabled, it's necessary to serialize the ChaosRunTelemetry object
     # to json, and recreate a new object from it.
+    end_time = int(time.time())
     telemetry.collect_cluster_metadata(chaos_telemetry)
     decoded_chaos_run_telemetry = ChaosRunTelemetry(json.loads(chaos_telemetry.to_json()))
     logging.info(f"Telemetry data:\n{decoded_chaos_run_telemetry.to_json()}")
     if config["telemetry"]["enabled"]:
         logging.info(f"telemetry data will be stored on s3 bucket folder: {telemetry_request_id}")
         logging.info(f"telemetry upload log: {safe_logger.log_file_name}")
         try:
             telemetry.send_telemetry(config["telemetry"], telemetry_request_id, chaos_telemetry)
-            safe_logger.info("archives download started:")
-            prometheus_archive_files = telemetry.get_ocp_prometheus_data(config["telemetry"], telemetry_request_id)
-            safe_logger.info("archives upload started:")
-            telemetry.put_ocp_prometheus_data(config["telemetry"], prometheus_archive_files, telemetry_request_id)
+            if config["telemetry"]["prometheus_backup"]:
+                safe_logger.info("archives download started:")
+                prometheus_archive_files = telemetry.get_ocp_prometheus_data(config["telemetry"], telemetry_request_id)
+                safe_logger.info("archives upload started:")
+                telemetry.put_ocp_prometheus_data(config["telemetry"], prometheus_archive_files, telemetry_request_id)
+            if config["telemetry"]["logs_backup"]:
+                telemetry.put_ocp_logs(telemetry_request_id, config["telemetry"], start_time, end_time)
         except Exception as e:
             logging.error(f"failed to send telemetry data: {str(e)}")
     else:
         logging.info("telemetry collection disabled, skipping.")
 
-    # Capture the end time
-    end_time = int(time.time())
 
     # Capture metrics for the run
     if capture_metrics:

@@ -431,7 +441,7 @@ def main(cfg):
         else:
             logging.error("Alert profile is not defined")
             sys.exit(1)
 
     if litmus_uninstall and litmus_installed:
         common_litmus.delete_chaos(litmus_namespace, kubecli)
         common_litmus.delete_chaos_experiments(litmus_namespace, kubecli)
(arcaflow sub-workflow, first of two files with the same change)

@@ -64,7 +64,7 @@ steps:
     input:
       kubeconfig: !expr $.input.kubeconfig
   stressng:
-    plugin: quay.io/arcalot/arcaflow-plugin-stressng:0.3.0
+    plugin: quay.io/arcalot/arcaflow-plugin-stressng:0.3.1
     step: workload
     input:
       cleanup: "true"
scenarios/arcaflow/io-hog/config.yaml (new file, 10 lines)
@@ -0,0 +1,10 @@
deployer:
  connection: {}
  type: kubernetes
log:
  level: debug
logged_outputs:
  error:
    level: error
  success:
    level: debug
scenarios/arcaflow/io-hog/input.yaml (new file, 13 lines)
@@ -0,0 +1,13 @@
input_list:
  - duration: 30s
    io_block_size: 1m
    io_workers: 1
    io_write_bytes: 10m
    kubeconfig: ''
    namespace: default
    node_selector: {}
    target_pod_folder: /hog-data
    target_pod_volume:
      hostPath:
        path: /tmp
      name: node-volume
scenarios/arcaflow/io-hog/sub-workflow.yaml (new file, 138 lines)
@@ -0,0 +1,138 @@
input:
  root: RootObject
  objects:
    RootObject:
      id: RootObject
      properties:
        kubeconfig:
          display:
            description: The complete kubeconfig file as a string
            name: Kubeconfig file contents
          type:
            type_id: string
          required: true
        namespace:
          display:
            description: The namespace where the container will be deployed
            name: Namespace
          type:
            type_id: string
          required: true
        node_selector:
          display:
            description: kubernetes node name where the plugin must be deployed
          type:
            type_id: map
            values:
              type_id: string
            keys:
              type_id: string
          required: true
        duration:
          display:
            name: duration of the scenario expressed in seconds
            description: stop stress test after T seconds. One can also specify the units of time in
              seconds, minutes, hours, days or years with the suffix s, m, h, d or y
          type:
            type_id: string
          required: true
        io_workers:
          display:
            description: number of workers
            name: start N workers continually writing, reading and removing temporary files
          type:
            type_id: integer
          required: true
        io_block_size:
          display:
            description: single write size
            name: specify size of each write in bytes. Size can be from 1 byte to 4MB.
          type:
            type_id: string
          required: true
        io_write_bytes:
          display:
            description: Total number of bytes written
            name: write N bytes for each hdd process, the default is 1 GB. One can specify the size
              as % of free space on the file system or in units of Bytes, KBytes, MBytes and
              GBytes using the suffix b, k, m or g
          type:
            type_id: string
          required: true
        target_pod_folder:
          display:
            description: Target Folder
            name: Folder in the pod where the test will be executed and the test files will be written
          type:
            type_id: string
          required: true
        target_pod_volume:
          display:
            name: kubernetes volume definition
            description: the volume that will be attached to the pod. In order to stress
              the node storage only hostPath mode is currently supported
          type:
            type_id: object
            id: k8s_volume
            properties:
              name:
                display:
                  description: name of the volume (must match the name in the pod definition)
                type:
                  type_id: string
                required: true
              hostPath:
                display:
                  description: hostPath options expressed as string map (key-value)
                type:
                  type_id: map
                  values:
                    type_id: string
                  keys:
                    type_id: string
                required: true
          required: true

steps:
  kubeconfig:
    plugin: quay.io/arcalot/arcaflow-plugin-kubeconfig:0.2.0
    input:
      kubeconfig: !expr $.input.kubeconfig
  stressng:
    plugin: quay.io/arcalot/arcaflow-plugin-stressng:0.3.1
    step: workload
    input:
      cleanup: "true"
      StressNGParams:
        timeout: !expr $.input.duration
        workdir: !expr $.input.target_pod_folder
        stressors:
          - stressor: hdd
            hdd: !expr $.input.io_workers
            hdd_bytes: !expr $.input.io_write_bytes
            hdd_write_size: !expr $.input.io_block_size
    deploy:
      type: kubernetes
      connection: !expr $.steps.kubeconfig.outputs.success.connection
      pod:
        metadata:
          namespace: !expr $.input.namespace
          labels:
            arcaflow: stressng
        spec:
          nodeSelector: !expr $.input.node_selector
          pluginContainer:
            imagePullPolicy: Always
            securityContext:
              privileged: true
            volumeMounts:
              - mountPath: /hog-data
                name: node-volume
          volumes:
            - !expr $.input.target_pod_volume

outputs:
  success:
    stressng: !expr $.steps.stressng.outputs.success
scenarios/arcaflow/io-hog/workflow.yaml (new file, 113 lines)
@@ -0,0 +1,113 @@
input:
  root: RootObject
  objects:
    RootObject:
      id: RootObject
      properties:
        input_list:
          type:
            type_id: list
            items:
              id: input_item
              type_id: object
              properties:
                kubeconfig:
                  display:
                    description: The complete kubeconfig file as a string
                    name: Kubeconfig file contents
                  type:
                    type_id: string
                  required: true
                namespace:
                  display:
                    description: The namespace where the container will be deployed
                    name: Namespace
                  type:
                    type_id: string
                  required: true
                node_selector:
                  display:
                    description: kubernetes node name where the plugin must be deployed
                  type:
                    type_id: map
                    values:
                      type_id: string
                    keys:
                      type_id: string
                  required: true
                duration:
                  display:
                    name: duration of the scenario expressed in seconds
                    description: stop stress test after T seconds. One can also specify the units of time in
                      seconds, minutes, hours, days or years with the suffix s, m, h, d or y
                  type:
                    type_id: string
                  required: true
                io_workers:
                  display:
                    description: number of workers
                    name: start N workers continually writing, reading and removing temporary files
                  type:
                    type_id: integer
                  required: true
                io_block_size:
                  display:
                    description: single write size
                    name: specify size of each write in bytes. Size can be from 1 byte to 4MB.
                  type:
                    type_id: string
                  required: true
                io_write_bytes:
                  display:
                    description: Total number of bytes written
                    name: write N bytes for each hdd process, the default is 1 GB. One can specify the size
                      as % of free space on the file system or in units of Bytes, KBytes, MBytes and
                      GBytes using the suffix b, k, m or g
                  type:
                    type_id: string
                  required: true
                target_pod_folder:
                  display:
                    description: Target Folder
                    name: Folder in the pod where the test will be executed and the test files will be written
                  type:
                    type_id: string
                  required: true
                target_pod_volume:
                  display:
                    name: kubernetes volume definition
                    description: the volume that will be attached to the pod. In order to stress
                      the node storage only hostPath mode is currently supported
                  type:
                    type_id: object
                    id: k8s_volume
                    properties:
                      name:
                        display:
                          description: name of the volume (must match the name in the pod definition)
                        type:
                          type_id: string
                        required: true
                      hostPath:
                        display:
                          description: hostPath options expressed as string map (key-value)
                        type:
                          type_id: map
                          values:
                            type_id: string
                          keys:
                            type_id: string
                        required: true
                  required: true
steps:
  workload_loop:
    kind: foreach
    items: !expr $.input.input_list
    workflow: sub-workflow.yaml
    parallelism: 1000
outputs:
  success:
    workloads: !expr $.steps.workload_loop.outputs.success.data
(second arcaflow sub-workflow with the same change)

@@ -56,7 +56,7 @@ steps:
     input:
       kubeconfig: !expr $.input.kubeconfig
   stressng:
-    plugin: quay.io/arcalot/arcaflow-plugin-stressng:0.3.0
+    plugin: quay.io/arcalot/arcaflow-plugin-stressng:0.3.1
     step: workload
     input:
       cleanup: "true"
scenarios/openshift/prom_kill.yml (new file, 9 lines)
@@ -0,0 +1,9 @@
- id: kill-pods
  config:
    namespace_pattern: ^openshift-monitoring$
    label_selector: statefulset.kubernetes.io/pod-name=prometheus-k8s-0
- id: wait-for-pods
  config:
    namespace_pattern: ^openshift-monitoring$
    label_selector: statefulset.kubernetes.io/pod-name=prometheus-k8s-0
    count: 1
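The same two-step pattern (kill the matching pods, then wait for the expected count to return) can be pointed at any workload. A sketch targeting a hypothetical application; `my-app`, `app=backend`, and the count are placeholders, not files or values shipped with krkn:

```yaml
# Sketch: kill/wait plugin scenario aimed at a hypothetical application pod.
- id: kill-pods
  config:
    namespace_pattern: ^my-app$        # placeholder namespace regex
    label_selector: app=backend        # placeholder pod label
- id: wait-for-pods
  config:
    namespace_pattern: ^my-app$
    label_selector: app=backend
    count: 2                           # expected number of matching pods after recovery
```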