Compare commits

..

9 Commits

Author SHA1 Message Date
Paige Patton
edd0159251 adding health check global variables (#798)
Some checks failed
Functional & Unit Tests / Functional & Unit Tests (push) Failing after 4m12s
Functional & Unit Tests / Generate Coverage Badge (push) Has been skipped
Signed-off-by: Paige Patton <prubenda@redhat.com>
2025-05-07 15:47:03 +02:00
Naga Ravi Chaitanya Elluri
cf9f7702ed fix: requirements.txt to reduce vulnerabilities (#795)
The following vulnerabilities are fixed by pinning transitive dependencies:
- https://snyk.io/vuln/SNYK-PYTHON-SETUPTOOLS-9964606

Co-authored-by: snyk-bot <snyk-bot@snyk.io>
Co-authored-by: Tullio Sebastiani <tsebastiani@users.noreply.github.com>
2025-05-07 15:46:16 +02:00
Tullio Sebastiani
cfe624f153 changed get_node_ip to krkn-lib and removed kubectl dependency (#799)
* changed get_node_ip to krkn-lib and removed kubectl dependency

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

* updated krkn-lib to 5.0.1

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>

---------

Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2025-05-07 15:43:27 +02:00
Paige Patton
62f50db195 removing litmus sa (#797)
Signed-off-by: Paige Patton <prubenda@redhat.com>
2025-05-07 15:41:49 +02:00
yogananth subramanian
aee838d3ac Fix: Add support for taints (#790) (#791)
Some checks failed
Functional & Unit Tests / Functional & Unit Tests (push) Failing after 4m28s
Functional & Unit Tests / Generate Coverage Badge (push) Has been skipped
2025-05-06 12:51:59 -04:00
Tullio Sebastiani
3b4d8a13f9 network_chaos_ng_scenarios configuration fixes (#794)
Some checks failed
Functional & Unit Tests / Functional & Unit Tests (push) Failing after 4m9s
Functional & Unit Tests / Generate Coverage Badge (push) Has been skipped
Signed-off-by: Tullio Sebastiani <tsebasti@redhat.com>
2025-05-02 17:53:14 +02:00
Naga Ravi Chaitanya Elluri
a86bb6ab95 Refactor docs to point to krkn-chaos.dev
Some checks failed
Functional & Unit Tests / Functional & Unit Tests (push) Failing after 56s
Functional & Unit Tests / Generate Coverage Badge (push) Has been skipped
Signed-off-by: Naga Ravi Chaitanya Elluri <nelluri@redhat.com>
2025-05-01 09:19:35 -04:00
Paige Patton
7f0110972b updating tuple type for health checks
Some checks failed
Functional & Unit Tests / Functional & Unit Tests (push) Failing after 8m58s
Functional & Unit Tests / Generate Coverage Badge (push) Has been skipped
Signed-off-by: Paige Patton <prubenda@redhat.com>
2025-04-28 08:24:14 -04:00
Paige Patton
126f4ebb35 logging getting into ingress shaping file
Some checks failed
Functional & Unit Tests / Functional & Unit Tests (push) Failing after 21s
Functional & Unit Tests / Generate Coverage Badge (push) Has been skipped
Signed-off-by: Paige Patton <prubenda@redhat.com>
2025-04-21 13:36:11 -04:00
16 changed files with 85 additions and 190 deletions

View File

@@ -9,96 +9,16 @@ Chaos and resiliency testing tool for Kubernetes.
Kraken injects deliberate failures into Kubernetes clusters to check whether they are resilient to turbulent conditions.
### Website
[Kraken Website](https://krkn-chaos.dev) is the one-stop shop for all things Kraken.
The website contains comprehensive information about the workflow, supported scenarios, and detailed descriptions of each scenario. It also provides the configurations needed to run Kraken, along with insights into its performance monitoring and signaling features.
### Workflow
![Kraken workflow](media/kraken-workflow.png)
<!-- ### Demo
[![Kraken demo](media/KrakenStarting.png)](https://youtu.be/LN-fZywp_mo "Kraken Demo - Click to Watch!") -->
<!-- ### Chaos Testing Guide
[Guide](docs/index.md) encapsulates:
- Test methodology that needs to be embraced.
- Best practices that a Kubernetes cluster, platform, and applications running on top of it should take into account for the best user experience, performance, resilience and reliability.
- Tooling.
- Scenarios supported.
- Test environment recommendations as to how and where to run chaos tests.
- Chaos testing in practice.
The guide is hosted at https://krkn-chaos.github.io/krkn. -->
### How to Get Started
Instructions on how to set up, configure and run Kraken can be found at [Installation](https://krkn-chaos.dev/docs/installation/).
You may consider utilizing the chaos recommendation tool prior to initiating the chaos runs to profile the application service(s) under test. This tool discovers a list of Krkn scenarios with a high probability of causing failures or disruptions to your application service(s). The tool can be accessed at [Chaos-Recommender](https://krkn-chaos.dev/docs/chaos-recommender/).
See the [getting started doc](https://krkn-chaos.dev/docs/getting-started/) for guidance on getting started with your own custom scenario or editing current scenarios for your specific usage.
After installation, refer back to the sections below for supported scenarios and how to tweak the Kraken config to load them on your cluster.
<!-- #### Running Kraken with minimal configuration tweaks
For cases where you want to run Kraken with minimal configuration changes, refer to [krkn-hub](https://github.com/krkn-chaos/krkn-hub). One use case is CI integration where you do not want to carry around different configuration files for the scenarios. -->
### Config
Instructions on how to set up the config and the options supported can be found at [Config](docs/config.md).
<!-- ### Kubernetes chaos scenarios supported
Scenario type | Kubernetes
--------------------------- | ------------- |
[Pod Scenarios](docs/pod_scenarios.md) | :heavy_check_mark: |
[Pod Network Scenarios](docs/pod_network_scenarios.md) | :x: |
[Container Scenarios](docs/container_scenarios.md) | :heavy_check_mark: |
[Node Scenarios](docs/node_scenarios.md) | :heavy_check_mark: |
[Time Scenarios](docs/time_scenarios.md) | :heavy_check_mark: |
[Hog Scenarios: CPU, Memory](docs/hog_scenarios.md) | :heavy_check_mark: |
[Cluster Shut Down Scenarios](docs/cluster_shut_down_scenarios.md) | :heavy_check_mark: |
[Service Disruption Scenarios](docs/service_disruption_scenarios.md) | :heavy_check_mark: |
[Zone Outage Scenarios](docs/zone_outage.md) | :heavy_check_mark: |
[Application_outages](docs/application_outages.md) | :heavy_check_mark: |
[PVC scenario](docs/pvc_scenario.md) | :heavy_check_mark: |
[Network_Chaos](docs/network_chaos.md) | :heavy_check_mark: |
[ManagedCluster Scenarios](docs/managedcluster_scenarios.md) | :heavy_check_mark: |
[Service Hijacking Scenarios](docs/service_hijacking_scenarios.md) | :heavy_check_mark: |
[SYN Flood Scenarios](docs/syn_flood_scenarios.md) | :heavy_check_mark: |
### Kraken scenario pass/fail criteria and report
It is important to check whether the targeted component recovered from the chaos injection and whether the Kubernetes cluster is healthy, as failures in one component can have an adverse impact on other components. Kraken does this by:
- Having built-in checks for pod and node based scenarios to ensure the expected number of replicas and nodes are up. It also supports running custom scripts with the checks.
- Leveraging [Cerberus](https://github.com/krkn-chaos/cerberus) to monitor the cluster under test and consuming the aggregated go/no-go signal to determine pass/fail post chaos. It is highly recommended to turn on the Cerberus health check feature available in Kraken. Instructions on installing and setting up Cerberus can be found [here](https://github.com/openshift-scale/cerberus#installation), or it can be installed from Kraken using the [instructions](https://github.com/krkn-chaos/krkn#setting-up-infrastructure-dependencies). Once Cerberus is up and running, set cerberus_enabled to True and cerberus_url to the URL where Cerberus publishes the go/no-go signal in the Kraken config file. Cerberus can monitor [application routes](https://github.com/redhat-chaos/cerberus/blob/main/docs/config.md#watch-routes) during the chaos and fails the run if it encounters downtime, as that is a potential downtime in a customer's or user's environment as well. This is especially important during control plane chaos scenarios, including the API server, Etcd, and Ingress. It can be enabled by setting `check_applicaton_routes: True` in the [Kraken config](https://github.com/redhat-chaos/krkn/blob/main/config/config.yaml), provided application routes are being monitored in the [cerberus config](https://github.com/redhat-chaos/krkn/blob/main/config/cerberus.yaml).
- Leveraging the built-in alert collection feature to fail the runs in case of critical alerts.
- Utilizing health check endpoints to observe application behavior during chaos injection; see [Health checks](docs/health_checks.md).
### Signaling
In CI runs or any external job, it is useful to stop Kraken once a certain test or state is reached. We created a way to signal Kraken to pause the chaos or stop it completely using a signal posted to a port of your choice.
For example, if a test run is loading the cluster while Kraken runs separately, we want to know when to start/stop the Kraken run based on when the test run completes or reaches a certain loaded state.
More detailed information on enabling and leveraging this feature can be found [here](docs/signal.md).
### Performance monitoring
Monitoring the Kubernetes/OpenShift cluster to observe the impact of Kraken chaos scenarios on various components is key to finding bottlenecks, as it is important to make sure the cluster is healthy in terms of both recovery and performance during/after the failure has been injected. Instructions on enabling it can be found [here](docs/performance_dashboards.md).
### SLOs validation during and post chaos
- In addition to checking the recovery and health of the cluster and components under test, Kraken takes in a profile with Prometheus expressions to validate, alerts, and exits with a non-zero return code depending on the severity set. This feature can be used to determine pass/fail or alert on abnormalities observed in the cluster based on the metrics.
- Kraken also provides the ability to check if any critical alerts are firing in the cluster post chaos and passes/fails the run accordingly.
Information on enabling and leveraging this feature can be found [here](docs/SLOs_validation.md).
### OCM / ACM integration
Kraken supports injecting faults into [Open Cluster Management (OCM)](https://open-cluster-management.io/) and [Red Hat Advanced Cluster Management for Kubernetes (ACM)](https://www.krkn.com/en/technologies/management/advanced-cluster-management) managed clusters through [ManagedCluster Scenarios](docs/managedcluster_scenarios.md). -->
Instructions on how to set up, configure and run Kraken can be found in the [documentation](https://krkn-chaos.dev/docs/).
### Blogs and other useful resources
@@ -110,6 +30,7 @@ Kraken supports injecting faults into [Open Cluster Management (OCM)](https://op
- Blog post on supercharging chaos testing using AI integration in Krkn: https://www.redhat.com/en/blog/supercharging-chaos-testing-using-ai
- Blog post announcing Krkn joining CNCF Sandbox: https://www.redhat.com/en/blog/krknchaos-joining-cncf-sandbox
### Roadmap
Enhancements being planned can be found in the [roadmap](ROADMAP.md).
@@ -119,15 +40,6 @@ We are always looking for more enhancements, fixes to make it better, any contri
[More information on how to Contribute](https://krkn-chaos.dev/docs/contribution-guidelines/contribute/)
If adding a new scenario or tweaking the main config, be sure to add updates to the CI so that it stays up to date.
Please read [this file](https://krkn-chaos.dev/docs/getting-started/#adding-new-scenarios) for more information on updates.
### Scenario Plugin Development
If you're gearing up to develop new scenarios, take a moment to review our
[Scenario Plugin API Documentation](docs/scenario_plugin_api.md).
It's the perfect starting point to tap into your chaotic creativity!
### Community
Key Members (slack_usernames/full name): paigerube14/Paige Rubendall, mffiedler/Mike Fiedler, tsebasti/Tullio Sebastiani, yogi/Yogananth Subramanian, sahil/Sahil Shah, pradeep/Pradeep Surisetty and ravielluri/Naga Ravi Chaitanya Elluri.

View File

@@ -45,6 +45,8 @@ kraken:
- scenarios/kube/service_hijacking.yaml
- syn_flood_scenarios:
- scenarios/kube/syn_flood.yaml
- network_chaos_ng_scenarios:
- scenarios/kube/network-filter.yml
cerberus:
cerberus_enabled: False # Enable it when cerberus is previously installed

View File

@@ -25,10 +25,6 @@ RUN dnf update -y
ENV KUBECONFIG /home/krkn/.kube/config
# install kubectl
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" &&\
cp kubectl /usr/local/bin/kubectl && chmod +x /usr/local/bin/kubectl &&\
cp kubectl /usr/bin/kubectl && chmod +x /usr/bin/kubectl
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
RUN dnf update && dnf install -y --setopt=install_weak_deps=False \

View File

@@ -367,6 +367,60 @@
"default": "True",
"required": "false"
},
{
"name": "health-check-interval",
"short_description": "Heath check interval",
"description": "How often to check the health check urls",
"variable": "HEALTH_CHECK_INTERVAL",
"type": "number",
"default": "2",
"required": "false"
},
{
"name": "health-check-url",
"short_description": "Health check url",
"description": "Url to check the health of",
"variable": "HEALTH_CHECK_URL",
"type": "string",
"default": "",
"required": "false"
},
{
"name": "health-check-auth",
"short_description": "Health check authentication tuple",
"description": "Authentication tuple to authenticate into health check URL",
"variable": "HEALTH_CHECK_AUTH",
"type": "string",
"default": "",
"required": "false"
},
{
"name": "health-check-bearer-token",
"short_description": "Health check bearer token",
"description": "Bearer token to authenticate into health check URL",
"variable": "HEALTH_CHECK_BEARER_TOKEN",
"type": "string",
"default": "",
"required": "false"
},
{
"name": "health-check-exit",
"short_description": "Health check exit on failure",
"description": "Exit on failure when health check URL is not able to connect",
"variable": "HEALTH_CHECK_EXIT_ON_FAILURE",
"type": "string",
"default": "",
"required": "false"
},
{
"name": "health-check-verify",
"short_description": "SSL Verification of health check url",
"description": "SSL Verification to authenticate into health check URL",
"variable": "HEALTH_CHECK_VERIFY",
"type": "string",
"default": "false",
"required": "false"
},
{
"name": "krkn-debug",
"short_description": "Krkn debug mode",
@@ -378,4 +432,4 @@
"default": "False",
"required": "false"
}
]
]
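
A hypothetical sketch of how the new `HEALTH_CHECK_*` variables could be assembled into the health-check config consumed downstream; the variable names and defaults come from the parameters above, while the dict shape and the `build_health_check_config` helper are assumptions for illustration.

```python
# Assemble the HEALTH_CHECK_* environment variables into a config dict.
# The dict shape is inferred from the HealthChecker changes further down.
import os

def build_health_check_config() -> dict:
    return {
        "interval": int(os.getenv("HEALTH_CHECK_INTERVAL", "2")),
        "config": [{
            "url": os.getenv("HEALTH_CHECK_URL", ""),
            "auth": os.getenv("HEALTH_CHECK_AUTH", ""),  # "user,password"
            "bearer_token": os.getenv("HEALTH_CHECK_BEARER_TOKEN", ""),
            "exit_on_failure": os.getenv("HEALTH_CHECK_EXIT_ON_FAILURE", ""),
            "verify_url": os.getenv("HEALTH_CHECK_VERIFY", "false").lower() == "true",
        }],
    }

print(build_health_check_config())
```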

View File

@@ -18,6 +18,8 @@ these workloads will use a predetermined amount of resources for a specified dur
|`namespace`| string | the namespace where the stress workload will be deployed |
|`node-selector`| string (Optional) | defines the node selector for choosing target nodes. If not specified, one schedulable node in the cluster will be chosen at random. If multiple nodes match the selector, all of them will be subjected to stress. If number-of-nodes is specified, that many nodes will be randomly selected from those identified by the selector. |
|`number-of-nodes`| number (Optional) | restricts the number of selected nodes by the selector|
|`taints`| list (Optional) default [] | list of taints for which tolerations need to be created (see the sketch after this table). Example: ["node-role.kubernetes.io/master:NoSchedule"]|
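
A minimal sketch of how a `taints` entry could be expanded into a pod toleration, assuming the `<key>:<effect>` format shown in the example; the helper name is hypothetical and not part of the scenario plugin.

```python
# Hypothetical helper: convert a "<key>:<effect>" taint string into the
# toleration dict that would let the hog pod schedule onto the tainted node.
def taint_to_toleration(taint: str) -> dict:
    key, effect = taint.rsplit(":", 1)
    return {"key": key, "operator": "Exists", "effect": effect}

print(taint_to_toleration("node-role.kubernetes.io/master:NoSchedule"))
# -> {'key': 'node-role.kubernetes.io/master', 'operator': 'Exists', 'effect': 'NoSchedule'}
```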
#### `cpu-hog` options

View File

@@ -33,7 +33,7 @@ class HogsScenarioPlugin(AbstractScenarioPlugin):
else:
node_selector = scenario_config.node_selector
available_nodes = lib_telemetry.get_lib_kubernetes().list_schedulable_nodes(node_selector)
available_nodes = lib_telemetry.get_lib_kubernetes().list_nodes(node_selector)
if len(available_nodes) == 0:
raise Exception("no available nodes to schedule workload")

View File

@@ -49,6 +49,7 @@ class NativeScenarioPlugin(AbstractScenarioPlugin):
return [
"pod_disruption_scenarios",
"pod_network_scenarios",
"ingress_node_scenarios"
]
def start_monitoring(self, pool: PodsMonitorPool, scenarios: list[Any]):

View File

@@ -97,15 +97,6 @@ class NetworkScenarioConfig:
},
)
kraken_config: typing.Optional[str] = field(
default="",
metadata={
"name": "Kraken Config",
"description": "Path to the config file of Kraken. "
"Set this field if you wish to publish status onto Cerberus",
},
)
@dataclass
class NetworkScenarioSuccessOutput:
@@ -710,6 +701,7 @@ def network_chaos(
pod_module_template = env.get_template("pod_module.j2")
cli, batch_cli = kube_helper.setup_kubernetes(cfg.kubeconfig_path)
logging.info("Starting Ingress Network Chaos")
try:
node_interface_dict = get_node_interfaces(
cfg.node_interface_name,
@@ -721,16 +713,6 @@ def network_chaos(
except Exception:
return "error", NetworkScenarioErrorOutput(format_exc())
job_list = []
publish = False
if cfg.kraken_config:
failed_post_scenarios = ""
try:
with open(cfg.kraken_config, "r") as f:
config = yaml.full_load(f)
except Exception:
logging.error("Error reading Kraken config from %s" % cfg.kraken_config)
return "error", NetworkScenarioErrorOutput(format_exc())
publish = True
try:
if cfg.execution_type == "parallel":
@@ -747,13 +729,7 @@ def network_chaos(
)
)
logging.info("Waiting for parallel job to finish")
start_time = int(time.time())
wait_for_job(batch_cli, job_list[:], cfg.test_duration + 100)
end_time = int(time.time())
if publish:
cerberus.publish_kraken_status(
config, failed_post_scenarios, start_time, end_time
)
elif cfg.execution_type == "serial":
create_interfaces = True
@@ -773,18 +749,12 @@ def network_chaos(
)
)
logging.info("Waiting for serial job to finish")
start_time = int(time.time())
wait_for_job(batch_cli, job_list[:], cfg.test_duration + 100)
logging.info("Deleting jobs")
delete_jobs(cli, batch_cli, job_list[:])
job_list = []
logging.info("Waiting for wait_duration : %ss" % cfg.wait_duration)
time.sleep(cfg.wait_duration)
end_time = int(time.time())
if publish:
cerberus.publish_kraken_status(
config, failed_post_scenarios, start_time, end_time
)
create_interfaces = False
else:
@@ -799,7 +769,7 @@ def network_chaos(
execution_type=cfg.execution_type,
)
except Exception as e:
logging.error("Network Chaos exiting due to Exception - %s" % e)
logging.error("Ingress Network Chaos exiting due to Exception - %s" % e)
return "error", NetworkScenarioErrorOutput(format_exc())
finally:
delete_virtual_interfaces(cli, node_interface_dict.keys(), pod_module_template)

View File

@@ -14,8 +14,7 @@ class OPENSTACKCLOUD:
self.Wait = 30
# Get the instance ID of the node
def get_instance_id(self, node):
openstack_node_ip = nodeaction.get_node_ip(node)
def get_instance_id(self, openstack_node_ip):
openstack_node_name = self.get_openstack_nodename(openstack_node_ip)
return openstack_node_name
@@ -128,7 +127,8 @@ class openstack_node_scenarios(abstract_node_scenarios):
try:
logging.info("Starting node_start_scenario injection")
logging.info("Starting the node %s" % (node))
openstack_node_name = self.openstackcloud.get_instance_id(node)
openstack_node_ip = self.kubecli.get_node_ip(node)
openstack_node_name = self.openstackcloud.get_instance_id(openstack_node_ip)
self.openstackcloud.start_instances(openstack_node_name)
self.openstackcloud.wait_until_running(openstack_node_name, timeout, affected_node)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli, affected_node)
@@ -151,7 +151,8 @@ class openstack_node_scenarios(abstract_node_scenarios):
try:
logging.info("Starting node_stop_scenario injection")
logging.info("Stopping the node %s " % (node))
openstack_node_name = self.openstackcloud.get_instance_id(node)
openstack_node_ip = self.kubecli.get_node_ip(node)
openstack_node_name = self.openstackcloud.get_instance_id(openstack_node_ip)
self.openstackcloud.stop_instances(openstack_node_name)
self.openstackcloud.wait_until_stopped(openstack_node_name, timeout, affected_node)
logging.info("Node with instance name: %s is in stopped state" % (node))
@@ -173,7 +174,8 @@ class openstack_node_scenarios(abstract_node_scenarios):
try:
logging.info("Starting node_reboot_scenario injection")
logging.info("Rebooting the node %s" % (node))
openstack_node_name = self.openstackcloud.get_instance_id(node)
openstack_node_ip = self.kubecli.get_node_ip(node)
openstack_node_name = self.openstackcloud.get_instance_id(openstack_node_ip)
self.openstackcloud.reboot_instances(openstack_node_name)
nodeaction.wait_for_unknown_status(node, timeout, self.kubecli, affected_node)
nodeaction.wait_for_ready_status(node, timeout, self.kubecli, affected_node)
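
A self-contained sketch of the refactored lookup chain, with stubs standing in for krkn-lib's Kubernetes client and the OpenStack wrapper; only the call order mirrors the diff, the stub classes and values are invented for illustration.

```python
# Stubs showing the new flow: node name -> node IP (krkn-lib) -> instance name.
class StubKubeCli:
    def get_node_ip(self, node: str) -> str:
        return "10.0.0.12"  # krkn-lib resolves this from the node object

class StubOpenstackCloud:
    def get_instance_id(self, openstack_node_ip: str) -> str:
        return f"instance-for-{openstack_node_ip}"  # IP -> server name lookup

kubecli, cloud = StubKubeCli(), StubOpenstackCloud()
openstack_node_ip = kubecli.get_node_ip("worker-0")
print(cloud.get_instance_id(openstack_node_ip))
```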

View File

@@ -11,9 +11,9 @@ class HealthChecker:
def __init__(self, iterations):
self.iterations = iterations
def make_request(self, url, auth=None, headers=None):
def make_request(self, url, auth=None, headers=None, verify=True):
response_data = {}
response = requests.get(url, auth=auth, headers=headers)
response = requests.get(url, auth=auth, headers=headers, verify=verify)
response_data["url"] = url
response_data["status"] = response.status_code == 200
response_data["status_code"] = response.status_code
@@ -26,18 +26,20 @@ class HealthChecker:
health_check_telemetry = []
health_check_tracker = {}
interval = health_check_config["interval"] if health_check_config["interval"] else 2
response_tracker = {config["url"]:True for config in health_check_config["config"]}
while self.current_iterations < self.iterations:
for config in health_check_config.get("config"):
auth, headers = None, None
verify_url = config["verify_url"] if "verify_url" in config else True
if config["url"]: url = config["url"]
if config["bearer_token"]:
bearer_token = "Bearer " + config["bearer_token"]
headers = {"Authorization": bearer_token}
if config["auth"]: auth = config["auth"]
response = self.make_request(url, auth, headers)
if config["auth"]: auth = tuple(config["auth"].split(','))
response = self.make_request(url, auth, headers, verify_url)
if response["status_code"] != 200:
if config["url"] not in health_check_tracker:
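
A minimal usage sketch of the updated request path: the comma-separated `auth` string becomes a basic-auth tuple and `verify_url` is forwarded to `requests` as `verify`; the config values below are placeholders, not real endpoints or credentials.

```python
import requests

# Placeholder config entry; the parsing mirrors the loop above.
config = {"url": "https://example.com",  # stand-in for a real health endpoint
          "auth": "admin,secret",        # "user,password" -> basic-auth tuple
          "verify_url": False}           # forwarded to requests as `verify`

auth = tuple(config["auth"].split(",")) if config["auth"] else None
verify_url = config["verify_url"] if "verify_url" in config else True
response = requests.get(config["url"], auth=auth, headers=None, verify=verify_url)
print(response.status_code == 200)
```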

View File

@@ -15,7 +15,7 @@ google-cloud-compute==1.22.0
ibm_cloud_sdk_core==3.18.0
ibm_vpc==0.20.0
jinja2==3.1.6
krkn-lib==5.0.0
krkn-lib==5.0.1
lxml==5.1.0
kubernetes==28.1.0
numpy==1.26.4
@@ -30,7 +30,7 @@ python-openstackclient==6.5.0
requests==2.32.2
service_identity==24.1.0
PyYAML==6.0.1
setuptools==70.0.0
setuptools==78.1.1
werkzeug==3.0.6
wheel==0.42.0
zope.interface==5.4.0

View File

@@ -7,3 +7,4 @@ cpu-load-percentage: 90
cpu-method: all
node-selector: "node-role.kubernetes.io/worker="
number-of-nodes: 2
taints: [] #example ["node-role.kubernetes.io/master:NoSchedule"]

View File

@@ -11,4 +11,5 @@ io-target-pod-volume:
hostPath:
path: /root # a path writable by kubelet in the root filesystem of the node
node-selector: "node-role.kubernetes.io/worker="
number-of-nodes: ''
number-of-nodes: ''
taints: [] #example ["node-role.kubernetes.io/master:NoSchedule"]

View File

@@ -6,3 +6,4 @@ namespace: default
memory-vm-bytes: 90%
node-selector: "node-role.kubernetes.io/worker="
number-of-nodes: ''
taints: [] #example ["node-role.kubernetes.io/master:NoSchedule"]

View File

@@ -1,49 +0,0 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: litmus-sa
namespace: litmus
labels:
name: litmus-sa
app.kubernetes.io/part-of: litmus
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: litmus-sa
labels:
name: litmus-sa
app.kubernetes.io/part-of: litmus
rules:
- apiGroups: [""]
resources: ["pods","events"]
verbs: ["create","list","get","patch","update","delete","deletecollection"]
- apiGroups: [""]
resources: ["pods/exec","pods/log"]
verbs: ["list","get","create"]
- apiGroups: ["batch"]
resources: ["jobs"]
verbs: ["create","list","get","delete","deletecollection"]
- apiGroups: ["litmuschaos.io"]
resources: ["chaosengines","chaosexperiments","chaosresults"]
verbs: ["create","list","get","patch","update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: litmus-sa
labels:
name: litmus-sa
app.kubernetes.io/part-of: litmus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: litmus-sa
subjects:
- kind: ServiceAccount
name: litmus-sa
namespace: litmus