Compare commits

..

15 Commits

Author SHA1 Message Date
Sinith
a476d9383f Update KHV005.md (#403) 2020-11-08 18:42:41 +02:00
Hugo van Kemenade
6a3c7a885a Support Python 3.9 (#393) (#400)
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-07 15:59:44 +02:00
A N U S H
b6be309651 Added Greeting Github Actions (#382)
* Added Greeting Github Actions

* feat: Updated the Message

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-07 15:16:14 +02:00
Monish Singh
0d5b3d57d3 added the link of contribution page (#383)
* added the link of contribution page

Users can go directly to the contribution page after reading the README file.

* added it to the table of contents

* Done

Sorry for my previous mistake; it's fixed now.

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-07 15:07:39 +02:00
Milind Chawre
69057acf9b Adding --log-file option (#329) (#387) 2020-11-07 15:01:30 +02:00
Itay Shakury
e63200139e fix azure spn hunter (#372)
* fix azure spn hunter

* fix issues

* restore tests

* code style

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-10-19 13:53:50 +03:00
Itay Shakury
ad4cfe1c11 update gitignore (#371)
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-10-19 13:03:46 +03:00
Zoltán Reegn
24b5a709ad Increase evidence field length in plain report (#385)
Given that the Description tends to go over 100 characters as well, it
seems appropriate to loosen the restriction of the evidence field.

Fixes #111

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-10-19 12:49:43 +03:00
Jeff Rescignano
9cadc0ee41 Optimize images (#389) 2020-10-19 12:27:22 +03:00
danielsagi
3950a1c2f2 Fixed bug in etcd hunting (#364)
* fixed etcd version hunting typo

* changed self.protocol in other places in etcd hunting; this was a typo, since protocol is a property of events, not hunters

Co-authored-by: Daniel Sagi <daniel@example.com>
Co-authored-by: Liz Rice <liz@lizrice.com>
2020-09-04 13:28:03 +01:00
Sanka Sathyaji
7530e6fee3 Update job.yml for Kubernetes cluster jobs (#367)
The existing job.yml has the wrong command, ["python", "kube-hunter.py"]; it should be changed to ["kube-hunter"].

Co-authored-by: Liz Rice <liz@lizrice.com>
2020-09-04 12:15:24 +01:00
danielsagi
72ae8c0719 reformatted files to pass new linting (#369)
Co-authored-by: Daniel Sagi <daniel@example.com>
2020-09-04 12:01:16 +01:00
danielsagi
b341124c20 Fixed bug in certificate hunting (#365)
* stripping was incorrect due to multiple newlines in the certificate returned from ssl.get_server_certificate

* changed ' to " for linting

Co-authored-by: Daniel Sagi <daniel@example.com>
2020-09-03 15:06:51 +01:00
danielsagi
3e06647b4c Added multistage build for Dockerfile (#362)
* removed unnecessary files from the final image, using a multi-stage build

* added ebtables and tcpdump packages to multistage

Co-authored-by: Daniel Sagi <daniel@example.com>
2020-08-21 14:42:02 +03:00
danielsagi
cd1f79a658 fixed typo (#363) 2020-08-14 19:09:06 +03:00
37 changed files with 483 additions and 122 deletions

.github/workflows/greetings.yml (new file, +14)

@@ -0,0 +1,14 @@
name: Greetings
on: [pull_request, issues]
jobs:
greeting:
runs-on: ubuntu-latest
steps:
- uses: actions/first-interaction@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
issue-message: "Hola! @${{ github.actor }} 🥳 , You've just created an Issue!🌟 Thanks for making the Project Better"
pr-message: 'Submitted a PR already ?? @${{ github.actor }} . Sit tight until one of our amazing maintainers review it. Make sure you read the contributing guide'

.gitignore (+1)

@@ -24,6 +24,7 @@ var/
*.egg
*.spec
.eggs
pip-wheel-metadata
# Directory Cache Files
.DS_Store


@@ -5,6 +5,7 @@ python:
- "3.6"
- "3.7"
- "3.8"
- "3.9"
install:
- pip install -r requirements.txt
- pip install -r requirements-dev.txt


@@ -16,4 +16,14 @@ RUN make deps
COPY . .
RUN make install
FROM python:3.8-alpine
RUN apk add --no-cache \
tcpdump \
ebtables && \
apk upgrade --no-cache
COPY --from=builder /usr/local/lib/python3.8/site-packages /usr/local/lib/python3.8/site-packages
COPY --from=builder /usr/local/bin/kube-hunter /usr/local/bin/kube-hunter
ENTRYPOINT ["kube-hunter"]


@@ -34,6 +34,7 @@ Table of Contents
* [Prerequisites](#prerequisites)
* [Container](#container)
* [Pod](#pod)
* [Contribution](#contribution)
## Hunting
@@ -174,5 +175,8 @@ The example `job.yaml` file defines a Job that will run kube-hunter in a pod, us
* Find the pod name with `kubectl describe job kube-hunter`
* View the test results with `kubectl logs <pod name>`
## Contribution
To read the contribution guidelines, <a href="https://github.com/aquasecurity/kube-hunter/blob/master/CONTRIBUTING.md"> Click here </a>
## License
This repository is available under the [Apache License 2.0](https://github.com/aquasecurity/kube-hunter/blob/master/LICENSE).


@@ -12,7 +12,7 @@ Kubernetes API was accessed with Pod Service Account or without Authentication (
## Remediation
Secure acess to your Kubernetes API.
Secure access to your Kubernetes API.
It is recommended to explicitly specify a Service Account for all of your workloads (`serviceAccountName` in `Pod.Spec`), and manage their permissions according to the least privilege principle.
@@ -21,4 +21,4 @@ Consider opting out automatic mounting of SA token using `automountServiceAccoun
## References
- [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
- [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)


@@ -8,7 +8,7 @@ spec:
containers:
- name: kube-hunter
image: aquasec/kube-hunter
command: ["python", "kube-hunter.py"]
command: ["kube-hunter"]
args: ["--pod"]
restartPolicy: Never
backoffLimit: 4

Binary image file (not shown): 144 KiB before, 111 KiB after.

Binary image file (not shown): 27 KiB before, 19 KiB after.


@@ -18,6 +18,7 @@ config = Config(
cidr=args.cidr,
include_patched_versions=args.include_patched_versions,
interface=args.interface,
log_file=args.log_file,
mapping=args.mapping,
network_timeout=args.network_timeout,
pod=args.pod,
@@ -25,7 +26,7 @@ config = Config(
remote=args.remote,
statistics=args.statistics,
)
setup_logger(args.log)
setup_logger(args.log, args.log_file)
set_config(config)
# Running all other registered plugins before execution


@@ -4,7 +4,7 @@ from typing import Any, Optional
@dataclass
class Config:
""" Config is a configuration container.
"""Config is a configuration container.
It contains the following fields:
- active: Enable active hunters
- cidr: Network subnets to scan
@@ -13,6 +13,7 @@ class Config:
- interface: Interface scanning mode
- list_hunters: Print a list of existing hunters
- log_level: Log level
- log_file: Log File path
- mapping: Report only found components
- network_timeout: Timeout for network operations
- pod: From pod scanning mode
@@ -27,6 +28,7 @@ class Config:
dispatcher: Optional[Any] = None
include_patched_versions: bool = False
interface: bool = False
log_file: Optional[str] = None
mapping: bool = False
network_timeout: float = 5.0
pod: bool = False

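The Config change above can be restated as a minimal dataclass; the fields shown here are a small illustrative subset of kube-hunter's real Config, not the full container.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Config:
    """Minimal sketch of kube-hunter's Config container (subset of fields)."""

    active: bool = False
    network_timeout: float = 5.0
    # New in this change: optional path for writing logs to a file.
    log_file: Optional[str] = None


config = Config(log_file="/tmp/kube-hunter.log")
```

Because the field defaults to `None`, callers that never pass `log_file` keep the old console-only logging behavior.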

@@ -1,6 +1,5 @@
import logging
DEFAULT_LEVEL = logging.INFO
DEFAULT_LEVEL_NAME = logging.getLevelName(DEFAULT_LEVEL)
LOG_FORMAT = "%(asctime)s %(levelname)s %(name)s %(message)s"
@@ -10,7 +9,7 @@ logging.getLogger("scapy.runtime").setLevel(logging.CRITICAL)
logging.getLogger("scapy.loading").setLevel(logging.CRITICAL)
def setup_logger(level_name):
def setup_logger(level_name, logfile):
# Remove any existing handlers
# Unnecessary in Python 3.8 since `logging.basicConfig` has `force` parameter
for h in logging.getLogger().handlers[:]:
@@ -22,6 +21,9 @@ def setup_logger(level_name):
else:
log_level = getattr(logging, level_name.upper(), None)
log_level = log_level if isinstance(log_level, int) else None
logging.basicConfig(level=log_level or DEFAULT_LEVEL, format=LOG_FORMAT)
if logfile is None:
logging.basicConfig(level=log_level or DEFAULT_LEVEL, format=LOG_FORMAT)
else:
logging.basicConfig(filename=logfile, level=log_level or DEFAULT_LEVEL, format=LOG_FORMAT)
if not log_level:
logging.warning(f"Unknown log level '{level_name}', using {DEFAULT_LEVEL_NAME}")

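Putting the hunk above back together, the new `setup_logger` looks roughly like this. A self-contained sketch with the same defaults as the patch, omitting the special handling of a "NONE" level that the real module also has:

```python
import logging

DEFAULT_LEVEL = logging.INFO
DEFAULT_LEVEL_NAME = logging.getLevelName(DEFAULT_LEVEL)
LOG_FORMAT = "%(asctime)s %(levelname)s %(name)s %(message)s"


def setup_logger(level_name, logfile=None):
    # Remove any existing handlers so basicConfig takes effect again.
    for h in logging.getLogger().handlers[:]:
        logging.getLogger().removeHandler(h)
    log_level = getattr(logging, str(level_name).upper(), None)
    log_level = log_level if isinstance(log_level, int) else None
    if logfile is None:
        logging.basicConfig(level=log_level or DEFAULT_LEVEL, format=LOG_FORMAT)
    else:
        # filename= routes all records to the file instead of stderr.
        logging.basicConfig(filename=logfile, level=log_level or DEFAULT_LEVEL, format=LOG_FORMAT)
    if not log_level:
        logging.warning(f"Unknown log level '{level_name}', using {DEFAULT_LEVEL_NAME}")
```

The key point of the patch is that the `logfile` decision has to happen inside the single `logging.basicConfig` call, since `basicConfig` only configures the root logger once.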

@@ -9,7 +9,9 @@ def parser_add_arguments(parser):
Contains initialization for all default arguments
"""
parser.add_argument(
"--list", action="store_true", help="Displays all tests in kubehunter (add --active flag to see active tests)",
"--list",
action="store_true",
help="Displays all tests in kubehunter (add --active flag to see active tests)",
)
parser.add_argument("--interface", action="store_true", help="Set hunting on all network interfaces")
@@ -19,7 +21,9 @@ def parser_add_arguments(parser):
parser.add_argument("--quick", action="store_true", help="Prefer quick scan (subnet 24)")
parser.add_argument(
"--include-patched-versions", action="store_true", help="Don't skip patched versions when scanning",
"--include-patched-versions",
action="store_true",
help="Don't skip patched versions when scanning",
)
parser.add_argument(
@@ -29,11 +33,17 @@ def parser_add_arguments(parser):
)
parser.add_argument(
"--mapping", action="store_true", help="Outputs only a mapping of the cluster's nodes",
"--mapping",
action="store_true",
help="Outputs only a mapping of the cluster's nodes",
)
parser.add_argument(
"--remote", nargs="+", metavar="HOST", default=list(), help="One or more remote ip/dns to hunt",
"--remote",
nargs="+",
metavar="HOST",
default=list(),
help="One or more remote ip/dns to hunt",
)
parser.add_argument("--active", action="store_true", help="Enables active hunting")
@@ -47,7 +57,17 @@ def parser_add_arguments(parser):
)
parser.add_argument(
"--report", type=str, default="plain", help="Set report type, options are: plain, yaml, json",
"--log-file",
type=str,
default=None,
help="Path to a log file to output all logs to",
)
parser.add_argument(
"--report",
type=str,
default="plain",
help="Set report type, options are: plain, yaml, json",
)
parser.add_argument(

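The new `--log-file` flag can be reproduced with plain argparse; a minimal sketch, independent of kube-hunter's full parser:

```python
import argparse

parser = argparse.ArgumentParser(prog="kube-hunter")
parser.add_argument(
    "--log-file",
    type=str,
    default=None,
    help="Path to a log file to output all logs to",
)

# argparse converts --log-file to the attribute name log_file.
args = parser.parse_args(["--log-file", "/tmp/kube-hunter.log"])
```

Downstream, the main module passes `args.log_file` into both `Config` and `setup_logger`, as the earlier hunks show.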

@@ -144,7 +144,8 @@ class NewHostEvent(Event):
logger.debug("Checking whether the cluster is deployed on azure's cloud")
# Leverage 3rd tool https://github.com/blrchen/AzureSpeed for Azure cloud ip detection
result = requests.get(
f"https://api.azurespeed.com/api/region?ipOrUrl={self.host}", timeout=config.network_timeout,
f"https://api.azurespeed.com/api/region?ipOrUrl={self.host}",
timeout=config.network_timeout,
).json()
return result["cloud"] or "NoCloud"
except requests.ConnectionError:
@@ -194,7 +195,11 @@ class K8sVersionDisclosure(Vulnerability, Event):
def __init__(self, version, from_endpoint, extra_info=""):
Vulnerability.__init__(
self, KubernetesCluster, "K8s Version Disclosure", category=InformationDisclosure, vid="KHV002",
self,
KubernetesCluster,
"K8s Version Disclosure",
category=InformationDisclosure,
vid="KHV002",
)
self.version = version
self.from_endpoint = from_endpoint


@@ -46,7 +46,11 @@ class AzureMetadataApi(Vulnerability, Event):
def __init__(self, cidr):
Vulnerability.__init__(
self, Azure, "Azure Metadata Exposure", category=InformationDisclosure, vid="KHV003",
self,
Azure,
"Azure Metadata Exposure",
category=InformationDisclosure,
vid="KHV003",
)
self.cidr = cidr
self.evidence = "cidr: {}".format(cidr)
@@ -140,7 +144,9 @@ class FromPodHostDiscovery(Discovery):
def traceroute_discovery(self):
config = get_config()
node_internal_ip = srp1(
Ether() / IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout,
Ether() / IP(dst="1.1.1.1", ttl=1) / ICMP(),
verbose=0,
timeout=config.network_timeout,
)[IP].src
return [[node_internal_ip, "24"]]


@@ -16,7 +16,11 @@ class AzureSpnExposure(Vulnerability, Event):
def __init__(self, container):
Vulnerability.__init__(
self, Azure, "Azure SPN Exposure", category=IdentityTheft, vid="KHV004",
self,
Azure,
"Azure SPN Exposure",
category=IdentityTheft,
vid="KHV004",
)
self.container = container
@@ -42,11 +46,16 @@ class AzureSpnHunter(Hunter):
logger.debug("failed getting pod info")
else:
pods_data = r.json().get("items", [])
suspicious_volume_names = []
for pod_data in pods_data:
for container in pod_data["spec"]["containers"]:
for mount in container["volumeMounts"]:
path = mount["mountPath"]
for volume in pod_data["spec"].get("volumes", []):
if volume.get("hostPath"):
path = volume["hostPath"]["path"]
if "/etc/kubernetes/azure.json".startswith(path):
suspicious_volume_names.append(volume["name"])
for container in pod_data["spec"]["containers"]:
for mount in container.get("volumeMounts", []):
if mount["name"] in suspicious_volume_names:
return {
"name": container["name"],
"pod": pod_data["metadata"]["name"],

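The reworked hunk is easier to follow outside the diff. This sketch restates the new two-pass logic with a hypothetical pod spec (pod, container, and volume names are invented for the example): first collect volumes whose `hostPath` is a prefix of `/etc/kubernetes/azure.json`, then find a container that mounts one of them.

```python
# Hypothetical pod spec shaped like the Kubernetes API response the hunter parses.
pod_data = {
    "metadata": {"name": "demo-pod"},
    "spec": {
        "volumes": [
            {"name": "k8s-config", "hostPath": {"path": "/etc/kubernetes"}},
            {"name": "cache", "emptyDir": {}},
        ],
        "containers": [
            {"name": "app", "volumeMounts": [{"name": "k8s-config", "mountPath": "/host-etc"}]},
        ],
    },
}


def find_azure_spn_container(pod_data):
    # Pass 1: volumes whose hostPath is a prefix of the azure.json path.
    suspicious_volume_names = []
    for volume in pod_data["spec"].get("volumes", []):
        if volume.get("hostPath"):
            path = volume["hostPath"]["path"]
            if "/etc/kubernetes/azure.json".startswith(path):
                suspicious_volume_names.append(volume["name"])
    # Pass 2: a container that mounts one of those volumes.
    for container in pod_data["spec"]["containers"]:
        for mount in container.get("volumeMounts", []):
            if mount["name"] in suspicious_volume_names:
                return {"name": container["name"], "pod": pod_data["metadata"]["name"]}
    return None
```

Unlike the pre-fix code, this version uses `.get(...)` defaults, so pods without volumes or mounts no longer raise `KeyError`.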

@@ -29,7 +29,11 @@ class ServerApiAccess(Vulnerability, Event):
name = "Unauthenticated access to API"
category = UnauthenticatedAccess
Vulnerability.__init__(
self, KubernetesCluster, name=name, category=category, vid="KHV005",
self,
KubernetesCluster,
name=name,
category=category,
vid="KHV005",
)
self.evidence = evidence
@@ -42,7 +46,11 @@ class ServerApiHTTPAccess(Vulnerability, Event):
name = "Insecure (HTTP) access to API"
category = UnauthenticatedAccess
Vulnerability.__init__(
self, KubernetesCluster, name=name, category=category, vid="KHV006",
self,
KubernetesCluster,
name=name,
category=category,
vid="KHV006",
)
self.evidence = evidence
@@ -54,7 +62,11 @@ class ApiInfoDisclosure(Vulnerability, Event):
else:
name += " as anonymous user"
Vulnerability.__init__(
self, KubernetesCluster, name=name, category=InformationDisclosure, vid="KHV007",
self,
KubernetesCluster,
name=name,
category=InformationDisclosure,
vid="KHV007",
)
self.evidence = evidence
@@ -89,12 +101,14 @@ class ListClusterRoles(ApiInfoDisclosure):
class CreateANamespace(Vulnerability, Event):
""" Creating a namespace might give an attacker an area with default (exploitable) permissions to run pods in.
"""
"""Creating a namespace might give an attacker an area with default (exploitable) permissions to run pods in."""
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Created a namespace", category=AccessRisk,
self,
KubernetesCluster,
name="Created a namespace",
category=AccessRisk,
)
self.evidence = evidence
@@ -105,14 +119,17 @@ class DeleteANamespace(Vulnerability, Event):
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Delete a namespace", category=AccessRisk,
self,
KubernetesCluster,
name="Delete a namespace",
category=AccessRisk,
)
self.evidence = evidence
class CreateARole(Vulnerability, Event):
""" Creating a role might give an attacker the option to harm the normal behavior of newly created pods
within the specified namespaces.
"""Creating a role might give an attacker the option to harm the normal behavior of newly created pods
within the specified namespaces.
"""
def __init__(self, evidence):
@@ -121,37 +138,46 @@ class CreateARole(Vulnerability, Event):
class CreateAClusterRole(Vulnerability, Event):
""" Creating a cluster role might give an attacker the option to harm the normal behavior of newly created pods
across the whole cluster
"""Creating a cluster role might give an attacker the option to harm the normal behavior of newly created pods
across the whole cluster
"""
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Created a cluster role", category=AccessRisk,
self,
KubernetesCluster,
name="Created a cluster role",
category=AccessRisk,
)
self.evidence = evidence
class PatchARole(Vulnerability, Event):
""" Patching a role might give an attacker the option to create new pods with custom roles within the
"""Patching a role might give an attacker the option to create new pods with custom roles within the
specific role's namespace scope
"""
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Patched a role", category=AccessRisk,
self,
KubernetesCluster,
name="Patched a role",
category=AccessRisk,
)
self.evidence = evidence
class PatchAClusterRole(Vulnerability, Event):
""" Patching a cluster role might give an attacker the option to create new pods with custom roles within the whole
"""Patching a cluster role might give an attacker the option to create new pods with custom roles within the whole
cluster scope.
"""
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Patched a cluster role", category=AccessRisk,
self,
KubernetesCluster,
name="Patched a cluster role",
category=AccessRisk,
)
self.evidence = evidence
@@ -161,7 +187,10 @@ class DeleteARole(Vulnerability, Event):
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Deleted a role", category=AccessRisk,
self,
KubernetesCluster,
name="Deleted a role",
category=AccessRisk,
)
self.evidence = evidence
@@ -171,7 +200,10 @@ class DeleteAClusterRole(Vulnerability, Event):
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Deleted a cluster role", category=AccessRisk,
self,
KubernetesCluster,
name="Deleted a cluster role",
category=AccessRisk,
)
self.evidence = evidence
@@ -181,7 +213,10 @@ class CreateAPod(Vulnerability, Event):
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Created A Pod", category=AccessRisk,
self,
KubernetesCluster,
name="Created A Pod",
category=AccessRisk,
)
self.evidence = evidence
@@ -191,7 +226,10 @@ class CreateAPrivilegedPod(Vulnerability, Event):
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Created A PRIVILEGED Pod", category=AccessRisk,
self,
KubernetesCluster,
name="Created A PRIVILEGED Pod",
category=AccessRisk,
)
self.evidence = evidence
@@ -201,7 +239,10 @@ class PatchAPod(Vulnerability, Event):
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Patched A Pod", category=AccessRisk,
self,
KubernetesCluster,
name="Patched A Pod",
category=AccessRisk,
)
self.evidence = evidence
@@ -211,7 +252,10 @@ class DeleteAPod(Vulnerability, Event):
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Deleted A Pod", category=AccessRisk,
self,
KubernetesCluster,
name="Deleted A Pod",
category=AccessRisk,
)
self.evidence = evidence
@@ -225,7 +269,7 @@ class ApiServerPassiveHunterFinished(Event):
# If we have a service account token we'll also trigger AccessApiServerWithToken below
@handler.subscribe(ApiServer)
class AccessApiServer(Hunter):
""" API Server Hunter
"""API Server Hunter
Checks if API server is accessible
"""
@@ -268,7 +312,10 @@ class AccessApiServer(Hunter):
try:
if not namespace:
r = requests.get(
f"{self.path}/api/v1/pods", headers=self.headers, verify=False, timeout=config.network_timeout,
f"{self.path}/api/v1/pods",
headers=self.headers,
verify=False,
timeout=config.network_timeout,
)
else:
r = requests.get(
@@ -319,7 +366,7 @@ class AccessApiServer(Hunter):
@handler.subscribe(ApiServer, predicate=lambda x: x.auth_token)
class AccessApiServerWithToken(AccessApiServer):
""" API Server Hunter
"""API Server Hunter
Accessing the API server using the service account token obtained from a compromised pod
"""
@@ -411,7 +458,8 @@ class AccessApiServerActive(ActiveHunter):
def patch_a_pod(self, namespace, pod_name):
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}", data=json.dumps(data),
path=f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}",
data=json.dumps(data),
)
def create_namespace(self):
@@ -438,7 +486,8 @@ class AccessApiServerActive(ActiveHunter):
"rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "watch", "list"]}],
}
return self.create_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles", data=json.dumps(role),
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles",
data=json.dumps(role),
)
def create_a_cluster_role(self):
@@ -450,7 +499,8 @@ class AccessApiServerActive(ActiveHunter):
"rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "watch", "list"]}],
}
return self.create_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles", data=json.dumps(cluster_role),
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles",
data=json.dumps(cluster_role),
)
def delete_a_role(self, namespace, name):
@@ -477,7 +527,8 @@ class AccessApiServerActive(ActiveHunter):
def patch_a_cluster_role(self, cluster_role):
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles/{cluster_role}", data=json.dumps(data),
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles/{cluster_role}",
data=json.dumps(data),
)
def execute(self):


@@ -17,7 +17,11 @@ class PossibleArpSpoofing(Vulnerability, Event):
def __init__(self):
Vulnerability.__init__(
self, KubernetesCluster, "Possible Arp Spoof", category=IdentityTheft, vid="KHV020",
self,
KubernetesCluster,
"Possible Arp Spoof",
category=IdentityTheft,
vid="KHV020",
)
@@ -55,7 +59,9 @@ class ArpSpoofHunter(ActiveHunter):
config = get_config()
self_ip = sr1(IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
arp_responses, _ = srp(
Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"), timeout=config.network_timeout, verbose=0,
Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"),
timeout=config.network_timeout,
verbose=0,
)
# arp enabled on cluster and more than one pod on node


@@ -17,7 +17,10 @@ class CapNetRawEnabled(Event, Vulnerability):
def __init__(self):
Vulnerability.__init__(
self, KubernetesCluster, name="CAP_NET_RAW Enabled", category=AccessRisk,
self,
KubernetesCluster,
name="CAP_NET_RAW Enabled",
category=AccessRisk,
)


@@ -16,7 +16,11 @@ class CertificateEmail(Vulnerability, Event):
def __init__(self, email):
Vulnerability.__init__(
self, KubernetesCluster, "Certificate Includes Email Address", category=InformationDisclosure, vid="KHV021",
self,
KubernetesCluster,
"Certificate Includes Email Address",
category=InformationDisclosure,
vid="KHV021",
)
self.email = email
self.evidence = "email: {}".format(self.email)
@@ -42,7 +46,7 @@ class CertificateDiscovery(Hunter):
self.examine_certificate(cert)
def examine_certificate(self, cert):
c = cert.strip(ssl.PEM_HEADER).strip(ssl.PEM_FOOTER)
c = cert.strip(ssl.PEM_HEADER).strip("\n").strip(ssl.PEM_FOOTER).strip("\n")
certdata = base64.b64decode(c)
emails = re.findall(email_pattern, certdata)
for email in emails:

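The one-line certificate fix above is subtle: `str.strip()` takes a set of characters, not a substring, so the newlines around the PEM body stop the header and footer from being stripped. A standalone demonstration, using dummy payload bytes instead of a real certificate:

```python
import base64
import ssl

payload = b"dummy certificate bytes"
pem = ssl.PEM_HEADER + "\n" + base64.b64encode(payload).decode() + "\n" + ssl.PEM_FOOTER + "\n"

# Buggy version: the "\n" after the header is not in the header's character
# set, so strip() stops there and the footer (plus newlines) survives.
broken = pem.strip(ssl.PEM_HEADER).strip(ssl.PEM_FOOTER)

# Fixed version from the patch: interleave "\n" strips so each marker
# actually sits at the string boundary when it is stripped.
clean = pem.strip(ssl.PEM_HEADER).strip("\n").strip(ssl.PEM_FOOTER).strip("\n")
decoded = base64.b64decode(clean)
```

With the buggy variant, the leftover footer text would be fed into `base64.b64decode`, corrupting the decoded certificate data.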

@@ -33,7 +33,7 @@ class ServerApiVersionEndPointAccessPE(Vulnerability, Event):
class ServerApiVersionEndPointAccessDos(Vulnerability, Event):
"""Node not patched for CVE-2019-1002100. Depending on your RBAC settings,
a crafted json-patch could cause a Denial of Service."""
a crafted json-patch could cause a Denial of Service."""
def __init__(self, evidence):
Vulnerability.__init__(
@@ -52,7 +52,11 @@ class PingFloodHttp2Implementation(Vulnerability, Event):
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Possible Ping Flood Attack", category=DenialOfService, vid="KHV024",
self,
KubernetesCluster,
name="Possible Ping Flood Attack",
category=DenialOfService,
vid="KHV024",
)
self.evidence = evidence
@@ -63,7 +67,11 @@ class ResetFloodHttp2Implementation(Vulnerability, Event):
def __init__(self, evidence):
Vulnerability.__init__(
self, KubernetesCluster, name="Possible Reset Flood Attack", category=DenialOfService, vid="KHV025",
self,
KubernetesCluster,
name="Possible Reset Flood Attack",
category=DenialOfService,
vid="KHV025",
)
self.evidence = evidence
@@ -89,7 +97,11 @@ class IncompleteFixToKubectlCpVulnerability(Vulnerability, Event):
def __init__(self, binary_version):
Vulnerability.__init__(
self, KubectlClient, "Kubectl Vulnerable To CVE-2019-11246", category=RemoteCodeExec, vid="KHV027",
self,
KubectlClient,
"Kubectl Vulnerable To CVE-2019-11246",
category=RemoteCodeExec,
vid="KHV027",
)
self.binary_version = binary_version
self.evidence = "kubectl version: {}".format(self.binary_version)
@@ -101,7 +113,11 @@ class KubectlCpVulnerability(Vulnerability, Event):
def __init__(self, binary_version):
Vulnerability.__init__(
self, KubectlClient, "Kubectl Vulnerable To CVE-2019-1002101", category=RemoteCodeExec, vid="KHV028",
self,
KubectlClient,
"Kubectl Vulnerable To CVE-2019-1002101",
category=RemoteCodeExec,
vid="KHV028",
)
self.binary_version = binary_version
self.evidence = "kubectl version: {}".format(self.binary_version)


@@ -16,7 +16,11 @@ class DashboardExposed(Vulnerability, Event):
def __init__(self, nodes):
Vulnerability.__init__(
self, KubernetesCluster, "Dashboard Exposed", category=RemoteCodeExec, vid="KHV029",
self,
KubernetesCluster,
"Dashboard Exposed",
category=RemoteCodeExec,
vid="KHV029",
)
self.evidence = "nodes: {}".format(" ".join(nodes)) if nodes else None


@@ -18,7 +18,11 @@ class PossibleDnsSpoofing(Vulnerability, Event):
def __init__(self, kubedns_pod_ip):
Vulnerability.__init__(
self, KubernetesCluster, "Possible DNS Spoof", category=IdentityTheft, vid="KHV030",
self,
KubernetesCluster,
"Possible DNS Spoof",
category=IdentityTheft,
vid="KHV030",
)
self.kubedns_pod_ip = kubedns_pod_ip
self.evidence = "kube-dns at: {}".format(self.kubedns_pod_ip)
@@ -61,7 +65,9 @@ class DnsSpoofHunter(ActiveHunter):
self_ip = dns_info_res[IP].dst
arp_responses, _ = srp(
Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"), timeout=config.network_timeout, verbose=0,
Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"),
timeout=config.network_timeout,
verbose=0,
)
for _, response in arp_responses:
if response[Ether].src == kubedns_pod_mac:


@@ -26,7 +26,11 @@ class EtcdRemoteWriteAccessEvent(Vulnerability, Event):
def __init__(self, write_res):
Vulnerability.__init__(
self, KubernetesCluster, name="Etcd Remote Write Access Event", category=RemoteCodeExec, vid="KHV031",
self,
KubernetesCluster,
name="Etcd Remote Write Access Event",
category=RemoteCodeExec,
vid="KHV031",
)
self.evidence = write_res
@@ -36,7 +40,11 @@ class EtcdRemoteReadAccessEvent(Vulnerability, Event):
def __init__(self, keys):
Vulnerability.__init__(
self, KubernetesCluster, name="Etcd Remote Read Access Event", category=AccessRisk, vid="KHV032",
self,
KubernetesCluster,
name="Etcd Remote Read Access Event",
category=AccessRisk,
vid="KHV032",
)
self.evidence = keys
@@ -135,7 +143,7 @@ class EtcdRemoteAccess(Hunter):
logger.debug(f"Trying to check etcd version remotely at {self.event.host}")
try:
r = requests.get(
f"{self.protocol}://{self.event.host}:{ETCD_PORT}/version",
f"{self.event.protocol}://{self.event.host}:{ETCD_PORT}/version",
verify=False,
timeout=config.network_timeout,
)
@@ -149,7 +157,9 @@ class EtcdRemoteAccess(Hunter):
logger.debug(f"Trying to access etcd insecurely at {self.event.host}")
try:
r = requests.get(
f"http://{self.event.host}:{ETCD_PORT}/version", verify=False, timeout=config.network_timeout,
f"http://{self.event.host}:{ETCD_PORT}/version",
verify=False,
timeout=config.network_timeout,
)
return r.content if r.status_code == 200 and r.content else False
except requests.exceptions.ConnectionError:
@@ -157,10 +167,10 @@ class EtcdRemoteAccess(Hunter):
def execute(self):
if self.insecure_access(): # make a decision between http and https protocol
self.protocol = "http"
self.event.protocol = "http"
if self.version_disclosure():
self.publish_event(EtcdRemoteVersionDisclosureEvent(self.version_evidence))
if self.protocol == "http":
if self.event.protocol == "http":
self.publish_event(EtcdAccessEnabledWithoutAuthEvent(self.version_evidence))
if self.db_keys_disclosure():
self.publish_event(EtcdRemoteReadAccessEvent(self.keys_evidence))

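The etcd fix above boils down to storing state on the right object: the hunter was setting `self.protocol` while other code read `self.event.protocol`. This sketch uses stand-in classes (the names and the stubbed probe are invented for illustration) to show the corrected pattern:

```python
class OpenPortEvent:
    # Minimal stand-in for kube-hunter's event object; protocol is a
    # property of the event, not of the hunter handling it.
    def __init__(self, host, protocol="https"):
        self.host = host
        self.protocol = protocol


class EtcdRemoteAccess:
    def __init__(self, event):
        self.event = event

    def insecure_access(self):
        # Stub: pretend the plain-HTTP probe succeeded.
        return True

    def execute(self):
        # The fix: record the decision on the event (self.event.protocol),
        # not on the hunter (self.protocol), so the later version check
        # builds its URL from the value that was actually decided.
        if self.insecure_access():
            self.event.protocol = "http"
        return f"{self.event.protocol}://{self.event.host}:2379/version"
```

Before the fix, `execute` wrote `self.protocol` on the hunter, so `version_disclosure` (which reads the event) still saw the default and probed the wrong scheme.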

@@ -35,7 +35,10 @@ class ExposedPodsHandler(Vulnerability, Event):
def __init__(self, pods):
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Pods", category=InformationDisclosure,
self,
component=Kubelet,
name="Exposed Pods",
category=InformationDisclosure,
)
self.pods = pods
self.evidence = f"count: {len(self.pods)}"
@@ -47,7 +50,11 @@ class AnonymousAuthEnabled(Vulnerability, Event):
def __init__(self):
Vulnerability.__init__(
self, component=Kubelet, name="Anonymous Authentication", category=RemoteCodeExec, vid="KHV036",
self,
component=Kubelet,
name="Anonymous Authentication",
category=RemoteCodeExec,
vid="KHV036",
)
@@ -56,7 +63,11 @@ class ExposedContainerLogsHandler(Vulnerability, Event):
def __init__(self):
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Container Logs", category=InformationDisclosure, vid="KHV037",
self,
component=Kubelet,
name="Exposed Container Logs",
category=InformationDisclosure,
vid="KHV037",
)
@@ -66,7 +77,11 @@ class ExposedRunningPodsHandler(Vulnerability, Event):
def __init__(self, count):
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Running Pods", category=InformationDisclosure, vid="KHV038",
self,
component=Kubelet,
name="Exposed Running Pods",
category=InformationDisclosure,
vid="KHV038",
)
self.count = count
self.evidence = "{} running pods".format(self.count)
@@ -77,7 +92,11 @@ class ExposedExecHandler(Vulnerability, Event):
def __init__(self):
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Exec On Container", category=RemoteCodeExec, vid="KHV039",
self,
component=Kubelet,
name="Exposed Exec On Container",
category=RemoteCodeExec,
vid="KHV039",
)
@@ -86,7 +105,11 @@ class ExposedRunHandler(Vulnerability, Event):
def __init__(self):
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Run Inside Container", category=RemoteCodeExec, vid="KHV040",
self,
component=Kubelet,
name="Exposed Run Inside Container",
category=RemoteCodeExec,
vid="KHV040",
)
@@ -95,7 +118,11 @@ class ExposedPortForwardHandler(Vulnerability, Event):
def __init__(self):
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Port Forward", category=RemoteCodeExec, vid="KHV041",
self,
component=Kubelet,
name="Exposed Port Forward",
category=RemoteCodeExec,
vid="KHV041",
)
@@ -105,7 +132,11 @@ class ExposedAttachHandler(Vulnerability, Event):
def __init__(self):
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Attaching To Container", category=RemoteCodeExec, vid="KHV042",
self,
component=Kubelet,
name="Exposed Attaching To Container",
category=RemoteCodeExec,
vid="KHV042",
)
@@ -115,7 +146,11 @@ class ExposedHealthzHandler(Vulnerability, Event):
def __init__(self, status):
Vulnerability.__init__(
self, component=Kubelet, name="Cluster Health Disclosure", category=InformationDisclosure, vid="KHV043",
self,
component=Kubelet,
name="Cluster Health Disclosure",
category=InformationDisclosure,
vid="KHV043",
)
self.status = status
self.evidence = f"status: {self.status}"
@@ -143,7 +178,11 @@ class PrivilegedContainers(Vulnerability, Event):
def __init__(self, containers):
Vulnerability.__init__(
self, component=KubernetesCluster, name="Privileged Container", category=AccessRisk, vid="KHV044",
self,
component=KubernetesCluster,
name="Privileged Container",
category=AccessRisk,
vid="KHV044",
)
self.containers = containers
self.evidence = f"pod: {containers[0][0]}, " f"container: {containers[0][1]}, " f"count: {len(containers)}"
@@ -154,7 +193,11 @@ class ExposedSystemLogs(Vulnerability, Event):
def __init__(self):
Vulnerability.__init__(
self, component=Kubelet, name="Exposed System Logs", category=InformationDisclosure, vid="KHV045",
self,
component=Kubelet,
name="Exposed System Logs",
category=InformationDisclosure,
vid="KHV045",
)
@@ -163,7 +206,11 @@ class ExposedKubeletCmdline(Vulnerability, Event):
def __init__(self, cmdline):
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Kubelet Cmdline", category=InformationDisclosure, vid="KHV046",
self,
component=Kubelet,
name="Exposed Kubelet Cmdline",
category=InformationDisclosure,
vid="KHV046",
)
self.cmdline = cmdline
self.evidence = f"cmdline: {self.cmdline}"
@@ -270,7 +317,9 @@ class SecureKubeletPortHunter(Hunter):
def test_container_logs(self):
config = get_config()
logs_url = self.path + KubeletHandlers.CONTAINERLOGS.value.format(
pod_namespace=self.pod["namespace"], pod_id=self.pod["name"], container_name=self.pod["container"],
pod_namespace=self.pod["namespace"],
pod_id=self.pod["name"],
container_name=self.pod["container"],
)
return self.session.get(logs_url, verify=False, timeout=config.network_timeout).status_code == 200
@@ -288,7 +337,11 @@ class SecureKubeletPortHunter(Hunter):
return (
"/cri/exec/"
in self.session.get(
exec_url, headers=headers, allow_redirects=False, verify=False, timeout=config.network_timeout,
exec_url,
headers=headers,
allow_redirects=False,
verify=False,
timeout=config.network_timeout,
).text
)
@@ -303,10 +356,16 @@ class SecureKubeletPortHunter(Hunter):
"Sec-Websocket-Protocol": "SPDY",
}
pf_url = self.path + KubeletHandlers.PORTFORWARD.value.format(
pod_namespace=self.pod["namespace"], pod_id=self.pod["name"], port=80,
pod_namespace=self.pod["namespace"],
pod_id=self.pod["name"],
port=80,
)
self.session.get(
pf_url, headers=headers, verify=False, stream=True, timeout=config.network_timeout,
pf_url,
headers=headers,
verify=False,
stream=True,
timeout=config.network_timeout,
).status_code == 200
# TODO: what to return?
@@ -314,7 +373,10 @@ class SecureKubeletPortHunter(Hunter):
def test_run_container(self):
config = get_config()
run_url = self.path + KubeletHandlers.RUN.value.format(
pod_namespace="test", pod_id="test", container_name="test", cmd="",
pod_namespace="test",
pod_id="test",
container_name="test",
cmd="",
)
# if we get a Method Not Allowed, we know we passed Authentication and Authorization.
return self.session.get(run_url, verify=False, timeout=config.network_timeout).status_code == 405
@@ -339,7 +401,10 @@ class SecureKubeletPortHunter(Hunter):
return (
"/cri/attach/"
in self.session.get(
attach_url, allow_redirects=False, verify=False, timeout=config.network_timeout,
attach_url,
allow_redirects=False,
verify=False,
timeout=config.network_timeout,
).text
)
@@ -347,7 +412,8 @@ class SecureKubeletPortHunter(Hunter):
def test_logs_endpoint(self):
config = get_config()
logs_url = self.session.get(
self.path + KubeletHandlers.LOGS.value.format(path=""), timeout=config.network_timeout,
self.path + KubeletHandlers.LOGS.value.format(path=""),
timeout=config.network_timeout,
).text
return "<pre>" in logs_url
@@ -355,7 +421,9 @@ class SecureKubeletPortHunter(Hunter):
def test_pprof_cmdline(self):
config = get_config()
cmd = self.session.get(
self.path + KubeletHandlers.PPROF_CMDLINE.value, verify=False, timeout=config.network_timeout,
self.path + KubeletHandlers.PPROF_CMDLINE.value,
verify=False,
timeout=config.network_timeout,
)
return cmd.text if cmd.status_code == 200 else None
@@ -647,7 +715,10 @@ class MaliciousIntentViaSecureKubeletPort(ActiveHunter):
)
self.rmdir_command(
run_request_url, directory_created, number_of_rmdir_attempts, seconds_to_wait_for_os_command,
run_request_url,
directory_created,
number_of_rmdir_attempts,
seconds_to_wait_for_os_command,
)
def check_file_exists(self, run_request_url, file):
@@ -718,7 +789,11 @@ class MaliciousIntentViaSecureKubeletPort(ActiveHunter):
return ProveAnonymousAuth.has_no_error_nor_exception(directory_exists)
def rmdir_command(
self, run_request_url, directory_to_remove, number_of_rmdir_attempts, seconds_to_wait_for_os_command,
self,
run_request_url,
directory_to_remove,
number_of_rmdir_attempts,
seconds_to_wait_for_os_command,
):
if self.check_directory_exists(run_request_url, directory_to_remove):
for _ in range(number_of_rmdir_attempts):
@@ -985,13 +1060,17 @@ class ProveRunHandler(ActiveHunter):
cmd=command,
)
return self.event.session.post(
f"{self.base_path}/{run_url}", verify=False, timeout=config.network_timeout,
f"{self.base_path}/{run_url}",
verify=False,
timeout=config.network_timeout,
).text
def execute(self):
config = get_config()
r = self.event.session.get(
f"{self.base_path}/" + KubeletHandlers.PODS.value, verify=False, timeout=config.network_timeout,
f"{self.base_path}/" + KubeletHandlers.PODS.value,
verify=False,
timeout=config.network_timeout,
)
if "items" in r.text:
pods_data = r.json()["items"]
@@ -1025,7 +1104,9 @@ class ProveContainerLogsHandler(ActiveHunter):
def execute(self):
config = get_config()
pods_raw = self.event.session.get(
self.base_url + KubeletHandlers.PODS.value, verify=False, timeout=config.network_timeout,
self.base_url + KubeletHandlers.PODS.value,
verify=False,
timeout=config.network_timeout,
).text
if "items" in pods_raw:
pods_data = json.loads(pods_raw)["items"]


@@ -25,7 +25,11 @@ class WriteMountToVarLog(Vulnerability, Event):
def __init__(self, pods):
Vulnerability.__init__(
self, KubernetesCluster, "Pod With Mount To /var/log", category=PrivilegeEscalation, vid="KHV047",
self,
KubernetesCluster,
"Pod With Mount To /var/log",
category=PrivilegeEscalation,
vid="KHV047",
)
self.pods = pods
self.evidence = "pods: {}".format(", ".join((pod["metadata"]["name"] for pod in self.pods)))
@@ -37,7 +41,10 @@ class DirectoryTraversalWithKubelet(Vulnerability, Event):
def __init__(self, output):
Vulnerability.__init__(
self, KubernetesCluster, "Root Traversal Read On The Kubelet", category=PrivilegeEscalation,
self,
KubernetesCluster,
"Root Traversal Read On The Kubelet",
category=PrivilegeEscalation,
)
self.output = output
self.evidence = "output: {}".format(self.output)
@@ -82,7 +89,10 @@ class ProveVarLogMount(ActiveHunter):
def run(self, command, container):
run_url = KubeletHandlers.RUN.value.format(
podNamespace=container["namespace"], podID=container["pod"], containerName=container["name"], cmd=command,
podNamespace=container["namespace"],
podID=container["pod"],
containerName=container["name"],
cmd=command,
)
return self.event.session.post(f"{self.base_path}/{run_url}", verify=False).text
@@ -91,7 +101,9 @@ class ProveVarLogMount(ActiveHunter):
config = get_config()
logger.debug("accessing /pods manually on ProveVarLogMount")
pods = self.event.session.get(
f"{self.base_path}/" + KubeletHandlers.PODS.value, verify=False, timeout=config.network_timeout,
f"{self.base_path}/" + KubeletHandlers.PODS.value,
verify=False,
timeout=config.network_timeout,
).json()["items"]
for pod in pods:
volume = VarLogMountHunter(ExposedPodsHandler(pods=pods)).has_write_mount_to(pod, "/var/log")
@@ -117,7 +129,9 @@ class ProveVarLogMount(ActiveHunter):
path=re.sub(r"^/var/log", "", host_path) + symlink_name
)
content = self.event.session.get(
f"{self.base_path}/{path_in_logs_endpoint}", verify=False, timeout=config.network_timeout,
f"{self.base_path}/{path_in_logs_endpoint}",
verify=False,
timeout=config.network_timeout,
).text
# removing symlink
self.run(f"rm {mount_path}/{symlink_name}", container=container)
@@ -134,7 +148,10 @@ class ProveVarLogMount(ActiveHunter):
}
try:
output = self.traverse_read(
"/etc/shadow", container=cont, mount_path=mount_path, host_path=volume["hostPath"]["path"],
"/etc/shadow",
container=cont,
mount_path=mount_path,
host_path=volume["hostPath"]["path"],
)
self.publish_event(DirectoryTraversalWithKubelet(output=output))
except Exception:


@@ -23,7 +23,11 @@ class KubeProxyExposed(Vulnerability, Event):
def __init__(self):
Vulnerability.__init__(
self, KubernetesCluster, "Proxy Exposed", category=InformationDisclosure, vid="KHV049",
self,
KubernetesCluster,
"Proxy Exposed",
category=InformationDisclosure,
vid="KHV049",
)
@@ -89,7 +93,9 @@ class ProveProxyExposed(ActiveHunter):
def execute(self):
config = get_config()
version_metadata = requests.get(
f"http://{self.event.host}:{self.event.port}/version", verify=False, timeout=config.network_timeout,
f"http://{self.event.host}:{self.event.port}/version",
verify=False,
timeout=config.network_timeout,
).json()
if "buildDate" in version_metadata:
self.event.evidence = "build date: {}".format(version_metadata["buildDate"])
@@ -107,11 +113,15 @@ class K8sVersionDisclosureProve(ActiveHunter):
def execute(self):
config = get_config()
version_metadata = requests.get(
f"http://{self.event.host}:{self.event.port}/version", verify=False, timeout=config.network_timeout,
f"http://{self.event.host}:{self.event.port}/version",
verify=False,
timeout=config.network_timeout,
).json()
if "gitVersion" in version_metadata:
self.publish_event(
K8sVersionDisclosure(
version=version_metadata["gitVersion"], from_endpoint="/version", extra_info="on kube-proxy",
version=version_metadata["gitVersion"],
from_endpoint="/version",
extra_info="on kube-proxy",
)
)


@@ -28,7 +28,10 @@ class SecretsAccess(Vulnerability, Event):
def __init__(self, evidence):
Vulnerability.__init__(
self, component=KubernetesCluster, name="Access to pod's secrets", category=AccessRisk,
self,
component=KubernetesCluster,
name="Access to pod's secrets",
category=AccessRisk,
)
self.evidence = evidence


@@ -12,7 +12,10 @@ class HTTPDispatcher:
dispatch_url = os.environ.get("KUBEHUNTER_HTTP_DISPATCH_URL", "https://localhost/")
try:
r = requests.request(
dispatch_method, dispatch_url, json=report, headers={"Content-Type": "application/json"},
dispatch_method,
dispatch_url,
json=report,
headers={"Content-Type": "application/json"},
)
r.raise_for_status()
logger.info(f"Report was dispatched to: {dispatch_url}")


@@ -9,7 +9,7 @@ from kube_hunter.modules.report.collector import (
vulnerabilities_lock,
)
EVIDENCE_PREVIEW = 40
EVIDENCE_PREVIEW = 100
MAX_TABLE_WIDTH = 20
KB_LINK = "https://github.com/aquasecurity/kube-hunter/tree/master/docs/_kb"
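The constant change above widens the plain-report evidence preview from 40 to 100 characters, since descriptions routinely run past the old limit (fixes #111). A minimal sketch of how such a preview limit is typically applied; the `preview` helper is illustrative, not the project's actual reporter code:

```python
EVIDENCE_PREVIEW = 100  # widened from 40; long evidence was being cut off

def preview(evidence: str, limit: int = EVIDENCE_PREVIEW) -> str:
    """Truncate an evidence string for the plain-text report table."""
    if len(evidence) <= limit:
        return evidence
    return evidence[:limit] + "..."
```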


@@ -22,6 +22,8 @@ classifiers =
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Programming Language :: Python :: 3 :: Only
Topic :: Security
[options]


@@ -11,12 +11,13 @@ def test_setup_logger_level():
("NOTEXISTS", logging.INFO),
("BASIC_FORMAT", logging.INFO),
]
logFile = None
for level, expected in test_cases:
setup_logger(level)
setup_logger(level, logFile)
actual = logging.getLogger().getEffectiveLevel()
assert actual == expected, f"{level} level should be {expected} (got {actual})"
def test_setup_logger_none():
setup_logger("NONE")
setup_logger("NONE", None)
assert logging.getLogger().manager.disable == logging.CRITICAL
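These test updates track the new `--log-file` option: `setup_logger` now takes a log-file argument alongside the level. A rough stdlib sketch of that signature, assuming the fallback behavior the tests encode (unknown level names drop to INFO, `"NONE"` disables logging); the body is illustrative, not the project's exact implementation:

```python
import logging

def setup_logger(level_name, logfile=None):
    """Configure the root logger; optionally mirror output to a file."""
    if level_name.upper() == "NONE":
        logging.disable(logging.CRITICAL)  # silence everything
        return
    level = getattr(logging, level_name.upper(), None)
    if not isinstance(level, int):
        level = logging.INFO  # unknown names (e.g. "NOTEXISTS") fall back to INFO
    kwargs = {"level": level}
    if logfile is not None:
        kwargs["filename"] = logfile  # honor --log-file
    logging.basicConfig(**kwargs)
    logging.getLogger().setLevel(level)
```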


@@ -8,7 +8,7 @@ set_config(Config())
def test_presetcloud():
""" Testing if it doesn't try to run get_cloud if the cloud type is already set.
"""Testing if it doesn't try to run get_cloud if the cloud type is already set.
get_cloud(1.2.3.4) will result with an error
"""
expcted = "AWS"


@@ -20,7 +20,9 @@ def test_ApiServer():
m.get("https://mockOther:443", text="elephant")
m.get("https://mockKubernetes:443", text='{"code":403}', status_code=403)
m.get(
"https://mockKubernetes:443/version", text='{"major": "1.14.10"}', status_code=200,
"https://mockKubernetes:443/version",
text='{"major": "1.14.10"}',
status_code=200,
)
e = Event()
@@ -44,11 +46,15 @@ def test_ApiServerWithServiceAccountToken():
counter = 0
with requests_mock.Mocker() as m:
m.get(
"https://mockKubernetes:443", request_headers={"Authorization": "Bearer very_secret"}, text='{"code":200}',
"https://mockKubernetes:443",
request_headers={"Authorization": "Bearer very_secret"},
text='{"code":200}',
)
m.get("https://mockKubernetes:443", text='{"code":403}', status_code=403)
m.get(
"https://mockKubernetes:443/version", text='{"major": "1.14.10"}', status_code=200,
"https://mockKubernetes:443/version",
text='{"major": "1.14.10"}',
status_code=200,
)
m.get("https://mockOther:443", text="elephant")

tests/hunting/test_aks.py (new file, 56 lines)

@@ -0,0 +1,56 @@
# flake8: noqa: E402
import requests_mock
from kube_hunter.conf import Config, set_config
set_config(Config())
from kube_hunter.modules.hunting.kubelet import ExposedRunHandler
from kube_hunter.modules.hunting.aks import AzureSpnHunter
def test_AzureSpnHunter():
e = ExposedRunHandler()
e.host = "mockKubernetes"
e.port = 443
e.protocol = "https"
pod_template = '{{"items":[ {{"apiVersion":"v1","kind":"Pod","metadata":{{"name":"etc","namespace":"default"}},"spec":{{"containers":[{{"command":["sleep","99999"],"image":"ubuntu","name":"test","volumeMounts":[{{"mountPath":"/mp","name":"v"}}]}}],"volumes":[{{"hostPath":{{"path":"{}"}},"name":"v"}}]}}}} ]}}'
bad_paths = ["/", "/etc", "/etc/", "/etc/kubernetes", "/etc/kubernetes/azure.json"]
good_paths = ["/yo", "/etc/yo", "/etc/kubernetes/yo.json"]
for p in bad_paths:
with requests_mock.Mocker() as m:
m.get("https://mockKubernetes:443/pods", text=pod_template.format(p))
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c
for p in good_paths:
with requests_mock.Mocker() as m:
m.get("https://mockKubernetes:443/pods", text=pod_template.format(p))
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c == None
with requests_mock.Mocker() as m:
pod_no_volume_mounts = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test"}],"volumes":[{"hostPath":{"path":"/whatever"},"name":"v"}]}} ]}'
m.get("https://mockKubernetes:443/pods", text=pod_no_volume_mounts)
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c == None
with requests_mock.Mocker() as m:
pod_no_volumes = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test"}]}} ]}'
m.get("https://mockKubernetes:443/pods", text=pod_no_volumes)
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c == None
with requests_mock.Mocker() as m:
pod_other_volume = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test","volumeMounts":[{"mountPath":"/mp","name":"v"}]}],"volumes":[{"emptyDir":{},"name":"v"}]}} ]}'
m.get("https://mockKubernetes:443/pods", text=pod_other_volume)
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c == None
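The hunter exercised by this new test flags pods whose `hostPath` volume can reach `/etc/kubernetes/azure.json` (the Azure SPN credentials file). The path logic that the `bad_paths`/`good_paths` cases probe can be sketched as follows; `mount_exposes_spn` is a hypothetical helper, not the hunter's actual method:

```python
AZURE_SPN_FILE = "/etc/kubernetes/azure.json"

def mount_exposes_spn(host_path: str) -> bool:
    """True if a hostPath mount is the SPN file itself or an ancestor directory of it."""
    path = host_path.rstrip("/") or "/"
    if path == "/":
        return True  # mounting the host root exposes everything
    return path == AZURE_SPN_FILE or AZURE_SPN_FILE.startswith(path + "/")

# the same partitions the test file uses
bad_paths = ["/", "/etc", "/etc/", "/etc/kubernetes", "/etc/kubernetes/azure.json"]
good_paths = ["/yo", "/etc/yo", "/etc/kubernetes/yo.json"]
```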


@@ -56,7 +56,8 @@ def test_AccessApiServer():
with requests_mock.Mocker() as m:
m.get("https://mockKubernetes:443/api", text="{}")
m.get(
"https://mockKubernetes:443/api/v1/namespaces", text='{"items":[{"metadata":{"name":"hello"}}]}',
"https://mockKubernetes:443/api/v1/namespaces",
text='{"items":[{"metadata":{"name":"hello"}}]}',
)
m.get(
"https://mockKubernetes:443/api/v1/pods",
@@ -64,10 +65,12 @@ def test_AccessApiServer():
{"metadata":{"name":"podB", "namespace":"namespaceB"}}]}',
)
m.get(
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/roles", status_code=403,
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/roles",
status_code=403,
)
m.get(
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles", text='{"items":[]}',
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles",
text='{"items":[]}',
)
m.get(
"https://mockkubernetes:443/version",
@@ -91,7 +94,8 @@ def test_AccessApiServer():
# TODO check that these responses reflect what Kubernetes does
m.get("https://mocktoken:443/api", text="{}")
m.get(
"https://mocktoken:443/api/v1/namespaces", text='{"items":[{"metadata":{"name":"hello"}}]}',
"https://mocktoken:443/api/v1/namespaces",
text='{"items":[{"metadata":{"name":"hello"}}]}',
)
m.get(
"https://mocktoken:443/api/v1/pods",
@@ -99,7 +103,8 @@ def test_AccessApiServer():
{"metadata":{"name":"podB", "namespace":"namespaceB"}}]}',
)
m.get(
"https://mocktoken:443/apis/rbac.authorization.k8s.io/v1/roles", status_code=403,
"https://mocktoken:443/apis/rbac.authorization.k8s.io/v1/roles",
status_code=403,
)
m.get(
"https://mocktoken:443/apis/rbac.authorization.k8s.io/v1/clusterroles",
@@ -228,10 +233,12 @@ def test_AccessApiServerActive():
)
m.post("https://mockKubernetes:443/api/v1/clusterroles", text="{}")
m.post(
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles", text="{}",
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles",
text="{}",
)
m.post(
"https://mockkubernetes:443/api/v1/namespaces/hello-namespace/pods", text="{}",
"https://mockkubernetes:443/api/v1/namespaces/hello-namespace/pods",
text="{}",
)
m.post(
"https://mockkubernetes:443" "/apis/rbac.authorization.k8s.io/v1/namespaces/hello-namespace/roles",


@@ -73,8 +73,8 @@ def create_test_event_type_one():
def create_test_event_type_two():
exposed_existing_privileged_containers_via_secure_kubelet_port_event = ExposedExistingPrivilegedContainersViaSecureKubeletPort(
exposed_privileged_containers
exposed_existing_privileged_containers_via_secure_kubelet_port_event = (
ExposedExistingPrivilegedContainersViaSecureKubeletPort(exposed_privileged_containers)
)
exposed_existing_privileged_containers_via_secure_kubelet_port_event.host = "localhost"
exposed_existing_privileged_containers_via_secure_kubelet_port_event.session = requests.Session()