Linting Standards (#330)

Fix linting issues with flake8 and black.
Add pre-commit configuration and update the documentation for it.
Run linting checks in Travis CI.
Authored by Yehuda Chikvashvili, 2020-04-05 05:22:24 +03:00; committed by GitHub
parent 9ddf3216ab
commit 0f1739262f
59 changed files with 1081 additions and 885 deletions

.flake8 (new file, 5 additions)

@@ -0,0 +1,5 @@
[flake8]
ignore = E203, E266, E501, W503, B903
max-line-length = 120
max-complexity = 18
select = B,C,E,F,W,B9

.gitignore (vendored, 1 addition)

@@ -28,3 +28,4 @@ var/
# Directory Cache Files
.DS_Store
thumbs.db
__pycache__

.pre-commit-config.yaml (new file, 10 additions)

@@ -0,0 +1,10 @@
repos:
- repo: https://github.com/psf/black
rev: stable
hooks:
- id: black
- repo: https://gitlab.com/pycqa/flake8
rev: 3.7.9
hooks:
- id: flake8
additional_dependencies: [flake8-bugbear]


@@ -2,23 +2,19 @@ group: travis_latest
language: python
cache: pip
python:
#- "3.4"
#- "3.5"
- "3.6"
- "3.7"
- "3.8"
install:
- pip install -r requirements.txt
- pip install -r requirements-dev.txt
before_script:
# stop the build if there are Python syntax errors or undefined names
- flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
- flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- pip install pytest coverage pytest-cov
- make lint-check
script:
- python runtest.py
- make test
after_success:
- bash <(curl -s https://codecov.io/bash)
notifications:
on_success: change
on_failure: change # `always` will be the setting once code changes slow down
email:
on_success: change
on_failure: always


@@ -1,4 +1,17 @@
Thank you for taking interest in contributing to kube-hunter!
## Contribution Guide
## Welcome Aboard
Thank you for taking interest in contributing to kube-hunter!
This guide will walk you through the development process of kube-hunter.
## Setting Up
kube-hunter is written in Python 3 and supports versions 3.6 and above.
You'll probably want to create a virtual environment for your local project.
Once you got your project and IDE set up, you can `make dev-deps` and start contributing!
You may also install a pre-commit hook to take care of linting - `pre-commit install`.
## Issues
- Feel free to open issues for any reason as long as you make it clear if this issue is about a bug/feature/hunter/question/comment.


@@ -21,7 +21,13 @@ dev-deps:
.PHONY: lint
lint:
flake8 $(SRC)
black .
flake8
.PHONY: lint-check
lint-check:
flake8
black --check --diff .
.PHONY: test
test:


@@ -2,6 +2,7 @@
[![Build Status](https://travis-ci.org/aquasecurity/kube-hunter.svg?branch=master)](https://travis-ci.org/aquasecurity/kube-hunter)
[![codecov](https://codecov.io/gh/aquasecurity/kube-hunter/branch/master/graph/badge.svg)](https://codecov.io/gh/aquasecurity/kube-hunter)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![License](https://img.shields.io/github/license/aquasecurity/kube-hunter)](https://github.com/aquasecurity/kube-hunter/blob/master/LICENSE)
[![Docker image](https://images.microbadger.com/badges/image/aquasec/kube-hunter.svg)](https://microbadger.com/images/aquasec/kube-hunter "Get your own image badge on microbadger.com")
@@ -13,7 +14,7 @@ kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was d
**Explore vulnerabilities**: The kube-hunter knowledge base includes articles about discoverable vulnerabilities and issues. When kube-hunter reports an issue, it will show its VID (Vulnerability ID) so you can look it up in the KB at https://aquasecurity.github.io/kube-hunter/
**Contribute**: We welcome contributions, especially new hunter modules that perform additional tests. If you would like to develop your modules please read [Guidelines For Developing Your First kube-hunter Module](kube_hunter/README.md).
**Contribute**: We welcome contributions, especially new hunter modules that perform additional tests. If you would like to develop your modules please read [Guidelines For Developing Your First kube-hunter Module](kube_hunter/CONTRIBUTING.md).
[![kube-hunter demo video](https://github.com/aquasecurity/kube-hunter/blob/master/kube-hunter-screenshot.png)](https://youtu.be/s2-6rTkH8a8?t=57s)


@@ -1,2 +1,4 @@
from . import core
from . import modules
__all__ = [core, modules]


@@ -13,29 +13,27 @@ config.reporter = get_reporter(config.report)
config.dispatcher = get_dispatcher(config.dispatch)
logger = logging.getLogger(__name__)
import kube_hunter
import kube_hunter # noqa
def interactive_set_config():
"""Sets config manually, returns True for success"""
options = [("Remote scanning",
"scans one or more specific IPs or DNS names"),
("Interface scanning",
"scans subnets on all local network interfaces"),
("IP range scanning", "scans a given IP range")]
options = [
("Remote scanning", "scans one or more specific IPs or DNS names"),
("Interface scanning", "scans subnets on all local network interfaces"),
("IP range scanning", "scans a given IP range"),
]
print("Choose one of the options below:")
for i, (option, explanation) in enumerate(options):
print("{}. {} ({})".format(i+1, option.ljust(20), explanation))
print("{}. {} ({})".format(i + 1, option.ljust(20), explanation))
choice = input("Your choice: ")
if choice == '1':
config.remote = input("Remotes (separated by a ','): ").\
replace(' ', '').split(',')
elif choice == '2':
if choice == "1":
config.remote = input("Remotes (separated by a ','): ").replace(" ", "").split(",")
elif choice == "2":
config.interface = True
elif choice == '3':
config.cidr = input("CIDR (example - 192.168.1.0/24): ").\
replace(' ', '')
elif choice == "3":
config.cidr = input("CIDR (example - 192.168.1.0/24): ").replace(" ", "")
else:
return False
return True
@@ -51,7 +49,7 @@ def list_hunters():
print("\n\nActive Hunters:\n---------------")
for hunter, docs in handler.active_hunters.items():
name, doc = hunter.parse_docs(docs)
print("* {}\n {}\n".format( name, doc))
print("* {}\n {}\n".format(name, doc))
global hunt_started_lock
@@ -61,19 +59,15 @@ hunt_started = False
def main():
global hunt_started
scan_options = [
config.pod,
config.cidr,
config.remote,
config.interface
]
scan_options = [config.pod, config.cidr, config.remote, config.interface]
try:
if config.list:
list_hunters()
return
if not any(scan_options):
if not interactive_set_config(): return
if not interactive_set_config():
return
with hunt_started_lock:
hunt_started = True
@@ -102,5 +96,5 @@ def main():
hunt_started_lock.release()
if __name__ == '__main__':
if __name__ == "__main__":
main()


@@ -3,16 +3,14 @@ import logging
from kube_hunter.conf.parser import parse_args
config = parse_args()
formatter = '%(asctime)s %(levelname)s %(name)s %(message)s'
formatter = "%(asctime)s %(levelname)s %(name)s %(message)s"
loglevel = getattr(logging, config.log.upper(), None)
if not loglevel:
logging.basicConfig(level=logging.INFO,
format=formatter)
logging.warning('Unknown log level selected, using info')
logging.basicConfig(level=logging.INFO, format=formatter)
logging.warning("Unknown log level selected, using info")
elif config.log.lower() != "none":
logging.basicConfig(level=loglevel,
format=formatter)
logging.basicConfig(level=loglevel, format=formatter)
import plugins
import plugins # noqa


@@ -2,87 +2,56 @@ from argparse import ArgumentParser
def parse_args():
parser = ArgumentParser(
description='Kube-Hunter - hunts for security '
'weaknesses in Kubernetes clusters')
parser = ArgumentParser(description="kube-hunter - hunt for security weaknesses in Kubernetes clusters")
parser.add_argument(
'--list',
action="store_true",
help="Displays all tests in kubehunter "
"(add --active flag to see active tests)")
"--list", action="store_true", help="Displays all tests in kubehunter (add --active flag to see active tests)",
)
parser.add_argument("--interface", action="store_true", help="Set hunting on all network interfaces")
parser.add_argument("--pod", action="store_true", help="Set hunter as an insider pod")
parser.add_argument("--quick", action="store_true", help="Prefer quick scan (subnet 24)")
parser.add_argument(
'--interface',
action="store_true",
help="Set hunting on all network interfaces")
"--include-patched-versions", action="store_true", help="Don't skip patched versions when scanning",
)
parser.add_argument("--cidr", type=str, help="Set an ip range to scan, example: 192.168.0.0/16")
parser.add_argument(
"--mapping", action="store_true", help="Outputs only a mapping of the cluster's nodes",
)
parser.add_argument(
'--pod',
action="store_true",
help="Set hunter as an insider pod")
"--remote", nargs="+", metavar="HOST", default=list(), help="One or more remote ip/dns to hunt",
)
parser.add_argument("--active", action="store_true", help="Enables active hunting")
parser.add_argument(
'--quick',
action="store_true",
help="Prefer quick scan (subnet 24)")
parser.add_argument(
'--include-patched-versions',
action="store_true",
help="Don't skip patched versions when scanning")
parser.add_argument(
'--cidr',
type=str,
help="Set an ip range to scan, example: 192.168.0.0/16")
parser.add_argument(
'--mapping',
action="store_true",
help="Outputs only a mapping of the cluster's nodes")
parser.add_argument(
'--remote',
nargs='+',
metavar="HOST",
default=list(),
help="One or more remote ip/dns to hunt")
parser.add_argument(
'--active',
action="store_true",
help="Enables active hunting")
parser.add_argument(
'--log',
"--log",
type=str,
metavar="LOGLEVEL",
default='INFO',
help="Set log level, options are: debug, info, warn, none")
default="INFO",
help="Set log level, options are: debug, info, warn, none",
)
parser.add_argument(
'--report',
type=str,
default='plain',
help="Set report type, options are: plain, yaml, json")
"--report", type=str, default="plain", help="Set report type, options are: plain, yaml, json",
)
parser.add_argument(
'--dispatch',
"--dispatch",
type=str,
default='stdout',
default="stdout",
help="Where to send the report to, options are: "
"stdout, http (set KUBEHUNTER_HTTP_DISPATCH_URL and "
"KUBEHUNTER_HTTP_DISPATCH_METHOD environment variables to configure)")
"KUBEHUNTER_HTTP_DISPATCH_METHOD environment variables to configure)",
)
parser.add_argument(
'--statistics',
action="store_true",
help="Show hunting statistics")
parser.add_argument("--statistics", action="store_true", help="Show hunting statistics")
parser.add_argument(
'--network-timeout',
type=float,
default=5.0,
help="network operations timeout")
parser.add_argument("--network-timeout", type=float, default=5.0, help="network operations timeout")
return parser.parse_args()
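The reformatted parser keeps the exact same flags; the single-line `add_argument` calls behave identically to the multi-line originals. A minimal standalone sketch mirroring two of the flags above (not the full kube-hunter parser):

```python
from argparse import ArgumentParser

# Minimal sketch using two of the kube-hunter flags shown in the diff;
# the condensed single-line style is what black produces.
parser = ArgumentParser(description="kube-hunter - hunt for security weaknesses in Kubernetes clusters")
parser.add_argument("--active", action="store_true", help="Enables active hunting")
parser.add_argument("--network-timeout", type=float, default=5.0, help="network operations timeout")

args = parser.parse_args(["--active"])
print(args.active)           # True
print(args.network_timeout)  # 5.0
```

Note that `store_true` flags default to `False` when omitted, and `--network-timeout` is exposed as `args.network_timeout` (dashes become underscores).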


@@ -1,2 +1,4 @@
from . import types
from . import events
__all__ = [types, events]


@@ -1,2 +1,4 @@
from .handler import *
from .handler import EventQueue, handler
from . import types
__all__ = [EventQueue, handler, types]


@@ -50,10 +50,12 @@ class EventQueue(Queue, object):
def __new__unsubscribe_self(self, cls):
handler.hooks[event].remove((hook, predicate))
return object.__new__(self)
hook.__new__ = __new__unsubscribe_self
self.subscribe_event(event, hook=hook, predicate=predicate)
return hook
return wrapper
# getting uninstantiated event object
@@ -80,7 +82,7 @@ class EventQueue(Queue, object):
logger.debug(f"{hook} subscribed to {event}")
def apply_filters(self, event):
# if filters are subscribed, apply them on the event
# if filters are subscribed, apply them on the event
for hooked_event in self.filters.keys():
if hooked_event in event.__class__.__mro__:
for filter_hook, predicate in self.filters[hooked_event]:
@@ -101,7 +103,7 @@ class EventQueue(Queue, object):
event.previous = caller.event
event.hunter = caller.__class__
# applying filters on the event, before publishing it to subscribers.
# applying filters on the event, before publishing it to subscribers.
# if filter returned None, not proceeding to publish
event = self.apply_filters(event)
if event:
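The filter dispatch above relies on `event.__class__.__mro__`, so a hook registered for a base event class also fires for subclass instances. A standalone sketch of that matching rule (simplified classes, not the real `EventQueue`):

```python
# Sketch of the MRO-based matching used by apply_filters above:
# a hook keyed on a base event class also matches any subclass instance.
class Event:
    pass

class OpenPortEvent(Event):
    def __init__(self, port):
        self.port = port

hooks = {Event: [lambda e: f"saw {type(e).__name__}"]}

def dispatch(event):
    results = []
    for hooked_cls, fns in hooks.items():
        # same membership test as apply_filters: base class in the MRO
        if hooked_cls in event.__class__.__mro__:
            results.extend(fn(event) for fn in fns)
    return results

print(dispatch(OpenPortEvent(10250)))  # ['saw OpenPortEvent']
```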


@@ -1,10 +1,10 @@
from os.path import dirname, basename, isfile
import glob
from .common import *
from .common import * # noqa
# dynamically importing all modules in folder
files = glob.glob(dirname(__file__)+"/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith('__init__.py')):
files = glob.glob(dirname(__file__) + "/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith("__init__.py")):
if module_name != "handler":
exec('from .{} import *'.format(module_name))
exec("from .{} import *".format(module_name))


@@ -2,9 +2,17 @@ import threading
import requests
import logging
from kube_hunter.core.types import InformationDisclosure, DenialOfService, RemoteCodeExec, IdentityTheft, \
PrivilegeEscalation, AccessRisk, UnauthenticatedAccess, KubernetesCluster
from kube_hunter.conf import config
from kube_hunter.core.types import (
InformationDisclosure,
DenialOfService,
RemoteCodeExec,
IdentityTheft,
PrivilegeEscalation,
AccessRisk,
UnauthenticatedAccess,
KubernetesCluster,
)
logger = logging.getLogger(__name__)
@@ -76,15 +84,17 @@ class Service(object):
class Vulnerability(object):
severity = dict({
InformationDisclosure: "medium",
DenialOfService: "medium",
RemoteCodeExec: "high",
IdentityTheft: "high",
PrivilegeEscalation: "high",
AccessRisk: "low",
UnauthenticatedAccess: "low"
})
severity = dict(
{
InformationDisclosure: "medium",
DenialOfService: "medium",
RemoteCodeExec: "high",
IdentityTheft: "high",
PrivilegeEscalation: "high",
AccessRisk: "low",
UnauthenticatedAccess: "low",
}
)
# TODO: make vid mandatory once migration is done
def __init__(self, component, name, category=None, vid="None"):
@@ -134,24 +144,24 @@ class NewHostEvent(Event):
if not self.cloud_type:
self.cloud_type = self.get_cloud()
return self.cloud_type
def get_cloud(self):
try:
logger.debug("Checking whether the cluster is deployed on azure's cloud")
# Leverage 3rd tool https://github.com/blrchen/AzureSpeed for Azure cloud ip detection
result = \
requests.get(f"https://api.azurespeed.com/api/region?ipOrUrl={self.host}",
timeout=config.network_timeout).json()
result = requests.get(
f"https://api.azurespeed.com/api/region?ipOrUrl={self.host}", timeout=config.network_timeout,
).json()
return result["cloud"] or "NoCloud"
except requests.ConnectionError:
logger.info(f"Failed to connect cloud type service", exc_info=True)
except Exception:
logger.warning(f"Unable to check cloud of {self.host}", exc_info=True)
return "NoCloud"
def __str__(self):
return str(self.host)
# Event's logical location to be used mainly for reports.
def location(self):
return str(self.host)
@@ -160,10 +170,10 @@ class NewHostEvent(Event):
class OpenPortEvent(Event):
def __init__(self, port):
self.port = port
def __str__(self):
return str(self.port)
# Event's logical location to be used mainly for reports.
def location(self):
if self.host:
@@ -190,13 +200,11 @@ class ReportDispatched(Event):
class K8sVersionDisclosure(Vulnerability, Event):
"""The kubernetes version could be obtained from the {} endpoint """
def __init__(self, version, from_endpoint, extra_info=""):
Vulnerability.__init__(
self,
KubernetesCluster,
"K8s Version Disclosure",
category=InformationDisclosure,
vid="KHV002")
self, KubernetesCluster, "K8s Version Disclosure", category=InformationDisclosure, vid="KHV002",
)
self.version = version
self.from_endpoint = from_endpoint
self.extra_info = extra_info
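The `Vulnerability.severity` mapping reformatted earlier in this file keys severity strings directly on category classes. A minimal standalone sketch of that lookup, with simplified stand-in classes rather than the real kube-hunter types:

```python
# Simplified stand-ins for the kube-hunter category classes in the diff.
class InformationDisclosure: pass
class RemoteCodeExec: pass
class AccessRisk: pass

# Same shape as the Vulnerability.severity mapping shown above.
severity = {
    InformationDisclosure: "medium",
    RemoteCodeExec: "high",
    AccessRisk: "low",
}

# A vulnerability's severity is a direct dict lookup on its category class.
print(severity[RemoteCodeExec])  # high
```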


@@ -6,10 +6,10 @@ class HunterBase(object):
"""returns tuple of (name, docs)"""
if not docs:
return __name__, "<no documentation>"
docs = docs.strip().split('\n')
docs = docs.strip().split("\n")
for i, line in enumerate(docs):
docs[i] = line.strip()
return docs[0], ' '.join(docs[1:]) if len(docs[1:]) else "<no documentation>"
return docs[0], " ".join(docs[1:]) if len(docs[1:]) else "<no documentation>"
@classmethod
def get_name(cls):
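The `parse_docs` method reformatted above turns a hunter's docstring into a (name, description) pair: the first line is the name, the remaining lines are joined into one description. Extracted as a standalone function for illustration:

```python
# Standalone version of the parse_docs logic shown above: first docstring
# line becomes the hunter's name, the rest its description.
def parse_docs(docs):
    if not docs:
        return __name__, "<no documentation>"
    docs = docs.strip().split("\n")
    for i, line in enumerate(docs):
        docs[i] = line.strip()
    return docs[0], " ".join(docs[1:]) if len(docs[1:]) else "<no documentation>"

name, doc = parse_docs("""Kubelet Discovery
    Checks for the existence of a Kubelet service, and its open ports
    """)
print(name)  # Kubelet Discovery
```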
@@ -32,50 +32,57 @@ class Discovery(HunterBase):
pass
"""Kubernetes Components"""
class KubernetesCluster():
class KubernetesCluster:
"""Kubernetes Cluster"""
name = "Kubernetes Cluster"
class KubectlClient():
class KubectlClient:
"""The kubectl client binary is used by the user to interact with the cluster"""
name = "Kubectl Client"
class Kubelet(KubernetesCluster):
"""The kubelet is the primary "node agent" that runs on each node"""
name = "Kubelet"
class Azure(KubernetesCluster):
"""Azure Cluster"""
name = "Azure"
""" Categories """
class InformationDisclosure(object):
class InformationDisclosure:
name = "Information Disclosure"
class RemoteCodeExec(object):
class RemoteCodeExec:
name = "Remote Code Execution"
class IdentityTheft(object):
class IdentityTheft:
name = "Identity Theft"
class UnauthenticatedAccess(object):
class UnauthenticatedAccess:
name = "Unauthenticated Access"
class AccessRisk(object):
class AccessRisk:
name = "Access Risk"
class PrivilegeEscalation(KubernetesCluster):
name = "Privilege Escalation"
class DenialOfService(object):
class DenialOfService:
name = "Denial of Service"
from .events import handler # import is in the bottom to break import loops
# import is in the bottom to break import loops
from .events import handler # noqa


@@ -1,3 +1,5 @@
from . import report
from . import discovery
from . import hunting
__all__ = [report, discovery, hunting]


@@ -2,7 +2,7 @@ from os.path import dirname, basename, isfile
import glob
# dynamically importing all modules in folder
files = glob.glob(dirname(__file__)+"/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith('__init__.py')):
if not module_name.startswith('test_'):
exec('from .{} import *'.format(module_name))
files = glob.glob(dirname(__file__) + "/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith("__init__.py")):
if not module_name.startswith("test_"):
exec("from .{} import *".format(module_name))


@@ -12,9 +12,9 @@ KNOWN_API_PORTS = [443, 6443, 8080]
logger = logging.getLogger(__name__)
class K8sApiService(Service, Event):
"""A Kubernetes API service"""
def __init__(self, protocol="https"):
Service.__init__(self, name="Unrecognized K8s API")
self.protocol = protocol
@@ -22,6 +22,7 @@ class K8sApiService(Service, Event):
class ApiServer(Service, Event):
"""The API server is in charge of all operations on the cluster."""
def __init__(self):
Service.__init__(self, name="API Server")
self.protocol = "https"
@@ -29,6 +30,7 @@ class ApiServer(Service, Event):
class MetricsServer(Service, Event):
"""The Metrics server is in charge of providing resource usage metrics for pods and nodes to the API server"""
def __init__(self):
Service.__init__(self, name="Metrics Server")
self.protocol = "https"
@@ -41,6 +43,7 @@ class ApiServiceDiscovery(Discovery):
"""API Service Discovery
Checks for the existence of K8s API Services
"""
def __init__(self, event):
self.event = event
self.session = requests.Session()
@@ -56,12 +59,12 @@ class ApiServiceDiscovery(Discovery):
def has_api_behaviour(self, protocol):
try:
r = self.session.get(f"{protocol}://{self.event.host}:{self.event.port}", timeout=config.network_timeout)
if ('k8s' in r.text) or ('"code"' in r.text and r.status_code != 200):
if ("k8s" in r.text) or ('"code"' in r.text and r.status_code != 200):
return True
except requests.exceptions.SSLError:
logger.debug(f"{[protocol]} protocol not accepted on {self.event.host}:{self.event.port}")
except Exception as e:
logger.debug(f"Exception on: {self.event.host}:{self.event.port}", exc_info=True)
except Exception:
logger.debug(f"Failed probing {self.event.host}:{self.event.port}", exc_info=True)
# Acts as a Filter for services, In the case that we can classify the API,
@@ -78,6 +81,7 @@ class ApiServiceClassify(EventFilterBase):
"""API Service Classifier
Classifies an API service
"""
def __init__(self, event):
self.event = event
self.classified = False
@@ -92,8 +96,8 @@ class ApiServiceClassify(EventFilterBase):
try:
endpoint = f"{self.event.protocol}://{self.event.host}:{self.event.port}/version"
versions = self.session.get(endpoint, timeout=config.network_timeout).json()
if 'major' in versions:
if versions.get('major') == "":
if "major" in versions:
if versions.get("major") == "":
self.event = MetricsServer()
else:
self.event = ApiServer()


@@ -12,6 +12,7 @@ logger = logging.getLogger(__name__)
class KubeDashboardEvent(Service, Event):
"""A web-based Kubernetes user interface allows easy usage with operations on the cluster"""
def __init__(self, **kargs):
Service.__init__(self, name="Kubernetes Dashboard", **kargs)
@@ -21,6 +22,7 @@ class KubeDashboard(Discovery):
"""K8s Dashboard Discovery
Checks for the existence of a Dashboard
"""
def __init__(self, event):
self.event = event


@@ -6,6 +6,7 @@ from kube_hunter.core.types import Discovery
class EtcdAccessEvent(Service, Event):
"""Etcd is a DB that stores cluster's data, it contains configuration and current
state information, and might contain secrets"""
def __init__(self):
Service.__init__(self, name="Etcd")
@@ -15,6 +16,7 @@ class EtcdRemoteAccess(Discovery):
"""Etcd service
check for the existence of etcd service
"""
def __init__(self, event):
self.event = event


@@ -5,6 +5,7 @@ import requests
from enum import Enum
from netaddr import IPNetwork, IPAddress
from netifaces import AF_INET, ifaddresses, interfaces
from scapy.all import ICMP, IP, Ether, srp1
from kube_hunter.conf import config
from kube_hunter.core.events import handler
@@ -16,7 +17,7 @@ logger = logging.getLogger(__name__)
class RunningAsPodEvent(Event):
def __init__(self):
self.name = 'Running from within a pod'
self.name = "Running from within a pod"
self.auth_token = self.get_service_account_file("token")
self.client_cert = self.get_service_account_file("ca.crt")
self.namespace = self.get_service_account_file("namespace")
@@ -25,8 +26,9 @@ class RunningAsPodEvent(Event):
# Event's logical location to be used mainly for reports.
def location(self):
location = "Local to Pod"
if 'HOSTNAME' in os.environ:
location += "(" + os.environ['HOSTNAME'] + ")"
hostname = os.getenv("HOSTNAME")
if hostname:
location += f" ({hostname})"
return location
@@ -40,17 +42,20 @@ class RunningAsPodEvent(Event):
class AzureMetadataApi(Vulnerability, Event):
"""Access to the Azure Metadata API exposes information about the machines associated with the cluster"""
def __init__(self, cidr):
Vulnerability.__init__(self, Azure, "Azure Metadata Exposure", category=InformationDisclosure, vid="KHV003")
Vulnerability.__init__(
self, Azure, "Azure Metadata Exposure", category=InformationDisclosure, vid="KHV003",
)
self.cidr = cidr
self.evidence = "cidr: {}".format(cidr)
class HostScanEvent(Event):
def __init__(self, pod=False, active=False, predefined_hosts=list()):
def __init__(self, pod=False, active=False, predefined_hosts=None):
# flag to specify whether to get actual data from vulnerabilities
self.active = active
self.predefined_hosts = predefined_hosts
self.predefined_hosts = predefined_hosts or []
class HostDiscoveryHelpers:
@@ -69,6 +74,7 @@ class FromPodHostDiscovery(Discovery):
"""Host Discovery when running as pod
Generates ip adresses to scan, based on cluster/scan type
"""
def __init__(self, event):
self.event = event
@@ -87,11 +93,11 @@ class FromPodHostDiscovery(Discovery):
should_scan_apiserver = False
if self.event.kubeservicehost:
should_scan_apiserver = True
for subnet in subnets:
if self.event.kubeservicehost and self.event.kubeservicehost in IPNetwork(f"{subnet[0]}/{subnet[1]}"):
for ip, mask in subnets:
if self.event.kubeservicehost and self.event.kubeservicehost in IPNetwork(f"{ip}/{mask}"):
should_scan_apiserver = False
logger.debug(f"From pod scanning subnet {subnet[0]}/{subnet[1]}")
for ip in HostDiscoveryHelpers.generate_subnet(ip=subnet[0], sn=subnet[1]):
logger.debug(f"From pod scanning subnet {ip}/{mask}")
for ip in HostDiscoveryHelpers.generate_subnet(ip, mask):
self.publish_event(NewHostEvent(host=ip, cloud=cloud))
if should_scan_apiserver:
self.publish_event(NewHostEvent(host=IPAddress(self.event.kubeservicehost), cloud=cloud))
@@ -99,9 +105,14 @@ class FromPodHostDiscovery(Discovery):
def is_azure_pod(self):
try:
logger.debug("From pod attempting to access Azure Metadata API")
if requests.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01",
headers={"Metadata": "true"},
timeout=config.network_timeout).status_code == 200:
if (
requests.get(
"http://169.254.169.254/metadata/instance?api-version=2017-08-01",
headers={"Metadata": "true"},
timeout=config.network_timeout,
).status_code
== 200
):
return True
except requests.exceptions.ConnectionError:
logger.debug("Failed to connect Azure metadata server")
@@ -111,12 +122,10 @@ class FromPodHostDiscovery(Discovery):
def traceroute_discovery(self):
# getting external ip, to determine if cloud cluster
external_ip = requests.get("https://canhazip.com", timeout=config.network_timeout).text
from scapy.all import ICMP, IP, Ether, srp1
node_internal_ip = srp1(
Ether()/IP(dst="1.1.1.1", ttl=1)/ICMP(),
verbose=0,
timeout=config.network_timeout)[IP].src
Ether() / IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout,
)[IP].src
return [[node_internal_ip, "24"]], external_ip
# querying azure's interface metadata api | works only from a pod
@@ -125,11 +134,15 @@ class FromPodHostDiscovery(Discovery):
machine_metadata = requests.get(
"http://169.254.169.254/metadata/instance?api-version=2017-08-01",
headers={"Metadata": "true"},
timeout=config.network_timeout).json()
timeout=config.network_timeout,
).json()
address, subnet = "", ""
subnets = list()
for interface in machine_metadata["network"]["interface"]:
address, subnet = interface["ipv4"]["subnet"][0]["address"], interface["ipv4"]["subnet"][0]["prefix"]
address, subnet = (
interface["ipv4"]["subnet"][0]["address"],
interface["ipv4"]["subnet"][0]["prefix"],
)
subnet = subnet if not config.quick else "24"
logger.debug(f"From pod discovered subnet {address}/{subnet}")
subnets.append([address, subnet if not config.quick else "24"])
@@ -144,15 +157,16 @@ class HostDiscovery(Discovery):
"""Host Discovery
Generates ip adresses to scan, based on cluster/scan type
"""
def __init__(self, event):
self.event = event
def execute(self):
if config.cidr:
try:
ip, sn = config.cidr.split('/')
ip, sn = config.cidr.split("/")
except ValueError:
logger.exception(f"Unable to parse CIDR \"{config.cidr}\"")
logger.exception(f'Unable to parse CIDR "{config.cidr}"')
return
for ip in HostDiscoveryHelpers.generate_subnet(ip, sn=sn):
self.publish_event(NewHostEvent(host=ip))
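`HostDiscovery.execute` splits the `--cidr` value on `/` and expands it into individual hosts via `generate_subnet`. A hedged sketch of the same flow using the stdlib `ipaddress` module instead of kube-hunter's netaddr-based helper (names and behavior here are an approximation, not the real implementation):

```python
import ipaddress

# Approximation of the CIDR handling above, using stdlib ipaddress
# in place of kube-hunter's netaddr-based generate_subnet helper.
def generate_subnet(cidr):
    try:
        ip, sn = cidr.split("/")  # same split as HostDiscovery.execute
    except ValueError:
        raise ValueError(f'Unable to parse CIDR "{cidr}"')
    # hosts() excludes the network and broadcast addresses
    return [str(host) for host in ipaddress.ip_network(f"{ip}/{sn}", strict=False).hosts()]

hosts = generate_subnet("192.168.1.0/30")
print(hosts)  # ['192.168.1.1', '192.168.1.2']
```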
@@ -164,20 +178,13 @@ class HostDiscovery(Discovery):
# for normal scanning
def scan_interfaces(self):
try:
logger.debug("HostDiscovery hunter attempting to get external IP address")
# getting external ip, to determine if cloud cluster
external_ip = requests.get("https://canhazip.com", timeout=config.network_timeout).text
except requests.ConnectionError:
logger.warning(f"Unable to determine external IP address, using 127.0.0.1", exc_info=True)
external_ip = "127.0.0.1"
for ip in self.generate_interfaces_subnet():
handler.publish_event(NewHostEvent(host=ip))
# generate all subnets from all internal network interfaces
def generate_interfaces_subnet(self, sn='24'):
def generate_interfaces_subnet(self, sn="24"):
for ifaceName in interfaces():
for ip in [i['addr'] for i in ifaddresses(ifaceName).setdefault(AF_INET, [])]:
for ip in [i["addr"] for i in ifaddresses(ifaceName).setdefault(AF_INET, [])]:
if not self.event.localhost and InterfaceTypes.LOCALHOST.value in ip.__str__():
continue
for ip in HostDiscoveryHelpers.generate_subnet(ip, sn):


@@ -10,6 +10,7 @@ logger = logging.getLogger(__name__)
class KubectlClientEvent(Event):
"""The API server is in charge of all operations on the cluster."""
def __init__(self, version):
self.version = version
@@ -23,6 +24,7 @@ class KubectlClientDiscovery(Discovery):
"""Kubectl Client Discovery
Checks for the existence of a local kubectl client
"""
def __init__(self, event):
self.event = event
@@ -34,8 +36,8 @@ class KubectlClientDiscovery(Discovery):
if b"GitVersion" in version_info:
# extracting version from kubectl output
version_info = version_info.decode()
start = version_info.find('GitVersion')
version = version_info[start + len("GitVersion':\"") : version_info.find("\",", start)]
start = version_info.find("GitVersion")
version = version_info[start + len("GitVersion':\"") : version_info.find('",', start)]
except Exception:
logger.debug("Could not find kubectl client")
return version


@@ -17,12 +17,14 @@ logger = logging.getLogger(__name__)
class ReadOnlyKubeletEvent(Service, Event):
"""The read-only port on the kubelet serves health probing endpoints,
and is relied upon by many kubernetes components"""
def __init__(self):
Service.__init__(self, name="Kubelet API (readonly)")
class SecureKubeletEvent(Service, Event):
"""The Kubelet is the main component in every Node, all pod operations goes through the kubelet"""
def __init__(self, cert=False, token=False, anonymous_auth=True, **kwargs):
self.cert = cert
self.token = token
@@ -35,17 +37,18 @@ class KubeletPorts(Enum):
READ_ONLY = 10255
@handler.subscribe(OpenPortEvent, predicate=lambda x: x.port == 10255 or x.port == 10250)
@handler.subscribe(OpenPortEvent, predicate=lambda x: x.port in [10250, 10255])
class KubeletDiscovery(Discovery):
"""Kubelet Discovery
Checks for the existence of a Kubelet service, and its open ports
"""
def __init__(self, event):
self.event = event
def get_read_only_access(self):
endpoint = f"http://{self.event.host}:{self.event.port}/pods"
logger.debug(f"Passive hunter is attempting to get kubelet read access at {endpoint}")
logger.debug(f"Trying to get kubelet read access at {endpoint}")
r = requests.get(endpoint, timeout=config.network_timeout)
if r.status_code == 200:
self.publish_event(ReadOnlyKubeletEvent())


@@ -14,6 +14,7 @@ class PortDiscovery(Discovery):
"""Port Scanning
Scans Kubernetes known ports to determine open endpoints for discovery
"""
def __init__(self, event):
self.event = event
self.host = event.host


@@ -11,6 +11,7 @@ logger = logging.getLogger(__name__)
class KubeProxyEvent(Event, Service):
"""proxies from a localhost address to the Kubernetes apiserver"""
def __init__(self):
Service.__init__(self, name="Kubernetes Proxy")
@@ -20,6 +21,7 @@ class KubeProxy(Discovery):
"""Proxy Discovery
Checks for the existence of a an open Proxy service
"""
def __init__(self, event):
self.event = event
self.host = event.host


@@ -2,6 +2,6 @@ from os.path import dirname, basename, isfile
import glob
# dynamically importing all modules in folder
files = glob.glob(dirname(__file__)+"/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith('__init__.py')):
exec('from .{} import *'.format(module_name))
files = glob.glob(dirname(__file__) + "/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith("__init__.py")):
exec(f"from .{module_name} import *")


@@ -13,8 +13,11 @@ logger = logging.getLogger(__name__)
class AzureSpnExposure(Vulnerability, Event):
"""The SPN is exposed, potentially allowing an attacker to gain access to the Azure subscription"""
def __init__(self, container):
Vulnerability.__init__(self, Azure, "Azure SPN Exposure", category=IdentityTheft, vid="KHV004")
Vulnerability.__init__(
self, Azure, "Azure SPN Exposure", category=IdentityTheft, vid="KHV004",
)
self.container = container
@@ -23,6 +26,7 @@ class AzureSpnHunter(Hunter):
"""AKS Hunting
Hunting Azure cluster deployments using specific known configurations
"""
def __init__(self, event):
self.event = event
self.base_url = f"https://{self.event.host}:{self.event.port}"
@@ -30,7 +34,7 @@ class AzureSpnHunter(Hunter):
# getting a container that has access to the azure.json file
def get_key_container(self):
endpoint = f"{self.base_url}/pods"
logger.debug("Passive Hunter is attempting to find container with access to azure.json file")
logger.debug("Trying to find container with access to azure.json file")
try:
r = requests.get(endpoint, verify=False, timeout=config.network_timeout)
except requests.Timeout:
@@ -41,11 +45,11 @@ class AzureSpnHunter(Hunter):
for container in pod_data["spec"]["containers"]:
for mount in container["volumeMounts"]:
path = mount["mountPath"]
if '/etc/kubernetes/azure.json'.startswith(path):
if "/etc/kubernetes/azure.json".startswith(path):
return {
"name": container["name"],
"pod": pod_data["metadata"]["name"],
"namespace": pod_data["metadata"]["namespace"]
"namespace": pod_data["metadata"]["namespace"],
}
def execute(self):
@@ -54,36 +58,28 @@ class AzureSpnHunter(Hunter):
self.publish_event(AzureSpnExposure(container=container))
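The hunter above flags a container when one of its `mountPath`s is a prefix of `/etc/kubernetes/azure.json`, i.e. the mount exposes either the file itself or a directory containing it. A self-contained sketch of that check, with a hardcoded sample pod for illustration:

```python
AZURE_JSON = "/etc/kubernetes/azure.json"

def find_key_container(pod_data):
    """Return the first container whose volume mounts expose azure.json."""
    for container in pod_data["spec"]["containers"]:
        for mount in container.get("volumeMounts", []):
            # True when the mount path is a prefix of the azure.json path
            if AZURE_JSON.startswith(mount["mountPath"]):
                return {
                    "name": container["name"],
                    "pod": pod_data["metadata"]["name"],
                    "namespace": pod_data["metadata"]["namespace"],
                }

pod = {
    "metadata": {"name": "demo-pod", "namespace": "default"},
    "spec": {
        "containers": [
            {"name": "app", "volumeMounts": [{"mountPath": "/data"}]},
            {"name": "sidecar", "volumeMounts": [{"mountPath": "/etc/kubernetes"}]},
        ]
    },
}
print(find_key_container(pod))  # the sidecar mounts /etc/kubernetes
```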
""" Active Hunting """
@handler.subscribe(AzureSpnExposure)
class ProveAzureSpnExposure(ActiveHunter):
"""Azure SPN Hunter
Gets the azure subscription file on the host by executing inside a container
"""
def __init__(self, event):
self.event = event
self.base_url = f"https://{self.event.host}:{self.event.port}"
def run(self, command, container):
run_url = "/".join(
self.base_url,
"run",
container["namespace"],
container["pod"],
container["name"])
return requests.post(
run_url,
verify=False,
params={'cmd': command},
timeout=config.network_timeout)
run_url = "/".join([self.base_url, "run", container["namespace"], container["pod"], container["name"]])
return requests.post(run_url, verify=False, params={"cmd": command}, timeout=config.network_timeout)
def execute(self):
try:
r = self.run("cat /etc/kubernetes/azure.json", container=self.event.container)
subscription = self.run("cat /etc/kubernetes/azure.json", container=self.event.container).json()
except requests.Timeout:
logger.debug("failed to run command in container", exc_info=True)
except json.decoder.JSONDecodeError:
logger.warning("failed to parse SPN")
else:
subscription = r.json()
if "subscriptionId" in subscription:
self.event.subscriptionId = subscription["subscriptionId"]
self.event.aadClientId = subscription["aadClientId"]

View File

@@ -8,7 +8,11 @@ from kube_hunter.modules.discovery.apiserver import ApiServer
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, K8sVersionDisclosure
from kube_hunter.core.types import Hunter, ActiveHunter, KubernetesCluster
from kube_hunter.core.types import AccessRisk, InformationDisclosure, UnauthenticatedAccess
from kube_hunter.core.types import (
AccessRisk,
InformationDisclosure,
UnauthenticatedAccess,
)
logger = logging.getLogger(__name__)
@@ -16,6 +20,7 @@ logger = logging.getLogger(__name__)
class ServerApiAccess(Vulnerability, Event):
"""The API Server port is accessible.
Depending on your RBAC settings this could expose access to or control of your cluster."""
def __init__(self, evidence, using_token):
if using_token:
name = "Access to API using service account token"
@@ -23,17 +28,22 @@ class ServerApiAccess(Vulnerability, Event):
else:
name = "Unauthenticated access to API"
category = UnauthenticatedAccess
Vulnerability.__init__(self, KubernetesCluster, name=name, category=category, vid="KHV005")
Vulnerability.__init__(
self, KubernetesCluster, name=name, category=category, vid="KHV005",
)
self.evidence = evidence
class ServerApiHTTPAccess(Vulnerability, Event):
"""The API Server port is accessible over HTTP, and therefore unencrypted.
Depending on your RBAC settings this could expose access to or control of your cluster."""
def __init__(self, evidence):
name = "Insecure (HTTP) access to API"
category = UnauthenticatedAccess
Vulnerability.__init__(self, KubernetesCluster, name=name, category=category, vid="KHV006")
Vulnerability.__init__(
self, KubernetesCluster, name=name, category=category, vid="KHV006",
)
self.evidence = evidence
@@ -43,7 +53,9 @@ class ApiInfoDisclosure(Vulnerability, Event):
name += " using service account token"
else:
name += " as anonymous user"
Vulnerability.__init__(self, KubernetesCluster, name=name, category=InformationDisclosure, vid="KHV007")
Vulnerability.__init__(
self, KubernetesCluster, name=name, category=InformationDisclosure, vid="KHV007",
)
self.evidence = evidence
@@ -79,18 +91,22 @@ class CreateANamespace(Vulnerability, Event):
""" Creating a namespace might give an attacker an area with default (exploitable) permissions to run pods in.
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created a namespace",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Created a namespace", category=AccessRisk,
)
self.evidence = evidence
class DeleteANamespace(Vulnerability, Event):
""" Deleting a namespace might give an attacker the option to affect application behavior """
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Delete a namespace",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Delete a namespace", category=AccessRisk,
)
self.evidence = evidence
@@ -100,8 +116,7 @@ class CreateARole(Vulnerability, Event):
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created a role",
category=AccessRisk)
Vulnerability.__init__(self, KubernetesCluster, name="Created a role", category=AccessRisk)
self.evidence = evidence
@@ -111,8 +126,9 @@ class CreateAClusterRole(Vulnerability, Event):
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created a cluster role",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Created a cluster role", category=AccessRisk,
)
self.evidence = evidence
@@ -122,8 +138,9 @@ class PatchARole(Vulnerability, Event):
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Patched a role",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Patched a role", category=AccessRisk,
)
self.evidence = evidence
@@ -133,8 +150,9 @@ class PatchAClusterRole(Vulnerability, Event):
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Patched a cluster role",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Patched a cluster role", category=AccessRisk,
)
self.evidence = evidence
@@ -142,8 +160,9 @@ class DeleteARole(Vulnerability, Event):
""" Deleting a role might allow an attacker to affect access to resources in the namespace"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Deleted a role",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Deleted a role", category=AccessRisk,
)
self.evidence = evidence
@@ -151,8 +170,9 @@ class DeleteAClusterRole(Vulnerability, Event):
""" Deleting a cluster role might allow an attacker to affect access to resources in the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Deleted a cluster role",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Deleted a cluster role", category=AccessRisk,
)
self.evidence = evidence
@@ -160,8 +180,9 @@ class CreateAPod(Vulnerability, Event):
""" Creating a new pod allows an attacker to run custom code"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created A Pod",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Created A Pod", category=AccessRisk,
)
self.evidence = evidence
@@ -169,8 +190,9 @@ class CreateAPrivilegedPod(Vulnerability, Event):
""" Creating a new PRIVILEGED pod would gain an attacker FULL CONTROL over the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created A PRIVILEGED Pod",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Created A PRIVILEGED Pod", category=AccessRisk,
)
self.evidence = evidence
@@ -178,8 +200,9 @@ class PatchAPod(Vulnerability, Event):
""" Patching a pod allows an attacker to compromise and control it """
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Patched A Pod",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Patched A Pod", category=AccessRisk,
)
self.evidence = evidence
@@ -187,8 +210,9 @@ class DeleteAPod(Vulnerability, Event):
""" Deleting a pod allows an attacker to disturb applications on the cluster """
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Deleted A Pod",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="Deleted A Pod", category=AccessRisk,
)
self.evidence = evidence
@@ -214,11 +238,7 @@ class AccessApiServer(Hunter):
def access_api_server(self):
logger.debug(f"Passive Hunter is attempting to access the API at {self.path}")
try:
r = requests.get(
f"{self.path}/api",
headers=self.headers,
verify=False,
timeout=config.network_timeout)
r = requests.get(f"{self.path}/api", headers=self.headers, verify=False, timeout=config.network_timeout)
if r.status_code == 200 and r.content:
return r.content
except requests.exceptions.ConnectionError:
@@ -245,16 +265,15 @@ class AccessApiServer(Hunter):
try:
if not namespace:
r = requests.get(
f"{self.path}/api/v1/pods",
headers=self.headers,
verify=False,
timeout=config.network_timeout)
f"{self.path}/api/v1/pods", headers=self.headers, verify=False, timeout=config.network_timeout,
)
else:
r = requests.get(
f"{self.path}/api/v1/namespaces/{namespace}/pods",
headers=self.headers,
verify=False,
timeout=config.network_timeout)
timeout=config.network_timeout,
)
if r.status_code == 200:
resp = json.loads(r.content)
for item in resp["items"]:
@@ -278,11 +297,11 @@ class AccessApiServer(Hunter):
if namespaces:
self.publish_event(ListNamespaces(namespaces, self.with_token))
roles = self.get_items("{path}/apis/rbac.authorization.k8s.io/v1/roles".format(path=self.path))
roles = self.get_items(f"{self.path}/apis/rbac.authorization.k8s.io/v1/roles")
if roles:
self.publish_event(ListRoles(roles, self.with_token))
cluster_roles = self.get_items("{path}/apis/rbac.authorization.k8s.io/v1/clusterroles".format(path=self.path))
cluster_roles = self.get_items(f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles")
if cluster_roles:
self.publish_event(ListClusterRoles(cluster_roles, self.with_token))
@@ -321,19 +340,12 @@ class AccessApiServerActive(ActiveHunter):
self.path = f"{self.event.protocol}://{self.event.host}:{self.event.port}"
def create_item(self, path, data):
headers = {
"Content-Type": "application/json"
}
headers = {"Content-Type": "application/json"}
if self.event.auth_token:
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
res = requests.post(
path,
verify=False,
data=data,
headers=headers,
timeout=config.network_timeout)
res = requests.post(path, verify=False, data=data, headers=headers, timeout=config.network_timeout)
if res.status_code in [200, 201, 202]:
parsed_content = json.loads(res.content)
return parsed_content["metadata"]["name"]
@@ -342,9 +354,7 @@ class AccessApiServerActive(ActiveHunter):
return None
def patch_item(self, path, data):
headers = {
"Content-Type": "application/json-patch+json"
}
headers = {"Content-Type": "application/json-patch+json"}
if self.event.auth_token:
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
@@ -377,61 +387,38 @@ class AccessApiServerActive(ActiveHunter):
pod = {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": random_name
},
"metadata": {"name": random_name},
"spec": {
"containers": [
{
"name": random_name,
"image": "nginx:1.7.9",
"ports": [
{
"containerPort": 80
}
],
**privileged_value
}
{"name": random_name, "image": "nginx:1.7.9", "ports": [{"containerPort": 80}], **privileged_value}
]
}
},
}
return self.create_item(path=f"{self.path}/api/v1/namespaces/{namespace}/pods", data=json.dumps(pod))
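The pod manifest above merges `privileged_value` into the container spec with `**` unpacking, so an unprivileged pod gets no extra keys at all. A sketch of that pattern; the exact contents of `privileged_value` are not shown in this hunk, so the `securityContext` mapping here is an assumption based on the standard Kubernetes field:

```python
import json
import uuid

def build_pod_manifest(privileged):
    random_name = str(uuid.uuid4())[0:5]
    # Assumed contents: the standard Kubernetes privileged flag.
    # Unpacking an empty dict adds no keys, so the unprivileged pod is unchanged.
    privileged_value = {"securityContext": {"privileged": True}} if privileged else {}
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": random_name},
        "spec": {
            "containers": [
                {"name": random_name, "image": "nginx:1.7.9",
                 "ports": [{"containerPort": 80}], **privileged_value}
            ]
        },
    }

print(json.dumps(build_pod_manifest(True), indent=2))
```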
def delete_a_pod(self, namespace, pod_name):
delete_timestamp = self.delete_item("{path}/api/v1/namespaces/{namespace}/pods/{name}".format(
path=self.path, name=pod_name, namespace=namespace))
if delete_timestamp is None:
delete_timestamp = self.delete_item(f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}")
if not delete_timestamp:
logger.error(f"Created pod {pod_name} in namespace {namespace} but unable to delete it")
return delete_timestamp
def patch_a_pod(self, namespace, pod_name):
data = [
{
"op": "add",
"path": "/hello",
"value": ["world"]
}
]
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}",
data=json.dumps(data))
path=f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}", data=json.dumps(data),
)
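`patch_a_pod` sends an RFC 6902 JSON Patch (note the `application/json-patch+json` content type set in `patch_item`): the single `add` operation inserts a top-level `hello` member into the object. A toy applier for this one case, purely to illustrate the semantics (not a general JSON Patch implementation):

```python
import json

def apply_add_ops(document, patch_ops):
    """Apply RFC 6902 'add' operations with single-segment paths only."""
    doc = dict(document)  # shallow copy; the original stays untouched
    for op in patch_ops:
        if op["op"] == "add" and op["path"].count("/") == 1:
            doc[op["path"].lstrip("/")] = op["value"]
    return doc

pod = {"kind": "Pod"}
patched = apply_add_ops(pod, [{"op": "add", "path": "/hello", "value": ["world"]}])
print(json.dumps(patched))
```

The hunter uses this harmless `add` only as evidence that it has patch permission on the resource.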
def create_namespace(self):
random_name = (str(uuid.uuid4()))[0:5]
data = {
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {
"name": random_name,
"labels": {
"name": random_name
}
}
"metadata": {"name": random_name, "labels": {"name": random_name}},
}
return self.create_item(path=f"{self.path}/api/v1/namespaces", data=json.dumps(data))
def delete_namespace(self, namespace):
delete_timestamp = self.delete_item("{path}/api/v1/namespaces/{name}".format(path=self.path, name=namespace))
delete_timestamp = self.delete_item(f"{self.path}/api/v1/namespaces/{namespace}")
if delete_timestamp is None:
logger.error(f"Created namespace {namespace} but failed to delete it")
return delete_timestamp
@@ -441,45 +428,29 @@ class AccessApiServerActive(ActiveHunter):
role = {
"kind": "Role",
"apiVersion": "rbac.authorization.k8s.io/v1",
"metadata": {
"namespace": namespace,
"name": name
},
"rules": [
{
"apiGroups": [""],
"resources": ["pods"],
"verbs": ["get", "watch", "list"]
}
]
"metadata": {"namespace": namespace, "name": name},
"rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "watch", "list"]}],
}
return self.create_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles",
data=json.dumps(role))
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles", data=json.dumps(role),
)
def create_a_cluster_role(self):
name = str(uuid.uuid4())[0:5]
cluster_role = {
"kind": "ClusterRole",
"apiVersion": "rbac.authorization.k8s.io/v1",
"metadata": {
"name": name
},
"rules": [
{
"apiGroups": [""],
"resources": ["pods"],
"verbs": ["get", "watch", "list"]
}
]
"metadata": {"name": name},
"rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "watch", "list"]}],
}
return self.create_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles",
data=json.dumps(cluster_role))
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles", data=json.dumps(cluster_role),
)
def delete_a_role(self, namespace, name):
delete_timestamp = self.delete_item(
f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles/{name}")
f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles/{name}"
)
if delete_timestamp is None:
logger.error(f"Created role {name} in namespace {namespace} but unable to delete it")
return delete_timestamp
@@ -491,28 +462,17 @@ class AccessApiServerActive(ActiveHunter):
return delete_timestamp
def patch_a_role(self, namespace, role):
data = [
{
"op": "add",
"path": "/hello",
"value": ["world"]
}
]
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles/{role}",
data=json.dumps(data))
data=json.dumps(data),
)
def patch_a_cluster_role(self, cluster_role):
data = [
{
"op": "add",
"path": "/hello",
"value": ["world"]
}
]
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles/{cluster_role}",
data=json.dumps(data))
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles/{cluster_role}", data=json.dumps(data),
)
def execute(self):
# Try creating cluster-wide objects
@@ -529,13 +489,13 @@ class AccessApiServerActive(ActiveHunter):
patch_evidence = self.patch_a_cluster_role(cluster_role)
if patch_evidence:
self.publish_event(PatchAClusterRole(
f"Patched Cluster Role Name: {cluster_role} Patch evidence: {patch_evidence}"))
self.publish_event(
PatchAClusterRole(f"Patched Cluster Role Name: {cluster_role} Patch evidence: {patch_evidence}")
)
delete_timestamp = self.delete_a_cluster_role(cluster_role)
if delete_timestamp:
self.publish_event(DeleteAClusterRole(
f"Cluster role {cluster_role} deletion time {delete_timestamp}"))
self.publish_event(DeleteAClusterRole(f"Cluster role {cluster_role} deletion time {delete_timestamp}"))
# Try attacking all the namespaces we know about
if self.event.namespaces:
@@ -543,28 +503,31 @@ class AccessApiServerActive(ActiveHunter):
# Try creating and deleting a privileged pod
pod_name = self.create_a_pod(namespace, True)
if pod_name:
self.publish_event(CreateAPrivilegedPod(
f"Pod Name: {pod_name} Namespace: {namespace}"))
self.publish_event(CreateAPrivilegedPod(f"Pod Name: {pod_name} Namespace: {namespace}"))
delete_time = self.delete_a_pod(namespace, pod_name)
if delete_time:
self.publish_event(DeleteAPod(
f"Pod Name: {pod_name} deletion time: {delete_time}"))
self.publish_event(DeleteAPod(f"Pod Name: {pod_name} Deletion time: {delete_time}"))
# Try creating, patching and deleting an unprivileged pod
pod_name = self.create_a_pod(namespace, False)
if pod_name:
self.publish_event(CreateAPod(
f"Pod Name: {pod_name} Namespace: {namespace}"))
self.publish_event(CreateAPod(f"Pod Name: {pod_name} Namespace: {namespace}"))
patch_evidence = self.patch_a_pod(namespace, pod_name)
if patch_evidence:
self.publish_event(PatchAPod(
f"Pod Name: {pod_name} Namespace: {namespace} Patch evidence: {patch_evidence}"))
self.publish_event(
PatchAPod(
f"Pod Name: {pod_name} " f"Namespace: {namespace} " f"Patch evidence: {patch_evidence}"
)
)
delete_time = self.delete_a_pod(namespace, pod_name)
if delete_time:
self.publish_event(DeleteAPod(
f"Pod Name: {pod_name} Namespace: {namespace} Delete time: {delete_time}"))
self.publish_event(
DeleteAPod(
f"Pod Name: {pod_name} " f"Namespace: {namespace} " f"Delete time: {delete_time}"
)
)
role = self.create_a_role(namespace)
if role:
@@ -572,13 +535,21 @@ class AccessApiServerActive(ActiveHunter):
patch_evidence = self.patch_a_role(namespace, role)
if patch_evidence:
self.publish_event(PatchARole(
f"Patched Role Name: {role} Namespace: {namespace} Patch evidence: {patch_evidence}"))
self.publish_event(
PatchARole(
f"Patched Role Name: {role} "
f"Namespace: {namespace} "
f"Patch evidence: {patch_evidence}"
)
)
delete_time = self.delete_a_role(namespace, role)
if delete_time:
self.publish_event(DeleteARole(
f"Deleted role: {role} Namespace: {namespace} Delete time: {delete_time}"))
self.publish_event(
DeleteARole(
f"Deleted role: {role} " f"Namespace: {namespace} " f"Delete time: {delete_time}"
)
)
# Note: we are not binding any role or cluster role because
# in certain cases it might affect the running pod within the cluster (and we don't want to do that).
@@ -589,6 +560,7 @@ class ApiVersionHunter(Hunter):
"""Api Version Hunter
Tries to obtain the Api Server's version directly from /version endpoint
"""
def __init__(self, event):
self.event = event
self.path = f"{self.event.protocol}://{self.event.host}:{self.event.port}"
@@ -599,10 +571,12 @@ class ApiVersionHunter(Hunter):
def execute(self):
if self.event.auth_token:
logger.debug("Passive Hunter is attempting to access the API server version end point using the pod's"
f" service account token on {self.event.host}:{self.event.port} \t")
logger.debug(
"Trying to access the API server version endpoint using pod's"
f" service account token on {self.event.host}:{self.event.port} \t"
)
else:
logger.debug("Passive Hunter is attempting to access the API server version end point anonymously")
version = self.session.get(self.path + "/version", timeout=config.network_timeout).json()["gitVersion"]
logger.debug("Trying to access the API server version endpoint anonymously")
version = self.session.get(f"{self.path}/version", timeout=config.network_timeout).json()["gitVersion"]
logger.debug(f"Discovered version of api server {version}")
self.publish_event(K8sVersionDisclosure(version=version, from_endpoint="/version"))

View File

@@ -14,8 +14,11 @@ logger = logging.getLogger(__name__)
class PossibleArpSpoofing(Vulnerability, Event):
"""A malicious pod running on the cluster could potentially run an ARP Spoof attack
and perform a MITM between pods on the node."""
def __init__(self):
Vulnerability.__init__(self, KubernetesCluster, "Possible Arp Spoof", category=IdentityTheft, vid="KHV020")
Vulnerability.__init__(
self, KubernetesCluster, "Possible Arp Spoof", category=IdentityTheft, vid="KHV020",
)
@handler.subscribe(CapNetRawEnabled)
@@ -24,6 +27,7 @@ class ArpSpoofHunter(ActiveHunter):
Checks for the possibility of running an ARP spoof
attack from within a pod (results are based on the running node)
"""
def __init__(self, event):
self.event = event
@@ -47,11 +51,10 @@ class ArpSpoofHunter(ActiveHunter):
return False
def execute(self):
self_ip = sr1(IP(dst="1.1.1.1", ttl=1)/ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
self_ip = sr1(IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
arp_responses, _ = srp(
Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(op=1, pdst=f"{self_ip}/24"),
timeout=config.netork_timeout,
verbose=0)
Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"), timeout=config.network_timeout, verbose=0,
)
# arp enabled on cluster and more than one pod on node
if len(arp_responses) > 1:

View File

@@ -14,10 +14,11 @@ class CapNetRawEnabled(Event, Vulnerability):
If an attacker manages to compromise a pod,
they could potentially take advantage of this capability to perform network
attacks on other pods running on the same node"""
def __init__(self):
Vulnerability.__init__(self, KubernetesCluster,
name="CAP_NET_RAW Enabled",
category=AccessRisk)
Vulnerability.__init__(
self, KubernetesCluster, name="CAP_NET_RAW Enabled", category=AccessRisk,
)
@handler.subscribe(RunningAsPodEvent)
@@ -25,6 +26,7 @@ class PodCapabilitiesHunter(Hunter):
"""Pod Capabilities Hunter
Checks for default enabled capabilities in a pod
"""
def __init__(self, event):
self.event = event

View File

@@ -13,11 +13,11 @@ email_pattern = re.compile(r"([a-z0-9]+@[a-z0-9]+\.[a-z0-9]+)")
class CertificateEmail(Vulnerability, Event):
"""Certificate includes an email address"""
def __init__(self, email):
Vulnerability.__init__(self, KubernetesCluster,
"Certificate Includes Email Address",
category=InformationDisclosure,
khv="KHV021")
Vulnerability.__init__(
self, KubernetesCluster, "Certificate Includes Email Address", category=InformationDisclosure, khv="KHV021",
)
self.email = email
self.evidence = "email: {}".format(self.email)
@@ -27,6 +27,7 @@ class CertificateDiscovery(Hunter):
"""Certificate Email Hunting
Checks for email addresses in kubernetes ssl certificates
"""
def __init__(self, event):
self.event = event
@@ -42,4 +43,4 @@ class CertificateDiscovery(Hunter):
certdata = base64.decodebytes(c)
emails = re.findall(email_pattern, certdata)
for email in emails:
self.publish_event( CertificateEmail(email=email) )
self.publish_event(CertificateEmail(email=email))
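The hunter above base64-decodes certificate data and scans it with `email_pattern`. A standalone sketch of the extraction on a sample string (the pattern only matches lowercase alphanumerics, so e.g. uppercase or dotted local parts would be missed; the decode step is added here because `re.findall` with a `str` pattern needs text, not bytes):

```python
import base64
import re

email_pattern = re.compile(r"([a-z0-9]+@[a-z0-9]+\.[a-z0-9]+)")

def extract_emails(text):
    """Return every address in `text` matching the (deliberately narrow) pattern."""
    return re.findall(email_pattern, text)

# stand-in for a decoded certificate body
certdata = base64.decodebytes(base64.encodebytes(b"CN=kube-apiserver emailAddress=admin1@example.com"))
sample = certdata.decode("ascii", errors="ignore")
print(extract_emails(sample))
```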

View File

@@ -4,89 +4,105 @@ from packaging import version
from kube_hunter.conf import config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, K8sVersionDisclosure
from kube_hunter.core.types import Hunter, KubernetesCluster, RemoteCodeExec, PrivilegeEscalation, \
DenialOfService, KubectlClient
from kube_hunter.core.types import (
Hunter,
KubernetesCluster,
RemoteCodeExec,
PrivilegeEscalation,
DenialOfService,
KubectlClient,
)
from kube_hunter.modules.discovery.kubectl import KubectlClientEvent
logger = logging.getLogger(__name__)
""" Cluster CVES """
class ServerApiVersionEndPointAccessPE(Vulnerability, Event):
"""Node is vulnerable to critical CVE-2018-1002105"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster,
name="Critical Privilege Escalation CVE",
category=PrivilegeEscalation,
vid="KHV022")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Critical Privilege Escalation CVE",
category=PrivilegeEscalation,
vid="KHV022",
)
self.evidence = evidence
class ServerApiVersionEndPointAccessDos(Vulnerability, Event):
"""Node not patched for CVE-2019-1002100. Depending on your RBAC settings,
a crafted json-patch could cause a Denial of Service."""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster,
name="Denial of Service to Kubernetes API Server",
category=DenialOfService,
vid="KHV023")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Denial of Service to Kubernetes API Server",
category=DenialOfService,
vid="KHV023",
)
self.evidence = evidence
class PingFloodHttp2Implementation(Vulnerability, Event):
"""Node not patched for CVE-2019-9512. An attacker could cause a
Denial of Service by sending specially crafted HTTP requests."""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster,
name="Possible Ping Flood Attack",
category=DenialOfService,
vid="KHV024")
Vulnerability.__init__(
self, KubernetesCluster, name="Possible Ping Flood Attack", category=DenialOfService, vid="KHV024",
)
self.evidence = evidence
class ResetFloodHttp2Implementation(Vulnerability, Event):
"""Node not patched for CVE-2019-9514. An attacker could cause a
Denial of Service by sending specially crafted HTTP requests."""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster,
name="Possible Reset Flood Attack",
category=DenialOfService,
vid="KHV025")
Vulnerability.__init__(
self, KubernetesCluster, name="Possible Reset Flood Attack", category=DenialOfService, vid="KHV025",
)
self.evidence = evidence
class ServerApiClusterScopedResourcesAccess(Vulnerability, Event):
"""Api Server not patched for CVE-2019-11247.
API server allows access to custom resources via wrong scope"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster,
name="Arbitrary Access To Cluster Scoped Resources",
category=PrivilegeEscalation,
vid="KHV026")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Arbitrary Access To Cluster Scoped Resources",
category=PrivilegeEscalation,
vid="KHV026",
)
self.evidence = evidence
""" Kubectl CVES """
class IncompleteFixToKubectlCpVulnerability(Vulnerability, Event):
"""The kubectl client is vulnerable to CVE-2019-11246,
an attacker could potentially execute arbitrary code on the client's machine"""
def __init__(self, binary_version):
Vulnerability.__init__(self, KubectlClient,
"Kubectl Vulnerable To CVE-2019-11246",
category=RemoteCodeExec,
vid="KHV027")
Vulnerability.__init__(
self, KubectlClient, "Kubectl Vulnerable To CVE-2019-11246", category=RemoteCodeExec, vid="KHV027",
)
self.binary_version = binary_version
self.evidence = "kubectl version: {}".format(self.binary_version)
class KubectlCpVulnerability(Vulnerability, Event):
"""The kubectl client is vulnerable to CVE-2019-1002101,
an attacker could potentially execute arbitrary code on the client's machine"""
def __init__(self, binary_version):
Vulnerability.__init__(self, KubectlClient,
"Kubectl Vulnerable To CVE-2019-1002101",
category=RemoteCodeExec,
vid="KHV028")
Vulnerability.__init__(
self, KubectlClient, "Kubectl Vulnerable To CVE-2019-1002101", category=RemoteCodeExec, vid="KHV028",
)
self.binary_version = binary_version
self.evidence = "kubectl version: {}".format(self.binary_version)
@@ -96,18 +112,18 @@ class CveUtils:
def get_base_release(full_ver):
# if LegacyVersion, converting manually to a base version
if type(full_ver) == version.LegacyVersion:
return version.parse('.'.join(full_ver._version.split('.')[:2]))
return version.parse('.'.join(map(str, full_ver._version.release[:2])))
return version.parse(".".join(full_ver._version.split(".")[:2]))
return version.parse(".".join(map(str, full_ver._version.release[:2])))
@staticmethod
def to_legacy(full_ver):
# converting version to version.LegacyVersion
return version.LegacyVersion('.'.join(map(str, full_ver._version.release)))
return version.LegacyVersion(".".join(map(str, full_ver._version.release)))
@staticmethod
def to_raw_version(v):
if type(v) != version.LegacyVersion:
return '.'.join(map(str, v._version.release))
return ".".join(map(str, v._version.release))
return v._version
@staticmethod
@@ -115,7 +131,8 @@ class CveUtils:
"""Function compares two versions, handling differences with conversion to LegacyVersion"""
# getting raw version, while stripping the 'v' char at the start, if it exists.
# removing this char lets us safely compare the two versions.
v1_raw, v2_raw = CveUtils.to_raw_version(v1).strip('v'), CveUtils.to_raw_version(v2).strip('v')
v1_raw = CveUtils.to_raw_version(v1).strip("v")
v2_raw = CveUtils.to_raw_version(v2).strip("v")
new_v1 = version.LegacyVersion(v1_raw)
new_v2 = version.LegacyVersion(v2_raw)
@@ -127,7 +144,7 @@ class CveUtils:
@staticmethod
def is_downstream_version(version):
return any(c in version for c in '+-~')
return any(c in version for c in "+-~")
@staticmethod
def is_vulnerable(fix_versions, check_version, ignore_downstream=False):
@@ -183,7 +200,7 @@ class K8sClusterCveHunter(Hunter):
ServerApiVersionEndPointAccessDos: ["1.11.8", "1.12.6", "1.13.4"],
ResetFloodHttp2Implementation: ["1.13.10", "1.14.6", "1.15.3"],
PingFloodHttp2Implementation: ["1.13.10", "1.14.6", "1.15.3"],
ServerApiClusterScopedResourcesAccess: ["1.13.9", "1.14.5", "1.15.2"]
ServerApiClusterScopedResourcesAccess: ["1.13.9", "1.14.5", "1.15.2"],
}
for vulnerability, fix_versions in cve_mapping.items():
if CveUtils.is_vulnerable(fix_versions, self.event.version, not config.include_patched_versions):
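`cve_mapping` lists, per vulnerability, the first patched release of each affected minor branch; a deployment is vulnerable when it sorts below the fix for its own branch. A simplified pure-string sketch of that comparison (the real `CveUtils` additionally handles `LegacyVersion` objects and downstream builds such as `1.13.4+gke`, which this sketch merely strips):

```python
def parse(v):
    """'v1.13.4+gke' -> (1, 13, 4); strips the leading 'v' and build suffixes."""
    core = v.lstrip("v").split("+")[0].split("-")[0]
    return tuple(int(p) for p in core.split("."))

def base_release(v):
    """'1.13.4' -> (1, 13), i.e. the major.minor branch."""
    return parse(v)[:2]

def is_vulnerable(fix_versions, check_version):
    check = parse(check_version)
    for fix in fix_versions:
        if base_release(fix) == base_release(check_version):
            # same branch: vulnerable iff below the fix for that branch
            return check < parse(fix)
    # branches older than every fixed branch never got the patch (simplification)
    return check < min(parse(f) for f in fix_versions)

fix_versions = ["1.11.8", "1.12.6", "1.13.4"]
print(is_vulnerable(fix_versions, "v1.13.3"))  # True: below the 1.13 fix
print(is_vulnerable(fix_versions, "v1.13.4"))  # False: exactly the fix
```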
@@ -195,13 +212,14 @@ class KubectlCVEHunter(Hunter):
"""Kubectl CVE Hunter
Checks if the kubectl client is vulnerable to specific important CVEs
"""
def __init__(self, event):
self.event = event
def execute(self):
cve_mapping = {
KubectlCpVulnerability: ["1.11.9", "1.12.7", "1.13.5", "1.14.0"],
IncompleteFixToKubectlCpVulnerability: ["1.12.9", "1.13.6", "1.14.2"]
IncompleteFixToKubectlCpVulnerability: ["1.12.9", "1.13.6", "1.14.2"],
}
logger.debug(f"Checking known CVEs for kubectl version: {self.event.version}")
for vulnerability, fix_versions in cve_mapping.items():

View File

@@ -13,12 +13,12 @@ logger = logging.getLogger(__name__)
class DashboardExposed(Vulnerability, Event):
"""All operations on the cluster are exposed"""
def __init__(self, nodes):
Vulnerability.__init__(self, KubernetesCluster,
"Dashboard Exposed",
category=RemoteCodeExec,
vid="KHV029")
self.evidence = "nodes: {}".format(' '.join(nodes)) if nodes else None
Vulnerability.__init__(
self, KubernetesCluster, "Dashboard Exposed", category=RemoteCodeExec, vid="KHV029",
)
self.evidence = "nodes: {}".format(" ".join(nodes)) if nodes else None
@handler.subscribe(KubeDashboardEvent)
@@ -26,6 +26,7 @@ class KubeDashboard(Hunter):
"""Dashboard Hunting
Hunts open Dashboards, gets the type of nodes in the cluster
"""
def __init__(self, event):
self.event = event
@@ -33,7 +34,7 @@ class KubeDashboard(Hunter):
logger.debug("Passive hunter is attempting to get the node types of the cluster")
r = requests.get(f"http://{self.event.host}:{self.event.port}/api/v1/node", timeout=config.network_timeout)
if r.status_code == 200 and "nodes" in r.text:
return list(map(lambda node: node["objectMeta"]["name"], json.loads(r.text)["nodes"]))
return [node["objectMeta"]["name"] for node in json.loads(r.text)["nodes"]]
def execute(self):
self.publish_event(DashboardExposed(nodes=self.get_nodes()))
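The refactor above replaces a `map`/`lambda` with a list comprehension that pulls node names out of the dashboard's `/api/v1/node` response. A sketch of the extraction against a hardcoded sample payload:

```python
import json

# stand-in for the dashboard's /api/v1/node response body
sample = json.dumps({
    "nodes": [
        {"objectMeta": {"name": "node-a"}},
        {"objectMeta": {"name": "node-b"}},
    ]
})

def node_names(body):
    """Extract node names exactly as the comprehension in the hunk does."""
    return [node["objectMeta"]["name"] for node in json.loads(body)["nodes"]]

print(node_names(sample))  # ['node-a', 'node-b']
```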

View File

@@ -15,8 +15,11 @@ logger = logging.getLogger(__name__)
class PossibleDnsSpoofing(Vulnerability, Event):
"""A malicious pod running on the cluster could potentially run a DNS Spoof attack
and perform a MITM attack on applications running in the cluster."""
def __init__(self, kubedns_pod_ip):
Vulnerability.__init__(self, KubernetesCluster, "Possible DNS Spoof", category=IdentityTheft, vid="KHV030")
Vulnerability.__init__(
self, KubernetesCluster, "Possible DNS Spoof", category=IdentityTheft, vid="KHV030",
)
self.kubedns_pod_ip = kubedns_pod_ip
self.evidence = "kube-dns at: {}".format(self.kubedns_pod_ip)
@@ -28,11 +31,12 @@ class DnsSpoofHunter(ActiveHunter):
Checks for the possibility for a malicious pod to compromise DNS requests of the cluster
(results are based on the running node)
"""
def __init__(self, event):
self.event = event
def get_cbr0_ip_mac(self):
res = srp1(Ether()/IP(dst="1.1.1.1", ttl=1)/ICMP(), verbose=0, timeout=config.network_timeout)
res = srp1(Ether() / IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)
return res[IP].src, res.src
def extract_nameserver_ip(self):
@@ -47,29 +51,29 @@ class DnsSpoofHunter(ActiveHunter):
# getting actual pod ip of kube-dns service, by comparing the src mac of a dns response and arp scanning.
dns_info_res = srp1(
Ether()/IP(dst=kubedns_svc_ip)/UDP(dport=53)/DNS(rd=1, qd=DNSQR()),
Ether() / IP(dst=kubedns_svc_ip) / UDP(dport=53) / DNS(rd=1, qd=DNSQR()),
verbose=0,
timeout=config.network_timeout)
timeout=config.network_timeout,
)
kubedns_pod_mac = dns_info_res.src
self_ip = dns_info_res[IP].dst
arp_responses, _ = srp(
Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(op=1, pdst=f"{self_ip}/24"),
timeout=config.network_timeout,
verbose=0)
Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"), timeout=config.network_timeout, verbose=0,
)
for _, response in arp_responses:
if response[Ether].src == kubedns_pod_mac:
return response[ARP].psrc, response.src
def execute(self):
logger.debug("Attempting to get kube-dns pod ip")
self_ip = sr1(IP(dst="1.1.1.1", ttl=1)/ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
self_ip = sr1(IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
cbr0_ip, cbr0_mac = self.get_cbr0_ip_mac()
kubedns = self.get_kube_dns_ip_mac()
if kubedns:
kubedns_ip, kubedns_mac = kubedns
logger.debug(f"ip = {self_ip}, kubednsip = {kubedns_ip}, cbr0ip = {cbr0_ip}")
logger.debug(f"ip={self_ip} kubednsip={kubedns_ip} cbr0ip={cbr0_ip}")
if kubedns_mac != cbr0_mac:
# if self pod in the same subnet as kube-dns pod
self.publish_event(PossibleDnsSpoofing(kubedns_pod_ip=kubedns_ip))
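The hunter's final decision above boils down to a MAC comparison: if DNS answers arrive from the kube-dns pod's own MAC rather than from the cbr0 bridge, the pods share an L2 segment and ARP spoofing becomes feasible. A self-contained sketch of that decision (hypothetical helper; the scapy probing itself is omitted):

```python
def spoofing_feasible(kubedns_mac, cbr0_mac):
    # Replies relayed through the bridge carry the bridge's MAC; seeing the
    # pod's own MAC means kube-dns is reachable on our own L2 segment.
    return kubedns_mac.lower() != cbr0_mac.lower()

print(spoofing_feasible("02:42:AC:11:00:05", "02:42:ac:11:00:01"))  # True
print(spoofing_feasible("02:42:ac:11:00:01", "02:42:AC:11:00:01"))  # False
```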

View File

@@ -4,8 +4,15 @@ import requests
from kube_hunter.conf import config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, OpenPortEvent
from kube_hunter.core.types import ActiveHunter, Hunter, KubernetesCluster, \
InformationDisclosure, RemoteCodeExec, UnauthenticatedAccess, AccessRisk
from kube_hunter.core.types import (
ActiveHunter,
Hunter,
KubernetesCluster,
InformationDisclosure,
RemoteCodeExec,
UnauthenticatedAccess,
AccessRisk,
)
logger = logging.getLogger(__name__)
ETCD_PORT = 2379
@@ -19,10 +26,8 @@ class EtcdRemoteWriteAccessEvent(Vulnerability, Event):
def __init__(self, write_res):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd Remote Write Access Event",
category=RemoteCodeExec, vid="KHV031")
self, KubernetesCluster, name="Etcd Remote Write Access Event", category=RemoteCodeExec, vid="KHV031",
)
self.evidence = write_res
@@ -31,11 +36,8 @@ class EtcdRemoteReadAccessEvent(Vulnerability, Event):
def __init__(self, keys):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd Remote Read Access Event",
category=AccessRisk,
vid="KHV032")
self, KubernetesCluster, name="Etcd Remote Read Access Event", category=AccessRisk, vid="KHV032",
)
self.evidence = keys
@@ -44,10 +46,13 @@ class EtcdRemoteVersionDisclosureEvent(Vulnerability, Event):
def __init__(self, version):
Vulnerability.__init__(self, KubernetesCluster,
name="Etcd Remote version disclosure",
category=InformationDisclosure,
vid="KHV033")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd Remote version disclosure",
category=InformationDisclosure,
vid="KHV033",
)
self.evidence = version
@@ -57,10 +62,13 @@ class EtcdAccessEnabledWithoutAuthEvent(Vulnerability, Event):
gain access to the etcd"""
def __init__(self, version):
Vulnerability.__init__(self, KubernetesCluster,
name="Etcd is accessible using insecure connection (HTTP)",
category=UnauthenticatedAccess,
vid="KHV034")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd is accessible using insecure connection (HTTP)",
category=UnauthenticatedAccess,
vid="KHV034",
)
self.evidence = version
@@ -72,18 +80,17 @@ class EtcdRemoteAccessActive(ActiveHunter):
def __init__(self, event):
self.event = event
self.write_evidence = ''
self.write_evidence = ""
def db_keys_write_access(self):
logger.debug(f"Active hunter is attempting to write keys remotely on host {self.event.host}")
data = {
'value': 'remotely written data'
}
logger.debug(f"Trying to write keys remotely on host {self.event.host}")
data = {"value": "remotely written data"}
try:
r = requests.post(
f"{self.protocol}://{self.event.host}:{ETCD_PORT}/v2/keys/message",
data=data,
timeout=config.network_timeout)
timeout=config.network_timeout,
)
self.write_evidence = r.content if r.status_code == 200 and r.content else False
return self.write_evidence
except requests.exceptions.ConnectionError:
@@ -103,51 +110,50 @@ class EtcdRemoteAccess(Hunter):
def __init__(self, event):
self.event = event
self.version_evidence = ''
self.keys_evidence = ''
self.protocol = 'https'
self.version_evidence = ""
self.keys_evidence = ""
self.protocol = "https"
def db_keys_disclosure(self):
logger.debug(f"{self.event.host} Passive hunter is attempting to read etcd keys remotely")
try:
r = requests.get(
f"{self.protocol}://{self.event.host}:{ETCD_PORT}/v2/keys",
verify=False,
timeout=config.network_timeout)
self.keys_evidence = r.content if r.status_code == 200 and r.content != '' else False
f"{self.protocol}://{self.event.host}:{ETCD_PORT}/v2/keys", verify=False, timeout=config.network_timeout,
)
self.keys_evidence = r.content if r.status_code == 200 and r.content != "" else False
return self.keys_evidence
except requests.exceptions.ConnectionError:
return False
def version_disclosure(self):
logger.debug(f"{self.event.host} Passive hunter is attempting to check etcd version remotely")
logger.debug(f"Trying to check etcd version remotely at {self.event.host}")
try:
r = requests.get(
f"{self.protocol}://{self.event.host}:{ETCD_PORT}/version",
verify=False,
timeout=config.network_timeout)
timeout=config.network_timeout,
)
self.version_evidence = r.content if r.status_code == 200 and r.content else False
return self.version_evidence
except requests.exceptions.ConnectionError:
return False
def insecure_access(self):
logger.debug(f"{self.event.host} Passive hunter is attempting to access etcd insecurely")
logger.debug(f"Trying to access etcd insecurely at {self.event.host}")
try:
r = requests.get(
f"http://{self.event.host}:{ETCD_PORT}/version",
verify=False,
timeout=config.network_timeout)
f"http://{self.event.host}:{ETCD_PORT}/version", verify=False, timeout=config.network_timeout,
)
return r.content if r.status_code == 200 and r.content else False
except requests.exceptions.ConnectionError:
return False
def execute(self):
if self.insecure_access(): # make a decision between http and https protocol
self.protocol = 'http'
self.protocol = "http"
if self.version_disclosure():
self.publish_event(EtcdRemoteVersionDisclosureEvent(self.version_evidence))
if self.protocol == 'http':
if self.protocol == "http":
self.publish_event(EtcdAccessEnabledWithoutAuthEvent(self.version_evidence))
if self.db_keys_disclosure():
self.publish_event(EtcdRemoteReadAccessEvent(self.keys_evidence))
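The etcd hunter above probes in a fixed order: try plain HTTP first to pick a protocol, then read `/version` and `/v2/keys`. A sketch of the endpoints involved, built from the constants in the diff (hypothetical helper; the real hunter issues the requests itself):

```python
ETCD_PORT = 2379

def etcd_endpoints(host, protocol="https"):
    # mirrors the order the hunter publishes events in: version disclosure,
    # then unauthenticated access, then remote key reads
    return {
        "insecure_version": f"http://{host}:{ETCD_PORT}/version",
        "version": f"{protocol}://{host}:{ETCD_PORT}/version",
        "keys": f"{protocol}://{host}:{ETCD_PORT}/v2/keys",
    }

print(etcd_endpoints("10.0.0.1", protocol="http")["keys"])  # http://10.0.0.1:2379/v2/keys
```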

View File

@@ -9,9 +9,19 @@ import urllib3
from kube_hunter.conf import config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, K8sVersionDisclosure
from kube_hunter.core.types import Hunter, ActiveHunter, KubernetesCluster, Kubelet, InformationDisclosure, \
RemoteCodeExec, AccessRisk
from kube_hunter.modules.discovery.kubelet import ReadOnlyKubeletEvent, SecureKubeletEvent
from kube_hunter.core.types import (
Hunter,
ActiveHunter,
KubernetesCluster,
Kubelet,
InformationDisclosure,
RemoteCodeExec,
AccessRisk,
)
from kube_hunter.modules.discovery.kubelet import (
ReadOnlyKubeletEvent,
SecureKubeletEvent,
)
logger = logging.getLogger(__name__)
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
@@ -23,10 +33,11 @@ urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
class ExposedPodsHandler(Vulnerability, Event):
"""An attacker could view sensitive information about pods that are
bound to a Node using the /pods endpoint"""
def __init__(self, pods):
Vulnerability.__init__(self, Kubelet,
"Exposed Pods",
category=InformationDisclosure)
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Pods", category=InformationDisclosure,
)
self.pods = pods
self.evidence = f"count: {len(self.pods)}"
@@ -34,80 +45,79 @@ class ExposedPodsHandler(Vulnerability, Event):
class AnonymousAuthEnabled(Vulnerability, Event):
"""The kubelet is misconfigured, potentially allowing secure access to all requests on the kubelet,
without the need to authenticate"""
def __init__(self):
Vulnerability.__init__(self, Kubelet,
"Anonymous Authentication",
category=RemoteCodeExec,
vid="KHV036")
Vulnerability.__init__(
self, component=Kubelet, name="Anonymous Authentication", category=RemoteCodeExec, vid="KHV036",
)
class ExposedContainerLogsHandler(Vulnerability, Event):
"""Output logs from a running container are using the exposed /containerLogs endpoint"""
def __init__(self):
Vulnerability.__init__(self, Kubelet,
"Exposed Container Logs",
category=InformationDisclosure,
vid="KHV037")
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Container Logs", category=InformationDisclosure, vid="KHV037",
)
class ExposedRunningPodsHandler(Vulnerability, Event):
"""Outputs a list of currently running pods,
and some of their metadata, which can reveal sensitive information"""
def __init__(self, count):
Vulnerability.__init__(self, Kubelet, "Exposed Running Pods",
category=InformationDisclosure,
vid="KHV038")
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Running Pods", category=InformationDisclosure, vid="KHV038",
)
self.count = count
self.evidence = "{} running pods".format(self.count)
class ExposedExecHandler(Vulnerability, Event):
"""An attacker could run arbitrary commands on a container"""
def __init__(self):
Vulnerability.__init__(self, Kubelet,
"Exposed Exec On Container",
category=RemoteCodeExec,
vid="KHV039")
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Exec On Container", category=RemoteCodeExec, vid="KHV039",
)
class ExposedRunHandler(Vulnerability, Event):
"""An attacker could run an arbitrary command inside a container"""
def __init__(self):
Vulnerability.__init__(self, Kubelet,
"Exposed Run Inside Container",
category=RemoteCodeExec,
vid="KHV040")
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Run Inside Container", category=RemoteCodeExec, vid="KHV040",
)
class ExposedPortForwardHandler(Vulnerability, Event):
"""An attacker could set port forwarding rule on a pod"""
def __init__(self):
Vulnerability.__init__(self, Kubelet,
"Exposed Port Forward",
category=RemoteCodeExec,
vid="KHV041")
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Port Forward", category=RemoteCodeExec, vid="KHV041",
)
class ExposedAttachHandler(Vulnerability, Event):
"""Opens a websocket that could enable an attacker
to attach to a running container"""
def __init__(self):
Vulnerability.__init__(self, Kubelet,
"Exposed Attaching To Container",
category=RemoteCodeExec,
vid="KHV042")
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Attaching To Container", category=RemoteCodeExec, vid="KHV042",
)
class ExposedHealthzHandler(Vulnerability, Event):
"""By accessing the open /healthz handler,
an attacker could get the cluster health state without authenticating"""
def __init__(self, status):
Vulnerability.__init__(
self,
Kubelet,
"Cluster Health Disclosure",
category=InformationDisclosure,
vid="KHV043")
self, component=Kubelet, name="Cluster Health Disclosure", category=InformationDisclosure, vid="KHV043",
)
self.status = status
self.evidence = f"status: {self.status}"
@@ -115,40 +125,35 @@ class ExposedHealthzHandler(Vulnerability, Event):
class PrivilegedContainers(Vulnerability, Event):
"""A Privileged container exist on a node
could expose the node/cluster to unwanted root operations"""
def __init__(self, containers):
Vulnerability.__init__(self, KubernetesCluster,
"Privileged Container",
category=AccessRisk,
vid="KHV044")
Vulnerability.__init__(
self, component=KubernetesCluster, name="Privileged Container", category=AccessRisk, vid="KHV044",
)
self.containers = containers
self.evidence = f"pod: {containers[0][0]}, " \
f"container: {containers[0][1]}, " \
f"count: {len(containers)}"
self.evidence = f"pod: {containers[0][0]}, " f"container: {containers[0][1]}, " f"count: {len(containers)}"
class ExposedSystemLogs(Vulnerability, Event):
"""System logs are exposed from the /logs endpoint on the kubelet"""
def __init__(self):
Vulnerability.__init__(self, Kubelet,
"Exposed System Logs",
category=InformationDisclosure,
vid="KHV045")
Vulnerability.__init__(
self, component=Kubelet, name="Exposed System Logs", category=InformationDisclosure, vid="KHV045",
)
class ExposedKubeletCmdline(Vulnerability, Event):
"""Commandline flags that were passed to the kubelet can be obtained from the pprof endpoints"""
def __init__(self, cmdline):
Vulnerability.__init__(self, Kubelet,
"Exposed Kubelet Cmdline",
category=InformationDisclosure,
vid="KHV046")
Vulnerability.__init__(
self, component=Kubelet, name="Exposed Kubelet Cmdline", category=InformationDisclosure, vid="KHV046",
)
self.cmdline = cmdline
self.evidence = f"cmdline: {self.cmdline}"
""" Enum containing all of the kubelet handlers """
class KubeletHandlers(Enum):
# GET
PODS = "pods"
@@ -170,31 +175,31 @@ class KubeletHandlers(Enum):
PPROF_CMDLINE = "debug/pprof/cmdline"
""" dividing ports for separate hunters """
@handler.subscribe(ReadOnlyKubeletEvent)
class ReadOnlyKubeletPortHunter(Hunter):
"""Kubelet Readonly Ports Hunter
Hunts specific endpoints on open ports in the readonly Kubelet server
"""
def __init__(self, event):
self.event = event
self.path = f"http://{self.event.host}:{self.event.port}/"
self.path = f"http://{self.event.host}:{self.event.port}"
self.pods_endpoint_data = ""
def get_k8s_version(self):
logger.debug("Passive hunter is attempting to find kubernetes version")
metrics = requests.get(self.path + "metrics", timeout=config.network_timeout).text
metrics = requests.get(f"{self.path}/metrics", timeout=config.network_timeout).text
for line in metrics.split("\n"):
if line.startswith("kubernetes_build_info"):
for info in line[line.find('{') + 1: line.find('}')].split(','):
for info in line[line.find("{") + 1 : line.find("}")].split(","):
k, v = info.split("=")
if k == "gitVersion":
return v.strip("\"")
return v.strip('"')
# returns list of tuples of Privileged container and their pod.
def find_privileged_containers(self):
logger.debug("Passive hunter is attempting to find privileged containers and their pods")
privileged_containers = list()
logger.debug("Trying to find privileged containers and their pods")
privileged_containers = []
if self.pods_endpoint_data:
for pod in self.pods_endpoint_data["items"]:
for container in pod["spec"]["containers"]:
@@ -204,12 +209,12 @@ class ReadOnlyKubeletPortHunter(Hunter):
def get_pods_endpoint(self):
logger.debug("Attempting to find pods endpoints")
response = requests.get(self.path + "pods", timeout=config.network_timeout)
response = requests.get(f"{self.path}/pods", timeout=config.network_timeout)
if "items" in response.text:
return json.loads(response.text)
return response.json()
def check_healthz_endpoint(self):
r = requests.get(self.path + "healthz", verify=False, timeout=config.network_timeout)
r = requests.get(f"{self.path}/healthz", verify=False, timeout=config.network_timeout)
return r.text if r.status_code == 200 else False
def execute(self):
@@ -218,10 +223,9 @@ class ReadOnlyKubeletPortHunter(Hunter):
privileged_containers = self.find_privileged_containers()
healthz = self.check_healthz_endpoint()
if k8s_version:
self.publish_event(K8sVersionDisclosure(
version=k8s_version,
from_endpoint="/metrics",
extra_info="on the Kubelet"))
self.publish_event(
K8sVersionDisclosure(version=k8s_version, from_endpoint="/metrics", extra_info="on Kubelet")
)
if privileged_containers:
self.publish_event(PrivilegedContainers(containers=privileged_containers))
if healthz:
@@ -235,8 +239,10 @@ class SecureKubeletPortHunter(Hunter):
"""Kubelet Secure Ports Hunter
Hunts specific endpoints on an open secured Kubelet
"""
class DebugHandlers(object):
""" all methods will return the handler name if successful """
def __init__(self, path, pod, session=None):
self.path = path
self.session = session if session else requests.Session()
@@ -245,9 +251,7 @@ class SecureKubeletPortHunter(Hunter):
# outputs logs from a specific container
def test_container_logs(self):
logs_url = self.path + KubeletHandlers.CONTAINERLOGS.value.format(
pod_namespace=self.pod["namespace"],
pod_id=self.pod["name"],
container_name=self.pod["container"]
pod_namespace=self.pod["namespace"], pod_id=self.pod["name"], container_name=self.pod["container"],
)
return self.session.get(logs_url, verify=False, timeout=config.network_timeout).status_code == 200
@@ -259,14 +263,14 @@ class SecureKubeletPortHunter(Hunter):
pod_namespace=self.pod["namespace"],
pod_id=self.pod["name"],
container_name=self.pod["container"],
cmd=""
cmd="",
)
return (
"/cri/exec/"
in self.session.get(
exec_url, headers=headers, allow_redirects=False, verify=False, timeout=config.network_timeout,
).text
)
return "/cri/exec/" in self.session.get(
exec_url,
headers=headers,
allow_redirects=False,
verify=False,
timeout=config.network_timeout).text
# need further investigation on websockets protocol for further implementation
def test_port_forward(self):
@@ -275,28 +279,20 @@ class SecureKubeletPortHunter(Hunter):
"Connection": "Upgrade",
"Sec-Websocket-Key": "s",
"Sec-Websocket-Version": "13",
"Sec-Websocket-Protocol": "SPDY"
"Sec-Websocket-Protocol": "SPDY",
}
pf_url = self.path + KubeletHandlers.PORTFORWARD.value.format(
pod_namespace=self.pod["namespace"],
pod_id=self.pod["name"],
port=80
pod_namespace=self.pod["namespace"], pod_id=self.pod["name"], port=80,
)
self.session.get(
pf_url,
headers=headers,
verify=False,
stream=True,
timeout=config.network_timeout).status_code == 200
pf_url, headers=headers, verify=False, stream=True, timeout=config.network_timeout,
).status_code == 200
# TODO: what to return?
# executes one command and returns output
def test_run_container(self):
run_url = self.path + KubeletHandlers.RUN.value.format(
pod_namespace='test',
pod_id='test',
container_name='test',
cmd=""
pod_namespace="test", pod_id="test", container_name="test", cmd="",
)
# if we get a Method Not Allowed, we know we passed Authentication and Authorization.
return self.session.get(run_url, verify=False, timeout=config.network_timeout).status_code == 405
@@ -314,48 +310,52 @@ class SecureKubeletPortHunter(Hunter):
pod_namespace=self.pod["namespace"],
pod_id=self.pod["name"],
container_name=self.pod["container"],
cmd=""
cmd="",
)
return (
"/cri/attach/"
in self.session.get(
attach_url, allow_redirects=False, verify=False, timeout=config.network_timeout,
).text
)
return "/cri/attach/" in self.session.get(
attach_url,
allow_redirects=False,
verify=False,
timeout=config.network_timeout).text
# checks access to logs endpoint
def test_logs_endpoint(self):
logs_url = self.session.get(
self.path + KubeletHandlers.LOGS.value.format(path=""),
timeout=config.network_timeout).text
self.path + KubeletHandlers.LOGS.value.format(path=""), timeout=config.network_timeout,
).text
return "<pre>" in logs_url
# returns the cmd line used to run the kubelet
def test_pprof_cmdline(self):
cmd = self.session.get(
self.path + KubeletHandlers.PPROF_CMDLINE.value,
verify=False,
timeout=config.network_timeout)
self.path + KubeletHandlers.PPROF_CMDLINE.value, verify=False, timeout=config.network_timeout,
)
return cmd.text if cmd.status_code == 200 else None
def __init__(self, event):
self.event = event
self.session = requests.Session()
if self.event.secure:
self.session.headers.update({"Authorization": "Bearer {}".format(self.event.auth_token)})
self.session.headers.update({"Authorization": f"Bearer {self.event.auth_token}"})
# self.session.cert = self.event.client_cert
# copy session to event
self.event.session = self.session
self.path = "https://{}:{}/".format(self.event.host, 10250)
self.kubehunter_pod = {"name": "kube-hunter", "namespace": "default", "container": "kube-hunter"}
self.path = f"https://{self.event.host}:10250"
self.kubehunter_pod = {
"name": "kube-hunter",
"namespace": "default",
"container": "kube-hunter",
}
self.pods_endpoint_data = ""
def get_pods_endpoint(self):
response = self.session.get(self.path + "pods", verify=False, timeout=config.network_timeout)
response = self.session.get(f"{self.path}/pods", verify=False, timeout=config.network_timeout)
if "items" in response.text:
return response.json()
def check_healthz_endpoint(self):
r = requests.get(self.path + "healthz", verify=False, timeout=config.network_timeout)
r = requests.get(f"{self.path}/healthz", verify=False, timeout=config.network_timeout)
return r.text if r.status_code == 200 else False
def execute(self):
@@ -374,7 +374,7 @@ class SecureKubeletPortHunter(Hunter):
# if kube-hunter runs in a pod, we test with kube-hunter's pod
pod = self.kubehunter_pod if config.pod else self.get_random_pod()
if pod:
debug_handlers = self.DebugHandlers(self.path, pod=pod, session=self.session)
debug_handlers = self.DebugHandlers(self.path, pod, self.session)
try:
# TODO: use named expressions, introduced in python3.8
running_pods = debug_handlers.test_running_pods()
@@ -419,7 +419,7 @@ class SecureKubeletPortHunter(Hunter):
return {
"name": pod_data["metadata"]["name"],
"container": container_data["name"],
"namespace": pod_data["metadata"]["namespace"]
"namespace": pod_data["metadata"]["namespace"],
}
@@ -428,34 +428,39 @@ class ProveRunHandler(ActiveHunter):
"""Kubelet Run Hunter
Executes uname inside of a random container
"""
def __init__(self, event):
self.event = event
self.base_path = f"https://{self.event.host}:{self.event.port}/"
self.base_path = f"https://{self.event.host}:{self.event.port}"
def run(self, command, container):
run_url = KubeletHandlers.RUN.value.format(
pod_namespace=container["namespace"],
pod_id=container["pod"],
container_name=container["name"],
cmd=command
cmd=command,
)
return self.event.session.post(self.base_path + run_url, verify=False, timeout=config.network_timeout).text
return self.event.session.post(
f"{self.base_path}/{run_url}", verify=False, timeout=config.network_timeout,
).text
def execute(self):
pods_raw = self.event.session.get(
self.base_path + KubeletHandlers.PODS.value,
verify=False,
timeout=config.network_timeout).text
if "items" in pods_raw:
pods_data = json.loads(pods_raw)['items']
r = self.event.session.get(
self.base_path + KubeletHandlers.PODS.value, verify=False, timeout=config.network_timeout,
)
if "items" in r.text:
pods_data = r.json()["items"]
for pod_data in pods_data:
container_data = next((container_data for container_data in pod_data["spec"]["containers"]), None)
container_data = next(iter(pod_data["spec"]["containers"]), None)
if container_data:
output = self.run("uname -a", container={
"namespace": pod_data["metadata"]["namespace"],
"pod": pod_data["metadata"]["name"],
"name": container_data["name"]
})
output = self.run(
"uname -a",
container={
"namespace": pod_data["metadata"]["namespace"],
"pod": pod_data["metadata"]["name"],
"name": container_data["name"],
},
)
if output and "exited with" not in output:
self.event.evidence = "uname -a: " + output
break
@@ -466,6 +471,7 @@ class ProveContainerLogsHandler(ActiveHunter):
"""Kubelet Container Logs Hunter
Retrieves logs from a random container
"""
def __init__(self, event):
self.event = event
protocol = "https" if self.event.port == 10250 else "http"
@@ -473,24 +479,26 @@ class ProveContainerLogsHandler(ActiveHunter):
def execute(self):
pods_raw = self.event.session.get(
self.base_url + KubeletHandlers.PODS.value,
verify=False,
timeout=config.network_timeout).text
self.base_url + KubeletHandlers.PODS.value, verify=False, timeout=config.network_timeout,
).text
if "items" in pods_raw:
pods_data = json.loads(pods_raw)['items']
pods_data = json.loads(pods_raw)["items"]
for pod_data in pods_data:
container_data = next((container_data for container_data in pod_data["spec"]["containers"]), None)
container_data = next(iter(pod_data["spec"]["containers"]), None)
if container_data:
output = requests.get(self.base_url + KubeletHandlers.CONTAINERLOGS.value.format(
pod_namespace=pod_data["metadata"]["namespace"],
pod_id=pod_data["metadata"]["name"],
container_name=container_data["name"]
), verify=False, timeout=config.network_timeout)
container_name = container_data["name"]
output = requests.get(
f"{self.base_url}/"
+ KubeletHandlers.CONTAINERLOGS.value.format(
pod_namespace=pod_data["metadata"]["namespace"],
pod_id=pod_data["metadata"]["name"],
container_name=container_name,
),
verify=False,
timeout=config.network_timeout,
)
if output.status_code == 200 and output.text:
self.event.evidence = "{}: {}".format(
container_data["name"],
output.text.encode('utf-8')
)
self.event.evidence = f"{container_name}: {output.text}"
return
@@ -499,19 +507,21 @@ class ProveSystemLogs(ActiveHunter):
"""Kubelet System Logs Hunter
Retrieves commands from host's system audit
"""
def __init__(self, event):
self.event = event
self.base_url = "https://{host}:{port}/".format(host=self.event.host, port=self.event.port)
self.base_url = f"https://{self.event.host}:{self.event.port}"
def execute(self):
audit_logs = self.event.session.get(
self.base_url + KubeletHandlers.LOGS.value.format(path="audit/audit.log"),
f"{self.base_url}/" + KubeletHandlers.LOGS.value.format(path="audit/audit.log"),
verify=False,
timeout=config.network_timeout).text
logger.debug(f"accessed audit log of host: {audit_logs[:10]}")
timeout=config.network_timeout,
).text
logger.debug(f"Audit log of host {self.event.host}: {audit_logs[:10]}")
# iterating over proctitles and converting them into readable strings
proctitles = list()
proctitles = []
for proctitle in re.findall(r"proctitle=(\w+)", audit_logs):
proctitles.append(bytes.fromhex(proctitle).decode('utf-8').replace("\x00", " "))
proctitles.append(bytes.fromhex(proctitle).decode("utf-8").replace("\x00", " "))
self.event.proctitles = proctitles
self.event.evidence = "audit log: {}".format('; '.join(proctitles))
self.event.evidence = f"audit log: {proctitles}"
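`ProveSystemLogs` above recovers commands from the host's audit log, where each `proctitle` field is the hex-encoded command line with NUL-separated arguments. A minimal sketch of that decoding step:

```python
import re

def decode_proctitles(audit_log):
    # hex -> bytes -> utf-8, then turn argv NUL separators back into spaces
    return [
        bytes.fromhex(match).decode("utf-8").replace("\x00", " ")
        for match in re.findall(r"proctitle=(\w+)", audit_log)
    ]

encoded = "cat\x00/etc/hosts".encode("utf-8").hex().upper()
print(decode_proctitles(f"type=PROCTITLE proctitle={encoded}"))  # ['cat /etc/hosts']
```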

View File

@@ -1,37 +1,44 @@
import logging
import re
import uuid
from kube_hunter.conf import config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import ActiveHunter, Hunter, KubernetesCluster, PrivilegeEscalation
from kube_hunter.modules.hunting.kubelet import ExposedPodsHandler, ExposedRunHandler, KubeletHandlers
from kube_hunter.core.types import (
ActiveHunter,
Hunter,
KubernetesCluster,
PrivilegeEscalation,
)
from kube_hunter.modules.hunting.kubelet import (
ExposedPodsHandler,
ExposedRunHandler,
KubeletHandlers,
)
logger = logging.getLogger(__name__)
class WriteMountToVarLog(Vulnerability, Event):
"""A pod can create symlinks in the /var/log directory on the host, which can lead to a root directory traversal"""
def __init__(self, pods):
Vulnerability.__init__(
self,
KubernetesCluster,
"Pod With Mount To /var/log",
category=PrivilegeEscalation,
vid="KHV047")
self, KubernetesCluster, "Pod With Mount To /var/log", category=PrivilegeEscalation, vid="KHV047",
)
self.pods = pods
self.evidence = "pods: {}".format(', '.join((pod["metadata"]["name"] for pod in self.pods)))
self.evidence = "pods: {}".format(", ".join((pod["metadata"]["name"] for pod in self.pods)))
class DirectoryTraversalWithKubelet(Vulnerability, Event):
"""An attacker can run commands on pods with mount to /var/log,
and traverse read all files on the host filesystem"""
def __init__(self, output):
Vulnerability.__init__(
self,
KubernetesCluster,
"Root Traversal Read On The Kubelet",
category=PrivilegeEscalation)
self, KubernetesCluster, "Root Traversal Read On The Kubelet", category=PrivilegeEscalation,
)
self.output = output
self.evidence = "output: {}".format(self.output)
@@ -42,6 +49,7 @@ class VarLogMountHunter(Hunter):
Hunt pods that have write access to host's /var/log. in such case,
the pod can traverse read files on the host machine
"""
def __init__(self, event):
self.event = event
@@ -67,26 +75,23 @@ class ProveVarLogMount(ActiveHunter):
"""Prove /var/log Mount Hunter
Tries to read /etc/shadow on the host by running commands inside a pod with host mount to /var/log
"""
def __init__(self, event):
self.event = event
self.base_path = "https://{host}:{port}/".format(host=self.event.host, port=self.event.port)
self.base_path = f"https://{self.event.host}:{self.event.port}"
def run(self, command, container):
run_url = KubeletHandlers.RUN.value.format(
podNamespace=container["namespace"],
podID=container["pod"],
containerName=container["name"],
cmd=command
podNamespace=container["namespace"], podID=container["pod"], containerName=container["name"], cmd=command,
)
return self.event.session.post(self.base_path + run_url, verify=False).text
return self.event.session.post(f"{self.base_path}/{run_url}", verify=False).text
# TODO: replace with multiple subscription to WriteMountToVarLog as well
def get_varlog_mounters(self):
logger.debug("accessing /pods manually on ProveVarLogMount")
pods = self.event.session.get(
self.base_path + KubeletHandlers.PODS.value,
verify=False,
timeout=config.network_timeout).json()["items"]
f"{self.base_path}/" + KubeletHandlers.PODS.value, verify=False, timeout=config.network_timeout,
).json()["items"]
for pod in pods:
volume = VarLogMountHunter(ExposedPodsHandler(pods=pods)).has_write_mount_to(pod, "/var/log")
if volume:
@@ -104,15 +109,16 @@ class ProveVarLogMount(ActiveHunter):
"""Returns content of file on the host, and cleans trails"""
symlink_name = str(uuid.uuid4())
# creating symlink to file
self.run("ln -s {} {}/{}".format(host_file, mount_path, symlink_name), container=container)
self.run(f"ln -s {host_file} {mount_path}/{symlink_name}", container)
# following symlink with kubelet
path_in_logs_endpoint = KubeletHandlers.LOGS.value.format(path=host_path.strip('/var/log')+symlink_name)
path_in_logs_endpoint = KubeletHandlers.LOGS.value.format(
path=re.sub(r"^/var/log", "", host_path) + symlink_name
)
content = self.event.session.get(
self.base_path + path_in_logs_endpoint,
verify=False,
timeout=config.network_timeout).text
f"{self.base_path}/{path_in_logs_endpoint}", verify=False, timeout=config.network_timeout,
).text
# removing symlink
self.run("rm {}/{}".format(mount_path, symlink_name), container=container)
self.run(f"rm {mount_path}/{symlink_name}", container=container)
return content
def execute(self):
@@ -126,10 +132,8 @@ class ProveVarLogMount(ActiveHunter):
}
try:
output = self.traverse_read(
"/etc/shadow",
container=cont,
mount_path=mount_path,
host_path=volume["hostPath"]["path"])
"/etc/shadow", container=cont, mount_path=mount_path, host_path=volume["hostPath"]["path"],
)
self.publish_event(DirectoryTraversalWithKubelet(output=output))
except Exception:
logger.debug("Could not exploit /var/log", exc_info=True)
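One of the real fixes in this file is the switch from `host_path.strip('/var/log')` to `re.sub(r"^/var/log", "", host_path)`: `str.strip` removes a *character set* from both ends rather than a prefix, which mangles host paths that merely share characters with `/var/log`. A short demonstration (`/var/logger` is a hypothetical hostPath):

```python
import re

host_path = "/var/logger"

# strip() treats its argument as a set of characters to remove from both ends
print(host_path.strip("/var/log"))          # 'e'  (mangled)

# an anchored re.sub removes only the literal prefix
print(re.sub(r"^/var/log", "", host_path))  # 'ger'
```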

View File

@@ -6,7 +6,12 @@ from enum import Enum
from kube_hunter.conf import config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability, K8sVersionDisclosure
from kube_hunter.core.types import ActiveHunter, Hunter, KubernetesCluster, InformationDisclosure
from kube_hunter.core.types import (
ActiveHunter,
Hunter,
KubernetesCluster,
InformationDisclosure,
)
from kube_hunter.modules.discovery.dashboard import KubeDashboardEvent
from kube_hunter.modules.discovery.proxy import KubeProxyEvent
@@ -15,8 +20,11 @@ logger = logging.getLogger(__name__)
class KubeProxyExposed(Vulnerability, Event):
"""All operations on the cluster are exposed"""
def __init__(self):
Vulnerability.__init__(self, KubernetesCluster, "Proxy Exposed", category=InformationDisclosure, vid="KHV049")
Vulnerability.__init__(
self, KubernetesCluster, "Proxy Exposed", category=InformationDisclosure, vid="KHV049",
)
class Service(Enum):
@@ -28,23 +36,24 @@ class KubeProxy(Hunter):
"""Proxy Hunting
Hunts for a dashboard behind the proxy
"""
def __init__(self, event):
self.event = event
self.api_url = "http://{host}:{port}/api/v1".format(host=self.event.host, port=self.event.port)
self.api_url = f"http://{self.event.host}:{self.event.port}/api/v1"
def execute(self):
self.publish_event(KubeProxyExposed())
for namespace, services in self.services.items():
for service in services:
if service == Service.DASHBOARD.value:
logger.debug(f"Found a dashboard service \"{service}\"")
logger.debug(f"Found a dashboard service '{service}'")
# TODO: check if /proxy is a convention on other services
curr_path = f"api/v1/namespaces/{namespace}/services/{service}/proxy"
self.publish_event(KubeDashboardEvent(path=curr_path, secure=False))
@property
def namespaces(self):
resource_json = requests.get(self.api_url + "/namespaces", timeout=config.network_timeout).json()
resource_json = requests.get(f"{self.api_url}/namespaces", timeout=config.network_timeout).json()
return self.extract_names(resource_json)
@property
@@ -52,8 +61,8 @@ class KubeProxy(Hunter):
# map between namespaces and service names
services = dict()
for namespace in self.namespaces:
resource_path = "/namespaces/{ns}/services".format(ns=namespace)
resource_json = requests.get(self.api_url + resource_path, timeout=config.network_timeout).json()
resource_path = f"{self.api_url}/namespaces/{namespace}/services"
resource_json = requests.get(resource_path, timeout=config.network_timeout).json()
services[namespace] = self.extract_names(resource_json)
logger.debug(f"Enumerated services [{' '.join(services)}]")
return services
@@ -71,14 +80,14 @@ class ProveProxyExposed(ActiveHunter):
"""Build Date Hunter
Hunts when proxy is exposed, extracts the build date of kubernetes
"""
def __init__(self, event):
self.event = event
def execute(self):
version_metadata = requests.get(
f"http://{self.event.host}:{self.event.port}/version",
verify=False,
timeout=config.network_timeout).json()
f"http://{self.event.host}:{self.event.port}/version", verify=False, timeout=config.network_timeout,
).json()
if "buildDate" in version_metadata:
self.event.evidence = "build date: {}".format(version_metadata["buildDate"])
@@ -88,16 +97,17 @@ class K8sVersionDisclosureProve(ActiveHunter):
"""K8s Version Hunter
Hunts Proxy when exposed, extracts the version
"""
def __init__(self, event):
self.event = event
def execute(self):
version_metadata = requests.get(
f"http://{self.event.host}:{self.event.port}/version",
verify=False,
timeout=config.network_timeout).json()
f"http://{self.event.host}:{self.event.port}/version", verify=False, timeout=config.network_timeout,
).json()
if "gitVersion" in version_metadata:
self.publish_event(K8sVersionDisclosure(
version=version_metadata["gitVersion"],
from_endpoint="/version",
extra_info="on the kube-proxy"))
self.publish_event(
K8sVersionDisclosure(
version=version_metadata["gitVersion"], from_endpoint="/version", extra_info="on kube-proxy",
)
)


@@ -13,8 +13,13 @@ class ServiceAccountTokenAccess(Vulnerability, Event):
""" Accessing the pod service account token gives an attacker the option to use the server API """
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Read access to pod's service account token",
category=AccessRisk, vid="KHV050")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Read access to pod's service account token",
category=AccessRisk,
vid="KHV050",
)
self.evidence = evidence
@@ -22,7 +27,9 @@ class SecretsAccess(Vulnerability, Event):
""" Accessing the pod's secrets within a compromised pod might disclose valuable data to a potential attacker"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Access to pod's secrets", category=AccessRisk)
Vulnerability.__init__(
self, component=KubernetesCluster, name="Access to pod's secrets", category=AccessRisk,
)
self.evidence = evidence
@@ -34,12 +41,15 @@ class AccessSecrets(Hunter):
def __init__(self, event):
self.event = event
self.secrets_evidence = ''
self.secrets_evidence = ""
def get_services(self):
logger.debug("Trying to access pod's secrets directory")
# get all files and subdirectories files:
self.secrets_evidence = [val for sublist in [[os.path.join(i[0], j) for j in i[2]] for i in os.walk('/var/run/secrets/')] for val in sublist]
self.secrets_evidence = []
for dirname, _, files in os.walk("/var/run/secrets/"):
for f in files:
self.secrets_evidence.append(os.path.join(dirname, f))
return True if (len(self.secrets_evidence) > 0) else False
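The refactor above replaces a nested list comprehension with an explicit `os.walk` loop. Both enumerate the same files; the loop is simply easier to read. A self-contained sketch of the equivalence, using a temporary directory rather than `/var/run/secrets/`:

```python
import os
import tempfile

# Build a small tree to walk: root/a.txt and root/sub/b.txt.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
for name in ("a.txt", os.path.join("sub", "b.txt")):
    open(os.path.join(root, name), "w").close()

# Old style: one flattening comprehension.
flat = [os.path.join(d, f) for d, _, files in os.walk(root) for f in files]

# New style: explicit nested loop, same result.
explicit = []
for dirname, _, files in os.walk(root):
    for f in files:
        explicit.append(os.path.join(dirname, f))

assert sorted(flat) == sorted(explicit)
```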
def execute(self):


@@ -1 +1,3 @@
from kube_hunter.modules.report.factory import get_reporter, get_dispatcher
__all__ = [get_reporter, get_dispatcher]


@@ -1,6 +1,11 @@
from kube_hunter.core.types import Discovery
from kube_hunter.modules.report.collector import services, vulnerabilities, \
hunters, services_lock, vulnerabilities_lock
from kube_hunter.modules.report.collector import (
services,
vulnerabilities,
hunters,
services_lock,
vulnerabilities_lock,
)
class BaseReporter(object):
@@ -11,55 +16,48 @@ class BaseReporter(object):
for service in services:
node_location = str(service.host)
if node_location not in node_locations:
nodes.append({
"type": "Node/Master",
"location": str(service.host)
})
nodes.append({"type": "Node/Master", "location": node_location})
node_locations.add(node_location)
return nodes
def get_services(self):
with services_lock:
services_data = [{
"service": service.get_name(),
"location": f"{service.host}:"
f"{service.port}"
f"{service.get_path()}",
"description": service.explain()
} for service in services]
return services_data
return [
{"service": service.get_name(), "location": f"{service.host}:{service.port}{service.get_path()}"}
for service in services
]
def get_vulnerabilities(self):
with vulnerabilities_lock:
vulnerabilities_data = [{
"location": vuln.location(),
"vid": vuln.get_vid(),
"category": vuln.category.name,
"severity": vuln.get_severity(),
"vulnerability": vuln.get_name(),
"description": vuln.explain(),
"evidence": str(vuln.evidence),
"hunter": vuln.hunter.get_name()
} for vuln in vulnerabilities]
return vulnerabilities_data
return [
{
"location": vuln.location(),
"vid": vuln.get_vid(),
"category": vuln.category.name,
"severity": vuln.get_severity(),
"vulnerability": vuln.get_name(),
"description": vuln.explain(),
"evidence": str(vuln.evidence),
"hunter": vuln.hunter.get_name(),
}
for vuln in vulnerabilities
]
def get_hunter_statistics(self):
hunters_data = list()
hunters_data = []
for hunter, docs in hunters.items():
if not Discovery in hunter.__mro__:
if Discovery not in hunter.__mro__:
name, doc = hunter.parse_docs(docs)
hunters_data.append({
"name": name,
"description": doc,
"vulnerabilities": hunter.publishedVulnerabilities
})
hunters_data.append(
{"name": name, "description": doc, "vulnerabilities": hunter.publishedVulnerabilities}
)
return hunters_data
def get_report(self, *, statistics, **kwargs):
report = {
"nodes": self.get_nodes(),
"services": self.get_services(),
"vulnerabilities": self.get_vulnerabilities()
"vulnerabilities": self.get_vulnerabilities(),
}
if statistics:


@@ -3,7 +3,14 @@ import threading
from kube_hunter.conf import config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Service, Vulnerability, HuntFinished, HuntStarted, ReportDispatched
from kube_hunter.core.events.types import (
Event,
Service,
Vulnerability,
HuntFinished,
HuntStarted,
ReportDispatched,
)
logger = logging.getLogger(__name__)
@@ -32,11 +39,11 @@ class Collector(object):
if Service in bases:
with services_lock:
services.append(self.event)
logger.info(f"Found open service \"{self.event.get_name()}\" at {self.event.location()}")
logger.info(f'Found open service "{self.event.get_name()}" at {self.event.location()}')
elif Vulnerability in bases:
with vulnerabilities_lock:
vulnerabilities.append(self.event)
logger.info(f"Found vulnerability \"{self.event.get_name()}\" in {self.event.location()}")
logger.info(f'Found vulnerability "{self.event.get_name()}" in {self.event.location()}')
class TablesPrinted(Event):


@@ -2,39 +2,26 @@ import logging
import os
import requests
from kube_hunter.conf import config
logger = logging.getLogger(__name__)
class HTTPDispatcher(object):
def dispatch(self, report):
logger.debug("Dispatching report via HTTP")
dispatch_method = os.environ.get(
'KUBEHUNTER_HTTP_DISPATCH_METHOD',
'POST'
).upper()
dispatch_url = os.environ.get(
'KUBEHUNTER_HTTP_DISPATCH_URL',
'https://localhost/'
)
dispatch_method = os.environ.get("KUBEHUNTER_HTTP_DISPATCH_METHOD", "POST").upper()
dispatch_url = os.environ.get("KUBEHUNTER_HTTP_DISPATCH_URL", "https://localhost/")
try:
r = requests.request(
dispatch_method,
dispatch_url,
json=report,
headers={'Content-Type': 'application/json'}
dispatch_method, dispatch_url, json=report, headers={"Content-Type": "application/json"},
)
r.raise_for_status()
logger.info(f"Report was dispatched to: {dispatch_url}")
logger.debug(f"Dispatch responded {r.status_code} with: {r.text}")
except requests.HTTPError as e:
# specific http exceptions
logger.exception(f"Failed making HTTP {dispatch_method} to {dispatch_url}, status code {r.status_code}")
except Exception as e:
# default all exceptions
logger.exception(f"Could not dispatch report using HTTP {dispatch_method} to {dispatch_url}")
except requests.HTTPError:
logger.exception(f"Failed making HTTP {dispatch_method} to {dispatch_url}, " f"status code {r.status_code}")
except Exception:
logger.exception(f"Could not dispatch report to {dispatch_url}")
class STDOUTDispatcher(object):


@@ -1,21 +1,23 @@
from kube_hunter.modules.report.json import JSONReporter
from kube_hunter.modules.report.yaml import YAMLReporter
from kube_hunter.modules.report.plain import PlainReporter
from kube_hunter.modules.report.dispatchers import \
STDOUTDispatcher, HTTPDispatcher
from kube_hunter.modules.report.dispatchers import STDOUTDispatcher, HTTPDispatcher
import logging
logger = logging.getLogger(__name__)
DEFAULT_REPORTER = "plain"
reporters = {
'yaml': YAMLReporter,
'json': JSONReporter,
'plain': PlainReporter
"yaml": YAMLReporter,
"json": JSONReporter,
"plain": PlainReporter,
}
DEFAULT_DISPATCHER = "stdout"
dispatchers = {
'stdout': STDOUTDispatcher,
'http': HTTPDispatcher
"stdout": STDOUTDispatcher,
"http": HTTPDispatcher,
}
@@ -23,13 +25,13 @@ def get_reporter(name):
try:
return reporters[name.lower()]()
except KeyError:
logger.warning(f"Unknown reporter \"{name}\", using plain")
return reporters['plain']()
logger.warning(f'Unknown reporter "{name}", using {DEFAULT_REPORTER}')
return reporters[DEFAULT_REPORTER]()
def get_dispatcher(name):
try:
return dispatchers[name.lower()]()
except KeyError:
logger.warning(f"Unknown dispatcher \"{name}\", using stdout")
return dispatchers['stdout']()
logger.warning(f'Unknown dispatcher "{name}", using {DEFAULT_DISPATCHER}')
return dispatchers[DEFAULT_DISPATCHER]()
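The factory above boils down to a case-insensitive dictionary lookup with a logged fallback to a named default. A stripped-down sketch of that pattern (placeholder strings stand in for the real reporter classes):

```python
import logging

logger = logging.getLogger(__name__)

DEFAULT = "plain"
registry = {"plain": lambda: "PlainReporter", "json": lambda: "JSONReporter"}

def get(name):
    # Case-insensitive lookup; unknown names fall back to the default
    # with a warning, mirroring get_reporter/get_dispatcher above.
    try:
        return registry[name.lower()]()
    except KeyError:
        logger.warning('Unknown reporter "%s", using %s', name, DEFAULT)
        return registry[DEFAULT]()

assert get("JSON") == "JSONReporter"
assert get("nope") == "PlainReporter"
```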


@@ -3,8 +3,13 @@ from __future__ import print_function
from prettytable import ALL, PrettyTable
from kube_hunter.modules.report.base import BaseReporter
from kube_hunter.modules.report.collector import services, vulnerabilities, \
hunters, services_lock, vulnerabilities_lock
from kube_hunter.modules.report.collector import (
services,
vulnerabilities,
hunters,
services_lock,
vulnerabilities_lock,
)
EVIDENCE_PREVIEW = 40
MAX_TABLE_WIDTH = 20
@@ -12,7 +17,6 @@ KB_LINK = "https://github.com/aquasecurity/kube-hunter/tree/master/docs/_kb"
class PlainReporter(BaseReporter):
def get_report(self, *, statistics=None, mapping=None, **kwargs):
"""generates report tables"""
output = ""
@@ -74,19 +78,20 @@ class PlainReporter(BaseReporter):
with services_lock:
for service in services:
services_table.add_row(
[service.get_name(),
f"{service.host}:"
f"{service.port}"
f"{service.get_path()}",
service.explain()])
detected_services_ret = "\nDetected Services\n" \
f"{services_table}\n"
[service.get_name(), f"{service.host}:{service.port}{service.get_path()}", service.explain()]
)
detected_services_ret = f"\nDetected Services\n{services_table}\n"
return detected_services_ret
def vulns_table(self):
column_names = ["ID", "Location",
"Category", "Vulnerability",
"Description", "Evidence"]
column_names = [
"ID",
"Location",
"Category",
"Vulnerability",
"Description",
"Evidence",
]
vuln_table = PrettyTable(column_names, hrules=ALL)
vuln_table.align = "l"
vuln_table.max_width = MAX_TABLE_WIDTH
@@ -97,10 +102,23 @@ class PlainReporter(BaseReporter):
with vulnerabilities_lock:
for vuln in vulnerabilities:
evidence = str(vuln.evidence)[:EVIDENCE_PREVIEW] + "..." if len(str(vuln.evidence)) > EVIDENCE_PREVIEW else str(vuln.evidence)
row = [vuln.get_vid(), vuln.location(), vuln.category.name, vuln.get_name(), vuln.explain(), evidence]
evidence = str(vuln.evidence)
if len(evidence) > EVIDENCE_PREVIEW:
evidence = evidence[:EVIDENCE_PREVIEW] + "..."
row = [
vuln.get_vid(),
vuln.location(),
vuln.category.name,
vuln.get_name(),
vuln.explain(),
evidence,
]
vuln_table.add_row(row)
return "\nVulnerabilities\nFor further information about a vulnerability, search its ID in: \n{}\n{}\n".format(KB_LINK, vuln_table)
return (
"\nVulnerabilities\n"
"For further information about a vulnerability, search its ID in: \n"
f"{KB_LINK}\n{vuln_table}\n"
)
def hunters_table(self):
column_names = ["Name", "Description", "Vulnerabilities"]
@@ -115,4 +133,4 @@ class PlainReporter(BaseReporter):
hunter_statistics = self.get_hunter_statistics()
for item in hunter_statistics:
hunters_table.add_row([item.get("name"), item.get("description"), item.get("vulnerabilities")])
return "\nHunter Statistics\n{}\n".format(hunters_table)
return f"\nHunter Statistics\n{hunters_table}\n"


@@ -2,6 +2,6 @@ from os.path import dirname, basename, isfile
import glob
# dynamically importing all modules in folder
files = glob.glob(dirname(__file__)+"/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith('__init__.py')):
exec('from .{} import *'.format(module_name))
files = glob.glob(dirname(__file__) + "/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith("__init__.py")):
exec(f"from .{module_name} import *")

pyproject.toml Normal file

@@ -0,0 +1,32 @@
[tool.black]
line-length = 120
target-version = ['py36']
include = '\.pyi?$'
exclude = '''
(
\.eggs
| \.git
| \.hg
| \.mypy_cache
| \.tox
| \venv
| \.venv
| _build
| buck-out
| build
| dist
| \.vscode
| \.idea
| \.Python
| develop-eggs
| downloads
| eggs
| lib
| lib64
| parts
| sdist
| var
| .*\.egg-info
| \.DS_Store
)
'''


@@ -10,3 +10,6 @@ setuptools_scm
twine
pyinstaller
staticx
black
pre-commit
flake8-bugbear


@@ -1,12 +0,0 @@
#!/usr/bin/env python3
import argparse
import pytest
import tests
def main():
exit(pytest.main(['.']))
if __name__ == '__main__':
main()


@@ -50,16 +50,11 @@ class PyInstallerCommand(Command):
for r in requirements:
command.extend(["--hidden-import", r.key])
command.append("kube_hunter/__main__.py")
print(' '.join(command))
print(" ".join(command))
check_call(command)
setup(
use_scm_version={
"fallback_version": "noversion"
},
cmdclass={
"dependencies": ListDependenciesCommand,
"pyinstaller": PyInstallerCommand,
},
use_scm_version={"fallback_version": "noversion"},
cmdclass={"dependencies": ListDependenciesCommand, "pyinstaller": PyInstallerCommand},
)


@@ -15,13 +15,9 @@ def test_presetcloud():
def test_getcloud():
fake_host = "1.2.3.4"
expected_cloud = "Azure"
result = {
"cloud": expected_cloud
}
result = {"cloud": expected_cloud}
with requests_mock.mock() as m:
m.get(f'https://api.azurespeed.com/api/region?ipOrUrl={fake_host}',
text=json.dumps(result))
m.get(f"https://api.azurespeed.com/api/region?ipOrUrl={fake_host}", text=json.dumps(result))
hostEvent = NewHostEvent(host=fake_host)
assert hostEvent.cloud == expected_cloud


@@ -6,20 +6,24 @@ from kube_hunter.core.events import handler
counter = 0
class OnceOnlyEvent(Service, Event):
def __init__(self):
Service.__init__(self, "Test Once Service")
class RegularEvent(Service, Event):
def __init__(self):
Service.__init__(self, "Test Service")
@handler.subscribe_once(OnceOnlyEvent)
class OnceHunter(Hunter):
def __init__(self, event):
global counter
counter += 1
@handler.subscribe(RegularEvent)
class RegularHunter(Hunter):
def __init__(self, event):


@@ -7,44 +7,52 @@ from kube_hunter.core.events import handler
counter = 0
def test_ApiServer():
global counter
counter = 0
with requests_mock.Mocker() as m:
m.get('https://mockOther:443', text='elephant')
m.get('https://mockKubernetes:443', text='{"code":403}', status_code=403)
m.get('https://mockKubernetes:443/version', text='{"major": "1.14.10"}', status_code=200)
m.get("https://mockOther:443", text="elephant")
m.get("https://mockKubernetes:443", text='{"code":403}', status_code=403)
m.get(
"https://mockKubernetes:443/version", text='{"major": "1.14.10"}', status_code=200,
)
e = Event()
e.protocol = "https"
e.port = 443
e.host = 'mockOther'
e.host = "mockOther"
a = ApiServiceDiscovery(e)
a.execute()
e.host = 'mockKubernetes'
e.host = "mockKubernetes"
a.execute()
# Allow the events to be processed. Only the one to mockKubernetes should trigger an event
time.sleep(1)
assert counter == 1
def test_ApiServerWithServiceAccountToken():
global counter
counter = 0
with requests_mock.Mocker() as m:
m.get('https://mockKubernetes:443', request_headers={'Authorization':'Bearer very_secret'}, text='{"code":200}')
m.get('https://mockKubernetes:443', text='{"code":403}', status_code=403)
m.get('https://mockKubernetes:443/version', text='{"major": "1.14.10"}', status_code=200)
m.get('https://mockOther:443', text='elephant')
m.get(
"https://mockKubernetes:443", request_headers={"Authorization": "Bearer very_secret"}, text='{"code":200}',
)
m.get("https://mockKubernetes:443", text='{"code":403}', status_code=403)
m.get(
"https://mockKubernetes:443/version", text='{"major": "1.14.10"}', status_code=200,
)
m.get("https://mockOther:443", text="elephant")
e = Event()
e.protocol = "https"
e.port = 443
# We should discover an API Server regardless of whether we have a token
e.host = 'mockKubernetes'
e.host = "mockKubernetes"
a = ApiServiceDiscovery(e)
a.execute()
time.sleep(0.1)
@@ -57,7 +65,7 @@ def test_ApiServerWithServiceAccountToken():
assert counter == 2
# But we shouldn't generate an event if we don't see an error code or find the 'major' in /version
e.host = 'mockOther'
e.host = "mockOther"
a = ApiServiceDiscovery(e)
a.execute()
time.sleep(0.1)
@@ -65,11 +73,13 @@ def test_ApiServerWithServiceAccountToken():
def test_InsecureApiServer():
global counter
global counter
counter = 0
with requests_mock.Mocker() as m:
m.get('http://mockOther:8080', text='elephant')
m.get('http://mockKubernetes:8080', text="""{
m.get("http://mockOther:8080", text="elephant")
m.get(
"http://mockKubernetes:8080",
text="""{
"paths": [
"/api",
"/api/v1",
@@ -78,20 +88,21 @@ def test_InsecureApiServer():
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io"
]}""")
]}""",
)
m.get('http://mockKubernetes:8080/version', text='{"major": "1.14.10"}')
m.get('http://mockOther:8080/version', status_code=404)
m.get("http://mockKubernetes:8080/version", text='{"major": "1.14.10"}')
m.get("http://mockOther:8080/version", status_code=404)
e = Event()
e.protocol = "http"
e.port = 8080
e.host = 'mockOther'
e.host = "mockOther"
a = ApiServiceDiscovery(e)
a.execute()
e.host = 'mockKubernetes'
e.host = "mockKubernetes"
a.execute()
# Allow the events to be processed. Only the one to mockKubernetes should trigger an event
@@ -99,12 +110,11 @@ def test_InsecureApiServer():
assert counter == 1
# We should only generate an ApiServer event for a response that looks like it came from a Kubernetes node
@handler.subscribe(ApiServer)
class testApiServer(object):
def __init__(self, event):
print("Event")
assert event.host == 'mockKubernetes'
assert event.host == "mockKubernetes"
global counter
counter += 1
counter += 1


@@ -1,12 +1,16 @@
import requests_mock
import time
from queue import Empty
from kube_hunter.modules.discovery.hosts import FromPodHostDiscovery, RunningAsPodEvent, HostScanEvent, AzureMetadataApi
from kube_hunter.core.events.types import Event, NewHostEvent
from kube_hunter.modules.discovery.hosts import (
FromPodHostDiscovery,
RunningAsPodEvent,
HostScanEvent,
AzureMetadataApi,
)
from kube_hunter.core.events.types import NewHostEvent
from kube_hunter.core.events import handler
from kube_hunter.conf import config
def test_FromPodHostDiscovery():
with requests_mock.Mocker() as m:
@@ -15,7 +19,9 @@ def test_FromPodHostDiscovery():
config.azure = False
config.remote = None
config.cidr = None
m.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", status_code=404)
m.get(
"http://169.254.169.254/metadata/instance?api-version=2017-08-01", status_code=404,
)
f = FromPodHostDiscovery(e)
assert not f.is_azure_pod()
# TODO For now we don't test the traceroute discovery version
@@ -23,8 +29,10 @@ def test_FromPodHostDiscovery():
# Test that we generate NewHostEvent for the addresses reported by the Azure Metadata API
config.azure = True
m.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", \
text='{"network":{"interface":[{"ipv4":{"subnet":[{"address": "3.4.5.6", "prefix": "255.255.255.252"}]}}]}}')
m.get(
"http://169.254.169.254/metadata/instance?api-version=2017-08-01",
text='{"network":{"interface":[{"ipv4":{"subnet":[{"address": "3.4.5.6", "prefix": "255.255.255.252"}]}}]}}',
)
assert f.is_azure_pod()
f.execute()
@@ -33,7 +41,7 @@ def test_FromPodHostDiscovery():
f.execute()
config.azure = False
config.remote = None
config.remote = None
config.cidr = "1.2.3.4/24"
f.execute()
@@ -44,7 +52,8 @@ class testHostDiscovery(object):
def __init__(self, event):
assert config.remote is not None or config.cidr is not None
assert config.remote == "1.2.3.4" or config.cidr == "1.2.3.4/24"
# In this set of tests we should only get as far as finding a host if it's Azure
# because we're not running the code that would normally be triggered by a HostScanEvent
@handler.subscribe(NewHostEvent)
@@ -52,9 +61,10 @@ class testHostDiscoveryEvent(object):
def __init__(self, event):
assert config.azure
assert str(event.host).startswith("3.4.5.")
assert config.remote is None
assert config.remote is None
assert config.cidr is None
# Test that we only report this event for Azure hosts
@handler.subscribe(AzureMetadataApi)
class testAzureMetadataApi(object):


@@ -1,17 +1,27 @@
import requests_mock
import time
from kube_hunter.modules.hunting.apiserver import AccessApiServer, AccessApiServerWithToken, ServerApiAccess, AccessApiServerActive
from kube_hunter.modules.hunting.apiserver import ListNamespaces, ListPodsAndNamespaces, ListRoles, ListClusterRoles
from kube_hunter.modules.hunting.apiserver import (
AccessApiServer,
AccessApiServerWithToken,
ServerApiAccess,
AccessApiServerActive,
)
from kube_hunter.modules.hunting.apiserver import (
ListNamespaces,
ListPodsAndNamespaces,
ListRoles,
ListClusterRoles,
)
from kube_hunter.modules.hunting.apiserver import ApiServerPassiveHunterFinished
from kube_hunter.modules.hunting.apiserver import CreateANamespace, DeleteANamespace
from kube_hunter.modules.discovery.apiserver import ApiServer
from kube_hunter.core.events.types import Event, K8sVersionDisclosure
from kube_hunter.core.types import UnauthenticatedAccess, InformationDisclosure
from kube_hunter.core.events import handler
counter = 0
def test_ApiServerToken():
global counter
counter = 0
@@ -28,6 +38,7 @@ def test_ApiServerToken():
time.sleep(0.01)
assert counter == 0
def test_AccessApiServer():
global counter
counter = 0
@@ -38,16 +49,29 @@ def test_AccessApiServer():
e.protocol = "https"
with requests_mock.Mocker() as m:
m.get('https://mockKubernetes:443/api', text='{}')
m.get('https://mockKubernetes:443/api/v1/namespaces', text='{"items":[{"metadata":{"name":"hello"}}]}')
m.get('https://mockKubernetes:443/api/v1/pods',
m.get("https://mockKubernetes:443/api", text="{}")
m.get(
"https://mockKubernetes:443/api/v1/namespaces", text='{"items":[{"metadata":{"name":"hello"}}]}',
)
m.get(
"https://mockKubernetes:443/api/v1/pods",
text='{"items":[{"metadata":{"name":"podA", "namespace":"namespaceA"}}, \
{"metadata":{"name":"podB", "namespace":"namespaceB"}}]}')
m.get('https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/roles', status_code=403)
m.get('https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles', text='{"items":[]}')
m.get('https://mockkubernetes:443/version', text='{"major": "1","minor": "13+", "gitVersion": "v1.13.6-gke.13", \
"gitCommit": "fcbc1d20b6bca1936c0317743055ac75aef608ce", "gitTreeState": "clean", "buildDate": "2019-06-19T20:50:07Z", \
"goVersion": "go1.11.5b4", "compiler": "gc", "platform": "linux/amd64"}')
{"metadata":{"name":"podB", "namespace":"namespaceB"}}]}',
)
m.get(
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/roles", status_code=403,
)
m.get(
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles", text='{"items":[]}',
)
m.get(
"https://mockkubernetes:443/version",
text='{"major": "1","minor": "13+", "gitVersion": "v1.13.6-gke.13", \
"gitCommit": "fcbc1d20b6bca1936c0317743055ac75aef608ce", \
"gitTreeState": "clean", "buildDate": "2019-06-19T20:50:07Z", \
"goVersion": "go1.11.5b4", "compiler": "gc", \
"platform": "linux/amd64"}',
)
h = AccessApiServer(e)
h.execute()
@@ -60,17 +84,25 @@ def test_AccessApiServer():
counter = 0
with requests_mock.Mocker() as m:
# TODO check that these responses reflect what Kubernetes does
m.get('https://mockKubernetesToken:443/api', text='{}')
m.get('https://mockKubernetesToken:443/api/v1/namespaces', text='{"items":[{"metadata":{"name":"hello"}}]}')
m.get('https://mockKubernetesToken:443/api/v1/pods',
m.get("https://mocktoken:443/api", text="{}")
m.get(
"https://mocktoken:443/api/v1/namespaces", text='{"items":[{"metadata":{"name":"hello"}}]}',
)
m.get(
"https://mocktoken:443/api/v1/pods",
text='{"items":[{"metadata":{"name":"podA", "namespace":"namespaceA"}}, \
{"metadata":{"name":"podB", "namespace":"namespaceB"}}]}')
m.get('https://mockkubernetesToken:443/apis/rbac.authorization.k8s.io/v1/roles', status_code=403)
m.get('https://mockkubernetesToken:443/apis/rbac.authorization.k8s.io/v1/clusterroles',
text='{"items":[{"metadata":{"name":"my-role"}}]}')
{"metadata":{"name":"podB", "namespace":"namespaceB"}}]}',
)
m.get(
"https://mocktoken:443/apis/rbac.authorization.k8s.io/v1/roles", status_code=403,
)
m.get(
"https://mocktoken:443/apis/rbac.authorization.k8s.io/v1/clusterroles",
text='{"items":[{"metadata":{"name":"my-role"}}]}',
)
e.auth_token = "so-secret"
e.host = "mockKubernetesToken"
e.host = "mocktoken"
h = AccessApiServerWithToken(e)
h.execute()
@@ -78,12 +110,13 @@ def test_AccessApiServer():
time.sleep(0.01)
assert counter == 5
@handler.subscribe(ListNamespaces)
class test_ListNamespaces(object):
def __init__(self, event):
print("ListNamespaces")
assert event.evidence == ['hello']
if event.host == "mockKubernetesToken":
assert event.evidence == ["hello"]
if event.host == "mocktoken":
assert event.auth_token == "so-secret"
else:
assert event.auth_token is None
@@ -101,7 +134,7 @@ class test_ListPodsAndNamespaces(object):
assert pod["namespace"] == "namespaceA"
if pod["name"] == "podB":
assert pod["namespace"] == "namespaceB"
if event.host == "mockKubernetesToken":
if event.host == "mocktoken":
assert event.auth_token == "so-secret"
assert "token" in event.name
assert "anon" not in event.name
@@ -112,6 +145,7 @@ class test_ListPodsAndNamespaces(object):
global counter
counter += 1
# Should never see this because the API call in the test returns 403 status code
@handler.subscribe(ListRoles)
class test_ListRoles(object):
@@ -121,6 +155,7 @@ class test_ListRoles(object):
global counter
counter += 1
# Should only see this when we have a token because the API call returns an empty list of items
# in the test where we have no token
@handler.subscribe(ListClusterRoles)
@@ -131,6 +166,7 @@ class test_ListClusterRoles(object):
global counter
counter += 1
@handler.subscribe(ServerApiAccess)
class test_ServerApiAccess(object):
def __init__(self, event):
@@ -143,6 +179,7 @@ class test_ServerApiAccess(object):
global counter
counter += 1
@handler.subscribe(ApiServerPassiveHunterFinished)
class test_PassiveHunterFinished(object):
def __init__(self, event):
@@ -151,6 +188,7 @@ class test_PassiveHunterFinished(object):
global counter
counter += 1
def test_AccessApiServerActive():
e = ApiServerPassiveHunterFinished(namespaces=["hello-namespace"])
e.host = "mockKubernetes"
@@ -159,7 +197,9 @@ def test_AccessApiServerActive():
with requests_mock.Mocker() as m:
# TODO more tests here with real responses
m.post('https://mockKubernetes:443/api/v1/namespaces', text="""
m.post(
"https://mockKubernetes:443/api/v1/namespaces",
text="""
{
"kind": "Namespace",
"apiVersion": "v1",
@@ -179,14 +219,23 @@ def test_AccessApiServerActive():
"phase": "Active"
}
}
"""
)
m.post('https://mockKubernetes:443/api/v1/clusterroles', text='{}')
m.post('https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles', text='{}')
m.post('https://mockkubernetes:443/api/v1/namespaces/hello-namespace/pods', text='{}')
m.post('https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/namespaces/hello-namespace/roles', text='{}')
""",
)
m.post("https://mockKubernetes:443/api/v1/clusterroles", text="{}")
m.post(
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles", text="{}",
)
m.post(
"https://mockkubernetes:443/api/v1/namespaces/hello-namespace/pods", text="{}",
)
m.post(
"https://mockkubernetes:443" "/apis/rbac.authorization.k8s.io/v1/namespaces/hello-namespace/roles",
text="{}",
)
m.delete('https://mockKubernetes:443/api/v1/namespaces/abcde', text="""
m.delete(
"https://mockKubernetes:443/api/v1/namespaces/abcde",
text="""
{
"kind": "Namespace",
"apiVersion": "v1",
@@ -207,16 +256,19 @@ def test_AccessApiServerActive():
"phase": "Terminating"
}
}
""")
""",
)
h = AccessApiServerActive(e)
h.execute()
@handler.subscribe(CreateANamespace)
class test_CreateANamespace(object):
def __init__(self, event):
assert "abcde" in event.evidence
@handler.subscribe(DeleteANamespace)
class test_DeleteANamespace(object):
def __init__(self, event):


@@ -1,12 +1,17 @@
import time
import requests_mock
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import K8sVersionDisclosure
from kube_hunter.modules.hunting.cves import K8sClusterCveHunter, ServerApiVersionEndPointAccessPE, ServerApiVersionEndPointAccessDos, CveUtils
from kube_hunter.modules.hunting.cves import (
K8sClusterCveHunter,
ServerApiVersionEndPointAccessPE,
ServerApiVersionEndPointAccessDos,
CveUtils,
)
cve_counter = 0
def test_K8sCveHunter():
global cve_counter
# because the hunter unregisters itself, we manually remove this option, so we can test it
@@ -36,45 +41,47 @@ class test_CVE_2018_1002105(object):
global cve_counter
cve_counter += 1
@handler.subscribe(ServerApiVersionEndPointAccessDos)
class test_CVE_2019_1002100(object):
class test_CVE_2019_1002100:
def __init__(self, event):
global cve_counter
cve_counter += 1
class test_CveUtils(object):
def test_is_downstream():
class TestCveUtils:
def test_is_downstream(self):
test_cases = (
('1', False),
('1.2', False),
('1.2-3', True),
('1.2-r3', True),
('1.2+3', True),
('1.2~3', True),
('1.2+a3f5cb2', True),
('1.2-9287543', True),
('v1', False),
('v1.2', False),
('v1.2-3', True),
('v1.2-r3', True),
('v1.2+3', True),
('v1.2~3', True),
('v1.2+a3f5cb2', True),
('v1.2-9287543', True),
('v1.13.9-gke.3', True)
("1", False),
("1.2", False),
("1.2-3", True),
("1.2-r3", True),
("1.2+3", True),
("1.2~3", True),
("1.2+a3f5cb2", True),
("1.2-9287543", True),
("v1", False),
("v1.2", False),
("v1.2-3", True),
("v1.2-r3", True),
("v1.2+3", True),
("v1.2~3", True),
("v1.2+a3f5cb2", True),
("v1.2-9287543", True),
("v1.13.9-gke.3", True),
)
for version, expected in test_cases:
actual = CveUtils.is_downstream_version(version)
assert actual == expected
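The test cases above pin down the behavior of `CveUtils.is_downstream_version`: a version string counts as a downstream (vendor) build when it carries a suffix introduced by `-`, `+`, or `~`. A sketch consistent with those cases — the implementation below is a reconstruction from the test data, not kube-hunter's actual code:

```python
import re


def is_downstream_version(version: str) -> bool:
    # Hypothetical reconstruction: a downstream (vendor) build is any version
    # carrying a suffix separated by '-', '+' or '~' (e.g. "v1.13.9-gke.3").
    return bool(re.search(r"[-+~]", version))
```

So `"v1.2"` is upstream, while `"v1.13.9-gke.3"` (a GKE build) is downstream.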
def test_ignore_downstream():
def test_ignore_downstream(self):
test_cases = (
('v2.2-abcd', ['v1.1', 'v2.3'], False),
('v2.2-abcd', ['v1.1', 'v2.2'], False),
('v1.13.9-gke.3', ['v1.14.8'], False)
("v2.2-abcd", ["v1.1", "v2.3"], False),
("v2.2-abcd", ["v1.1", "v2.2"], False),
("v1.13.9-gke.3", ["v1.14.8"], False),
)
for check_version, fix_versions, expected in test_cases:
actual = CveUtils.is_vulnerable(check_version, fix_versions, True)
actual = CveUtils.is_vulnerable(fix_versions, check_version, True)
assert actual == expected
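Note the corrected call order, `is_vulnerable(fix_versions, check_version, True)`: fix versions first, then the version under test. All three cases expect `False` because each checked version is a downstream build and the third argument asks to ignore those. A sketch of semantics consistent with that — the internals here are an assumption, only the argument order is taken from the diff:

```python
import re


def is_downstream_version(version: str) -> bool:
    # Downstream builds carry a vendor suffix separated by '-', '+' or '~'.
    return bool(re.search(r"[-+~]", version))


def base_version(version: str) -> tuple:
    # Strip a leading 'v' and any downstream suffix; keep numeric components.
    numeric = re.split(r"[-+~]", version.lstrip("v"))[0]
    return tuple(int(part) for part in numeric.split("."))


def is_vulnerable(fix_versions, check_version, ignore_downstream=False):
    # Hypothetical sketch: a downstream (vendor-patched) build is assumed
    # fixed when ignore_downstream is set, matching the expected False above.
    if ignore_downstream and is_downstream_version(check_version):
        return False
    checked = base_version(check_version)
    # Naively vulnerable when older than any published fix version.
    return any(checked < base_version(fix) for fix in fix_versions)
```

With `ignore_downstream=True`, `"v1.13.9-gke.3"` is treated as patched even though its base version predates the fix in `"v1.14.8"`.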
@@ -1,6 +1,11 @@
from kube_hunter.modules.report import get_reporter, get_dispatcher
from kube_hunter.modules.report.factory import YAMLReporter, JSONReporter, \
PlainReporter, HTTPDispatcher, STDOUTDispatcher
from kube_hunter.modules.report.factory import (
YAMLReporter,
JSONReporter,
PlainReporter,
HTTPDispatcher,
STDOUTDispatcher,
)
def test_reporters():
@@ -8,7 +13,7 @@ def test_reporters():
("plain", PlainReporter),
("json", JSONReporter),
("yaml", YAMLReporter),
("notexists", PlainReporter)
("notexists", PlainReporter),
]
for report_type, expected in test_cases:
@@ -20,7 +25,7 @@ def test_dispatchers():
test_cases = [
("stdout", STDOUTDispatcher),
("http", HTTPDispatcher),
("notexists", STDOUTDispatcher)
("notexists", STDOUTDispatcher),
]
for dispatcher_type, expected in test_cases:
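The `"notexists"` cases above show the factory contract these tests rely on: an unknown name falls back to the default reporter or dispatcher rather than raising. A minimal sketch of that lookup — the class bodies and registry here are placeholders, not kube-hunter's real factory code:

```python
class PlainReporter:
    """Placeholder; the default reporter."""


class JSONReporter:
    """Placeholder."""


# Hypothetical registry mirroring the mapping the tests exercise.
REPORTERS = {
    "plain": PlainReporter,
    "json": JSONReporter,
}


def get_reporter(name: str):
    # Unknown names fall back to the plain reporter instead of raising.
    return REPORTERS.get(name, PlainReporter)()
```

The same `dict.get`-with-default shape covers `get_dispatcher`, where the fallback is the stdout dispatcher.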