Mirror of https://github.com/aquasecurity/kube-hunter.git, synced 2026-02-14 18:09:56 +00:00
Compare commits
17 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | bc47f08e88 |  |
|  | 3e1347290b |  |
|  | 7479aae9ba |  |
|  | e8827b24f6 |  |
|  | ff9f2c536f |  |
|  | eb31026d8e |  |
|  | a578726495 |  |
|  | c442172715 |  |
|  | d7df38fc95 |  |
|  | 9ce385a190 |  |
|  | ebd8e2e405 |  |
|  | 585b490f19 |  |
|  | 6c4ad4f6fd |  |
|  | e6a3c12098 |  |
|  | 2a7020682e |  |
|  | e1896f3983 |  |
|  | fc7fbbf1fc |  |
.flake8 (2 changed lines)
@@ -1,5 +1,5 @@
 [flake8]
-ignore = E203, E266, E501, W503, B903, T499
+ignore = E203, E266, E501, W503, B903, T499, B020
 max-line-length = 120
 max-complexity = 18
 select = B,C,E,F,W,B9,T4
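For context on the new ignore entry: B020 is flake8-bugbear's warning for a for-loop whose target reassigns the iterable it loops over. A minimal, purely illustrative sketch of the flagged pattern (the variable names are made up and not taken from this repository):

```python
addresses = ["10.0.0.1", "10.0.0.2"]

# flake8-bugbear reports B020 here: the loop target `addresses` reuses the
# name of the iterable, so the original list is no longer reachable afterwards.
for addresses in addresses:
    print(addresses)
```

Listing the code under `ignore` silences that warning repository-wide.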
.github/workflows/publish.yml (vendored, 2 changed lines)
@@ -39,7 +39,7 @@ jobs:
          password: ${{ secrets.ECR_SECRET_ACCESS_KEY }}
      - name: Get version
        id: get_version
-       uses: crazy-max/ghaction-docker-meta@v1
+       uses: crazy-max/ghaction-docker-meta@v3
        with:
          images: ${{ env.REP }}
          tag-semver: |
Dockerfile

@@ -26,4 +26,7 @@ RUN apk add --no-cache \
 COPY --from=builder /usr/local/lib/python3.8/site-packages /usr/local/lib/python3.8/site-packages
 COPY --from=builder /usr/local/bin/kube-hunter /usr/local/bin/kube-hunter
 
+# Add default plugins: https://github.com/aquasecurity/kube-hunter-plugins
+RUN pip install kube-hunter-arp-spoof>=0.0.3 kube-hunter-dns-spoof>=0.0.3
+
 ENTRYPOINT ["kube-hunter"]
README.md (27 changed lines)
@@ -1,18 +1,7 @@
 
+## Notice
+kube-hunter is not under active development anymore. If you're interested in scanning Kubernetes clusters for known vulnerabilities, we recommend using [Trivy](https://github.com/aquasecurity/trivy). Specifically, Trivy's Kubernetes [misconfiguration scanning](https://blog.aquasec.com/trivy-kubernetes-cis-benchmark-scanning) and [KBOM vulnerability scanning](https://blog.aquasec.com/scanning-kbom-for-vulnerabilities-with-trivy). Learn more in the [Trivy Docs](https://aquasecurity.github.io/trivy/).
+
-
-[![GitHub Release][release-img]][release]
-![Downloads][download]
-![Docker Pulls][docker-pull]
-[](https://github.com/aquasecurity/kube-hunter/actions)
-[](https://codecov.io/gh/aquasecurity/kube-hunter)
-[](https://github.com/psf/black)
-[](https://github.com/aquasecurity/kube-hunter/blob/main/LICENSE)
-[](https://microbadger.com/images/aquasec/kube-hunter "Get your own image badge on microbadger.com")
-
-[download]: https://img.shields.io/github/downloads/aquasecurity/kube-hunter/total?logo=github
-[release-img]: https://img.shields.io/github/release/aquasecurity/kube-hunter.svg?logo=github
-[release]: https://github.com/aquasecurity/kube-hunter/releases
-[docker-pull]: https://img.shields.io/docker/pulls/aquasec/kube-hunter?logo=docker&label=docker%20pulls%20%2F%20kube-hunter
 ---
 
 kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility for security issues in Kubernetes environments. **You should NOT run kube-hunter on a Kubernetes cluster that you don't own!**

@@ -21,12 +10,9 @@ kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was d
 **Explore vulnerabilities**: The kube-hunter knowledge base includes articles about discoverable vulnerabilities and issues. When kube-hunter reports an issue, it will show its VID (Vulnerability ID) so you can look it up in the KB at https://aquasecurity.github.io/kube-hunter/
 _If you're interested in kube-hunter's integration with the Kubernetes ATT&CK Matrix [Continue Reading](#kuberentes-attck-matrix)_
 
 **Contribute**: We welcome contributions, especially new hunter modules that perform additional tests. If you would like to develop your modules please read [Guidelines For Developing Your First kube-hunter Module](https://github.com/aquasecurity/kube-hunter/blob/main/CONTRIBUTING.md).
-[kube-hunter demo video](https://youtu.be/s2-6rTkH8a8?t=57s)
 
 [](https://youtu.be/s2-6rTkH8a8?t=57s)
 
-Table of Contents
-=================
+## Table of Contents
 
 - [Table of Contents](#table-of-contents)
 - [Kubernetes ATT&CK Matrix](#kubernetes-attck-matrix)

@@ -52,7 +38,6 @@ Table of Contents
 - [Contribution](#contribution)
 - [License](#license)
 
 ---
 ## Kubernetes ATT&CK Matrix
 
 kube-hunter now supports the new format of the Kubernetes ATT&CK matrix.

@@ -245,7 +230,7 @@ python3 kube_hunter
 _If you want to use pyinstaller/py2exe you need to first run the install_imports.py script._
 
 ### Container
-Aqua Security maintains a containerized version of kube-hunter at `aquasec/kube-hunter`. This container includes this source code, plus an additional (closed source) reporting plugin for uploading results into a report that can be viewed at [kube-hunter.aquasec.com](https://kube-hunter.aquasec.com). Please note, that running the `aquasec/kube-hunter` container and uploading reports data are subject to additional [terms and conditions](https://kube-hunter.aquasec.com/eula.html).
+Aqua Security maintains a containerized version of kube-hunter at `aquasec/kube-hunter:aqua`. This container includes this source code, plus an additional (closed source) reporting plugin for uploading results into a report that can be viewed at [kube-hunter.aquasec.com](https://kube-hunter.aquasec.com). Please note, that running the `aquasec/kube-hunter` container and uploading reports data are subject to additional [terms and conditions](https://kube-hunter.aquasec.com/eula.html).
 
 The Dockerfile in this repository allows you to build a containerized version without the reporting plugin.
@@ -2,6 +2,7 @@
 vid: KHV002
 title: Kubernetes version disclosure
 categories: [Information Disclosure]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV003
 title: Azure Metadata Exposure
 categories: [Information Disclosure]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV004
 title: Azure SPN Exposure
 categories: [Identity Theft]
+severity: medium
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV005
 title: Access to Kubernetes API
 categories: [Information Disclosure, Unauthenticated Access]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV006
 title: Insecure (HTTP) access to Kubernetes API
 categories: [Unauthenticated Access]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV007
 title: Specific Access to Kubernetes API
 categories: [Access Risk]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV020
 title: Possible Arp Spoof
 categories: [IdentityTheft]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV021
 title: Certificate Includes Email Address
 categories: [Information Disclosure]
+severity: low
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV022
 title: Critical Privilege Escalation CVE
 categories: [Privilege Escalation]
+severity: critical
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV023
 title: Denial of Service to Kubernetes API Server
 categories: [Denial Of Service]
+severity: medium
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV024
 title: Possible Ping Flood Attack
 categories: [Denial Of Service]
+severity: medium
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV025
 title: Possible Reset Flood Attack
 categories: [Denial Of Service]
+severity: medium
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV026
 title: Arbitrary Access To Cluster Scoped Resources
 categories: [PrivilegeEscalation]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV027
 title: Kubectl Vulnerable To CVE-2019-11246
 categories: [Remote Code Execution]
+severity: medium
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV028
 title: Kubectl Vulnerable To CVE-2019-1002101
 categories: [Remote Code Execution]
+severity: medium
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV029
 title: Dashboard Exposed
 categories: [Remote Code Execution]
+severity: critical
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -12,4 +13,5 @@ An open Kubernetes Dashboard was detected. The Kubernetes Dashboard can be used
 
 ## Remediation
 
-Do not leave the Dashboard insecured.
+Do not leave the Dashboard insecured.
+

@@ -2,6 +2,7 @@
 vid: KHV030
 title: Possible DNS Spoof
 categories: [Identity Theft]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV031
 title: Etcd Remote Write Access Event
 categories: [Remote Code Execution]
+severity: critical
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV032
 title: Etcd Remote Read Access Event
 categories: [Access Risk]
+severity: critical
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV033
 title: Etcd Remote version disclosure
 categories: [Information Disclosure]
+severity: medium
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV034
 title: Etcd is accessible using insecure connection (HTTP)
 categories: [Unauthenticated Access]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV036
 title: Anonymous Authentication
 categories: [Remote Code Execution]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV037
 title: Exposed Container Logs
 categories: [Information Disclosure]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV038
 title: Exposed Running Pods
 categories: [Information Disclosure]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV039
 title: Exposed Exec On Container
 categories: [Remote Code Execution]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV040
 title: Exposed Run Inside Container
 categories: [Remote Code Execution]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV041
 title: Exposed Port Forward
 categories: [Remote Code Execution]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV042
 title: Exposed Attaching To Container
 categories: [Remote Code Execution]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV043
 title: Cluster Health Disclosure
 categories: [Information Disclosure]
+severity: low
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV044
 title: Privileged Container
 categories: [Access Risk]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV045
 title: Exposed System Logs
 categories: [Information Disclosure]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV046
 title: Exposed Kubelet Cmdline
 categories: [Information Disclosure]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV047
 title: Pod With Mount To /var/log
 categories: [Privilege Escalation]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV049
 title: kubectl proxy Exposed
 categories: [Information Disclosure]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV050
 title: Read access to Pod service account token
 categories: [Access Risk]
+severity: medium
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV051
 title: Exposed Existing Privileged Containers Via Secure Kubelet Port
 categories: [Access Risk]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV052
 title: Exposed Pods
 categories: [Information Disclosure]
+severity: medium
 ---
 
 # {{ page.vid }} - {{ page.title }}

@@ -2,6 +2,7 @@
 vid: KHV053
 title: AWS Metadata Exposure
 categories: [Information Disclosure]
+severity: high
 ---
 
 # {{ page.vid }} - {{ page.title }}
job.yaml (6 changed lines)
@@ -5,11 +5,13 @@ metadata:
   name: kube-hunter
 spec:
   template:
+    metadata:
+      labels:
+        app: kube-hunter
     spec:
       containers:
         - name: kube-hunter
-          image: aquasec/kube-hunter
+          image: aquasec/kube-hunter:0.6.8
           command: ["kube-hunter"]
           args: ["--pod"]
       restartPolicy: Never
   backoffLimit: 4
kube-hunter.png (binary file not shown; size before: 19 KiB, after: 25 KiB)
@@ -76,7 +76,7 @@ in order to prevent circular dependency bug.
 Following the above example, let's figure out the imports:
 ```python
 from kube_hunter.core.types import Hunter
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 
 from kube_hunter.core.events.types import OpenPortEvent
@@ -206,7 +206,7 @@ __Make sure to return the event from the execute method, or the event will not g
 
 For example, if you don't want to hunt services found on a localhost IP, you can create the following module, in the `kube_hunter/modules/report/`
 ```python
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Service, EventFilterBase
 
 @handler.subscribe(Service)
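To make the filter above concrete, here is a hedged sketch of a complete module using the new import path. Only the imports and the return-None-to-filter behaviour come from the guide itself; the class name and the assumption that the Service event carries the discovered host as `event.host` are illustrative.

```python
from kube_hunter.core.events.event_handler import handler
from kube_hunter.core.events.types import Service, EventFilterBase


@handler.subscribe(Service)
class LocalhostServiceFilter(EventFilterBase):
    """Drops Service events whose discovered host is a localhost address."""

    def __init__(self, event):
        self.event = event

    def execute(self):
        # Returning None removes the event, so hunters subscribed to Service
        # never see it; returning the event lets it continue as usual.
        if str(self.event.host).startswith("127."):
            return None
        return self.event
```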
@@ -222,7 +222,7 @@ That means other Hunters that are subscribed to this Service will not get trigge
 That opens up a wide variety of possible operations, as this not only can __filter out__ events, but you can actually __change event attributes__, for example:
 
 ```python
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.types import InformationDisclosure
 from kube_hunter.core.events.types import Vulnerability, EventFilterBase
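A hedged sketch of what such an attribute-changing filter could look like with the imports above, assuming the Vulnerability event exposes `category` and `evidence` attributes as the hunting modules elsewhere in this diff do; the class name and the redaction logic are illustrative only.

```python
from kube_hunter.core.events.event_handler import handler
from kube_hunter.core.types import InformationDisclosure
from kube_hunter.core.events.types import Vulnerability, EventFilterBase


@handler.subscribe(Vulnerability)
class RedactDisclosureEvidence(EventFilterBase):
    """Blanks the evidence field on information-disclosure findings."""

    def __init__(self, event):
        self.event = event

    def execute(self):
        # Instead of dropping the event, mutate it and return it; the modified
        # event is what gets published to the other subscribers.
        if self.event.category == InformationDisclosure:
            self.event.evidence = "<redacted>"
        return self.event
```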
@@ -22,6 +22,7 @@ config = Config(
     log_file=args.log_file,
     mapping=args.mapping,
     network_timeout=args.network_timeout,
+    num_worker_threads=args.num_worker_threads,
     pod=args.pod,
     quick=args.quick,
     remote=args.remote,
@@ -38,7 +39,7 @@ set_config(config)
 # Running all other registered plugins before execution
 pm.hook.load_plugin(args=args)
 
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import HuntFinished, HuntStarted
 from kube_hunter.modules.discovery.hosts import RunningAsPodEvent, HostScanEvent
 from kube_hunter.modules.report import get_reporter, get_dispatcher
@@ -20,6 +20,7 @@ class Config:
     - log_file: Log File path
     - mapping: Report only found components
     - network_timeout: Timeout for network operations
+    - num_worker_threads: Add a flag --threads to change the default 800 thread count of the event handler
     - pod: From pod scanning mode
     - quick: Quick scanning mode
     - remote: Hosts to scan

@@ -36,6 +37,7 @@ class Config:
     log_file: Optional[str] = None
     mapping: bool = False
     network_timeout: float = 5.0
+    num_worker_threads: int = 800
     pod: bool = False
     quick: bool = False
     remote: Optional[str] = None
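A hedged sketch of setting the new option when kube-hunter is driven as a library, calling `Config` and `set_config` the same way the test files further down do; the particular values are arbitrary examples, not taken from this diff.

```python
from kube_hunter.conf import Config, set_config

# Lower the event-handler thread count from the 800 default, e.g. in
# constrained environments where the default can cause the process to crash.
set_config(Config(pod=True, num_worker_threads=200))
```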
@@ -4,10 +4,6 @@ DEFAULT_LEVEL = logging.INFO
 DEFAULT_LEVEL_NAME = logging.getLevelName(DEFAULT_LEVEL)
 LOG_FORMAT = "%(asctime)s %(levelname)s %(name)s %(message)s"
 
-# Suppress logging from scapy
-logging.getLogger("scapy.runtime").setLevel(logging.CRITICAL)
-logging.getLogger("scapy.loading").setLevel(logging.CRITICAL)
-
 
 def setup_logger(level_name, logfile):
     # Remove any existing handlers
@@ -133,6 +133,14 @@ def parser_add_arguments(parser):
 
     parser.add_argument("--network-timeout", type=float, default=5.0, help="network operations timeout")
 
+    parser.add_argument(
+        "--num-worker-threads",
+        type=int,
+        default=800,
+        help="In some environments the default thread count (800) can cause the process to crash. "
+        "In the case of a crash try lowering the thread count",
+    )
+
 
 def parse_args(add_args_hook):
     """
@@ -1,3 +1,2 @@
 # flake8: noqa: E402
-from .handler import EventQueue, handler
 from . import types
@@ -366,4 +366,5 @@ class EventQueue(Queue):
         self.queue.clear()
 
 
-handler = EventQueue(800)
+config = get_config()
+handler = EventQueue(config.num_worker_threads)
@@ -19,7 +19,7 @@ class HunterBase:
     def publish_event(self, event):
         # Import here to avoid circular import from events package.
         # imports are cached in python so this should not affect runtime
-        from ..events import handler  # noqa
+        from ..events.event_handler import handler  # noqa
 
         handler.publish_event(event, caller=self)
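The handler import-path change above repeats through the discovery, hunting, and test modules below. As a reference point, here is a hedged sketch of a minimal hunter written against the new path; the class, its docstring, and the logging-only `execute` body are illustrative assumptions rather than code from this repository.

```python
from kube_hunter.core.types import Hunter
from kube_hunter.core.events.event_handler import handler
from kube_hunter.core.events.types import OpenPortEvent


@handler.subscribe(OpenPortEvent)
class ExamplePortLogger(Hunter):
    """Example Port Logger
    Illustrative hunter that only prints the open port it was triggered for."""

    def __init__(self, event):
        self.event = event

    def execute(self):
        # A real hunter would probe self.event.host here and call
        # self.publish_event(...) with a Vulnerability or a new Service.
        print(f"open port found: {self.event.host}:{self.event.port}")
```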
@@ -2,7 +2,7 @@ import logging
 import requests
 
 from kube_hunter.core.types import Discovery
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import OpenPortEvent, Service, Event, EventFilterBase
 
 from kube_hunter.conf import get_config
@@ -3,7 +3,7 @@ import logging
 import requests
 
 from kube_hunter.conf import get_config
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Event, OpenPortEvent, Service
 from kube_hunter.core.types import Discovery
 
@@ -1,4 +1,4 @@
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Event, OpenPortEvent, Service
 from kube_hunter.core.types import Discovery
 
@@ -1,15 +1,17 @@
 import json
 import os
 import sys
 import socket
 import logging
 import itertools
 import requests
 
 from enum import Enum
 from netaddr import IPNetwork, IPAddress, AddrFormatError
 from netifaces import AF_INET, ifaddresses, interfaces, gateways
 
 from kube_hunter.conf import get_config
 from kube_hunter.modules.discovery.kubernetes_client import list_all_k8s_cluster_nodes
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Event, NewHostEvent, Vulnerability
 from kube_hunter.core.types import Discovery, AWS, Azure, InstanceMetadataApiTechnique
@@ -137,7 +139,9 @@ class FromPodHostDiscovery(Discovery):
         elif self.is_aws_pod_v2():
             subnets, cloud = self.aws_metadata_v2_discovery()
 
-        subnets += self.gateway_discovery()
+        gateway_subnet = self.gateway_discovery()
+        if gateway_subnet:
+            subnets.append(gateway_subnet)
 
         should_scan_apiserver = False
         if self.event.kubeservicehost:
@@ -217,7 +221,26 @@
     # for pod scanning
     def gateway_discovery(self):
         """Retrieving default gateway of pod, which is usually also a contact point with the host"""
-        return [[gateways()["default"][AF_INET][0], "24"]]
+        # read the default gateway directly from /proc
+        # netifaces currently does not have a maintainer. so we backported to linux support only for this cause.
+        # TODO: implement WMI queries for windows support
+        # https://stackoverflow.com/a/6556951
+        if sys.platform in ["linux", "linux2"]:
+            try:
+                from pyroute2 import IPDB
+
+                ip = IPDB()
+                gateway_ip = ip.routes["default"]["gateway"]
+                ip.release()
+                return [gateway_ip, "24"]
+            except Exception as x:
+                logging.debug(f"Exception while fetching default gateway from container - {x}")
+            finally:
+                ip.release()
+        else:
+            logging.debug("Not running in a linux env, will not scan default subnet")
+
+        return False
 
     # querying AWS's interface metadata api v1 | works only from a pod
     def aws_metadata_v1_discovery(self):
@@ -338,13 +361,62 @@ class HostDiscovery(Discovery):
 
     # generate all subnets from all internal network interfaces
     def generate_interfaces_subnet(self, sn="24"):
-        for ifaceName in interfaces():
-            for ip in [i["addr"] for i in ifaddresses(ifaceName).setdefault(AF_INET, [])]:
-                if not self.event.localhost and InterfaceTypes.LOCALHOST.value in ip.__str__():
-                for ip in IPNetwork(f"{ip}/{sn}"):
+        if sys.platform == "win32":
+            return self.generate_interfaces_subnet_windows()
+        elif sys.platform in ["linux", "linux2"]:
+            return self.generate_interfaces_subnet_linux()
+
+    def generate_interfaces_subnet_linux(self, sn="24"):
+        try:
+            from pyroute2 import IPRoute
+
+            ip = IPRoute()
+            for i in ip.get_addr():
+                # whitelist only ipv4 ips
+                if i["family"] == socket.AF_INET:
+                    ipaddress = i[0].get_attr("IFA_ADDRESS")
+                    # TODO: add this instead of hardcoded 24 subnet, (add a flag for full scan option)
+                    # subnet = i['prefixlen']
+
+                    # unless specified explicitly with localhost scan flag, skip localhost ip addresses
+                    if not self.event.localhost and ipaddress.startswith(InterfaceTypes.LOCALHOST.value):
+                        continue
+
+                    ip_network = IPNetwork(f"{ipaddress}/{sn}")
+                    for ip in ip_network:
+                        yield ip
+        except Exception as x:
+            logging.debug(f"Exception while generating subnet scan from local interfaces: {x}")
+        finally:
+            ip.release()
+
+    def generate_interfaces_subnet_windows(self, sn="24"):
+        from subprocess import check_output
+
+        local_subnets = (
+            check_output(
+                "powershell -NoLogo -NoProfile -NonInteractive -ExecutionPolicy bypass -Command "
+                ' "& {'
+                "Get-NetIPConfiguration | Get-NetIPAddress | Where-Object {$_.AddressFamily -eq 'IPv4'}"
+                " | Select-Object -Property IPAddress, PrefixLength | ConvertTo-Json "
+                ' "}',
+                shell=True,
+            )
+            .decode()
+            .strip()
+        )
+        try:
+            subnets = json.loads(local_subnets)
+            for subnet in subnets:
+                if not self.event.localhost and subnet["IPAddress"].startswith(InterfaceTypes.LOCALHOST.value):
+                    continue
+                ip_network = IPNetwork(f"{subnet['IPAddress']}/{sn}")
+                for ip in ip_network:
+                    yield ip
+
+        except Exception as x:
+            logging.debug(f"ERROR: Could not extract interface information using powershell - {x}")
 
 
 # for comparing prefixes
 class InterfaceTypes(Enum):
@@ -2,7 +2,7 @@ import logging
 import subprocess
 
 from kube_hunter.core.types import Discovery
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import HuntStarted, Event
 
 logger = logging.getLogger(__name__)
@@ -5,7 +5,7 @@ from enum import Enum
 
 from kube_hunter.conf import get_config
 from kube_hunter.core.types import Discovery
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import OpenPortEvent, Event, Service
 
 urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
@@ -2,7 +2,7 @@ import logging
 from socket import socket
 
 from kube_hunter.core.types import Discovery
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import NewHostEvent, OpenPortEvent
 
 logger = logging.getLogger(__name__)
@@ -3,7 +3,7 @@ import requests
 
 from kube_hunter.conf import get_config
 from kube_hunter.core.types import Discovery
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Service, Event, OpenPortEvent
 
 logger = logging.getLogger(__name__)
@@ -2,12 +2,10 @@
 from . import (
     aks,
     apiserver,
-    arp,
     capabilities,
     certificates,
     cves,
     dashboard,
-    dns,
     etcd,
     kubelet,
     mounts,
@@ -5,7 +5,7 @@ import requests
 
 from kube_hunter.conf import get_config
 from kube_hunter.modules.hunting.kubelet import ExposedPodsHandler, SecureKubeletPortHunter
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Event, Vulnerability
 from kube_hunter.core.types import Hunter, ActiveHunter, MountServicePrincipalTechnique, Azure
 
@@ -5,7 +5,7 @@ import requests
 
 from kube_hunter.conf import get_config
 from kube_hunter.modules.discovery.apiserver import ApiServer
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Vulnerability, Event, K8sVersionDisclosure
 from kube_hunter.core.types import Hunter, ActiveHunter, KubernetesCluster
 from kube_hunter.core.types.vulnerabilities import (
@@ -1,71 +0,0 @@
-import logging
-
-from scapy.all import ARP, IP, ICMP, Ether, sr1, srp
-
-from kube_hunter.conf import get_config
-from kube_hunter.core.events import handler
-from kube_hunter.core.events.types import Event, Vulnerability
-from kube_hunter.core.types import ActiveHunter, KubernetesCluster, ARPPoisoningTechnique
-from kube_hunter.modules.hunting.capabilities import CapNetRawEnabled
-
-logger = logging.getLogger(__name__)
-
-
-class PossibleArpSpoofing(Vulnerability, Event):
-    """A malicious pod running on the cluster could potentially run an ARP Spoof attack
-    and perform a MITM between pods on the node."""
-
-    def __init__(self):
-        Vulnerability.__init__(
-            self,
-            KubernetesCluster,
-            "Possible Arp Spoof",
-            category=ARPPoisoningTechnique,
-            vid="KHV020",
-        )
-
-
-@handler.subscribe(CapNetRawEnabled)
-class ArpSpoofHunter(ActiveHunter):
-    """Arp Spoof Hunter
-    Checks for the possibility of running an ARP spoof
-    attack from within a pod (results are based on the running node)
-    """
-
-    def __init__(self, event):
-        self.event = event
-
-    def try_getting_mac(self, ip):
-        config = get_config()
-        ans = sr1(ARP(op=1, pdst=ip), timeout=config.network_timeout, verbose=0)
-        return ans[ARP].hwsrc if ans else None
-
-    def detect_l3_on_host(self, arp_responses):
-        """returns True for an existence of an L3 network plugin"""
-        logger.debug("Attempting to detect L3 network plugin using ARP")
-        unique_macs = list({response[ARP].hwsrc for _, response in arp_responses})
-
-        # if LAN addresses not unique
-        if len(unique_macs) == 1:
-            # if an ip outside the subnets gets a mac address
-            outside_mac = self.try_getting_mac("1.1.1.1")
-            # outside mac is the same as lan macs
-            if outside_mac == unique_macs[0]:
-                return True
-        # only one mac address for whole LAN and outside
-        return False
-
-    def execute(self):
-        config = get_config()
-        self_ip = sr1(IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
-        arp_responses, _ = srp(
-            Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"),
-            timeout=config.network_timeout,
-            verbose=0,
-        )
-
-        # arp enabled on cluster and more than one pod on node
-        if len(arp_responses) > 1:
-            # L3 plugin not installed
-            if not self.detect_l3_on_host(arp_responses):
-                self.publish_event(PossibleArpSpoofing())
@@ -2,7 +2,7 @@ import socket
 import logging
 
 from kube_hunter.modules.discovery.hosts import RunningAsPodEvent
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Event, Vulnerability
 from kube_hunter.core.types import Hunter, ARPPoisoningTechnique, KubernetesCluster
 
@@ -4,7 +4,7 @@ import base64
 import re
 
 from kube_hunter.core.types import Hunter, KubernetesCluster, GeneralSensitiveInformationTechnique
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Vulnerability, Event, Service
 
 logger = logging.getLogger(__name__)
@@ -2,7 +2,7 @@ import logging
 from packaging import version
 
 from kube_hunter.conf import get_config
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 
 from kube_hunter.core.events.types import K8sVersionDisclosure, Vulnerability, Event
 from kube_hunter.core.types import (
@@ -4,7 +4,7 @@ import requests
 
 from kube_hunter.conf import get_config
 from kube_hunter.core.types import Hunter, AccessK8sDashboardTechnique, KubernetesCluster
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Vulnerability, Event
 from kube_hunter.modules.discovery.dashboard import KubeDashboardEvent
 
@@ -1,90 +0,0 @@
-import re
-import logging
-
-from scapy.all import IP, ICMP, UDP, DNS, DNSQR, ARP, Ether, sr1, srp1, srp
-
-from kube_hunter.conf import get_config
-from kube_hunter.core.events import handler
-from kube_hunter.core.events.types import Event, Vulnerability
-from kube_hunter.core.types import ActiveHunter, KubernetesCluster, CoreDNSPoisoningTechnique
-from kube_hunter.modules.hunting.arp import PossibleArpSpoofing
-
-logger = logging.getLogger(__name__)
-
-
-class PossibleDnsSpoofing(Vulnerability, Event):
-    """A malicious pod running on the cluster could potentially run a DNS Spoof attack
-    and perform a MITM attack on applications running in the cluster."""
-
-    def __init__(self, kubedns_pod_ip):
-        Vulnerability.__init__(
-            self,
-            KubernetesCluster,
-            "Possible DNS Spoof",
-            category=CoreDNSPoisoningTechnique,
-            vid="KHV030",
-        )
-        self.kubedns_pod_ip = kubedns_pod_ip
-        self.evidence = f"kube-dns at: {self.kubedns_pod_ip}"
-
-
-# Only triggered with RunningAsPod base event
-@handler.subscribe(PossibleArpSpoofing)
-class DnsSpoofHunter(ActiveHunter):
-    """DNS Spoof Hunter
-    Checks for the possibility for a malicious pod to compromise DNS requests of the cluster
-    (results are based on the running node)
-    """
-
-    def __init__(self, event):
-        self.event = event
-
-    def get_cbr0_ip_mac(self):
-        config = get_config()
-        res = srp1(Ether() / IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)
-        return res[IP].src, res.src
-
-    def extract_nameserver_ip(self):
-        with open("/etc/resolv.conf") as f:
-            # finds first nameserver in /etc/resolv.conf
-            match = re.search(r"nameserver (\d+.\d+.\d+.\d+)", f.read())
-            if match:
-                return match.group(1)
-
-    def get_kube_dns_ip_mac(self):
-        config = get_config()
-        kubedns_svc_ip = self.extract_nameserver_ip()
-
-        # getting actual pod ip of kube-dns service, by comparing the src mac of a dns response and arp scanning.
-        dns_info_res = srp1(
-            Ether() / IP(dst=kubedns_svc_ip) / UDP(dport=53) / DNS(rd=1, qd=DNSQR()),
-            verbose=0,
-            timeout=config.network_timeout,
-        )
-        kubedns_pod_mac = dns_info_res.src
-        self_ip = dns_info_res[IP].dst
-
-        arp_responses, _ = srp(
-            Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"),
-            timeout=config.network_timeout,
-            verbose=0,
-        )
-        for _, response in arp_responses:
-            if response[Ether].src == kubedns_pod_mac:
-                return response[ARP].psrc, response.src
-
-    def execute(self):
-        config = get_config()
-        logger.debug("Attempting to get kube-dns pod ip")
-        self_ip = sr1(IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
-        cbr0_ip, cbr0_mac = self.get_cbr0_ip_mac()
-
-        kubedns = self.get_kube_dns_ip_mac()
-        if kubedns:
-            kubedns_ip, kubedns_mac = kubedns
-            logger.debug(f"ip={self_ip} kubednsip={kubedns_ip} cbr0ip={cbr0_ip}")
-            if kubedns_mac != cbr0_mac:
-                # if self pod in the same subnet as kube-dns pod
-                self.publish_event(PossibleDnsSpoofing(kubedns_pod_ip=kubedns_ip))
-        else:
-            logger.debug("Could not get kubedns identity")
@@ -2,7 +2,7 @@ import logging
 import requests
 
 from kube_hunter.conf import get_config
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Vulnerability, Event, OpenPortEvent
 from kube_hunter.core.types import (
     ActiveHunter,
@@ -9,7 +9,7 @@ import urllib3
 import uuid
 
 from kube_hunter.conf import get_config
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Vulnerability, Event, K8sVersionDisclosure
 from kube_hunter.core.types import (
     Hunter,
@@ -3,7 +3,7 @@ import re
 import uuid
 
 from kube_hunter.conf import get_config
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Event, Vulnerability
 from kube_hunter.core.types import ActiveHunter, Hunter, KubernetesCluster, HostPathMountPrivilegeEscalationTechnique
 from kube_hunter.modules.hunting.kubelet import (
@@ -4,7 +4,7 @@ import requests
 from enum import Enum
 
 from kube_hunter.conf import get_config
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Event, Vulnerability, K8sVersionDisclosure
 from kube_hunter.core.types import (
     ActiveHunter,
@@ -1,7 +1,7 @@
 import logging
 import os
 
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import Vulnerability, Event
 from kube_hunter.core.types import Hunter, KubernetesCluster, AccessContainerServiceAccountTechnique
 from kube_hunter.modules.discovery.hosts import RunningAsPodEvent
@@ -2,7 +2,7 @@ import logging
 import threading
 
 from kube_hunter.conf import get_config
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import (
     Event,
     Service,
@@ -31,8 +31,7 @@ zip_safe = False
 packages = find:
 install_requires =
     netaddr
-    netifaces
     scapy>=2.4.3
+    pyroute2
    requests
    PrettyTable
    urllib3>=1.24.3
@@ -1,11 +1,13 @@
 # flake8: noqa: E402
 import requests_mock
 import json
 
 from kube_hunter.conf import Config, set_config
-from kube_hunter.core.events.types import NewHostEvent
 
 set_config(Config())
 
+from kube_hunter.core.events.types import NewHostEvent
 
 
 def test_presetcloud():
     """Testing if it doesn't try to run get_cloud if the cloud type is already set.
@@ -4,7 +4,7 @@ from kube_hunter.conf import Config, set_config, get_config
 
 set_config(Config(active=True))
 
-from kube_hunter.core.events.handler import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.modules.discovery.apiserver import ApiServiceDiscovery
 from kube_hunter.modules.discovery.dashboard import KubeDashboard as KubeDashboardDiscovery
 from kube_hunter.modules.discovery.etcd import EtcdRemoteAccess as EtcdRemoteAccessDiscovery
@@ -20,14 +20,12 @@ from kube_hunter.modules.hunting.apiserver import (
     AccessApiServerActive,
     AccessApiServerWithToken,
 )
-from kube_hunter.modules.hunting.arp import ArpSpoofHunter
 from kube_hunter.modules.hunting.capabilities import PodCapabilitiesHunter
 from kube_hunter.modules.hunting.certificates import CertificateDiscovery
 
-from kube_hunter.modules.hunting.cves import K8sClusterCveHunter
+from kube_hunter.modules.hunting.cves import KubectlCVEHunter
 from kube_hunter.modules.hunting.dashboard import KubeDashboard
-from kube_hunter.modules.hunting.dns import DnsSpoofHunter
 from kube_hunter.modules.hunting.etcd import EtcdRemoteAccess, EtcdRemoteAccessActive
 from kube_hunter.modules.hunting.kubelet import (
     ProveAnonymousAuth,
@@ -76,8 +74,6 @@ PASSIVE_HUNTERS = {
 ACTIVE_HUNTERS = {
     ProveAzureSpnExposure,
     AccessApiServerActive,
-    ArpSpoofHunter,
-    DnsSpoofHunter,
     EtcdRemoteAccessActive,
     ProveRunHandler,
     ProveContainerLogsHandler,
@@ -3,7 +3,7 @@ import time
 from kube_hunter.conf import Config, set_config
 from kube_hunter.core.types import Hunter
 from kube_hunter.core.events.types import Event, Service
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 
 counter = 0
 first_run = True
@@ -8,7 +8,7 @@ set_config(Config())
 
 from kube_hunter.modules.discovery.apiserver import ApiServer, ApiServiceDiscovery
 from kube_hunter.core.events.types import Event
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 
 counter = 0
 
@@ -6,7 +6,7 @@ from kube_hunter.modules.discovery.hosts import (
     HostDiscoveryHelpers,
 )
 from kube_hunter.core.types import Hunter
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 import json
 import requests_mock
 import pytest
 
@@ -23,7 +23,7 @@ from kube_hunter.modules.hunting.apiserver import ApiServerPassiveHunterFinished
 from kube_hunter.modules.hunting.apiserver import CreateANamespace, DeleteANamespace
 from kube_hunter.modules.discovery.apiserver import ApiServer
 from kube_hunter.core.types import ExposedSensitiveInterfacesTechnique, AccessK8sApiServerTechnique
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 
 counter = 0
 
@@ -5,7 +5,7 @@ set_config(Config())
 
 from kube_hunter.core.events.types import Event
 from kube_hunter.modules.hunting.certificates import CertificateDiscovery, CertificateEmail
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 
 
 def test_CertificateDiscovery():
@@ -5,7 +5,7 @@ from kube_hunter.conf import Config, set_config
 
 set_config(Config())
 
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.core.events.types import K8sVersionDisclosure
 from kube_hunter.modules.hunting.cves import (
     K8sClusterCveHunter,
@@ -3,7 +3,7 @@ import requests_mock
 import urllib.parse
 import uuid
 
-from kube_hunter.core.events import handler
+from kube_hunter.core.events.event_handler import handler
 from kube_hunter.modules.hunting.kubelet import (
     AnonymousAuthEnabled,
     ExposedExistingPrivilegedContainersViaSecureKubeletPort,