Compare commits

...

50 Commits

Author SHA1 Message Date
danielsagi
812cbe6dc6 removed aquadev 2020-12-06 15:52:01 +02:00
danielsagi
85cec7a128 Added codeql analysis workflow 2020-12-06 15:47:05 +02:00
danielsagi
f95df8172b added a release workflow for a linux binary (#421) 2020-12-04 13:45:03 +02:00
danielsagi
a3ad928f29 Bug Fix: Pyinstaller prettytable error (#419)
* added a specific hooks folder for problematic imports when compiling with PyInstaller; added a fix for the prettytable import

* fixed typo

* lint fix
2020-12-04 13:43:37 +02:00
danielsagi
22d6676e08 Removed Travis and Greetings workflows (#415)
* removed greetings workflow, and travis

* Update the build status badge to point to Github Actions
2020-12-04 13:42:38 +02:00
danielsagi
b9e0ef30e8 Removed Old Dependency For CAP_NET_RAW (#416)
* removed the old dependency on cap_net_raw by no longer using tracerouting when running as a pod

* removed unused imports
2020-12-03 17:11:18 +02:00
RDxR10
693d668d0a Update apiserver.py (#397)
* Update apiserver.py

Added description of KHV007

* fixed linting issues

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-28 19:41:06 +02:00
RDxR10
2e4684658f Update certificates.py (#398)
* Update certificates.py

Regex expression update for email

* fixed linting issues

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-28 18:55:14 +02:00
Hugo van Kemenade
f5e8b14818 Migrate tests to GitHub Actions (#395) (#399)
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-28 17:34:30 +02:00
danielsagi
05094a9415 Fix lint comments (#414)
* removed unused get query to port forward

* moved existing code to comments

Co-authored-by: Liz Rice <liz@lizrice.com>
2020-11-28 17:16:57 +02:00
danielsagi
8acedf2e7d updated screenshot of aqua's site (#412) 2020-11-27 16:04:38 +02:00
danielsagi
14ca1b8bce Fixed false positive on test_run_handler (#411)
* fixed wrong check on test run handler

* changed method of testing to be using 404 with real post method
2020-11-19 17:41:33 +02:00
danielsagi
5a578fd8ab More intuitive message when ProveSystemLogs fails (#409)
* fixed the wrong message shown when proving audit logs

* fixed linting
2020-11-18 11:35:13 +02:00
danielsagi
bf7023d01c Added docs for exposed pods (#407)
* added doc _kb for exposed pods

* correlated the new khv to the Exposed pods vulnerability

* fixed linting
2020-11-17 15:22:06 +02:00
danielsagi
d7168af7d5 Change KB links to avd (#406)
* changed link to point to avd

* moved kb_links to the base report module and updated them to point to AVD; JSON output now returns the full AVD URL for each vulnerability

* switched to adding a new avd_reference instead of changing the VID

* added newline to fix linting
2020-11-17 14:03:18 +02:00
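The AVD linking described in this commit can be sketched as a small helper; the exact URL scheme and function name are assumptions for illustration, not kube-hunter's actual code.

```python
# Hypothetical sketch: map a kube-hunter VID (e.g. "KHV051") to its Aqua
# Vulnerability Database (AVD) page. The URL format is an assumption.
def avd_reference(vid: str) -> str:
    """Build the full AVD URL for a vulnerability ID."""
    return f"https://avd.aquasec.com/kube-hunter/{vid.lower()}/"

print(avd_reference("KHV051"))  # → https://avd.aquasec.com/kube-hunter/khv051/
```

With a helper like this, the JSON reporter only needs the VID to emit a full reference URL.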
Hugo van Kemenade
35873baa12 Upgrade syntax for supported Python versions (#394) (#401)
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-16 20:40:28 +02:00
Sinith
a476d9383f Update KHV005.md (#403) 2020-11-08 18:42:41 +02:00
Hugo van Kemenade
6a3c7a885a Support Python 3.9 (#393) (#400)
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-07 15:59:44 +02:00
A N U S H
b6be309651 Added Greeting Github Actions (#382)
* Added Greeting Github Actions

* feat: Updated the Message

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-07 15:16:14 +02:00
Monish Singh
0d5b3d57d3 added the link of contribution page (#383)
* added the link of contribution page

users can directly go to the contribution page from here after reading the readme file

* added it to the table of contents

* Done

sorry for my previous mistake, now it's fixed.

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-07 15:07:39 +02:00
Milind Chawre
69057acf9b Adding --log-file option (#329) (#387) 2020-11-07 15:01:30 +02:00
Itay Shakury
e63200139e fix azure spn hunter (#372)
* fix azure spn hunter

* fix issues

* restore tests

* code style

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-10-19 13:53:50 +03:00
Itay Shakury
ad4cfe1c11 update gitignore (#371)
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-10-19 13:03:46 +03:00
Zoltán Reegn
24b5a709ad Increase evidence field length in plain report (#385)
Given that the Description tends to go over 100 characters as well, it
seems appropriate to loosen the restriction of the evidence field.

Fixes #111

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-10-19 12:49:43 +03:00
Jeff Rescignano
9cadc0ee41 Optimize images (#389) 2020-10-19 12:27:22 +03:00
danielsagi
3950a1c2f2 Fixed bug in etcd hunting (#364)
* fixed etcd version hunting typo

* changed self.protocol in other places in etcd hunting; this was a typo, protocol is a property of events, not hunters

Co-authored-by: Daniel Sagi <daniel@example.com>
Co-authored-by: Liz Rice <liz@lizrice.com>
2020-09-04 13:28:03 +01:00
Sanka Sathyaji
7530e6fee3 Update job.yml for Kubernetes cluster jobs (#367)
The existing job.yml has the wrong command `["python", "kube-hunter.py"]`; it should be `["kube-hunter"]`.

Co-authored-by: Liz Rice <liz@lizrice.com>
2020-09-04 12:15:24 +01:00
danielsagi
72ae8c0719 reformatted files to pass new linting (#369)
Co-authored-by: Daniel Sagi <daniel@example.com>
2020-09-04 12:01:16 +01:00
danielsagi
b341124c20 Fixed bug in certificate hunting (#365)
* stripping was incorrect due to multiple newlines in the certificate returned from ssl.get_server_certificate

* changed ' to " for linting

Co-authored-by: Daniel Sagi <daniel@example.com>
2020-09-03 15:06:51 +01:00
danielsagi
3e06647b4c Added multistage build for Dockerfile (#362)
* removed unnecessary files from final image, using multistaged build

* added ebtables and tcpdump packages to multistage

Co-authored-by: Daniel Sagi <daniel@example.com>
2020-08-21 14:42:02 +03:00
danielsagi
cd1f79a658 fixed typo (#363) 2020-08-14 19:09:06 +03:00
Liz Rice
2428e2e869 docs: fix broken CONTRIBUTING link (#361) 2020-07-03 11:59:53 +03:00
Abdullah Garcia
daf53cb484 Two new kubelet active hunters. (#344)
* Introducing active hunters:

- FootholdViaSecureKubeletPort
- MaliciousIntentViaSecureKubeletPort

* Format

Updating code according to expected linting format.

* Format

Updating code according to expected linting format.

* Format

Updating code according to expected linting format.

* Format

Updating code according to expected linting format.

* Testing

Update code according to expected testing standards and implementation.

* Update documentation.

- Added some more mitigations and updated the references list.

* f-string is missing placeholders.

- flake8 is marking this line as an issue as it lacks a placeholder when indicating the use of f-string; corrected.

* Update kubelet.py

- Add network_timeout parameter into requests.post and requests.get execution.

* Update kubelet.py

- Modified name of variable.

* Update kubelet.py and test_kubelet.py

- Remove certificate authority.

* Update kubelet.py and test_kubelet.py.

- Introducing default number of rm attempts.

* Update kubelet.py and test_kubelet.py.

- Introduced number of rmdir and umount attempts.

* Update kubelet.py

- Modified filename to match kube-hunter description.

* Update several files.

- Instated the use of self.event.session for GET and POST requests.
- Testing modified accordingly to complete coverage of changes and introduced methods.
- Requirements changed such that the required version that supports sessions mocking is obtained.

* Update kubelet.py

- Introduced warnings for the following commands in case of failure: rm, rmdir, and umount.

* Update kubelet.py

- Remove "self.__class__.__name___" from self.event.evidence.

* Update kubelet.py

- Remove unnecessary message section.

* Update files.

- Address class change.
- Fix testing failure after removing message section.

* Update kubelet.py

- Provide POD and CONTAINER as part of the warning messages in the log.

Co-authored-by: Abdullah Garcia <abdullah.garcia@jpmorgan.com>
Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-06-29 09:20:49 +01:00
danielsagi
d6ca666447 Minor hunting bug fixes (#360)
* fixed f string

* fixed wrong iteration on list when getting random pod

* added '/' suffix to path on kubelet debug handlers tests

* also fixed a minor bug in etcd: protocol was referenced on the hunter and not on the event

* ran black format

* moved protocol to be https

* ran black again

* fixed PR comments

* ran black again, formatting
2020-06-26 15:04:29 +01:00
danielsagi
3ba926454a Added External Plugins Support (#357)
* added a plugins submodule and created two hookspecs: one for adding arguments, one for running code after argument parsing

* implemented plugins application on main file, changed mechanism for argument parsing

* changed previous parsing function to not create the ArgumentParser, and implemented it as a hook for the parsing mechanism

* added pluggy to required deps

* removed unnecessary add_config import

* fixed formatting using black

* restored main link file from master

* moved import of parser to right before the register call, to avoid circular imports

* added tests for the plugins hooks

* removed blank line space

* black reformat
2020-06-19 15:20:15 +01:00
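The two hooks described in this commit — extending the argument parser before parsing, then running plugin code after parsing — follow a common plugin-manager pattern. kube-hunter implements it with pluggy; the stdlib-only sketch below (all names illustrative) only mirrors the idea:

```python
import argparse

# Minimal stand-in for a pluggy-style plugin manager with two hooks:
# parser_add_arguments (before parsing) and load_plugin (after parsing).
class PluginManager:
    def __init__(self):
        self._plugins = []

    def register(self, plugin):
        self._plugins.append(plugin)

    def parser_add_arguments(self, parser):
        # Hook 1: every plugin may add its own CLI arguments.
        for p in self._plugins:
            if hasattr(p, "parser_add_arguments"):
                p.parser_add_arguments(parser)

    def load_plugin(self, args):
        # Hook 2: every plugin may run code once arguments are parsed.
        for p in self._plugins:
            if hasattr(p, "load_plugin"):
                p.load_plugin(args)

class ExamplePlugin:
    def parser_add_arguments(self, parser):
        parser.add_argument("--custom-flag", action="store_true")

    def load_plugin(self, args):
        if args.custom_flag:
            print("custom flag enabled")

pm = PluginManager()
pm.register(ExamplePlugin())
parser = argparse.ArgumentParser()
pm.parser_add_arguments(parser)            # plugins extend the parser first
args = parser.parse_args(["--custom-flag"])
pm.load_plugin(args)                       # then plugin code runs after parsing
```

Registering the hooks before the parser is built is what lets external plugins add flags without touching the main entry point.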
Konstantin Weddige
78e16729e0 Fix typo (#354)
This fixes #353
2020-06-08 13:47:40 +01:00
danielsagi
78c0133d9d removed an unnecessary f-string on an info logging (#355) 2020-06-08 15:04:29 +03:00
Liz Rice
4484ad734f Fix CertificateDiscovery hunter for Python3 (#350)
* update base64 decode for python3

* chore: remove lint error about imports
2020-05-11 10:42:31 +01:00
Yehuda Chikvashvili
a0127659b7 Decouple config and argument parsing (#342)
* Make config initialized explicitly
* Add mypy linting
* Make tests run individually
Resolve #341
2020-04-26 19:37:16 +03:00
Yehuda Chikvashvili
f034c8c7a1 Removed unused imports (#338)
* Update snippets in README.md
The README file had deprecated code snippets
* Remove unnecessary imports
* Complete tests for hunters registration

Resolves #334
2020-04-23 02:31:07 +03:00
mormamn
4cb2c8bad9 Dashboard hunter not working (#337)
* Fix dashboard hunter regression
Fix #336.
Add tests for dashboard hunter

Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-04-13 04:06:13 +03:00
Yehuda Chikvashvili
14d73e201e Remove dynamic imports (#335)
* Remove plugins
Current usage of plugins is not pluggable and includes logging
stuff.
Move this to conf/logging.
* Removed dynamic imports
* Add tests for hunters registration
2020-04-13 02:56:13 +03:00
John Schaeffer
6d63f55d18 Updated logging init logic to not log on setting --log=none (#323)
* Fix "none" logging

Test different logging levels, both existing and non-existing

Co-authored-by: yoavrotems <yoavrotems97@gmail.com>
Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-04-12 16:56:53 +03:00
mormamn
124a51d84f Support ignoring IPs (#332)
* Support ignoring IPs

Closes #296
2020-04-07 21:47:50 +03:00
Yehuda Chikvashvili
0f1739262f Linting Standards (#330)
Fix linting issues with flake8 and black.
Add pre-commit configuration and update documentation for it.
Apply linting checks in Travis CI.
2020-04-05 05:22:24 +03:00
mormamn
9ddf3216ab Optimize Cloud Discovery (#325)
* Optimized Cloud Discovery
Removed redundant cloud-type lookups.
Made cloud discovery a lazy action.
Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-03-29 22:59:38 +03:00
Yehuda Chikvashvili
e7585f4ed3 Logging revamped (#318)
* Refine logging
Use logger objects instead of global root logger
Fixes #308 
Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-03-04 21:03:36 +02:00
Yehuda Chikvashvili
6c34a62e39 Network operations timeout (#317)
* Add network operations timeout

This commit adds a --network-timeout flag whose value is used as the timeout
for network operations, so users can set it to a desired value.
2020-03-04 16:59:18 +02:00
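Plumbing a CLI flag like this into a shared config value can be sketched as follows; the `Config` class and default are illustrative assumptions, not kube-hunter's actual implementation.

```python
import argparse
from dataclasses import dataclass

# Illustrative sketch: a --network-timeout flag feeding a shared config value
# that network calls would pass as their timeout (names are assumptions).
@dataclass
class Config:
    network_timeout: float = 5.0  # seconds, hypothetical default

parser = argparse.ArgumentParser()
parser.add_argument("--network-timeout", type=float, default=5.0,
                    help="timeout for network operations, in seconds")
args = parser.parse_args(["--network-timeout", "10"])
config = Config(network_timeout=args.network_timeout)
print(config.network_timeout)  # → 10.0
```

Centralizing the value in the config object means every hunter reads one setting instead of hard-coding its own timeout.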
blrchen
69a31f87e9 Fix #127 Insecure Azure Cloud IP detection (#315)
* Fix Insecure Azure Cloud IP detection
Remove verify=False

Co-authored-by: blairch <15134348+blairch@users.noreply.github.com>
Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-03-03 01:10:16 +02:00
dependabot[bot]
f33c04bd5b Bump nokogiri from 1.10.4 to 1.10.8 in /docs (#311)
Bumps [nokogiri](https://github.com/sparklemotion/nokogiri) from 1.10.4 to 1.10.8.
- [Release notes](https://github.com/sparklemotion/nokogiri/releases)
- [Changelog](https://github.com/sparklemotion/nokogiri/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sparklemotion/nokogiri/compare/v1.10.4...v1.10.8)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: Liz Rice <liz@lizrice.com>
2020-03-02 15:23:39 +00:00
89 changed files with 4122 additions and 1277 deletions

6
.flake8 Normal file

@@ -0,0 +1,6 @@
[flake8]
ignore = E203, E266, E501, W503, B903, T499
max-line-length = 120
max-complexity = 18
select = B,C,E,F,W,B9,T4
mypy_config=mypy.ini

67
.github/workflows/codeql-analysis.yml vendored Normal file

@@ -0,0 +1,67 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ master ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ master ]
schedule:
- cron: '16 3 * * 1'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
language: [ 'python' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
steps:
- name: Checkout repository
uses: actions/checkout@v2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v1
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1

12
.github/workflows/lint.yml vendored Normal file

@@ -0,0 +1,12 @@
name: Lint
on: [push, pull_request]
jobs:
build:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- uses: pre-commit/action@v2.0.0

52
.github/workflows/release.yml vendored Normal file

@@ -0,0 +1,52 @@
on:
push:
# Sequence of patterns matched against refs/tags
tags:
- 'v*' # Push events to matching v*, i.e. v1.0, v20.15.10
name: Upload Release Asset
jobs:
build:
name: Upload Release Asset
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.9'
- name: Install dependencies
run: |
python -m pip install -U pip
python -m pip install -r requirements-dev.txt
- name: Build project
shell: bash
run: |
make pyinstaller
- name: Create Release
id: create_release
uses: actions/create-release@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Release ${{ github.ref }}
draft: false
prerelease: false
- name: Upload Release Asset
id: upload-release-asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./dist/kube-hunter
asset_name: kube-hunter-linux-x86_64-${{ github.ref }}
asset_content_type: application/octet-stream

54
.github/workflows/test.yml vendored Normal file

@@ -0,0 +1,54 @@
name: Test
on: [push, pull_request]
env:
FORCE_COLOR: 1
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: ["3.6", "3.7", "3.8", "3.9"]
os: [ubuntu-20.04, ubuntu-18.04, ubuntu-16.04]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Get pip cache dir
id: pip-cache
run: |
echo "::set-output name=dir::$(pip cache dir)"
- name: Cache
uses: actions/cache@v2
with:
path: ${{ steps.pip-cache.outputs.dir }}
key:
${{ matrix.os }}-${{ matrix.python-version }}-${{ hashFiles('requirements-dev.txt') }}
restore-keys: |
${{ matrix.os }}-${{ matrix.python-version }}-
- name: Install dependencies
run: |
python -m pip install -U pip
python -m pip install -U wheel
python -m pip install -r requirements.txt
python -m pip install -r requirements-dev.txt
- name: Test
shell: bash
run: |
make test
- name: Upload coverage
uses: codecov/codecov-action@v1
with:
name: ${{ matrix.os }} Python ${{ matrix.python-version }}

3
.gitignore vendored

@@ -24,7 +24,10 @@ var/
*.egg
*.spec
.eggs
pip-wheel-metadata
# Directory Cache Files
.DS_Store
thumbs.db
__pycache__
.mypy_cache

10
.pre-commit-config.yaml Normal file

@@ -0,0 +1,10 @@
repos:
- repo: https://github.com/psf/black
rev: stable
hooks:
- id: black
- repo: https://gitlab.com/pycqa/flake8
rev: 3.7.9
hooks:
- id: flake8
additional_dependencies: [flake8-bugbear]


@@ -1,24 +0,0 @@
group: travis_latest
language: python
cache: pip
python:
#- "3.4"
#- "3.5"
- "3.6"
- "3.7"
install:
- pip install -r requirements.txt
- pip install -r requirements-dev.txt
before_script:
# stop the build if there are Python syntax errors or undefined names
- flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
- flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- pip install pytest coverage pytest-cov
script:
- python runtest.py
after_success:
- bash <(curl -s https://codecov.io/bash)
notifications:
on_success: change
on_failure: change # `always` will be the setting once code changes slow down


@@ -1,4 +1,17 @@
Thank you for taking interest in contributing to kube-hunter!
## Contribution Guide
## Welcome Aboard
Thank you for taking interest in contributing to kube-hunter!
This guide will walk you through the development process of kube-hunter.
## Setting Up
kube-hunter is written in Python 3 and supports versions 3.6 and above.
You'll probably want to create a virtual environment for your local project.
Once you got your project and IDE set up, you can `make dev-deps` and start contributing!
You may also install a pre-commit hook to take care of linting - `pre-commit install`.
## Issues
- Feel free to open issues for any reason as long as you make it clear if this issue is about a bug/feature/hunter/question/comment.


@@ -16,4 +16,14 @@ RUN make deps
COPY . .
RUN make install
FROM python:3.8-alpine
RUN apk add --no-cache \
tcpdump \
ebtables && \
apk upgrade --no-cache
COPY --from=builder /usr/local/lib/python3.8/site-packages /usr/local/lib/python3.8/site-packages
COPY --from=builder /usr/local/bin/kube-hunter /usr/local/bin/kube-hunter
ENTRYPOINT ["kube-hunter"]


@@ -21,7 +21,13 @@ dev-deps:
.PHONY: lint
lint:
flake8 $(SRC)
black .
flake8
.PHONY: lint-check
lint-check:
flake8
black --check --diff .
.PHONY: test
test:
@@ -57,5 +63,5 @@ publish:
.PHONY: clean
clean:
rm -rf build/ dist/ *.egg-info/ .eggs/ .pytest_cache/ .coverage *.spec
rm -rf build/ dist/ *.egg-info/ .eggs/ .pytest_cache/ .mypy_cache .coverage *.spec
find . -type d -name __pycache__ -exec rm -rf '{}' +


@@ -1,7 +1,8 @@
![kube-hunter](https://github.com/aquasecurity/kube-hunter/blob/master/kube-hunter.png)
[![Build Status](https://travis-ci.org/aquasecurity/kube-hunter.svg?branch=master)](https://travis-ci.org/aquasecurity/kube-hunter)
[![Build Status](https://github.com/aquasecurity/kube-hunter/workflows/Test/badge.svg)](https://github.com/aquasecurity/kube-hunter/actions)
[![codecov](https://codecov.io/gh/aquasecurity/kube-hunter/branch/master/graph/badge.svg)](https://codecov.io/gh/aquasecurity/kube-hunter)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![License](https://img.shields.io/github/license/aquasecurity/kube-hunter)](https://github.com/aquasecurity/kube-hunter/blob/master/LICENSE)
[![Docker image](https://images.microbadger.com/badges/image/aquasec/kube-hunter.svg)](https://microbadger.com/images/aquasec/kube-hunter "Get your own image badge on microbadger.com")
@@ -13,7 +14,7 @@ kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was d
**Explore vulnerabilities**: The kube-hunter knowledge base includes articles about discoverable vulnerabilities and issues. When kube-hunter reports an issue, it will show its VID (Vulnerability ID) so you can look it up in the KB at https://aquasecurity.github.io/kube-hunter/
**Contribute**: We welcome contributions, especially new hunter modules that perform additional tests. If you would like to develop your modules please read [Guidelines For Developing Your First kube-hunter Module](kube_hunter/README.md).
**Contribute**: We welcome contributions, especially new hunter modules that perform additional tests. If you would like to develop your modules please read [Guidelines For Developing Your First kube-hunter Module](https://github.com/aquasecurity/kube-hunter/blob/master/CONTRIBUTING.md).
[![kube-hunter demo video](https://github.com/aquasecurity/kube-hunter/blob/master/kube-hunter-screenshot.png)](https://youtu.be/s2-6rTkH8a8?t=57s)
@@ -33,6 +34,7 @@ Table of Contents
* [Prerequisites](#prerequisites)
* [Container](#container)
* [Pod](#pod)
* [Contribution](#contribution)
## Hunting
@@ -173,5 +175,8 @@ The example `job.yaml` file defines a Job that will run kube-hunter in a pod, us
* Find the pod name with `kubectl describe job kube-hunter`
* View the test results with `kubectl logs <pod name>`
## Contribution
To read the contribution guidelines, <a href="https://github.com/aquasecurity/kube-hunter/blob/master/CONTRIBUTING.md"> Click here </a>
## License
This repository is available under the [Apache License 2.0](https://github.com/aquasecurity/kube-hunter/blob/master/LICENSE).


@@ -206,7 +206,7 @@ GEM
jekyll-seo-tag (~> 2.1)
minitest (5.12.2)
multipart-post (2.1.1)
nokogiri (1.10.4)
nokogiri (1.10.8)
mini_portile2 (~> 2.4.0)
octokit (4.14.0)
sawyer (~> 0.8.0, >= 0.5.3)


@@ -12,7 +12,7 @@ Kubernetes API was accessed with Pod Service Account or without Authentication (
## Remediation
Secure acess to your Kubernetes API.
Secure access to your Kubernetes API.
It is recommended to explicitly specify a Service Account for all of your workloads (`serviceAccountName` in `Pod.Spec`), and manage their permissions according to the least privilege principal.
@@ -21,4 +21,4 @@ Consider opting out automatic mounting of SA token using `automountServiceAccoun
## References
- [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)
- [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)

40
docs/_kb/KHV051.md Normal file

@@ -0,0 +1,40 @@
---
vid: KHV051
title: Exposed Existing Privileged Containers Via Secure Kubelet Port
categories: [Access Risk]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The kubelet is configured to allow anonymous (unauthenticated) requests to its HTTPs API. This may expose certain information and capabilities to an attacker with access to the kubelet API.
A privileged container is given access to all devices on the host and can work at the kernel level. It is declared using the `Pod.spec.containers[].securityContext.privileged` attribute. This may be useful for infrastructure containers that perform setup work on the host, but is a dangerous attack vector.
Furthermore, if the kubelet **and** the API server authentication mechanisms are (mis)configured such that anonymous requests can execute commands via the API within the containers (specifically privileged ones), a malicious actor can leverage such capabilities to do way more damage in the cluster than expected: e.g. start/modify process on host.
## Remediation
Ensure kubelet is protected using `--anonymous-auth=false` kubelet flag. Allow only legitimate users using `--client-ca-file` or `--authentication-token-webhook` kubelet flags. This is usually done by the installer or cloud provider.
Minimize the use of privileged containers.
Use Pod Security Policies to enforce using `privileged: false` policy.
Review the RBAC permissions to Kubernetes API server for the anonymous and default service account, including bindings.
Ensure node(s) runs active filesystem monitoring.
Set `--insecure-port=0` and remove `--insecure-bind-address=0.0.0.0` in the Kubernetes API server config.
Remove `AlwaysAllow` from `--authorization-mode` in the Kubernetes API server config. Alternatively, set `--anonymous-auth=false` in the Kubernetes API server config; this will depend on the API server version running.
## References
- [Kubelet authentication/authorization](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)
- [Privileged mode for pod containers](https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers)
- [Pod Security Policies - Privileged](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged)
- [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
- [KHV005 - Access to Kubernetes API]({{ site.baseurl }}{% link _kb/KHV005.md %})
- [KHV036 - Anonymous Authentication]({{ site.baseurl }}{% link _kb/KHV036.md %})

23
docs/_kb/KHV052.md Normal file

@@ -0,0 +1,23 @@
---
vid: KHV052
title: Exposed Pods
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
An attacker could view sensitive information about pods that are bound to a Node using the exposed /pods endpoint
This can be done either by accessing the readonly port (default 10255), or from the secure kubelet port (10250)
## Remediation
Ensure kubelet is protected using `--anonymous-auth=false` kubelet flag. Allow only legitimate users using `--client-ca-file` or `--authentication-token-webhook` kubelet flags. This is usually done by the installer or cloud provider.
Disable the readonly port by using `--read-only-port=0` kubelet flag.
## References
- [Kubelet configuration](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/)
- [Kubelet authentication/authorization](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)
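The two endpoints named in this KB entry (the read-only port 10255 and the secure port 10250, both serving /pods) can be expressed as a tiny helper; the function itself is a hypothetical sketch for illustration, the ports and path come from the doc above.

```python
# Illustrative helper: build the /pods URLs a scanner would probe on a node.
# Ports 10255 (read-only, plain HTTP) and 10250 (secure, HTTPS) are the
# kubelet defaults referenced in KHV052.
def kubelet_pods_urls(host: str) -> dict:
    return {
        "readonly": f"http://{host}:10255/pods",
        "secure": f"https://{host}:10250/pods",
    }

print(kubelet_pods_urls("10.0.0.1")["readonly"])  # → http://10.0.0.1:10255/pods
```

Setting `--read-only-port=0`, as the remediation advises, closes the first of these two URLs entirely.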


@@ -8,7 +8,7 @@ spec:
containers:
- name: kube-hunter
image: aquasec/kube-hunter
command: ["python", "kube-hunter.py"]
command: ["kube-hunter"]
args: ["--pod"]
restartPolicy: Never
backoffLimit: 4

Binary file not shown (before: 144 KiB, after: 230 KiB)

Binary file not shown (before: 27 KiB, after: 19 KiB)


@@ -5,8 +5,6 @@ First, let's go through kube-hunter's basic architecture.
### Directory Structure
~~~
kube-hunter/
plugins/
# your plugin
kube_hunter/
core/
modules/
@@ -77,10 +75,10 @@ in order to prevent circular dependency bug.
Following the above example, let's figure out the imports:
```python
from ...core.types import Hunter
from ...core.events import handler
from kube_hunter.core.types import Hunter
from kube_hunter.core.events import handler
from ...core.events.types import OpenPortEvent
from kube_hunter.core.events.types import OpenPortEvent
@handler.subscribe(OpenPortEvent, predicate=lambda event: event.port == 30000)
class KubeDashboardDiscovery(Hunter):
@@ -92,13 +90,13 @@ class KubeDashboardDiscovery(Hunter):
As you can see, all of the types here come from the `core` module.
### Core Imports
relative import: `...core.events`
Absolute import: `kube_hunter.core.events`
|Name|Description|
|---|---|
|handler|Core object for using events, every module should import this object|
relative import `...core.events.types`
Absolute import `kube_hunter.core.events.types`
|Name|Description|
|---|---|
@@ -106,7 +104,7 @@ relative import `...core.events.types`
|Vulnerability|Base class for defining a new vulnerability|
|OpenPortEvent|Published when a new port is discovered. open port is assigned to the `port ` attribute|
relative import: `...core.types`
Absolute import: `kube_hunter.core.types`
|Type|Description|
|---|---|


@@ -1,2 +0,0 @@
from . import core
from . import modules


@@ -1,40 +1,69 @@
#!/usr/bin/env python3
# flake8: noqa: E402
import logging
import threading
from kube_hunter.conf import config
from kube_hunter.conf import Config, set_config
from kube_hunter.conf.parser import parse_args
from kube_hunter.conf.logging import setup_logger
from kube_hunter.plugins import initialize_plugin_manager
pm = initialize_plugin_manager()
# Using a plugin hook for adding arguments before parsing
args = parse_args(add_args_hook=pm.hook.parser_add_arguments)
config = Config(
active=args.active,
cidr=args.cidr,
include_patched_versions=args.include_patched_versions,
interface=args.interface,
log_file=args.log_file,
mapping=args.mapping,
network_timeout=args.network_timeout,
pod=args.pod,
quick=args.quick,
remote=args.remote,
statistics=args.statistics,
)
setup_logger(args.log, args.log_file)
set_config(config)
# Running all other registered plugins before execution
pm.hook.load_plugin(args=args)
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import HuntFinished, HuntStarted
from kube_hunter.modules.discovery.hosts import RunningAsPodEvent, HostScanEvent
from kube_hunter.modules.report import get_reporter, get_dispatcher
config.reporter = get_reporter(config.report)
config.dispatcher = get_dispatcher(config.dispatch)
import kube_hunter
logger = logging.getLogger(__name__)
config.dispatcher = get_dispatcher(args.dispatch)
config.reporter = get_reporter(args.report)
def interactive_set_config():
"""Sets config manually, returns True for success"""
options = [("Remote scanning",
"scans one or more specific IPs or DNS names"),
("Interface scanning",
"scans subnets on all local network interfaces"),
("IP range scanning", "scans a given IP range")]
options = [
("Remote scanning", "scans one or more specific IPs or DNS names"),
("Interface scanning", "scans subnets on all local network interfaces"),
("IP range scanning", "scans a given IP range"),
]
print("Choose one of the options below:")
for i, (option, explanation) in enumerate(options):
print("{}. {} ({})".format(i+1, option.ljust(20), explanation))
print("{}. {} ({})".format(i + 1, option.ljust(20), explanation))
choice = input("Your choice: ")
if choice == '1':
config.remote = input("Remotes (separated by a ','): ").\
replace(' ', '').split(',')
elif choice == '2':
if choice == "1":
config.remote = input("Remotes (separated by a ','): ").replace(" ", "").split(",")
elif choice == "2":
config.interface = True
elif choice == '3':
config.cidr = input("CIDR (example - 192.168.1.0/24): ").\
replace(' ', '')
elif choice == "3":
config.cidr = (
input("CIDR separated by a ',' (example - 192.168.0.0/16,!192.168.0.8/32,!192.168.1.0/24): ")
.replace(" ", "")
.split(",")
)
else:
return False
return True
@@ -44,35 +73,30 @@ def list_hunters():
print("\nPassive Hunters:\n----------------")
for hunter, docs in handler.passive_hunters.items():
name, doc = hunter.parse_docs(docs)
print("* {}\n {}\n".format(name, doc))
print(f"* {name}\n {doc}\n")
if config.active:
print("\n\nActive Hunters:\n---------------")
for hunter, docs in handler.active_hunters.items():
name, doc = hunter.parse_docs(docs)
print("* {}\n {}\n".format( name, doc))
print(f"* {name}\n {doc}\n")
global hunt_started_lock
hunt_started_lock = threading.Lock()
hunt_started = False
def main():
global hunt_started
scan_options = [
config.pod,
config.cidr,
config.remote,
config.interface
]
scan_options = [config.pod, config.cidr, config.remote, config.interface]
try:
if config.list:
if args.list:
list_hunters()
return
if not any(scan_options):
if not interactive_set_config(): return
if not interactive_set_config():
return
with hunt_started_lock:
hunt_started = True
@@ -85,10 +109,10 @@ def main():
# Blocking to see discovery output
handler.join()
except KeyboardInterrupt:
logging.debug("Kube-Hunter stopped by user")
logger.debug("Kube-Hunter stopped by user")
# happens when running a container without interactive option
except EOFError:
logging.error("\033[0;31mPlease run again with -it\033[0m")
logger.error("\033[0;31mPlease run again with -it\033[0m")
finally:
hunt_started_lock.acquire()
if hunt_started:
@@ -96,10 +120,10 @@ def main():
handler.publish_event(HuntFinished())
handler.join()
handler.free()
logging.debug("Cleaned Queue")
logger.debug("Cleaned Queue")
else:
hunt_started_lock.release()
if __name__ == '__main__':
if __name__ == "__main__":
main()
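The `hunt_started` flag above is shared between `main()` and the `finally` cleanup, so it is guarded by a lock. A minimal standalone sketch of that pattern (`begin_hunt` and `cleanup` are illustrative names, not from the module):

```python
import threading

hunt_started_lock = threading.Lock()
hunt_started = False

def begin_hunt():
    global hunt_started
    with hunt_started_lock:
        hunt_started = True

def cleanup():
    # mirrors main()'s finally block: only signal HuntFinished if a hunt began
    with hunt_started_lock:
        started = hunt_started
    return "publish HuntFinished" if started else "nothing to clean up"

print(cleanup())  # nothing to clean up
begin_hunt()
print(cleanup())  # publish HuntFinished
```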

View File

@@ -1,18 +1,52 @@
import logging
from kube_hunter.conf.parser import parse_args
from dataclasses import dataclass
from typing import Any, Optional
config = parse_args()
loglevel = getattr(logging, config.log.upper(), None)
@dataclass
class Config:
"""Config is a configuration container.
It contains the following fields:
- active: Enable active hunters
- cidr: Network subnets to scan
- dispatcher: Dispatcher object
- include_patched_versions: Include patched versions in version comparison
- interface: Interface scanning mode
- list_hunters: Print a list of existing hunters
- log_level: Log level
- log_file: Log File path
- mapping: Report only found components
- network_timeout: Timeout for network operations
- pod: From pod scanning mode
- quick: Quick scanning mode
- remote: Hosts to scan
- report: Output format
- statistics: Include hunters statistics
"""
if not loglevel:
logging.basicConfig(level=logging.INFO,
format='%(message)s',
datefmt='%H:%M:%S')
logging.warning('Unknown log level selected, using info')
elif config.log.lower() != "none":
logging.basicConfig(level=loglevel,
format='%(message)s',
datefmt='%H:%M:%S')
active: bool = False
cidr: Optional[str] = None
dispatcher: Optional[Any] = None
include_patched_versions: bool = False
interface: bool = False
log_file: Optional[str] = None
mapping: bool = False
network_timeout: float = 5.0
pod: bool = False
quick: bool = False
remote: Optional[str] = None
reporter: Optional[Any] = None
statistics: bool = False
import plugins
_config: Optional[Config] = None
def get_config() -> Config:
if not _config:
raise ValueError("Configuration is not initialized")
return _config
def set_config(new_config: Config) -> None:
global _config
_config = new_config
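The new `conf` package replaces import-time argument parsing with an explicit container plus accessors. A self-contained sketch of the same `Config`/`set_config`/`get_config` trio, trimmed to a few fields:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Config:
    active: bool = False
    cidr: Optional[str] = None
    network_timeout: float = 5.0

_config: Optional[Config] = None

def get_config() -> Config:
    # reading before initialization is a programming error, not a default
    if not _config:
        raise ValueError("Configuration is not initialized")
    return _config

def set_config(new_config: Config) -> None:
    global _config
    _config = new_config

try:
    get_config()
except ValueError as e:
    print(e)  # Configuration is not initialized

set_config(Config(active=True))
print(get_config().active)  # True
```

Callers import `get_config()` inside functions (as `subscribe_event` and `publish_event` do below), so the configuration can be set once at startup and read anywhere without import-order problems.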

View File

@@ -0,0 +1,29 @@
import logging
DEFAULT_LEVEL = logging.INFO
DEFAULT_LEVEL_NAME = logging.getLevelName(DEFAULT_LEVEL)
LOG_FORMAT = "%(asctime)s %(levelname)s %(name)s %(message)s"
# Suppress logging from scapy
logging.getLogger("scapy.runtime").setLevel(logging.CRITICAL)
logging.getLogger("scapy.loading").setLevel(logging.CRITICAL)
def setup_logger(level_name, logfile):
# Remove any existing handlers
# Unnecessary in Python 3.8 since `logging.basicConfig` has `force` parameter
for h in logging.getLogger().handlers[:]:
h.close()
logging.getLogger().removeHandler(h)
if level_name.upper() == "NONE":
logging.disable(logging.CRITICAL)
else:
log_level = getattr(logging, level_name.upper(), None)
log_level = log_level if isinstance(log_level, int) else None
if logfile is None:
logging.basicConfig(level=log_level or DEFAULT_LEVEL, format=LOG_FORMAT)
else:
logging.basicConfig(filename=logfile, level=log_level or DEFAULT_LEVEL, format=LOG_FORMAT)
if not log_level:
logging.warning(f"Unknown log level '{level_name}', using {DEFAULT_LEVEL_NAME}")
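The level-resolution fallback in `setup_logger` can be isolated: unknown level names resolve to the default instead of raising. A small sketch:

```python
import logging

DEFAULT_LEVEL = logging.INFO

def resolve_level(level_name):
    # getattr returns None for unknown names; non-int attributes are rejected too,
    # so only real level constants (DEBUG, INFO, WARNING, ...) pass through
    level = getattr(logging, level_name.upper(), None)
    return level if isinstance(level, int) else DEFAULT_LEVEL

print(resolve_level("debug"))  # 10
print(resolve_level("bogus"))  # 20
```

Note that `"none"` never reaches this lookup in `setup_logger` itself; it is special-cased first to disable logging entirely.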

View File

@@ -1,82 +1,101 @@
from argparse import ArgumentParser
from kube_hunter.plugins import hookimpl
def parse_args():
parser = ArgumentParser(
description='Kube-Hunter - hunts for security '
'weaknesses in Kubernetes clusters')
@hookimpl
def parser_add_arguments(parser):
"""
This is the default hook implementation for parser_add_arguments
Contains initialization for all default arguments
"""
parser.add_argument(
'--list',
"--list",
action="store_true",
help="Displays all tests in kubehunter "
"(add --active flag to see active tests)")
help="Displays all tests in kubehunter (add --active flag to see active tests)",
)
parser.add_argument("--interface", action="store_true", help="Set hunting on all network interfaces")
parser.add_argument("--pod", action="store_true", help="Set hunter as an insider pod")
parser.add_argument("--quick", action="store_true", help="Prefer quick scan (subnet 24)")
parser.add_argument(
'--interface',
"--include-patched-versions",
action="store_true",
help="Set hunting on all network interfaces")
help="Don't skip patched versions when scanning",
)
parser.add_argument(
'--pod',
action="store_true",
help="Set hunter as an insider pod")
parser.add_argument(
'--quick',
action="store_true",
help="Prefer quick scan (subnet 24)")
parser.add_argument(
'--include-patched-versions',
action="store_true",
help="Don't skip patched versions when scanning")
parser.add_argument(
'--cidr',
"--cidr",
type=str,
help="Set an ip range to scan, example: 192.168.0.0/16")
parser.add_argument(
'--mapping',
action="store_true",
help="Outputs only a mapping of the cluster's nodes")
help="Set an IP range to scan/ignore, example: '192.168.0.0/24,!192.168.0.8/32,!192.168.0.16/32'",
)
parser.add_argument(
'--remote',
nargs='+',
"--mapping",
action="store_true",
help="Outputs only a mapping of the cluster's nodes",
)
parser.add_argument(
"--remote",
nargs="+",
metavar="HOST",
default=list(),
help="One or more remote ip/dns to hunt")
help="One or more remote ip/dns to hunt",
)
parser.add_argument("--active", action="store_true", help="Enables active hunting")
parser.add_argument(
'--active',
action="store_true",
help="Enables active hunting")
parser.add_argument(
'--log',
"--log",
type=str,
metavar="LOGLEVEL",
default='INFO',
help="Set log level, options are: debug, info, warn, none")
default="INFO",
help="Set log level, options are: debug, info, warn, none",
)
parser.add_argument(
'--report',
"--log-file",
type=str,
default='plain',
help="Set report type, options are: plain, yaml, json")
default=None,
help="Path to a log file to output all logs to",
)
parser.add_argument(
'--dispatch',
"--report",
type=str,
default='stdout',
default="plain",
help="Set report type, options are: plain, yaml, json",
)
parser.add_argument(
"--dispatch",
type=str,
default="stdout",
help="Where to send the report to, options are: "
"stdout, http (set KUBEHUNTER_HTTP_DISPATCH_URL and "
"KUBEHUNTER_HTTP_DISPATCH_METHOD environment variables to configure)")
"KUBEHUNTER_HTTP_DISPATCH_METHOD environment variables to configure)",
)
parser.add_argument(
'--statistics',
action="store_true",
help="Show hunting statistics")
parser.add_argument("--statistics", action="store_true", help="Show hunting statistics")
return parser.parse_args()
parser.add_argument("--network-timeout", type=float, default=5.0, help="network operations timeout")
def parse_args(add_args_hook):
"""
Function handles all argument parsing
@param add_args_hook: hook that adds arguments to its given ArgumentParser parameter
@return: parsed arguments dict
"""
parser = ArgumentParser(description="kube-hunter - hunt for security weaknesses in Kubernetes clusters")
# adding all arguments to the parser
add_args_hook(parser=parser)
args = parser.parse_args()
if args.cidr:
args.cidr = args.cidr.replace(" ", "").split(",")
return args
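The real `add_args_hook` is a pluggy hook so plugins can register extra flags; this sketch substitutes a plain callable to show the flow, and passes `argv` explicitly so it is self-contained:

```python
from argparse import ArgumentParser

def parser_add_arguments(parser):
    # stands in for the pluggy hook implementations; plugins would add more flags
    parser.add_argument("--active", action="store_true")
    parser.add_argument("--cidr", type=str)

def parse_args(add_args_hook, argv=None):
    parser = ArgumentParser(description="kube-hunter - hunt for security weaknesses in Kubernetes clusters")
    add_args_hook(parser=parser)
    args = parser.parse_args(argv)
    # normalize the comma-separated CIDR list, as in the real parse_args
    if args.cidr:
        args.cidr = args.cidr.replace(" ", "").split(",")
    return args

args = parse_args(parser_add_arguments, ["--cidr", "192.168.0.0/24, !192.168.0.8/32"])
print(args.cidr)  # ['192.168.0.0/24', '!192.168.0.8/32']
```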

View File

@@ -1,2 +1,3 @@
# flake8: noqa: E402
from . import types
from . import events

View File

@@ -1,2 +1,3 @@
from .handler import *
# flake8: noqa: E402
from .handler import EventQueue, handler
from . import types

View File

@@ -4,16 +4,17 @@ from collections import defaultdict
from queue import Queue
from threading import Thread
from kube_hunter.conf import config
from kube_hunter.conf import get_config
from kube_hunter.core.types import ActiveHunter, HunterBase
from kube_hunter.core.events.types import Vulnerability, EventFilterBase
from ..types import ActiveHunter, HunterBase
logger = logging.getLogger(__name__)
from ...core.events.types import Vulnerability, EventFilterBase
# Inherits Queue object, handles events asynchronously
class EventQueue(Queue, object):
class EventQueue(Queue):
def __init__(self, num_worker=10):
super(EventQueue, self).__init__()
super().__init__()
self.passive_hunters = dict()
self.active_hunters = dict()
self.all_hunters = dict()
@@ -49,14 +50,17 @@ class EventQueue(Queue, object):
def __new__unsubscribe_self(self, cls):
handler.hooks[event].remove((hook, predicate))
return object.__new__(self)
hook.__new__ = __new__unsubscribe_self
self.subscribe_event(event, hook=hook, predicate=predicate)
return hook
return wrapper
# getting uninstantiated event object
def subscribe_event(self, event, hook=None, predicate=None):
config = get_config()
if ActiveHunter in hook.__mro__:
if not config.active:
return
@@ -71,23 +75,22 @@ class EventQueue(Queue, object):
if EventFilterBase in hook.__mro__:
if hook not in self.filters[event]:
self.filters[event].append((hook, predicate))
logging.debug('{} filter subscribed to {}'.format(hook, event))
logger.debug(f"{hook} filter subscribed to {event}")
# registering hunters
elif hook not in self.hooks[event]:
self.hooks[event].append((hook, predicate))
logging.debug('{} subscribed to {}'.format(hook, event))
logger.debug(f"{hook} subscribed to {event}")
def apply_filters(self, event):
# if filters are subscribed, apply them on the event
for hooked_event in self.filters.keys():
if hooked_event in event.__class__.__mro__:
for filter_hook, predicate in self.filters[hooked_event]:
if predicate and not predicate(event):
continue
logging.debug('Event {} got filtered with {}'.format(event.__class__, filter_hook))
logger.debug(f"Event {event.__class__} filtered with {filter_hook}")
event = filter_hook(event).execute()
# if filter decided to remove event, returning None
if not event:
@@ -96,12 +99,14 @@ class EventQueue(Queue, object):
# getting instantiated event object
def publish_event(self, event, caller=None):
config = get_config()
# setting event chain
if caller:
event.previous = caller.event
event.hunter = caller.__class__
# applying filters on the event, before publishing it to subscribers.
# if filter returned None, not proceeding to publish
event = self.apply_filters(event)
if event:
@@ -120,7 +125,7 @@ class EventQueue(Queue, object):
if Vulnerability in event.__class__.__mro__:
caller.__class__.publishedVulnerabilities += 1
logging.debug('Event {} got published with {}'.format(event.__class__, event))
logger.debug(f"Event {event.__class__} got published with {event}")
self.put(hook(event))
# executes callbacks on dedicated thread as a daemon
@@ -128,22 +133,22 @@ class EventQueue(Queue, object):
while self.running:
try:
hook = self.get()
logging.debug("Executing {} with {}".format(hook.__class__, hook.event.__dict__))
logger.debug(f"Executing {hook.__class__} with {hook.event.__dict__}")
hook.execute()
except Exception as ex:
logging.debug("Exception: {} - {}".format(hook.__class__, ex))
logger.debug(ex, exc_info=True)
finally:
self.task_done()
logging.debug("closing thread...")
logger.debug("closing thread...")
def notifier(self):
time.sleep(2)
# should consider locking on unfinished_tasks
while self.unfinished_tasks > 0:
logging.debug("{} tasks left".format(self.unfinished_tasks))
logger.debug(f"{self.unfinished_tasks} tasks left")
time.sleep(3)
if self.unfinished_tasks == 1:
logging.debug("final hook is hanging")
logger.debug("final hook is hanging")
# stops execution of all daemons
def free(self):
@@ -151,4 +156,5 @@ class EventQueue(Queue, object):
with self.mutex:
self.queue.clear()
handler = EventQueue(800)
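The decorator-based subscription and MRO-walking publish logic can be reduced to a synchronous sketch. The real `EventQueue` inherits `Queue` and executes hooks on worker threads; `MiniQueue` here is illustrative only:

```python
from collections import defaultdict

class Event:
    pass

class OpenPortEvent(Event):
    def __init__(self, port):
        self.port = port

class MiniQueue:
    def __init__(self):
        self.hooks = defaultdict(list)

    def subscribe(self, event, predicate=None):
        # used as a class decorator, like handler.subscribe above
        def wrapper(hunter):
            self.hooks[event].append((hunter, predicate))
            return hunter
        return wrapper

    def publish_event(self, event):
        results = []
        # walking the MRO lets base-class subscribers see subclass events
        for hooked_event, subscribers in self.hooks.items():
            if hooked_event in event.__class__.__mro__:
                for hunter, predicate in subscribers:
                    if predicate and not predicate(event):
                        continue
                    results.append(hunter(event).execute())
        return results

handler = MiniQueue()

@handler.subscribe(OpenPortEvent, predicate=lambda e: e.port == 2379)
class EtcdProbe:
    def __init__(self, event):
        self.event = event

    def execute(self):
        return f"probing etcd on port {self.event.port}"

print(handler.publish_event(OpenPortEvent(2379)))  # ['probing etcd on port 2379']
print(handler.publish_event(OpenPortEvent(8080)))  # []
```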

View File

@@ -1,9 +1,23 @@
import logging
import threading
import requests
from kube_hunter.core.types import InformationDisclosure, DenialOfService, RemoteCodeExec, IdentityTheft, PrivilegeEscalation, AccessRisk, UnauthenticatedAccess, KubernetesCluster
from kube_hunter.conf import get_config
from kube_hunter.core.types import (
InformationDisclosure,
DenialOfService,
RemoteCodeExec,
IdentityTheft,
PrivilegeEscalation,
AccessRisk,
UnauthenticatedAccess,
KubernetesCluster,
)
logger = logging.getLogger(__name__)
class EventFilterBase(object):
class EventFilterBase:
def __init__(self, event):
self.event = event
@@ -13,7 +27,8 @@ class EventFilterBase(object):
def execute(self):
return self.event
class Event(object):
class Event:
def __init__(self):
self.previous = None
self.hunter = None
@@ -47,11 +62,7 @@ class Event(object):
return history
""" Event Types """
# TODO: make proof an abstract method.
class Service(object):
class Service:
def __init__(self, name, path="", secure=True):
self.name = name
self.secure = secure
@@ -68,16 +79,18 @@ class Service(object):
return self.__doc__
class Vulnerability(object):
severity = dict({
InformationDisclosure: "medium",
DenialOfService: "medium",
RemoteCodeExec: "high",
IdentityTheft: "high",
PrivilegeEscalation: "high",
AccessRisk: "low",
UnauthenticatedAccess: "low"
})
class Vulnerability:
severity = dict(
{
InformationDisclosure: "medium",
DenialOfService: "medium",
RemoteCodeExec: "high",
IdentityTheft: "high",
PrivilegeEscalation: "high",
AccessRisk: "low",
UnauthenticatedAccess: "low",
}
)
# TODO: make vid mandatory once migration is done
def __init__(self, component, name, category=None, vid="None"):
@@ -104,37 +117,58 @@ class Vulnerability(object):
def get_severity(self):
return self.severity.get(self.category, "low")
global event_id_count_lock
event_id_count_lock = threading.Lock()
event_id_count = 0
""" Discovery/Hunting Events """
class NewHostEvent(Event):
def __init__(self, host, cloud=None):
global event_id_count
self.host = host
self.cloud = cloud
self.cloud_type = cloud
with event_id_count_lock:
self.event_id = event_id_count
event_id_count += 1
@property
def cloud(self):
if not self.cloud_type:
self.cloud_type = self.get_cloud()
return self.cloud_type
def get_cloud(self):
config = get_config()
try:
logger.debug("Checking whether the cluster is deployed on azure's cloud")
# Leverage 3rd tool https://github.com/blrchen/AzureSpeed for Azure cloud ip detection
result = requests.get(
f"https://api.azurespeed.com/api/region?ipOrUrl={self.host}",
timeout=config.network_timeout,
).json()
return result["cloud"] or "NoCloud"
except requests.ConnectionError:
logger.info("Failed to connect cloud type service", exc_info=True)
except Exception:
logger.warning(f"Unable to check cloud of {self.host}", exc_info=True)
return "NoCloud"
def __str__(self):
return str(self.host)
# Event's logical location to be used mainly for reports.
def location(self):
return str(self.host)
class OpenPortEvent(Event):
def __init__(self, port):
self.port = port
def __str__(self):
return str(self.port)
# Event's logical location to be used mainly for reports.
def location(self):
if self.host:
@@ -156,11 +190,17 @@ class ReportDispatched(Event):
pass
""" Core Vulnerabilities """
class K8sVersionDisclosure(Vulnerability, Event):
"""The kubernetes version could be obtained from the {} endpoint """
def __init__(self, version, from_endpoint, extra_info=""):
Vulnerability.__init__(self, KubernetesCluster, "K8s Version Disclosure", category=InformationDisclosure, vid="KHV002")
Vulnerability.__init__(
self,
KubernetesCluster,
"K8s Version Disclosure",
category=InformationDisclosure,
vid="KHV002",
)
self.version = version
self.from_endpoint = from_endpoint
self.extra_info = extra_info
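Severity is looked up from the vulnerability's category class, defaulting to low when the category is missing or unknown. A trimmed sketch of that mapping:

```python
class InformationDisclosure:
    name = "Information Disclosure"

class RemoteCodeExec:
    name = "Remote Code Execution"

class Vulnerability:
    severity = {
        InformationDisclosure: "medium",
        RemoteCodeExec: "high",
    }

    def __init__(self, component, name, category=None, vid="None"):
        self.component = component
        self.name = name
        self.category = category
        self.vid = vid

    def get_severity(self):
        # unknown / missing categories default to low
        return self.severity.get(self.category, "low")

v = Vulnerability("KubernetesCluster", "K8s Version Disclosure",
                  category=InformationDisclosure, vid="KHV002")
print(v.get_severity())  # medium
print(Vulnerability("component", "name").get_severity())  # low
```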

View File

@@ -1,10 +0,0 @@
from os.path import dirname, basename, isfile
import glob
from .common import *
# dynamically importing all modules in folder
files = glob.glob(dirname(__file__)+"/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith('__init__.py')):
if module_name != "handler":
exec('from .{} import *'.format(module_name))

View File

@@ -1,4 +1,4 @@
class HunterBase(object):
class HunterBase:
publishedVulnerabilities = 0
@staticmethod
@@ -6,10 +6,10 @@ class HunterBase(object):
"""returns tuple of (name, docs)"""
if not docs:
return __name__, "<no documentation>"
docs = docs.strip().split('\n')
docs = docs.strip().split("\n")
for i, line in enumerate(docs):
docs[i] = line.strip()
return docs[0], ' '.join(docs[1:]) if len(docs[1:]) else "<no documentation>"
return docs[0], " ".join(docs[1:]) if len(docs[1:]) else "<no documentation>"
@classmethod
def get_name(cls):
@@ -32,50 +32,57 @@ class Discovery(HunterBase):
pass
"""Kubernetes Components"""
class KubernetesCluster():
class KubernetesCluster:
"""Kubernetes Cluster"""
name = "Kubernetes Cluster"
class KubectlClient():
class KubectlClient:
"""The kubectl client binary is used by the user to interact with the cluster"""
name = "Kubectl Client"
class Kubelet(KubernetesCluster):
"""The kubelet is the primary "node agent" that runs on each node"""
name = "Kubelet"
class Azure(KubernetesCluster):
"""Azure Cluster"""
name = "Azure"
""" Categories """
class InformationDisclosure(object):
class InformationDisclosure:
name = "Information Disclosure"
class RemoteCodeExec(object):
class RemoteCodeExec:
name = "Remote Code Execution"
class IdentityTheft(object):
class IdentityTheft:
name = "Identity Theft"
class UnauthenticatedAccess(object):
class UnauthenticatedAccess:
name = "Unauthenticated Access"
class AccessRisk(object):
class AccessRisk:
name = "Access Risk"
class PrivilegeEscalation(KubernetesCluster):
name = "Privilege Escalation"
class DenialOfService(object):
class DenialOfService:
name = "Denial of Service"
from .events import handler # import is in the bottom to break import loops
# import is in the bottom to break import loops
from .events import handler # noqa
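`parse_docs` turns a hunter's class docstring into the (name, description) pair printed by `--list`; copied out, it runs standalone:

```python
class HunterBase:
    @staticmethod
    def parse_docs(docs):
        """returns tuple of (name, docs)"""
        if not docs:
            return __name__, "<no documentation>"
        # first line is the hunter's name, the rest is its description
        docs = docs.strip().split("\n")
        for i, line in enumerate(docs):
            docs[i] = line.strip()
        return docs[0], " ".join(docs[1:]) if len(docs[1:]) else "<no documentation>"

name, doc = HunterBase.parse_docs("""API Service Discovery
    Checks for the existence of K8s API Services
    """)
print(name)  # API Service Discovery
print(doc)   # Checks for the existence of K8s API Services
```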

View File

@@ -1,3 +1,4 @@
# flake8: noqa: E402
from . import report
from . import discovery
from . import hunting

View File

@@ -1,8 +1,11 @@
from os.path import dirname, basename, isfile
import glob
# dynamically importing all modules in folder
files = glob.glob(dirname(__file__)+"/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith('__init__.py')):
if not module_name.startswith('test_'):
exec('from .{} import *'.format(module_name))
# flake8: noqa: E402
from . import (
apiserver,
dashboard,
etcd,
hosts,
kubectl,
kubelet,
ports,
proxy,
)

View File

@@ -1,15 +1,20 @@
import json
import requests
import logging
import requests
from kube_hunter.core.types import Discovery
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import OpenPortEvent, Service, Event, EventFilterBase
from kube_hunter.conf import get_config
KNOWN_API_PORTS = [443, 6443, 8080]
logger = logging.getLogger(__name__)
class K8sApiService(Service, Event):
"""A Kubernetes API service"""
def __init__(self, protocol="https"):
Service.__init__(self, name="Unrecognized K8s API")
self.protocol = protocol
@@ -17,12 +22,15 @@ class K8sApiService(Service, Event):
class ApiServer(Service, Event):
"""The API server is in charge of all operations on the cluster."""
def __init__(self):
Service.__init__(self, name="API Server")
self.protocol = "https"
class MetricsServer(Service, Event):
"""The Metrics server is in charge of providing resource usage metrics for pods and nodes to the API server."""
"""The Metrics server is in charge of providing resource usage metrics for pods and nodes to the API server"""
def __init__(self):
Service.__init__(self, name="Metrics Server")
self.protocol = "https"
@@ -35,27 +43,29 @@ class ApiServiceDiscovery(Discovery):
"""API Service Discovery
Checks for the existence of K8s API Services
"""
def __init__(self, event):
self.event = event
self.session = requests.Session()
self.session.verify = False
def execute(self):
logging.debug("Attempting to discover an API service on {}:{}".format(self.event.host, self.event.port))
logger.debug(f"Attempting to discover an API service on {self.event.host}:{self.event.port}")
protocols = ["http", "https"]
for protocol in protocols:
if self.has_api_behaviour(protocol):
self.publish_event(K8sApiService(protocol))
def has_api_behaviour(self, protocol):
config = get_config()
try:
r = self.session.get("{}://{}:{}".format(protocol, self.event.host, self.event.port))
if ('k8s' in r.text) or ('"code"' in r.text and r.status_code != 200):
r = self.session.get(f"{protocol}://{self.event.host}:{self.event.port}", timeout=config.network_timeout)
if ("k8s" in r.text) or ('"code"' in r.text and r.status_code != 200):
return True
except requests.exceptions.SSLError:
logging.debug("{} protocol not accepted on {}:{}".format(protocol, self.event.host, self.event.port))
except Exception as e:
logging.debug("{} on {}:{}".format(e, self.event.host, self.event.port))
logger.debug(f"{protocol} protocol not accepted on {self.event.host}:{self.event.port}")
except Exception:
logger.debug(f"Failed probing {self.event.host}:{self.event.port}", exc_info=True)
# Acts as a Filter for services, In the case that we can classify the API,
@@ -72,6 +82,7 @@ class ApiServiceClassify(EventFilterBase):
"""API Service Classifier
Classifies an API service
"""
def __init__(self, event):
self.event = event
self.classified = False
@@ -79,20 +90,21 @@ class ApiServiceClassify(EventFilterBase):
self.session.verify = False
# Using the auth token if we can, for the case that authentication is needed for our checks
if self.event.auth_token:
self.session.headers.update({"Authorization": "Bearer {}".format(self.event.auth_token)})
self.session.headers.update({"Authorization": f"Bearer {self.event.auth_token}"})
def classify_using_version_endpoint(self):
"""Tries to classify by accessing /version. if could not access succeded, returns"""
config = get_config()
try:
r = self.session.get("{}://{}:{}/version".format(self.event.protocol, self.event.host, self.event.port))
versions = r.json()
if 'major' in versions:
if versions.get('major') == "":
endpoint = f"{self.event.protocol}://{self.event.host}:{self.event.port}/version"
versions = self.session.get(endpoint, timeout=config.network_timeout).json()
if "major" in versions:
if versions.get("major") == "":
self.event = MetricsServer()
else:
self.event = ApiServer()
except Exception as e:
logging.exception("Could not access /version on API service")
except Exception:
logger.warning("Could not access /version on API service", exc_info=True)
def execute(self):
discovered_protocol = self.event.protocol

View File

@@ -1,32 +1,42 @@
import json
import logging
import requests
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, OpenPortEvent, Service
from kube_hunter.core.types import Discovery
logger = logging.getLogger(__name__)
class KubeDashboardEvent(Service, Event):
"""A web-based Kubernetes user interface. allows easy usage with operations on the cluster"""
"""A web-based Kubernetes user interface allows easy usage with operations on the cluster"""
def __init__(self, **kargs):
Service.__init__(self, name="Kubernetes Dashboard", **kargs)
@handler.subscribe(OpenPortEvent, predicate=lambda x: x.port == 30000)
class KubeDashboard(Discovery):
"""K8s Dashboard Discovery
Checks for the existence of a Dashboard
"""
def __init__(self, event):
self.event = event
@property
def secure(self):
logging.debug("Attempting to discover an Api server to access dashboard")
r = requests.get("http://{}:{}/api/v1/service/default".format(self.event.host, self.event.port))
if "listMeta" in r.text and len(json.loads(r.text)["errors"]) == 0:
return False
config = get_config()
endpoint = f"http://{self.event.host}:{self.event.port}/api/v1/service/default"
logger.debug("Attempting to discover an Api server to access dashboard")
try:
r = requests.get(endpoint, timeout=config.network_timeout)
if "listMeta" in r.text and len(json.loads(r.text)["errors"]) == 0:
return False
except requests.Timeout:
logger.debug(f"failed getting {endpoint}", exc_info=True)
return True
def execute(self):
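The `secure` property above treats the dashboard as exposed when an unauthenticated service listing parses and reports no errors. That check, isolated into a hypothetical helper:

```python
import json

def dashboard_exposed(body):
    # mirrors the `secure` property: a service listing that parses and
    # reports no errors means the dashboard answered unauthenticated
    return "listMeta" in body and len(json.loads(body)["errors"]) == 0

print(dashboard_exposed(json.dumps({"listMeta": {}, "errors": []})))  # True
print(dashboard_exposed("404 page not found"))                        # False
```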

View File

@@ -1,24 +1,22 @@
import json
import logging
import requests
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, OpenPortEvent, Service
from kube_hunter.core.types import Discovery
class EtcdAccessEvent(Service, Event):
"""Etcd is a DB that stores cluster's data, it contains configuration and current
state information, and might contain secrets"""
def __init__(self):
Service.__init__(self, name="Etcd")
@handler.subscribe(OpenPortEvent, predicate= lambda p: p.port == 2379)
@handler.subscribe(OpenPortEvent, predicate=lambda p: p.port == 2379)
class EtcdRemoteAccess(Discovery):
"""Etcd service
check for the existence of etcd service
"""
def __init__(self, event):
self.event = event

View File

@@ -1,23 +1,23 @@
import os
import json
import logging
import socket
import sys
import time
import itertools
import requests
from enum import Enum
from netaddr import IPNetwork, IPAddress
from netifaces import AF_INET, ifaddresses, interfaces
from netaddr import IPNetwork, IPAddress, AddrFormatError
from netifaces import AF_INET, ifaddresses, interfaces, gateways
from kube_hunter.conf import config
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, NewHostEvent, Vulnerability
from kube_hunter.core.types import Discovery, InformationDisclosure, Azure
logger = logging.getLogger(__name__)
class RunningAsPodEvent(Event):
def __init__(self):
self.name = 'Running from within a pod'
self.name = "Running from within a pod"
self.auth_token = self.get_service_account_file("token")
self.client_cert = self.get_service_account_file("ca.crt")
self.namespace = self.get_service_account_file("namespace")
@@ -26,55 +26,66 @@ class RunningAsPodEvent(Event):
# Event's logical location to be used mainly for reports.
def location(self):
location = "Local to Pod"
if 'HOSTNAME' in os.environ:
location += "(" + os.environ['HOSTNAME'] + ")"
hostname = os.getenv("HOSTNAME")
if hostname:
location += f" ({hostname})"
return location
def get_service_account_file(self, file):
try:
with open("/var/run/secrets/kubernetes.io/serviceaccount/{file}".format(file=file)) as f:
with open(f"/var/run/secrets/kubernetes.io/serviceaccount/{file}") as f:
return f.read()
except IOError:
except OSError:
pass
class AzureMetadataApi(Vulnerability, Event):
"""Access to the Azure Metadata API exposes information about the machines associated with the cluster"""
def __init__(self, cidr):
Vulnerability.__init__(self, Azure, "Azure Metadata Exposure", category=InformationDisclosure, vid="KHV003")
Vulnerability.__init__(
self,
Azure,
"Azure Metadata Exposure",
category=InformationDisclosure,
vid="KHV003",
)
self.cidr = cidr
self.evidence = "cidr: {}".format(cidr)
self.evidence = f"cidr: {cidr}"
class HostScanEvent(Event):
def __init__(self, pod=False, active=False, predefined_hosts=list()):
self.active = active # flag to specify whether to get actual data from vulnerabilities
self.predefined_hosts = predefined_hosts
def __init__(self, pod=False, active=False, predefined_hosts=None):
# flag to specify whether to get actual data from vulnerabilities
self.active = active
self.predefined_hosts = predefined_hosts or []
class HostDiscoveryHelpers:
@staticmethod
def get_cloud(host):
try:
logging.debug("Checking whether the cluster is deployed on azure's cloud")
# azurespeed.com provides its API via HTTP only; the service can be queried over
# HTTPS, but does not present a proper certificate. Since no encryption is worse than
# any encryption, we go with the verify=False option for the time being. At least
# this prevents leaking internal IP addresses to passive eavesdroppers.
# TODO: find a more secure service to detect cloud IPs
metadata = requests.get("https://www.azurespeed.com/api/region?ipOrUrl={ip}".format(ip=host), verify=False).text
except requests.ConnectionError as e:
logging.info("- unable to check cloud: {0}".format(e))
return
if "cloud" in metadata:
return json.loads(metadata)["cloud"]
# generator, generating a subnet by given a cidr
@staticmethod
def generate_subnet(ip, sn="24"):
logging.debug("HostDiscoveryHelpers.generate_subnet {0}/{1}".format(ip, sn))
subnet = IPNetwork('{ip}/{sn}'.format(ip=ip, sn=sn))
for ip in IPNetwork(subnet):
logging.debug("HostDiscoveryHelpers.generate_subnet yielding {0}".format(ip))
yield ip
def filter_subnet(subnet, ignore=None):
for ip in subnet:
if ignore and any(ip in s for s in ignore):
logger.debug(f"HostDiscoveryHelpers.filter_subnet ignoring {ip}")
else:
yield ip
@staticmethod
def generate_hosts(cidrs):
ignore = list()
scan = list()
for cidr in cidrs:
try:
if cidr.startswith("!"):
ignore.append(IPNetwork(cidr[1:]))
else:
scan.append(IPNetwork(cidr))
except AddrFormatError as e:
raise ValueError(f"Unable to parse CIDR {cidr}") from e
return itertools.chain.from_iterable(HostDiscoveryHelpers.filter_subnet(sb, ignore=ignore) for sb in scan)
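`generate_hosts` above splits CIDRs into scan and ignore lists, with `!` marking exclusions. A self-contained sketch of the same logic, using the standard library's `ipaddress` in place of `netaddr` so it runs without dependencies:

```python
import ipaddress
import itertools

def generate_hosts(cidrs):
    # "!"-prefixed CIDRs are excluded from the scan, as in generate_hosts above
    ignore, scan = [], []
    for cidr in cidrs:
        try:
            if cidr.startswith("!"):
                ignore.append(ipaddress.ip_network(cidr[1:]))
            else:
                scan.append(ipaddress.ip_network(cidr))
        except ValueError as e:
            raise ValueError(f"Unable to parse CIDR {cidr}") from e

    def keep(ip):
        return not any(ip in net for net in ignore)

    return itertools.chain.from_iterable(
        (ip for ip in net if keep(ip)) for net in scan
    )

hosts = list(generate_hosts(["192.168.0.0/30", "!192.168.0.1/32"]))
print([str(h) for h in hosts])  # ['192.168.0.0', '192.168.0.2', '192.168.0.3']
```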
@handler.subscribe(RunningAsPodEvent)
@@ -82,109 +93,117 @@ class FromPodHostDiscovery(Discovery):
"""Host Discovery when running as pod
Generates IP addresses to scan, based on cluster/scan type
"""
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
# Scan any hosts that the user specified
if config.remote or config.cidr:
self.publish_event(HostScanEvent())
else:
# Discover cluster subnets, we'll scan all these hosts
cloud = None
if self.is_azure_pod():
subnets, cloud = self.azure_metadata_discovery()
else:
subnets, cloud = self.traceroute_discovery()
subnets = self.gateway_discovery()
should_scan_apiserver = False
if self.event.kubeservicehost:
should_scan_apiserver = True
for subnet in subnets:
if self.event.kubeservicehost and self.event.kubeservicehost in IPNetwork("{}/{}".format(subnet[0], subnet[1])):
for ip, mask in subnets:
if self.event.kubeservicehost and self.event.kubeservicehost in IPNetwork(f"{ip}/{mask}"):
should_scan_apiserver = False
logging.debug("From pod scanning subnet {0}/{1}".format(subnet[0], subnet[1]))
for ip in HostDiscoveryHelpers.generate_subnet(ip=subnet[0], sn=subnet[1]):
logger.debug(f"From pod scanning subnet {ip}/{mask}")
for ip in IPNetwork(f"{ip}/{mask}"):
self.publish_event(NewHostEvent(host=ip, cloud=cloud))
if should_scan_apiserver:
self.publish_event(NewHostEvent(host=IPAddress(self.event.kubeservicehost), cloud=cloud))
def is_azure_pod(self):
config = get_config()
try:
logging.debug("From pod attempting to access Azure Metadata API")
if requests.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", headers={"Metadata":"true"}, timeout=5).status_code == 200:
logger.debug("From pod attempting to access Azure Metadata API")
if (
requests.get(
"http://169.254.169.254/metadata/instance?api-version=2017-08-01",
headers={"Metadata": "true"},
timeout=config.network_timeout,
).status_code
== 200
):
return True
except requests.exceptions.ConnectionError:
logger.debug("Failed to connect Azure metadata server")
return False
# for pod scanning
def traceroute_discovery(self):
external_ip = requests.get("http://canhazip.com").text # getting external ip, to determine if cloud cluster
from scapy.all import ICMP, IP, Ether, srp1
node_internal_ip = srp1(Ether() / IP(dst="google.com" , ttl=1) / ICMP(), verbose=0)[IP].src
return [ [node_internal_ip,"24"], ], external_ip
# for pod scanning
def gateway_discovery(self):
""" Retrieving default gateway of pod, which is usually also a contact point with the host """
return [[gateways()["default"][AF_INET][0], "24"]]
# querying azure's interface metadata api | works only from a pod
def azure_metadata_discovery(self):
logging.debug("From pod attempting to access azure's metadata")
machine_metadata = json.loads(requests.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", headers={"Metadata":"true"}).text)
config = get_config()
logger.debug("From pod attempting to access azure's metadata")
machine_metadata = requests.get(
"http://169.254.169.254/metadata/instance?api-version=2017-08-01",
headers={"Metadata": "true"},
timeout=config.network_timeout,
).json()
address, subnet = "", ""
subnets = list()
for interface in machine_metadata["network"]["interface"]:
address, subnet = interface["ipv4"]["subnet"][0]["address"], interface["ipv4"]["subnet"][0]["prefix"]
logging.debug("From pod discovered subnet {0}/{1}".format(address, subnet if not config.quick else "24"))
subnets.append([address,subnet if not config.quick else "24"])
address, subnet = (
interface["ipv4"]["subnet"][0]["address"],
interface["ipv4"]["subnet"][0]["prefix"],
)
subnet = subnet if not config.quick else "24"
logger.debug(f"From pod discovered subnet {address}/{subnet}")
subnets.append([address, subnet])
self.publish_event(AzureMetadataApi(cidr="{}/{}".format(address, subnet)))
self.publish_event(AzureMetadataApi(cidr=f"{address}/{subnet}"))
return subnets, "Azure"
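The metadata parsing above reduces to walking `network.interface[*].ipv4.subnet[0]` and applying the quick-mode `/24` override; a minimal standalone sketch, run against a hand-made sample payload (the dict shape follows the code above, not a captured IMDS response):

```python
def extract_subnets(machine_metadata, quick=False):
    """Collect [address, prefix] pairs from Azure IMDS-style metadata.

    Quick mode forces a /24 prefix to keep the scan small.
    """
    subnets = []
    for interface in machine_metadata["network"]["interface"]:
        subnet = interface["ipv4"]["subnet"][0]
        prefix = "24" if quick else subnet["prefix"]
        subnets.append([subnet["address"], prefix])
    return subnets

sample = {"network": {"interface": [
    {"ipv4": {"subnet": [{"address": "10.0.0.0", "prefix": "16"}]}}
]}}
print(extract_subnets(sample))        # [['10.0.0.0', '16']]
print(extract_subnets(sample, True))  # [['10.0.0.0', '24']]
```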
@handler.subscribe(HostScanEvent)
class HostDiscovery(Discovery):
"""Host Discovery
Generates IP addresses to scan, based on cluster/scan type
"""
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
if config.cidr:
try:
ip, sn = config.cidr.split('/')
except ValueError as e:
logging.exception("unable to parse cidr")
return
cloud = HostDiscoveryHelpers.get_cloud(ip)
for ip in HostDiscoveryHelpers.generate_subnet(ip, sn=sn):
self.publish_event(NewHostEvent(host=ip, cloud=cloud))
for ip in HostDiscoveryHelpers.generate_hosts(config.cidr):
self.publish_event(NewHostEvent(host=ip))
elif config.interface:
self.scan_interfaces()
elif len(config.remote) > 0:
for host in config.remote:
self.publish_event(NewHostEvent(host=host, cloud=HostDiscoveryHelpers.get_cloud(host)))
self.publish_event(NewHostEvent(host=host))
# for normal scanning
def scan_interfaces(self):
try:
logging.debug("HostDiscovery hunter attempting to get external IP address")
external_ip = requests.get("http://canhazip.com").text # getting external ip, to determine if cloud cluster
except requests.ConnectionError as e:
logging.debug("unable to determine local IP address: {0}".format(e))
logging.info("~ default to 127.0.0.1")
external_ip = "127.0.0.1"
cloud = HostDiscoveryHelpers.get_cloud(external_ip)
for ip in self.generate_interfaces_subnet():
handler.publish_event(NewHostEvent(host=ip, cloud=cloud))
handler.publish_event(NewHostEvent(host=ip))
# generate all subnets from all internal network interfaces
def generate_interfaces_subnet(self, sn='24'):
def generate_interfaces_subnet(self, sn="24"):
for ifaceName in interfaces():
for ip in [i['addr'] for i in ifaddresses(ifaceName).setdefault(AF_INET, [])]:
for ip in [i["addr"] for i in ifaddresses(ifaceName).setdefault(AF_INET, [])]:
if not self.event.localhost and InterfaceTypes.LOCALHOST.value in ip.__str__():
continue
for ip in HostDiscoveryHelpers.generate_subnet(ip, sn):
for ip in IPNetwork(f"{ip}/{sn}"):
yield ip
# for comparing prefixes
class InterfaceTypes(Enum):
LOCALHOST = "127"
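`gateway_discovery` above assumes the pod's default gateway sits on a `/24` that also contains the host; the subnet expansion those `[ip, mask]` pairs feed into can be sketched with the standard-library `ipaddress` module in place of `netaddr.IPNetwork`:

```python
import ipaddress

def gateway_subnet(gateway_ip, prefix=24):
    """Build the scan subnet around a default-gateway address."""
    # strict=False lets us pass a host address rather than the network address
    return ipaddress.ip_network(f"{gateway_ip}/{prefix}", strict=False)

net = gateway_subnet("10.244.0.1")
print(net)                # 10.244.0.0/24
print(net.num_addresses)  # 256
```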

View File

@@ -1,27 +1,30 @@
import logging
import subprocess
import json
from kube_hunter.core.types import Discovery
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import HuntStarted, Event
logger = logging.getLogger(__name__)
class KubectlClientEvent(Event):
"""A kubectl client binary was found on the local machine."""
def __init__(self, version):
self.version = version
def location(self):
return "local machine"
# Will be triggered on start of every hunt
@handler.subscribe(HuntStarted)
class KubectlClientDiscovery(Discovery):
"""Kubectl Client Discovery
Checks for the existence of a local kubectl client
"""
def __init__(self, event):
self.event = event
@@ -33,14 +36,14 @@ class KubectlClientDiscovery(Discovery):
if b"GitVersion" in version_info:
# extracting version from kubectl output
version_info = version_info.decode()
start = version_info.find('GitVersion')
version = version_info[start + len("GitVersion':\"") : version_info.find("\",", start)]
start = version_info.find("GitVersion")
version = version_info[start + len("GitVersion':\"") : version_info.find('",', start)]
except Exception:
logging.debug("Could not find kubectl client")
logger.debug("Could not find kubectl client")
return version
def execute(self):
logging.debug("Attempting to discover a local kubectl client")
logger.debug("Attempting to discover a local kubectl client")
version = self.get_kubectl_binary_version()
if version:
self.publish_event(KubectlClientEvent(version=version))
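The `GitVersion` extraction above slices between a fixed-length marker and the closing quote; a simplified string-slicing sketch (using an explicit `GitVersion:"` marker rather than the offset arithmetic above, and a hand-made sample of `kubectl version` output):

```python
def extract_git_version(version_info: str):
    """Pull the GitVersion value out of kubectl's textual version output."""
    marker = 'GitVersion:"'
    start = version_info.find(marker)
    if start == -1:
        return None  # marker not present; not kubectl output we recognize
    start += len(marker)
    end = version_info.find('"', start)
    return version_info[start:end]

sample = 'Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"abc"}'
print(extract_git_version(sample))  # v1.19.2
```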

View File

@@ -1,25 +1,30 @@
import json
import logging
import requests
import urllib3
from enum import Enum
from kube_hunter.core.types import Discovery, Kubelet
from kube_hunter.conf import get_config
from kube_hunter.core.types import Discovery
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import OpenPortEvent, Vulnerability, Event, Service
from kube_hunter.core.events.types import OpenPortEvent, Event, Service
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
logger = logging.getLogger(__name__)
""" Services """
class ReadOnlyKubeletEvent(Service, Event):
"""The read-only port on the kubelet serves health probing endpoints, and is relied upon by many kubernetes components"""
"""The read-only port on the kubelet serves health probing endpoints,
and is relied upon by many kubernetes components"""
def __init__(self):
Service.__init__(self, name="Kubelet API (readonly)")
class SecureKubeletEvent(Service, Event):
"""The Kubelet is the main component in every Node; all pod operations go through the kubelet"""
def __init__(self, cert=False, token=False, anonymous_auth=True, **kwargs):
self.cert = cert
self.token = token
@@ -31,22 +36,26 @@ class KubeletPorts(Enum):
SECURED = 10250
READ_ONLY = 10255
@handler.subscribe(OpenPortEvent, predicate= lambda x: x.port == 10255 or x.port == 10250)
@handler.subscribe(OpenPortEvent, predicate=lambda x: x.port in [10250, 10255])
class KubeletDiscovery(Discovery):
"""Kubelet Discovery
Checks for the existence of a Kubelet service, and its open ports
"""
def __init__(self, event):
self.event = event
def get_read_only_access(self):
logging.debug("Passive hunter is attempting to get kubelet read access at {}:{}".format(self.event.host, self.event.port))
r = requests.get("http://{host}:{port}/pods".format(host=self.event.host, port=self.event.port))
config = get_config()
endpoint = f"http://{self.event.host}:{self.event.port}/pods"
logger.debug(f"Trying to get kubelet read access at {endpoint}")
r = requests.get(endpoint, timeout=config.network_timeout)
if r.status_code == 200:
self.publish_event(ReadOnlyKubeletEvent())
def get_secure_access(self):
logging.debug("Attempting to get kubelet secure access")
logger.debug("Attempting to get kubelet secure access")
ping_status = self.ping_kubelet()
if ping_status == 200:
self.publish_event(SecureKubeletEvent(secure=False))
@@ -56,11 +65,13 @@ class KubeletDiscovery(Discovery):
self.publish_event(SecureKubeletEvent(secure=True, anonymous_auth=False))
def ping_kubelet(self):
logging.debug("Attempting to get pod info from kubelet")
config = get_config()
endpoint = f"https://{self.event.host}:{self.event.port}/pods"
logger.debug("Attempting to get pods info from kubelet")
try:
return requests.get("https://{host}:{port}/pods".format(host=self.event.host, port=self.event.port), verify=False).status_code
except Exception as ex:
logging.debug("Failed pinging https port 10250 on {} : {}".format(self.event.host, ex))
return requests.get(endpoint, verify=False, timeout=config.network_timeout).status_code
except Exception:
logger.debug(f"Failed pinging https port on {endpoint}", exc_info=True)
def execute(self):
if self.event.port == KubeletPorts.SECURED.value:

View File

@@ -1,29 +1,30 @@
import logging
from socket import socket
from kube_hunter.core.types import Discovery
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import NewHostEvent, OpenPortEvent
logger = logging.getLogger(__name__)
default_ports = [8001, 8080, 10250, 10255, 30000, 443, 6443, 2379]
@handler.subscribe(NewHostEvent)
class PortDiscovery(Discovery):
"""Port Scanning
Scans well-known Kubernetes ports to determine open endpoints for discovery
"""
def __init__(self, event):
self.event = event
self.host = event.host
self.port = event.port
def execute(self):
logging.debug("host {0} try ports: {1}".format(self.host, default_ports))
logger.debug(f"host {self.host} try ports: {default_ports}")
for single_port in default_ports:
if self.test_connection(self.host, single_port):
logging.debug("Reachable port found: {0}".format(single_port))
logger.debug(f"Reachable port found: {single_port}")
self.publish_event(OpenPortEvent(port=single_port))
@staticmethod
@@ -31,9 +32,12 @@ class PortDiscovery(Discovery):
s = socket()
s.settimeout(1.5)
try:
logger.debug(f"Scanning {host}:{port}")
success = s.connect_ex((str(host), port))
if success == 0:
return True
except: pass
finally: s.close()
except Exception:
logger.debug(f"Failed to probe {host}:{port}")
finally:
s.close()
return False

View File

@@ -1,23 +1,27 @@
import logging
import requests
from collections import defaultdict
from requests import get
from kube_hunter.conf import get_config
from kube_hunter.core.types import Discovery
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Service, Event, OpenPortEvent
logger = logging.getLogger(__name__)
class KubeProxyEvent(Event, Service):
"""proxies from a localhost address to the Kubernetes apiserver"""
def __init__(self):
Service.__init__(self, name="Kubernetes Proxy")
@handler.subscribe(OpenPortEvent, predicate=lambda x: x.port == 8001)
class KubeProxy(Discovery):
"""Proxy Discovery
Checks for the existence of an open Proxy service
"""
def __init__(self, event):
self.event = event
self.host = event.host
@@ -25,10 +29,16 @@ class KubeProxy(Discovery):
@property
def accesible(self):
logging.debug("Attempting to discover a proxy service")
r = requests.get("http://{host}:{port}/api/v1".format(host=self.host, port=self.port))
if r.status_code == 200 and "APIResourceList" in r.text:
return True
config = get_config()
endpoint = f"http://{self.host}:{self.port}/api/v1"
logger.debug("Attempting to discover a proxy service")
try:
r = requests.get(endpoint, timeout=config.network_timeout)
if r.status_code == 200 and "APIResourceList" in r.text:
return True
except requests.Timeout:
logger.debug(f"failed to get {endpoint}", exc_info=True)
return False
def execute(self):
if self.accesible:

View File

@@ -1,7 +1,16 @@
from os.path import dirname, basename, isfile
import glob
# dynamically importing all modules in folder
files = glob.glob(dirname(__file__)+"/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith('__init__.py')):
exec('from .{} import *'.format(module_name))
# flake8: noqa: E402
from . import (
aks,
apiserver,
arp,
capabilities,
certificates,
cves,
dashboard,
dns,
etcd,
kubelet,
mounts,
proxy,
secrets,
)

View File

@@ -1,44 +1,65 @@
import json
import logging
import requests
from kube_hunter.conf import get_config
from kube_hunter.modules.hunting.kubelet import ExposedRunHandler
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import Hunter, ActiveHunter, IdentityTheft, Azure
logger = logging.getLogger(__name__)
class AzureSpnExposure(Vulnerability, Event):
"""The SPN is exposed, potentially allowing an attacker to gain access to the Azure subscription"""
def __init__(self, container):
Vulnerability.__init__(self, Azure, "Azure SPN Exposure", category=IdentityTheft, vid="KHV004")
Vulnerability.__init__(
self,
Azure,
"Azure SPN Exposure",
category=IdentityTheft,
vid="KHV004",
)
self.container = container
@handler.subscribe(ExposedRunHandler, predicate=lambda x: x.cloud=="Azure")
@handler.subscribe(ExposedRunHandler, predicate=lambda x: x.cloud == "Azure")
class AzureSpnHunter(Hunter):
"""AKS Hunting
Hunting Azure cluster deployments using specific known configurations
"""
def __init__(self, event):
self.event = event
self.base_url = "https://{}:{}".format(self.event.host, self.event.port)
self.base_url = f"https://{self.event.host}:{self.event.port}"
# getting a container that has access to the azure.json file
def get_key_container(self):
logging.debug("Passive Hunter is attempting to find container with access to azure.json file")
raw_pods = requests.get(self.base_url + "/pods", verify=False).text
if "items" in raw_pods:
pods_data = json.loads(raw_pods)["items"]
config = get_config()
endpoint = f"{self.base_url}/pods"
logger.debug("Trying to find container with access to azure.json file")
try:
r = requests.get(endpoint, verify=False, timeout=config.network_timeout)
except requests.Timeout:
logger.debug("failed getting pod info")
else:
pods_data = r.json().get("items", [])
suspicious_volume_names = []
for pod_data in pods_data:
for volume in pod_data["spec"].get("volumes", []):
if volume.get("hostPath"):
path = volume["hostPath"]["path"]
if "/etc/kubernetes/azure.json".startswith(path):
suspicious_volume_names.append(volume["name"])
for container in pod_data["spec"]["containers"]:
for mount in container["volumeMounts"]:
path = mount["mountPath"]
if '/etc/kubernetes/azure.json'.startswith(path):
for mount in container.get("volumeMounts", []):
if mount["name"] in suspicious_volume_names:
return {
"name": container["name"],
"pod": pod_data["metadata"]["name"],
"namespace": pod_data["metadata"]["namespace"]
"namespace": pod_data["metadata"]["namespace"],
}
def execute(self):
@@ -46,31 +67,33 @@ class AzureSpnHunter(Hunter):
if container:
self.publish_event(AzureSpnExposure(container=container))
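The two-pass scan above (first hostPath volumes whose path is a prefix of `/etc/kubernetes/azure.json`, then containers mounting one of those volumes) can be sketched standalone; the pod dict below is a hand-made minimal spec, not real cluster data:

```python
AZURE_JSON = "/etc/kubernetes/azure.json"

def find_key_container(pod):
    """Return the first container mounting a hostPath that can reach azure.json."""
    suspicious = [
        v["name"]
        for v in pod["spec"].get("volumes", [])
        if v.get("hostPath") and AZURE_JSON.startswith(v["hostPath"]["path"])
    ]
    for container in pod["spec"]["containers"]:
        for mount in container.get("volumeMounts", []):
            if mount["name"] in suspicious:
                return container["name"]
    return None

pod = {"spec": {
    "volumes": [{"name": "k8s", "hostPath": {"path": "/etc/kubernetes"}}],
    "containers": [{"name": "app", "volumeMounts": [{"name": "k8s", "mountPath": "/cfg"}]}],
}}
print(find_key_container(pod))  # app
```

The prefix test runs in this direction (`AZURE_JSON.startswith(path)`) because mounting any ancestor directory, such as `/etc/kubernetes`, exposes the file.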
""" Active Hunting """
@handler.subscribe(AzureSpnExposure)
class ProveAzureSpnExposure(ActiveHunter):
"""Azure SPN Hunter
Gets the azure subscription file on the host by executing inside a container
"""
def __init__(self, event):
self.event = event
self.base_url = "https://{}:{}".format(self.event.host, self.event.port)
self.base_url = f"https://{self.event.host}:{self.event.port}"
def run(self, command, container):
run_url = "{base}/run/{pod_namespace}/{pod_id}/{container_name}".format(
base=self.base_url,
pod_namespace=container["namespace"],
pod_id=container["pod"],
container_name=container["name"]
)
return requests.post(run_url, verify=False, params={'cmd': command}).text
config = get_config()
run_url = "/".join([self.base_url, "run", container["namespace"], container["pod"], container["name"]])
return requests.post(run_url, verify=False, params={"cmd": command}, timeout=config.network_timeout)
def execute(self):
raw_output = self.run("cat /etc/kubernetes/azure.json", container=self.event.container)
if "subscriptionId" in raw_output:
subscription = json.loads(raw_output)
self.event.subscriptionId = subscription["subscriptionId"]
self.event.aadClientId = subscription["aadClientId"]
self.event.aadClientSecret = subscription["aadClientSecret"]
self.event.tenantId = subscription["tenantId"]
self.event.evidence = "subscription: {}".format(self.event.subscriptionId)
try:
subscription = self.run("cat /etc/kubernetes/azure.json", container=self.event.container).json()
except requests.Timeout:
logger.debug("failed to run command in container", exc_info=True)
except json.decoder.JSONDecodeError:
logger.warning("failed to parse SPN")
else:
if "subscriptionId" in subscription:
self.event.subscriptionId = subscription["subscriptionId"]
self.event.aadClientId = subscription["aadClientId"]
self.event.aadClientSecret = subscription["aadClientSecret"]
self.event.tenantId = subscription["tenantId"]
self.event.evidence = f"subscription: {self.event.subscriptionId}"
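Extracting the SPN fields from the fetched `azure.json` reduces to guarded JSON decoding; a sketch with a hand-made payload (field names follow the code above):

```python
import json

def parse_spn(raw_output):
    """Extract SPN fields from azure.json content; None if it isn't valid SPN JSON."""
    try:
        subscription = json.loads(raw_output)
    except json.decoder.JSONDecodeError:
        return None
    if "subscriptionId" not in subscription:
        return None
    return {k: subscription.get(k)
            for k in ("subscriptionId", "aadClientId", "aadClientSecret", "tenantId")}

good = '{"subscriptionId": "sub-1", "aadClientId": "id", "aadClientSecret": "s", "tenantId": "t"}'
print(parse_spn(good)["subscriptionId"])  # sub-1
print(parse_spn("not json"))              # None
```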

View File

@@ -1,18 +1,25 @@
import logging
import json
import requests
import uuid
import copy
import requests
from kube_hunter.conf import get_config
from kube_hunter.modules.discovery.apiserver import ApiServer
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, K8sVersionDisclosure
from kube_hunter.core.types import Hunter, ActiveHunter, KubernetesCluster
from kube_hunter.core.types import RemoteCodeExec, AccessRisk, InformationDisclosure, UnauthenticatedAccess
from kube_hunter.core.types import (
AccessRisk,
InformationDisclosure,
UnauthenticatedAccess,
)
logger = logging.getLogger(__name__)
class ServerApiAccess(Vulnerability, Event):
""" The API Server port is accessible. Depending on your RBAC settings this could expose access to or control of your cluster. """
"""The API Server port is accessible.
Depending on your RBAC settings this could expose access to or control of your cluster."""
def __init__(self, evidence, using_token):
if using_token:
@@ -21,27 +28,52 @@ class ServerApiAccess(Vulnerability, Event):
else:
name = "Unauthenticated access to API"
category = UnauthenticatedAccess
Vulnerability.__init__(self, KubernetesCluster, name=name, category=category, vid="KHV005")
Vulnerability.__init__(
self,
KubernetesCluster,
name=name,
category=category,
vid="KHV005",
)
self.evidence = evidence
class ServerApiHTTPAccess(Vulnerability, Event):
""" The API Server port is accessible over HTTP, and therefore unencrypted. Depending on your RBAC settings this could expose access to or control of your cluster. """
"""The API Server port is accessible over HTTP, and therefore unencrypted.
Depending on your RBAC settings this could expose access to or control of your cluster."""
def __init__(self, evidence):
name = "Insecure (HTTP) access to API"
category = UnauthenticatedAccess
Vulnerability.__init__(self, KubernetesCluster, name=name, category=category, vid="KHV006")
Vulnerability.__init__(
self,
KubernetesCluster,
name=name,
category=category,
vid="KHV006",
)
self.evidence = evidence
class ApiInfoDisclosure(Vulnerability, Event):
"""Information Disclosure depending upon RBAC permissions and Kube-Cluster Setup"""
def __init__(self, evidence, using_token, name):
category = InformationDisclosure
if using_token:
name +=" using service account token"
name += " using default service account token"
else:
name +=" as anonymous user"
Vulnerability.__init__(self, KubernetesCluster, name=name, category=InformationDisclosure, vid="KHV007")
name += " as anonymous user"
Vulnerability.__init__(
self,
KubernetesCluster,
name=name,
category=category,
vid="KHV007",
)
self.evidence = evidence
class ListPodsAndNamespaces(ApiInfoDisclosure):
""" Accessing pods might give an attacker valuable information"""
@@ -72,64 +104,84 @@ class ListClusterRoles(ApiInfoDisclosure):
class CreateANamespace(Vulnerability, Event):
""" Creating a namespace might give an attacker an area with default (exploitable) permissions to run pods in.
"""
"""Creating a namespace might give an attacker an area with default (exploitable) permissions to run pods in."""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created a namespace",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created a namespace",
category=AccessRisk,
)
self.evidence = evidence
class DeleteANamespace(Vulnerability, Event):
""" Deleting a namespace might give an attacker the option to affect application behavior """
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Delete a namespace",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Delete a namespace",
category=AccessRisk,
)
self.evidence = evidence
class CreateARole(Vulnerability, Event):
""" Creating a role might give an attacker the option to harm the normal behavior of newly created pods
within the specified namespaces.
"""Creating a role might give an attacker the option to harm the normal behavior of newly created pods
within the specified namespaces.
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created a role",
category=AccessRisk)
Vulnerability.__init__(self, KubernetesCluster, name="Created a role", category=AccessRisk)
self.evidence = evidence
class CreateAClusterRole(Vulnerability, Event):
""" Creating a cluster role might give an attacker the option to harm the normal behavior of newly created pods
across the whole cluster
"""Creating a cluster role might give an attacker the option to harm the normal behavior of newly created pods
across the whole cluster
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created a cluster role",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created a cluster role",
category=AccessRisk,
)
self.evidence = evidence
class PatchARole(Vulnerability, Event):
""" Patching a role might give an attacker the option to create new pods with custom roles within the
"""Patching a role might give an attacker the option to create new pods with custom roles within the
specific role's namespace scope
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Patched a role",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Patched a role",
category=AccessRisk,
)
self.evidence = evidence
class PatchAClusterRole(Vulnerability, Event):
""" Patching a cluster role might give an attacker the option to create new pods with custom roles within the whole
"""Patching a cluster role might give an attacker the option to create new pods with custom roles within the whole
cluster scope.
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Patched a cluster role",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Patched a cluster role",
category=AccessRisk,
)
self.evidence = evidence
@@ -137,8 +189,12 @@ class DeleteARole(Vulnerability, Event):
""" Deleting a role might allow an attacker to affect access to resources in the namespace"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Deleted a role",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Deleted a role",
category=AccessRisk,
)
self.evidence = evidence
@@ -146,8 +202,12 @@ class DeleteAClusterRole(Vulnerability, Event):
""" Deleting a cluster role might allow an attacker to affect access to resources in the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Deleted a cluster role",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Deleted a cluster role",
category=AccessRisk,
)
self.evidence = evidence
@@ -155,8 +215,12 @@ class CreateAPod(Vulnerability, Event):
""" Creating a new pod allows an attacker to run custom code"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created A Pod",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created A Pod",
category=AccessRisk,
)
self.evidence = evidence
@@ -164,8 +228,12 @@ class CreateAPrivilegedPod(Vulnerability, Event):
""" Creating a new PRIVILEGED pod would gain an attacker FULL CONTROL over the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created A PRIVILEGED Pod",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created A PRIVILEGED Pod",
category=AccessRisk,
)
self.evidence = evidence
@@ -173,8 +241,12 @@ class PatchAPod(Vulnerability, Event):
""" Patching a pod allows an attacker to compromise and control it """
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Patched A Pod",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Patched A Pod",
category=AccessRisk,
)
self.evidence = evidence
@@ -182,8 +254,12 @@ class DeleteAPod(Vulnerability, Event):
""" Deleting a pod allows an attacker to disturb applications on the cluster """
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Deleted A Pod",
category=AccessRisk)
Vulnerability.__init__(
self,
KubernetesCluster,
name="Deleted A Pod",
category=AccessRisk,
)
self.evidence = evidence
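The subclasses above all follow one shape: a fixed name and category passed to `Vulnerability.__init__`, with the evidence stored on the instance. Stripped to a stand-alone form (the base class and category markers here are simplified stand-ins for the kube_hunter types):

```python
class Vulnerability:
    def __init__(self, component, name, category=None, vid="None"):
        self.component = component
        self.name = name
        self.category = category
        self.vid = vid

class KubernetesCluster: ...
class AccessRisk: ...

class DeleteAPod(Vulnerability):
    """Deleting a pod allows an attacker to disturb applications on the cluster"""
    def __init__(self, evidence):
        super().__init__(KubernetesCluster, name="Deleted A Pod", category=AccessRisk)
        self.evidence = evidence

v = DeleteAPod(evidence="deletionTimestamp: ...")
print(v.name, v.category is AccessRisk)  # Deleted A Pod True
```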
@@ -196,57 +272,67 @@ class ApiServerPassiveHunterFinished(Event):
# If we have a service account token we'll also trigger AccessApiServerWithToken below
@handler.subscribe(ApiServer)
class AccessApiServer(Hunter):
""" API Server Hunter
"""API Server Hunter
Checks if API server is accessible
"""
def __init__(self, event):
self.event = event
self.path = "{}://{}:{}".format(self.event.protocol, self.event.host, self.event.port)
self.path = f"{self.event.protocol}://{self.event.host}:{self.event.port}"
self.headers = {}
self.with_token = False
def access_api_server(self):
logging.debug('Passive Hunter is attempting to access the API at {}'.format(self.path))
config = get_config()
logger.debug(f"Passive Hunter is attempting to access the API at {self.path}")
try:
r = requests.get("{path}/api".format(path=self.path), headers=self.headers, verify=False)
if r.status_code == 200 and r.content != '':
r = requests.get(f"{self.path}/api", headers=self.headers, verify=False, timeout=config.network_timeout)
if r.status_code == 200 and r.content:
return r.content
except requests.exceptions.ConnectionError:
pass
return False
def get_items(self, path):
config = get_config()
try:
items = []
r = requests.get(path, headers=self.headers, verify=False)
if r.status_code ==200:
r = requests.get(path, headers=self.headers, verify=False, timeout=config.network_timeout)
if r.status_code == 200:
resp = json.loads(r.content)
for item in resp["items"]:
items.append(item["metadata"]["name"])
return items
logging.debug("Got HTTP {} response: {}".format(r.status_code, r.text))
logger.debug(f"Got HTTP {r.status_code} response: {r.text}")
except (requests.exceptions.ConnectionError, KeyError):
logging.debug("Failed retrieving items from API server at {}".format(path))
logger.debug(f"Failed retrieving items from API server at {path}")
return None
def get_pods(self, namespace=None):
config = get_config()
pods = []
try:
if namespace is None:
r = requests.get("{path}/api/v1/pods".format(path=self.path),
headers=self.headers, verify=False)
if not namespace:
r = requests.get(
f"{self.path}/api/v1/pods",
headers=self.headers,
verify=False,
timeout=config.network_timeout,
)
else:
r = requests.get("{path}/api/v1/namespaces/{namespace}/pods".format(path=self.path),
headers=self.headers, verify=False)
r = requests.get(
f"{self.path}/api/v1/namespaces/{namespace}/pods",
headers=self.headers,
verify=False,
timeout=config.network_timeout,
)
if r.status_code == 200:
resp = json.loads(r.content)
for item in resp["items"]:
name = item["metadata"]["name"].encode('ascii', 'ignore')
namespace = item["metadata"]["namespace"].encode('ascii', 'ignore')
pods.append({'name': name, 'namespace': namespace})
name = item["metadata"]["name"].encode("ascii", "ignore")
namespace = item["metadata"]["namespace"].encode("ascii", "ignore")
pods.append({"name": name, "namespace": namespace})
return pods
except (requests.exceptions.ConnectionError, KeyError):
pass
@@ -260,15 +346,15 @@ class AccessApiServer(Hunter):
else:
self.publish_event(ServerApiAccess(api, self.with_token))
namespaces = self.get_items("{path}/api/v1/namespaces".format(path=self.path))
namespaces = self.get_items(f"{self.path}/api/v1/namespaces")
if namespaces:
self.publish_event(ListNamespaces(namespaces, self.with_token))
roles = self.get_items("{path}/apis/rbac.authorization.k8s.io/v1/roles".format(path=self.path))
roles = self.get_items(f"{self.path}/apis/rbac.authorization.k8s.io/v1/roles")
if roles:
self.publish_event(ListRoles(roles, self.with_token))
cluster_roles = self.get_items("{path}/apis/rbac.authorization.k8s.io/v1/clusterroles".format(path=self.path))
cluster_roles = self.get_items(f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles")
if cluster_roles:
self.publish_event(ListClusterRoles(cluster_roles, self.with_token))
@@ -280,16 +366,17 @@ class AccessApiServer(Hunter):
# the token
self.publish_event(ApiServerPassiveHunterFinished(namespaces))
@handler.subscribe(ApiServer, predicate=lambda x: x.auth_token)
class AccessApiServerWithToken(AccessApiServer):
""" API Server Hunter
"""API Server Hunter
Accessing the API server using the service account token obtained from a compromised pod
"""
def __init__(self, event):
super(AccessApiServerWithToken, self).__init__(event)
assert self.event.auth_token != ''
self.headers = {'Authorization': 'Bearer ' + self.event.auth_token}
super().__init__(event)
assert self.event.auth_token
self.headers = {"Authorization": f"Bearer {self.event.auth_token}"}
self.category = InformationDisclosure
self.with_token = True
@@ -303,210 +390,172 @@ class AccessApiServerActive(ActiveHunter):
def __init__(self, event):
self.event = event
self.path = "{}://{}:{}".format(self.event.protocol, self.event.host, self.event.port)
self.path = f"{self.event.protocol}://{self.event.host}:{self.event.port}"
def create_item(self, path, name, data):
headers = {
'Content-Type': 'application/json'
}
def create_item(self, path, data):
config = get_config()
headers = {"Content-Type": "application/json"}
if self.event.auth_token:
headers['Authorization'] = 'Bearer {token}'.format(token=self.event.auth_token)
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
res = requests.post(path.format(name=name), verify=False, data=data, headers=headers)
res = requests.post(path, verify=False, data=data, headers=headers, timeout=config.network_timeout)
if res.status_code in [200, 201, 202]:
parsed_content = json.loads(res.content)
return parsed_content['metadata']['name']
return parsed_content["metadata"]["name"]
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def patch_item(self, path, data):
headers = {
'Content-Type': 'application/json-patch+json'
}
config = get_config()
headers = {"Content-Type": "application/json-patch+json"}
if self.event.auth_token:
headers['Authorization'] = 'Bearer {token}'.format(token=self.event.auth_token)
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
res = requests.patch(path, headers=headers, verify=False, data=data)
res = requests.patch(path, headers=headers, verify=False, data=data, timeout=config.network_timeout)
if res.status_code not in [200, 201, 202]:
return None
parsed_content = json.loads(res.content)
# TODO is there a patch timestamp we could use?
return parsed_content['metadata']['namespace']
return parsed_content["metadata"]["namespace"]
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def delete_item(self, path):
config = get_config()
headers = {}
if self.event.auth_token:
headers['Authorization'] = 'Bearer {token}'.format(token=self.event.auth_token)
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
res = requests.delete(path, headers=headers, verify=False, timeout=config.network_timeout)
if res.status_code in [200, 201, 202]:
parsed_content = json.loads(res.content)
return parsed_content["metadata"]["deletionTimestamp"]
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def create_a_pod(self, namespace, is_privileged):
privileged_value = {"securityContext": {"privileged": True}} if is_privileged else {}
random_name = str(uuid.uuid4())[0:5]
pod = {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {"name": random_name},
"spec": {
"containers": [
{"name": random_name, "image": "nginx:1.7.9", "ports": [{"containerPort": 80}], **privileged_value}
]
},
}
return self.create_item(path=f"{self.path}/api/v1/namespaces/{namespace}/pods", data=json.dumps(pod))
def delete_a_pod(self, namespace, pod_name):
delete_timestamp = self.delete_item(f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}")
if not delete_timestamp:
logger.error(f"Created pod {pod_name} in namespace {namespace} but unable to delete it")
return delete_timestamp
def patch_a_pod(self, namespace, pod_name):
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}",
data=json.dumps(data),
)
def create_namespace(self):
random_name = (str(uuid.uuid4()))[0:5]
data = {
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {"name": random_name, "labels": {"name": random_name}},
}
return self.create_item(path=f"{self.path}/api/v1/namespaces", data=json.dumps(data))
def delete_namespace(self, namespace):
delete_timestamp = self.delete_item(f"{self.path}/api/v1/namespaces/{namespace}")
if delete_timestamp is None:
logger.error(f"Created namespace {namespace} but failed to delete it")
return delete_timestamp
def create_a_role(self, namespace):
name = str(uuid.uuid4())[0:5]
role = {
"kind": "Role",
"apiVersion": "rbac.authorization.k8s.io/v1",
"metadata": {"namespace": namespace, "name": name},
"rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "watch", "list"]}],
}
return self.create_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles",
data=json.dumps(role),
)
def create_a_cluster_role(self):
name = str(uuid.uuid4())[0:5]
cluster_role = {
"kind": "ClusterRole",
"apiVersion": "rbac.authorization.k8s.io/v1",
"metadata": {"name": name},
"rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "watch", "list"]}],
}
return self.create_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles",
data=json.dumps(cluster_role),
)
def delete_a_role(self, namespace, name):
delete_timestamp = self.delete_item(
f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles/{name}"
)
if delete_timestamp is None:
logger.error(f"Created role {name} in namespace {namespace} but unable to delete it")
return delete_timestamp
def delete_a_cluster_role(self, name):
delete_timestamp = self.delete_item(f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles/{name}")
if delete_timestamp is None:
logger.error(f"Created cluster role {name} but unable to delete it")
return delete_timestamp
def patch_a_role(self, namespace, role):
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles/{role}",
data=json.dumps(data),
)
def patch_a_cluster_role(self, cluster_role):
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles/{cluster_role}",
data=json.dumps(data),
)
def execute(self):
# Try creating cluster-wide objects
namespace = self.create_namespace()
if namespace:
self.publish_event(CreateANamespace(f"new namespace name: {namespace}"))
delete_timestamp = self.delete_namespace(namespace)
if delete_timestamp:
self.publish_event(DeleteANamespace(delete_timestamp))
cluster_role = self.create_a_cluster_role()
if cluster_role:
self.publish_event(CreateAClusterRole(f"Cluster role name: {cluster_role}"))
patch_evidence = self.patch_a_cluster_role(cluster_role)
if patch_evidence:
self.publish_event(
PatchAClusterRole(f"Patched Cluster Role Name: {cluster_role} Patch evidence: {patch_evidence}")
)
delete_timestamp = self.delete_a_cluster_role(cluster_role)
if delete_timestamp:
self.publish_event(DeleteAClusterRole(f"Cluster role {cluster_role} deletion time {delete_timestamp}"))
# Try attacking all the namespaces we know about
if self.event.namespaces:
@@ -514,66 +563,81 @@ class AccessApiServerActive(ActiveHunter):
# Try creating and deleting a privileged pod
pod_name = self.create_a_pod(namespace, True)
if pod_name:
self.publish_event(CreateAPrivilegedPod(f"Pod Name: {pod_name} Namespace: {namespace}"))
delete_time = self.delete_a_pod(namespace, pod_name)
if delete_time:
self.publish_event(DeleteAPod(f"Pod Name: {pod_name} Deletion time: {delete_time}"))
# Try creating, patching and deleting an unprivileged pod
pod_name = self.create_a_pod(namespace, False)
if pod_name:
self.publish_event(CreateAPod(f"Pod Name: {pod_name} Namespace: {namespace}"))
patch_evidence = self.patch_a_pod(namespace, pod_name)
if patch_evidence:
self.publish_event(
PatchAPod(
f"Pod Name: {pod_name} " f"Namespace: {namespace} " f"Patch evidence: {patch_evidence}"
)
)
delete_time = self.delete_a_pod(namespace, pod_name)
if delete_time:
self.publish_event(
DeleteAPod(
f"Pod Name: {pod_name} " f"Namespace: {namespace} " f"Delete time: {delete_time}"
)
)
role = self.create_a_role(namespace)
if role:
self.publish_event(CreateARole(f"Role name: {role}"))
patch_evidence = self.patch_a_role(namespace, role)
if patch_evidence:
self.publish_event(
PatchARole(
f"Patched Role Name: {role} "
f"Namespace: {namespace} "
f"Patch evidence: {patch_evidence}"
)
)
delete_time = self.delete_a_role(namespace, role)
if delete_time:
self.publish_event(
DeleteARole(
f"Deleted role: {role} " f"Namespace: {namespace} " f"Delete time: {delete_time}"
)
)
# Note: we are not binding any role or cluster role because
# in certain cases it might affect the running pod within the cluster (and we don't want to do that).
@handler.subscribe(ApiServer)
class ApiVersionHunter(Hunter):
"""Api Version Hunter
Tries to obtain the API server's version directly from the /version endpoint
"""
def __init__(self, event):
self.event = event
self.path = f"{self.event.protocol}://{self.event.host}:{self.event.port}"
self.session = requests.Session()
self.session.verify = False
if self.event.auth_token:
self.session.headers.update({"Authorization": f"Bearer {self.event.auth_token}"})
def execute(self):
config = get_config()
if self.event.auth_token:
logger.debug(
"Trying to access the API server version endpoint using pod's"
f" service account token on {self.event.host}:{self.event.port} \t"
)
else:
logger.debug("Trying to access the API server version endpoint anonymously")
version = self.session.get(f"{self.path}/version", timeout=config.network_timeout).json()["gitVersion"]
logger.debug(f"Discovered version of api server {version}")
self.publish_event(K8sVersionDisclosure(version=version, from_endpoint="/version"))
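For context, the hunter above expects the API server's `/version` endpoint to return a JSON body with a `gitVersion` field. A minimal sketch of just the parsing step, using an invented sample payload (the field names mirror the real endpoint; the values are fabricated for illustration):

```python
import json

# Hypothetical /version response body; the field names mirror the real
# Kubernetes endpoint, but the values here are made up for illustration.
sample = '{"major": "1", "minor": "13", "gitVersion": "v1.13.4", "platform": "linux/amd64"}'

version = json.loads(sample)["gitVersion"]
print(version)  # v1.13.4
```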

@@ -2,33 +2,48 @@ import logging
from scapy.all import ARP, IP, ICMP, Ether, sr1, srp
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import ActiveHunter, KubernetesCluster, IdentityTheft
from kube_hunter.modules.hunting.capabilities import CapNetRawEnabled
logger = logging.getLogger(__name__)
class PossibleArpSpoofing(Vulnerability, Event):
"""A malicious pod running on the cluster could potentially run an ARP Spoof attack
and perform a MITM between pods on the node."""
def __init__(self):
Vulnerability.__init__(
self,
KubernetesCluster,
"Possible Arp Spoof",
category=IdentityTheft,
vid="KHV020",
)
@handler.subscribe(CapNetRawEnabled)
class ArpSpoofHunter(ActiveHunter):
"""Arp Spoof Hunter
Checks for the possibility of running an ARP spoof
attack from within a pod (results are based on the running node)
"""
def __init__(self, event):
self.event = event
def try_getting_mac(self, ip):
config = get_config()
ans = sr1(ARP(op=1, pdst=ip), timeout=config.network_timeout, verbose=0)
return ans[ARP].hwsrc if ans else None
def detect_l3_on_host(self, arp_responses):
"""Returns True if an L3 network plugin is present"""
logger.debug("Attempting to detect L3 network plugin using ARP")
unique_macs = list({response[ARP].hwsrc for _, response in arp_responses})
# if LAN addresses not unique
if len(unique_macs) == 1:
@@ -41,8 +56,13 @@ class ArpSpoofHunter(ActiveHunter):
return False
def execute(self):
config = get_config()
self_ip = sr1(IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
arp_responses, _ = srp(
Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"),
timeout=config.network_timeout,
verbose=0,
)
# arp enabled on cluster and more than one pod on node
if len(arp_responses) > 1:

@@ -6,11 +6,22 @@ from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import Hunter, AccessRisk, KubernetesCluster
logger = logging.getLogger(__name__)
class CapNetRawEnabled(Event, Vulnerability):
"""CAP_NET_RAW is enabled by default for pods.
If an attacker manages to compromise a pod,
they could potentially take advantage of this capability to perform network
attacks on other pods running on the same node"""
def __init__(self):
Vulnerability.__init__(
self,
KubernetesCluster,
name="CAP_NET_RAW Enabled",
category=AccessRisk,
)
@handler.subscribe(RunningAsPodEvent)
@@ -18,19 +29,20 @@ class PodCapabilitiesHunter(Hunter):
"""Pod Capabilities Hunter
Checks for default enabled capabilities in a pod
"""
def __init__(self, event):
self.event = event
def check_net_raw(self):
logger.debug("Passive hunter is trying to open a RAW socket")
try:
# trying to open a raw socket without CAP_NET_RAW will raise PermissionsError
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
s.close()
logger.debug("Passive hunter is closing the RAW socket")
return True
except PermissionError:
logger.debug("CAP_NET_RAW not enabled")
def execute(self):
if self.check_net_raw():

View File

@@ -3,40 +3,53 @@ import logging
import base64
import re
from socket import socket
from kube_hunter.core.types import Hunter, KubernetesCluster, InformationDisclosure
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, Service
logger = logging.getLogger(__name__)
email_pattern = re.compile(rb"([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)")
class CertificateEmail(Vulnerability, Event):
"""The Kubernetes API Server advertises a public certificate for TLS.
This certificate includes an email address that may provide additional information about your
organization to an attacker, or be abused for further email-based attacks."""
def __init__(self, email):
Vulnerability.__init__(
self,
KubernetesCluster,
"Certificate Includes Email Address",
category=InformationDisclosure,
vid="KHV021",
)
self.email = email
self.evidence = f"email: {self.email}"
@handler.subscribe(Service)
class CertificateDiscovery(Hunter):
"""Certificate Email Hunting
Checks for email addresses in kubernetes ssl certificates
"""
def __init__(self, event):
self.event = event
def execute(self):
try:
logger.debug("Passive hunter is attempting to get server certificate")
addr = (str(self.event.host), self.event.port)
cert = ssl.get_server_certificate(addr)
except ssl.SSLError:
# If the server doesn't offer SSL on this port we won't get a certificate
return
self.examine_certificate(cert)
def examine_certificate(self, cert):
c = cert.strip(ssl.PEM_HEADER).strip("\n").strip(ssl.PEM_FOOTER).strip("\n")
certdata = base64.b64decode(c)
emails = re.findall(email_pattern, certdata)
for email in emails:
self.publish_event(CertificateEmail(email=email))
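As a standalone illustration of the extraction step above: the PEM body is base64-decoded to raw DER bytes, so the pattern must be a bytes regex. The certificate body below is a fabricated stand-in, not a real DER certificate:

```python
import base64
import re

# Same bytes pattern the hunter uses; run against fabricated DER-like bytes.
email_pattern = re.compile(rb"([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)")

fake_der = b"\x30\x82 subject: admin@example.com \x00"
print(email_pattern.findall(fake_der))  # [b'admin@example.com']
```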

@@ -1,84 +1,145 @@
import logging
import json
import requests
from packaging import version
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, K8sVersionDisclosure
from kube_hunter.core.types import (
Hunter,
KubernetesCluster,
RemoteCodeExec,
PrivilegeEscalation,
DenialOfService,
KubectlClient,
)
from kube_hunter.modules.discovery.kubectl import KubectlClientEvent
logger = logging.getLogger(__name__)
""" Cluster CVES """
class ServerApiVersionEndPointAccessPE(Vulnerability, Event):
"""Node is vulnerable to critical CVE-2018-1002105"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Critical Privilege Escalation CVE",
category=PrivilegeEscalation,
vid="KHV022",
)
self.evidence = evidence
class ServerApiVersionEndPointAccessDos(Vulnerability, Event):
"""Node not patched for CVE-2019-1002100. Depending on your RBAC settings,
a crafted json-patch could cause a Denial of Service."""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Denial of Service to Kubernetes API Server",
category=DenialOfService,
vid="KHV023",
)
self.evidence = evidence
class PingFloodHttp2Implementation(Vulnerability, Event):
"""Node not patched for CVE-2019-9512. An attacker could cause a
Denial of Service by sending specially crafted HTTP requests."""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Possible Ping Flood Attack",
category=DenialOfService,
vid="KHV024",
)
self.evidence = evidence
class ResetFloodHttp2Implementation(Vulnerability, Event):
"""Node not patched for CVE-2019-9514. An attacker could cause a
Denial of Service by sending specially crafted HTTP requests."""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Possible Reset Flood Attack",
category=DenialOfService,
vid="KHV025",
)
self.evidence = evidence
class ServerApiClusterScopedResourcesAccess(Vulnerability, Event):
"""Api Server not patched for CVE-2019-11247.
API server allows access to custom resources via wrong scope"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Arbitrary Access To Cluster Scoped Resources",
category=PrivilegeEscalation,
vid="KHV026",
)
self.evidence = evidence
""" Kubectl CVES """
class IncompleteFixToKubectlCpVulnerability(Vulnerability, Event):
"""The kubectl client is vulnerable to CVE-2019-11246,
an attacker could potentially execute arbitrary code on the client's machine"""
def __init__(self, binary_version):
Vulnerability.__init__(
self,
KubectlClient,
"Kubectl Vulnerable To CVE-2019-11246",
category=RemoteCodeExec,
vid="KHV027",
)
self.binary_version = binary_version
self.evidence = f"kubectl version: {self.binary_version}"
class KubectlCpVulnerability(Vulnerability, Event):
"""The kubectl client is vulnerable to CVE-2019-1002101,
an attacker could potentially execute arbitrary code on the client's machine"""
def __init__(self, binary_version):
Vulnerability.__init__(
self,
KubectlClient,
"Kubectl Vulnerable To CVE-2019-1002101",
category=RemoteCodeExec,
vid="KHV028",
)
self.binary_version = binary_version
self.evidence = f"kubectl version: {self.binary_version}"
class CveUtils:
@staticmethod
def get_base_release(full_ver):
# if LegacyVersion, converting manually to a base version
if isinstance(full_ver, version.LegacyVersion):
return version.parse(".".join(full_ver._version.split(".")[:2]))
return version.parse(".".join(map(str, full_ver._version.release[:2])))
@staticmethod
def to_legacy(full_ver):
# converting version to version.LegacyVersion
return version.LegacyVersion(".".join(map(str, full_ver._version.release)))
@staticmethod
def to_raw_version(v):
if not isinstance(v, version.LegacyVersion):
return ".".join(map(str, v._version.release))
return v._version
@staticmethod
@@ -86,7 +147,8 @@ class CveUtils:
"""Function compares two versions, handling differences with conversion to LegacyVersion"""
# getting the raw version while stripping the leading 'v' char, if it exists.
# removing this char lets us safely compare the two versions.
v1_raw = CveUtils.to_raw_version(v1).strip("v")
v2_raw = CveUtils.to_raw_version(v2).strip("v")
new_v1 = version.LegacyVersion(v1_raw)
new_v2 = version.LegacyVersion(v2_raw)
@@ -94,15 +156,16 @@ class CveUtils:
@staticmethod
def basic_compare(v1, v2):
return (v1 > v2) - (v1 < v2)
@staticmethod
def is_downstream_version(version):
return any(c in version for c in "+-~")
@staticmethod
def is_vulnerable(fix_versions, check_version, ignore_downstream=False):
"""Function determines if a version is vulnerable,
by comparing to given fix versions by base release"""
if ignore_downstream and CveUtils.is_downstream_version(check_version):
return False
@@ -112,7 +175,7 @@ class CveUtils:
# default to classic compare, unless the check_version is legacy.
version_compare_func = CveUtils.basic_compare
if isinstance(check_v, version.LegacyVersion):
version_compare_func = CveUtils.version_compare
if check_version not in fix_versions:
@@ -123,7 +186,7 @@ class CveUtils:
# if the check version and the current fix has the same base release
if base_check_v == base_fix_v:
# when check_version is legacy, we use a custom compare func, to handle differences between versions
if version_compare_func(check_v, fix_v) == -1:
# determine vulnerable if smaller and with same base version
vulnerable = True
@@ -139,20 +202,22 @@ class CveUtils:
@handler.subscribe_once(K8sVersionDisclosure)
class K8sClusterCveHunter(Hunter):
"""K8s CVE Hunter
Checks if Node is running a Kubernetes version vulnerable to
specific important CVEs
"""
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
logger.debug(f"Checking known CVEs for k8s API version: {self.event.version}")
cve_mapping = {
ServerApiVersionEndPointAccessPE: ["1.10.11", "1.11.5", "1.12.3"],
ServerApiVersionEndPointAccessDos: ["1.11.8", "1.12.6", "1.13.4"],
ResetFloodHttp2Implementation: ["1.13.10", "1.14.6", "1.15.3"],
PingFloodHttp2Implementation: ["1.13.10", "1.14.6", "1.15.3"],
ServerApiClusterScopedResourcesAccess: ["1.13.9", "1.14.5", "1.15.2"],
}
for vulnerability, fix_versions in cve_mapping.items():
if CveUtils.is_vulnerable(fix_versions, self.event.version, not config.include_patched_versions):
@@ -164,15 +229,17 @@ class KubectlCVEHunter(Hunter):
"""Kubectl CVE Hunter
Checks if the kubectl client is vulnerable to specific important CVEs
"""
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
cve_mapping = {
KubectlCpVulnerability: ["1.11.9", "1.12.7", "1.13.5", "1.14.0"],
IncompleteFixToKubectlCpVulnerability: ["1.12.9", "1.13.6", "1.14.2"],
}
logger.debug(f"Checking known CVEs for kubectl version: {self.event.version}")
for vulnerability, fix_versions in cve_mapping.items():
if CveUtils.is_vulnerable(fix_versions, self.event.version, not config.include_patched_versions):
self.publish_event(vulnerability(binary_version=self.event.version))
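The fix-version logic these hunters rely on can be illustrated with a simplified sketch (this is not the `CveUtils` implementation itself, which also handles legacy and downstream versions): a version is flagged when it shares a major.minor base release with one of the fix versions but sorts below it.

```python
from packaging import version

def naive_is_vulnerable(fix_versions, check):
    # Simplified: flag only when the base release (major.minor) matches a
    # fix version and the checked version is older than that fix.
    check_v = version.parse(check)
    for fix in fix_versions:
        fix_v = version.parse(fix)
        if check_v.release[:2] == fix_v.release[:2] and check_v < fix_v:
            return True
    return False

print(naive_is_vulnerable(["1.12.9", "1.13.6", "1.14.2"], "1.13.5"))  # True
print(naive_is_vulnerable(["1.12.9", "1.13.6", "1.14.2"], "1.14.2"))  # False
```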

@@ -2,31 +2,44 @@ import logging
import json
import requests
from kube_hunter.conf import get_config
from kube_hunter.core.types import Hunter, RemoteCodeExec, KubernetesCluster
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event
from kube_hunter.modules.discovery.dashboard import KubeDashboardEvent
logger = logging.getLogger(__name__)
class DashboardExposed(Vulnerability, Event):
"""All operations on the cluster are exposed"""
def __init__(self, nodes):
Vulnerability.__init__(
self,
KubernetesCluster,
"Dashboard Exposed",
category=RemoteCodeExec,
vid="KHV029",
)
self.evidence = "nodes: {}".format(" ".join(nodes)) if nodes else None
@handler.subscribe(KubeDashboardEvent)
class KubeDashboard(Hunter):
"""Dashboard Hunting
Hunts open Dashboards, gets the type of nodes in the cluster
"""
def __init__(self, event):
self.event = event
def get_nodes(self):
config = get_config()
logger.debug("Passive hunter is attempting to get nodes types of the cluster")
r = requests.get(f"http://{self.event.host}:{self.event.port}/api/v1/node", timeout=config.network_timeout)
if r.status_code == 200 and "nodes" in r.text:
return [node["objectMeta"]["name"] for node in json.loads(r.text)["nodes"]]
def execute(self):
self.publish_event(DashboardExposed(nodes=self.get_nodes()))
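For reference, `get_nodes` above only reads the `nodes[*].objectMeta.name` fields of the Dashboard's node listing. A sketch of that parsing step against a fabricated payload (the structure follows the code above; the node names are invented):

```python
import json

# Fabricated example of a Dashboard node listing; only the fields the
# hunter reads ("nodes" -> "objectMeta" -> "name") are included.
body = '{"nodes": [{"objectMeta": {"name": "node-1"}}, {"objectMeta": {"name": "node-2"}}]}'

names = [node["objectMeta"]["name"] for node in json.loads(body)["nodes"]]
print(names)  # ['node-1', 'node-2']
```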

@@ -3,63 +3,88 @@ import logging
from scapy.all import IP, ICMP, UDP, DNS, DNSQR, ARP, Ether, sr1, srp1, srp
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import ActiveHunter, KubernetesCluster, IdentityTheft
from kube_hunter.modules.hunting.arp import PossibleArpSpoofing
logger = logging.getLogger(__name__)
class PossibleDnsSpoofing(Vulnerability, Event):
"""A malicious pod running on the cluster could potentially run a DNS Spoof attack and perform a MITM attack on applications running in the cluster."""
"""A malicious pod running on the cluster could potentially run a DNS Spoof attack
and perform a MITM attack on applications running in the cluster."""
def __init__(self, kubedns_pod_ip):
Vulnerability.__init__(self, KubernetesCluster, "Possible DNS Spoof", category=IdentityTheft, vid="KHV030")
Vulnerability.__init__(
self,
KubernetesCluster,
"Possible DNS Spoof",
category=IdentityTheft,
vid="KHV030",
)
self.kubedns_pod_ip = kubedns_pod_ip
self.evidence = "kube-dns at: {}".format(self.kubedns_pod_ip)
self.evidence = f"kube-dns at: {self.kubedns_pod_ip}"
# Only triggered with RunningAsPod base event
@handler.subscribe(PossibleArpSpoofing)
class DnsSpoofHunter(ActiveHunter):
"""DNS Spoof Hunter
Checks for the possibility for a malicious pod to compromise DNS requests of the cluster (results are based on the running node)
Checks whether a malicious pod could compromise the cluster's DNS requests
(results are based on the running node)
"""
def __init__(self, event):
self.event = event
def get_cbr0_ip_mac(self):
res = srp1(Ether() / IP(dst="1.1.1.1" , ttl=1) / ICMP(), verbose=0)
config = get_config()
res = srp1(Ether() / IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)
return res[IP].src, res.src
def extract_nameserver_ip(self):
with open('/etc/resolv.conf', 'r') as f:
with open("/etc/resolv.conf") as f:
# finds first nameserver in /etc/resolv.conf
match = re.search(r"nameserver (\d+.\d+.\d+.\d+)", f.read())
if match:
return match.group(1)
def get_kube_dns_ip_mac(self):
config = get_config()
kubedns_svc_ip = self.extract_nameserver_ip()
# getting actual pod ip of kube-dns service, by comparing the src mac of a dns response and arp scanning.
dns_info_res = srp1(Ether() / IP(dst=kubedns_svc_ip) / UDP(dport=53) / DNS(rd=1,qd=DNSQR()), verbose=0)
dns_info_res = srp1(
Ether() / IP(dst=kubedns_svc_ip) / UDP(dport=53) / DNS(rd=1, qd=DNSQR()),
verbose=0,
timeout=config.network_timeout,
)
kubedns_pod_mac = dns_info_res.src
self_ip = dns_info_res[IP].dst
arp_responses, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(op=1, pdst="{}/24".format(self_ip)), timeout=3, verbose=0)
arp_responses, _ = srp(
Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"),
timeout=config.network_timeout,
verbose=0,
)
for _, response in arp_responses:
if response[Ether].src == kubedns_pod_mac:
return response[ARP].psrc, response.src
def execute(self):
logging.debug("Attempting to get kube-dns pod ip")
self_ip = sr1(IP(dst="1.1.1.1", ttl=1), ICMP(), verbose=0)[IP].dst
config = get_config()
logger.debug("Attempting to get kube-dns pod ip")
self_ip = sr1(IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
cbr0_ip, cbr0_mac = self.get_cbr0_ip_mac()
kubedns = self.get_kube_dns_ip_mac()
if kubedns:
kubedns_ip, kubedns_mac = kubedns
logging.debug("ip = {}, kubednsip = {}, cbr0ip = {}".format(self_ip, kubedns_ip, cbr0_ip))
logger.debug(f"ip={self_ip} kubednsip={kubedns_ip} cbr0ip={cbr0_ip}")
if kubedns_mac != cbr0_mac:
# if self pod in the same subnet as kube-dns pod
self.publish_event(PossibleDnsSpoofing(kubedns_pod_ip=kubedns_ip))
else:
logging.debug("Could not get kubedns identity")
logger.debug("Could not get kubedns identity")


@@ -1,19 +1,37 @@
import logging
import requests
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, OpenPortEvent
from kube_hunter.core.types import ActiveHunter, Hunter, KubernetesCluster, \
InformationDisclosure, RemoteCodeExec, UnauthenticatedAccess, AccessRisk
from kube_hunter.core.types import (
ActiveHunter,
Hunter,
KubernetesCluster,
InformationDisclosure,
RemoteCodeExec,
UnauthenticatedAccess,
AccessRisk,
)
logger = logging.getLogger(__name__)
ETCD_PORT = 2379
""" Vulnerabilities """
class EtcdRemoteWriteAccessEvent(Vulnerability, Event):
"""Remote write access might grant an attacker full control over the kubernetes cluster"""
def __init__(self, write_res):
Vulnerability.__init__(self, KubernetesCluster, name="Etcd Remote Write Access Event", category=RemoteCodeExec, vid="KHV031")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd Remote Write Access Event",
category=RemoteCodeExec,
vid="KHV031",
)
self.evidence = write_res
@@ -21,7 +39,13 @@ class EtcdRemoteReadAccessEvent(Vulnerability, Event):
"""Remote read access might expose to an attacker cluster's possible exploits, secrets and more."""
def __init__(self, keys):
Vulnerability.__init__(self, KubernetesCluster, name="Etcd Remote Read Access Event", category=AccessRisk, vid="KHV032")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd Remote Read Access Event",
category=AccessRisk,
vid="KHV032",
)
self.evidence = keys
@@ -30,40 +54,54 @@ class EtcdRemoteVersionDisclosureEvent(Vulnerability, Event):
def __init__(self, version):
Vulnerability.__init__(self, KubernetesCluster, name="Etcd Remote version disclosure",
category=InformationDisclosure, vid="KHV033")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd Remote version disclosure",
category=InformationDisclosure,
vid="KHV033",
)
self.evidence = version
class EtcdAccessEnabledWithoutAuthEvent(Vulnerability, Event):
"""Etcd is accessible using HTTP (without authorization and authentication), it would allow a potential attacker to
"""Etcd is accessible using HTTP (without authorization and authentication),
it would allow a potential attacker to
gain access to the etcd"""
def __init__(self, version):
Vulnerability.__init__(self, KubernetesCluster, name="Etcd is accessible using insecure connection (HTTP)",
category=UnauthenticatedAccess, vid="KHV034")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd is accessible using insecure connection (HTTP)",
category=UnauthenticatedAccess,
vid="KHV034",
)
self.evidence = version
# Active Hunter
@handler.subscribe(OpenPortEvent, predicate=lambda p: p.port == 2379)
@handler.subscribe(OpenPortEvent, predicate=lambda p: p.port == ETCD_PORT)
class EtcdRemoteAccessActive(ActiveHunter):
"""Etcd Remote Access
Checks for remote write access to etcd- will attempt to add a new key to the etcd DB"""
Checks for remote write access to etcd; will attempt to add a new key to the etcd DB"""
def __init__(self, event):
self.event = event
self.write_evidence = ''
self.write_evidence = ""
self.event.protocol = "https"
def db_keys_write_access(self):
logging.debug("Active hunter is attempting to write keys remotely on host " + self.event.host)
data = {
'value': 'remotely written data'
}
config = get_config()
logger.debug(f"Trying to write keys remotely on host {self.event.host}")
data = {"value": "remotely written data"}
try:
r = requests.post("{protocol}://{host}:{port}/v2/keys/message".format(host=self.event.host, port=2379,
protocol=self.protocol), data=data)
self.write_evidence = r.content if r.status_code == 200 and r.content != '' else False
r = requests.post(
f"{self.event.protocol}://{self.event.host}:{ETCD_PORT}/v2/keys/message",
data=data,
timeout=config.network_timeout,
)
self.write_evidence = r.content if r.status_code == 200 and r.content else False
return self.write_evidence
except requests.exceptions.ConnectionError:
return False
@@ -74,7 +112,7 @@ class EtcdRemoteAccessActive(ActiveHunter):
# Passive Hunter
@handler.subscribe(OpenPortEvent, predicate=lambda p: p.port == 2379)
@handler.subscribe(OpenPortEvent, predicate=lambda p: p.port == ETCD_PORT)
class EtcdRemoteAccess(Hunter):
"""Etcd Remote Access
Checks for remote availability of etcd, its version, and read access to the DB
@@ -82,46 +120,57 @@ class EtcdRemoteAccess(Hunter):
def __init__(self, event):
self.event = event
self.version_evidence = ''
self.keys_evidence = ''
self.protocol = 'https'
self.version_evidence = ""
self.keys_evidence = ""
self.event.protocol = "https"
def db_keys_disclosure(self):
logging.debug(self.event.host + " Passive hunter is attempting to read etcd keys remotely")
config = get_config()
logger.debug(f"{self.event.host} Passive hunter is attempting to read etcd keys remotely")
try:
r = requests.get(
"{protocol}://{host}:{port}/v2/keys".format(protocol=self.protocol, host=self.event.host, port=2379),
verify=False)
self.keys_evidence = r.content if r.status_code == 200 and r.content != '' else False
f"{self.event.protocol}://{self.event.host}:{ETCD_PORT}/v2/keys",
verify=False,
timeout=config.network_timeout,
)
self.keys_evidence = r.content if r.status_code == 200 and r.content else False
return self.keys_evidence
except requests.exceptions.ConnectionError:
return False
def version_disclosure(self):
logging.debug(self.event.host + " Passive hunter is attempting to check etcd version remotely")
config = get_config()
logger.debug(f"Trying to check etcd version remotely at {self.event.host}")
try:
r = requests.get(
"{protocol}://{host}:{port}/version".format(protocol=self.protocol, host=self.event.host, port=2379),
verify=False)
self.version_evidence = r.content if r.status_code == 200 and r.content != '' else False
f"{self.event.protocol}://{self.event.host}:{ETCD_PORT}/version",
verify=False,
timeout=config.network_timeout,
)
self.version_evidence = r.content if r.status_code == 200 and r.content else False
return self.version_evidence
except requests.exceptions.ConnectionError:
return False
def insecure_access(self):
logging.debug(self.event.host + " Passive hunter is attempting to access etcd insecurely")
config = get_config()
logger.debug(f"Trying to access etcd insecurely at {self.event.host}")
try:
r = requests.get("http://{host}:{port}/version".format(host=self.event.host, port=2379), verify=False)
return r.content if r.status_code == 200 and r.content != '' else False
r = requests.get(
f"http://{self.event.host}:{ETCD_PORT}/version",
verify=False,
timeout=config.network_timeout,
)
return r.content if r.status_code == 200 and r.content else False
except requests.exceptions.ConnectionError:
return False
def execute(self):
if self.insecure_access(): # make a decision between http and https protocol
self.protocol = 'http'
self.event.protocol = "http"
if self.version_disclosure():
self.publish_event(EtcdRemoteVersionDisclosureEvent(self.version_evidence))
if self.protocol == 'http':
if self.event.protocol == "http":
self.publish_event(EtcdAccessEnabledWithoutAuthEvent(self.version_evidence))
if self.db_keys_disclosure():
self.publish_event(EtcdRemoteReadAccessEvent(self.keys_evidence))
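The `execute` logic above defaults to HTTPS and downgrades to HTTP only when the plaintext probe succeeds. A sketch of that decision, where `insecure_probe` stands in for `insecure_access()` (returning response content or `False`):

```python
def choose_protocol(insecure_probe):
    # Mirrors EtcdRemoteAccess.execute: default to https, downgrade to http
    # only if the plaintext /version endpoint answered
    return "http" if insecure_probe() else "https"
```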

File diff suppressed because it is too large


@@ -1,27 +1,53 @@
import logging
import json
import re
import uuid
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import ActiveHunter, Hunter, KubernetesCluster, PrivilegeEscalation
from kube_hunter.modules.hunting.kubelet import ExposedPodsHandler, ExposedRunHandler, KubeletHandlers
from kube_hunter.core.types import (
ActiveHunter,
Hunter,
KubernetesCluster,
PrivilegeEscalation,
)
from kube_hunter.modules.hunting.kubelet import (
ExposedPodsHandler,
ExposedRunHandler,
KubeletHandlers,
)
logger = logging.getLogger(__name__)
class WriteMountToVarLog(Vulnerability, Event):
"""A pod can create symlinks in the /var/log directory on the host, which can lead to a root directory traveral"""
def __init__(self, pods):
Vulnerability.__init__(self, KubernetesCluster, "Pod With Mount To /var/log", category=PrivilegeEscalation, vid="KHV047")
Vulnerability.__init__(
self,
KubernetesCluster,
"Pod With Mount To /var/log",
category=PrivilegeEscalation,
vid="KHV047",
)
self.pods = pods
self.evidence = "pods: {}".format(', '.join((pod["metadata"]["name"] for pod in self.pods)))
self.evidence = "pods: {}".format(", ".join(pod["metadata"]["name"] for pod in self.pods))
class DirectoryTraversalWithKubelet(Vulnerability, Event):
"""An attacker can run commands on pods with mount to /var/log, and traverse read all files on the host filesystem"""
"""An attacker can run commands on pods with mount to /var/log,
and traverse read all files on the host filesystem"""
def __init__(self, output):
Vulnerability.__init__(self, KubernetesCluster, "Root Traversal Read On The Kubelet", category=PrivilegeEscalation)
Vulnerability.__init__(
self,
KubernetesCluster,
"Root Traversal Read On The Kubelet",
category=PrivilegeEscalation,
)
self.output = output
self.evidence = "output: {}".format(self.output)
self.evidence = f"output: {self.output}"
@handler.subscribe(ExposedPodsHandler)
@@ -30,6 +56,7 @@ class VarLogMountHunter(Hunter):
Hunt pods that have write access to the host's /var/log. In such a case,
the pod can traverse and read files on the host machine
"""
def __init__(self, event):
self.event = event
@@ -49,28 +76,35 @@ class VarLogMountHunter(Hunter):
if pe_pods:
self.publish_event(WriteMountToVarLog(pods=pe_pods))
@handler.subscribe(ExposedRunHandler)
class ProveVarLogMount(ActiveHunter):
"""Prove /var/log Mount Hunter
Tries to read /etc/shadow on the host by running commands inside a pod with host mount to /var/log
"""
def __init__(self, event):
self.event = event
self.base_path = "https://{host}:{port}/".format(host=self.event.host, port=self.event.port)
self.base_path = f"https://{self.event.host}:{self.event.port}"
def run(self, command, container):
run_url = KubeletHandlers.RUN.value.format(
podNamespace=container["namespace"],
podID=container["pod"],
containerName=container["name"],
cmd=command
cmd=command,
)
return self.event.session.post(self.base_path + run_url, verify=False).text
return self.event.session.post(f"{self.base_path}/{run_url}", verify=False).text
# TODO: replace with multiple subscription to WriteMountToVarLog as well
def get_varlog_mounters(self):
logging.debug("accessing /pods manually on ProveVarLogMount")
pods = json.loads(self.event.session.get(self.base_path + KubeletHandlers.PODS.value, verify=False).text)["items"]
config = get_config()
logger.debug("accessing /pods manually on ProveVarLogMount")
pods = self.event.session.get(
f"{self.base_path}/" + KubeletHandlers.PODS.value,
verify=False,
timeout=config.network_timeout,
).json()["items"]
for pod in pods:
volume = VarLogMountHunter(ExposedPodsHandler(pods=pods)).has_write_mount_to(pod, "/var/log")
if volume:
@@ -81,32 +115,44 @@ class ProveVarLogMount(ActiveHunter):
for container in pod["spec"]["containers"]:
for volume_mount in container["volumeMounts"]:
if volume_mount["name"] == mount_name:
logging.debug("yielding {}".format(container))
logger.debug(f"yielding {container}")
yield container, volume_mount["mountPath"]
def traverse_read(self, host_file, container, mount_path, host_path):
"""Returns content of file on the host, and cleans trails"""
config = get_config()
symlink_name = str(uuid.uuid4())
# creating symlink to file
self.run("ln -s {} {}/{}".format(host_file, mount_path, symlink_name), container=container)
self.run(f"ln -s {host_file} {mount_path}/{symlink_name}", container)
# following symlink with kubelet
path_in_logs_endpoint = KubeletHandlers.LOGS.value.format(path=host_path.strip('/var/log')+symlink_name)
content = self.event.session.get("{}{}".format(self.base_path, path_in_logs_endpoint), verify=False).text
path_in_logs_endpoint = KubeletHandlers.LOGS.value.format(
path=re.sub(r"^/var/log", "", host_path) + symlink_name
)
content = self.event.session.get(
f"{self.base_path}/{path_in_logs_endpoint}",
verify=False,
timeout=config.network_timeout,
).text
# removing symlink
self.run("rm {}/{}".format(mount_path, symlink_name), container=container)
self.run(f"rm {mount_path}/{symlink_name}", container=container)
return content
def execute(self):
for pod, volume in self.get_varlog_mounters():
for container, mount_path in self.mount_path_from_mountname(pod, volume["name"]):
logging.debug("correleated container to mount_name")
logger.debug("Correlated container to mount_name")
cont = {
"name": container["name"],
"pod": pod["metadata"]["name"],
"namespace": pod["metadata"]["namespace"],
}
try:
output = self.traverse_read("/etc/shadow", container=cont, mount_path=mount_path, host_path=volume["hostPath"]["path"])
output = self.traverse_read(
"/etc/shadow",
container=cont,
mount_path=mount_path,
host_path=volume["hostPath"]["path"],
)
self.publish_event(DirectoryTraversalWithKubelet(output=output))
except Exception as x:
logging.debug("could not exploit /var/log: {}".format(x))
except Exception:
logger.debug("Could not exploit /var/log", exc_info=True)


@@ -1,56 +1,76 @@
import logging
import requests
import json
from enum import Enum
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability, K8sVersionDisclosure
from kube_hunter.core.types import ActiveHunter, Hunter, KubernetesCluster, InformationDisclosure
from kube_hunter.core.types import (
ActiveHunter,
Hunter,
KubernetesCluster,
InformationDisclosure,
)
from kube_hunter.modules.discovery.dashboard import KubeDashboardEvent
from kube_hunter.modules.discovery.proxy import KubeProxyEvent
logger = logging.getLogger(__name__)
class KubeProxyExposed(Vulnerability, Event):
"""All operations on the cluster are exposed"""
def __init__(self):
Vulnerability.__init__(self, KubernetesCluster, "Proxy Exposed", category=InformationDisclosure, vid="KHV049")
Vulnerability.__init__(
self,
KubernetesCluster,
"Proxy Exposed",
category=InformationDisclosure,
vid="KHV049",
)
class Service(Enum):
DASHBOARD = "kubernetes-dashboard"
@handler.subscribe(KubeProxyEvent)
class KubeProxy(Hunter):
"""Proxy Hunting
Hunts for a dashboard behind the proxy
"""
def __init__(self, event):
self.event = event
self.api_url = "http://{host}:{port}/api/v1".format(host=self.event.host, port=self.event.port)
self.api_url = f"http://{self.event.host}:{self.event.port}/api/v1"
def execute(self):
self.publish_event(KubeProxyExposed())
for namespace, services in self.services.items():
for service in services:
if service == Service.DASHBOARD.value:
logging.debug(service)
curr_path = "api/v1/namespaces/{ns}/services/{sv}/proxy".format(ns=namespace,sv=service) # TODO: check if /proxy is a convention on other services
logger.debug(f"Found a dashboard service '{service}'")
# TODO: check if /proxy is a convention on other services
curr_path = f"api/v1/namespaces/{namespace}/services/{service}/proxy"
self.publish_event(KubeDashboardEvent(path=curr_path, secure=False))
@property
def namespaces(self):
resource_json = requests.get(self.api_url + "/namespaces").json()
config = get_config()
resource_json = requests.get(f"{self.api_url}/namespaces", timeout=config.network_timeout).json()
return self.extract_names(resource_json)
@property
def services(self):
config = get_config()
# map between namespaces and service names
services = dict()
for namespace in self.namespaces:
resource_path = "/namespaces/{ns}/services".format(ns=namespace)
resource_json = requests.get(self.api_url + resource_path).json()
resource_path = f"{self.api_url}/namespaces/{namespace}/services"
resource_json = requests.get(resource_path, timeout=config.network_timeout).json()
services[namespace] = self.extract_names(resource_json)
logging.debug(services)
logger.debug(f"Enumerated services [{' '.join(services)}]")
return services
@staticmethod
@@ -60,34 +80,48 @@ class KubeProxy(Hunter):
names.append(item["metadata"]["name"])
return names
@handler.subscribe(KubeProxyExposed)
class ProveProxyExposed(ActiveHunter):
"""Build Date Hunter
Hunts when proxy is exposed, extracts the build date of kubernetes
"""
def __init__(self, event):
self.event = event
def execute(self):
version_metadata = json.loads(requests.get("http://{host}:{port}/version".format(
host=self.event.host,
port=self.event.port,
), verify=False).text)
config = get_config()
version_metadata = requests.get(
f"http://{self.event.host}:{self.event.port}/version",
verify=False,
timeout=config.network_timeout,
).json()
if "buildDate" in version_metadata:
self.event.evidence = "build date: {}".format(version_metadata["buildDate"])
@handler.subscribe(KubeProxyExposed)
class K8sVersionDisclosureProve(ActiveHunter):
"""K8s Version Hunter
Hunts Proxy when exposed, extracts the version
"""
def __init__(self, event):
self.event = event
def execute(self):
version_metadata = json.loads(requests.get("http://{host}:{port}/version".format(
host=self.event.host,
port=self.event.port,
), verify=False).text)
config = get_config()
version_metadata = requests.get(
f"http://{self.event.host}:{self.event.port}/version",
verify=False,
timeout=config.network_timeout,
).json()
if "gitVersion" in version_metadata:
self.publish_event(K8sVersionDisclosure(version=version_metadata["gitVersion"], from_endpoint="/version", extra_info="on the kube-proxy"))
self.publish_event(
K8sVersionDisclosure(
version=version_metadata["gitVersion"],
from_endpoint="/version",
extra_info="on kube-proxy",
)
)
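Both active hunters above parse the proxy's `/version` response for specific fields. The extraction, detached from the HTTP call; the sample values are hypothetical:

```python
def extract_versions(metadata):
    # The two fields the hunters look for: gitVersion (K8sVersionDisclosureProve)
    # and buildDate (ProveProxyExposed); absent fields yield None
    return metadata.get("gitVersion"), metadata.get("buildDate")

# Hypothetical /version payload
sample = {"gitVersion": "v1.19.4", "buildDate": "2020-11-11T13:09:17Z"}
```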


@@ -1,27 +1,38 @@
import json
import logging
import os
import requests
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event
from kube_hunter.core.types import Hunter, KubernetesCluster, AccessRisk
from kube_hunter.modules.discovery.hosts import RunningAsPodEvent
logger = logging.getLogger(__name__)
class ServiceAccountTokenAccess(Vulnerability, Event):
""" Accessing the pod service account token gives an attacker the option to use the server API """
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Read access to pod's service account token",
category=AccessRisk, vid="KHV050")
Vulnerability.__init__(
self,
KubernetesCluster,
name="Read access to pod's service account token",
category=AccessRisk,
vid="KHV050",
)
self.evidence = evidence
class SecretsAccess(Vulnerability, Event):
""" Accessing the pod's secrets within a compromised pod might disclose valuable data to a potential attacker"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Access to pod's secrets", category=AccessRisk)
Vulnerability.__init__(
self,
component=KubernetesCluster,
name="Access to pod's secrets",
category=AccessRisk,
)
self.evidence = evidence
@@ -33,14 +44,16 @@ class AccessSecrets(Hunter):
def __init__(self, event):
self.event = event
self.secrets_evidence = ''
self.secrets_evidence = ""
def get_services(self):
logging.debug(self.event.host)
logging.debug('Passive Hunter is attempting to access pod\'s secrets directory')
logger.debug("Trying to access pod's secrets directory")
# get all files and subdirectories files:
self.secrets_evidence = [val for sublist in [[os.path.join(i[0], j) for j in i[2]] for i in os.walk('/var/run/secrets/')] for val in sublist]
return True if (len(self.secrets_evidence) > 0) else False
self.secrets_evidence = []
for dirname, _, files in os.walk("/var/run/secrets/"):
for f in files:
self.secrets_evidence.append(os.path.join(dirname, f))
return len(self.secrets_evidence) > 0
def execute(self):
if self.event.auth_token is not None:
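The diff replaces a nested, hard-to-read list comprehension with an explicit `os.walk` loop. That loop in isolation, exercised against a throwaway directory rather than `/var/run/secrets/`:

```python
import os
import tempfile

def walk_files(root):
    # Flattens every file under root, as the rewritten get_services does
    found = []
    for dirname, _, files in os.walk(root):
        for f in files:
            found.append(os.path.join(dirname, f))
    return found
```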


@@ -1 +1,2 @@
# flake8: noqa: E402
from kube_hunter.modules.report.factory import get_reporter, get_dispatcher


@@ -1,9 +1,17 @@
from kube_hunter.core.types import Discovery
from kube_hunter.modules.report.collector import services, vulnerabilities, \
hunters, services_lock, vulnerabilities_lock
from kube_hunter.modules.report.collector import (
services,
vulnerabilities,
hunters,
services_lock,
vulnerabilities_lock,
)
BASE_KB_LINK = "https://avd.aquasec.com/"
FULL_KB_LINK = "https://avd.aquasec.com/kube-hunter/{vid}/"
class BaseReporter(object):
class BaseReporter:
def get_nodes(self):
nodes = list()
node_locations = set()
@@ -11,60 +19,52 @@ class BaseReporter(object):
for service in services:
node_location = str(service.host)
if node_location not in node_locations:
nodes.append({
"type": "Node/Master",
"location": str(service.host)
})
nodes.append({"type": "Node/Master", "location": node_location})
node_locations.add(node_location)
return nodes
def get_services(self):
with services_lock:
services_data = [{
"service": service.get_name(),
"location": f"{service.host}:"
f"{service.port}"
f"{service.get_path()}",
"description": service.explain()
} for service in services]
return services_data
return [
{"service": service.get_name(), "location": f"{service.host}:{service.port}{service.get_path()}"}
for service in services
]
def get_vulnerabilities(self):
with vulnerabilities_lock:
vulnerabilities_data = [{
"location": vuln.location(),
"vid": vuln.get_vid(),
"category": vuln.category.name,
"severity": vuln.get_severity(),
"vulnerability": vuln.get_name(),
"description": vuln.explain(),
"evidence": str(vuln.evidence),
"hunter": vuln.hunter.get_name()
} for vuln in vulnerabilities]
return vulnerabilities_data
return [
{
"location": vuln.location(),
"vid": vuln.get_vid(),
"category": vuln.category.name,
"severity": vuln.get_severity(),
"vulnerability": vuln.get_name(),
"description": vuln.explain(),
"evidence": str(vuln.evidence),
"avd_reference": FULL_KB_LINK.format(vid=vuln.get_vid().lower()),
"hunter": vuln.hunter.get_name(),
}
for vuln in vulnerabilities
]
def get_hunter_statistics(self):
hunters_data = list()
hunters_data = []
for hunter, docs in hunters.items():
if not Discovery in hunter.__mro__:
if Discovery not in hunter.__mro__:
name, doc = hunter.parse_docs(docs)
hunters_data.append({
"name": name,
"description": doc,
"vulnerabilities": hunter.publishedVulnerabilities
})
hunters_data.append(
{"name": name, "description": doc, "vulnerabilities": hunter.publishedVulnerabilities}
)
return hunters_data
def get_report(self, *, statistics, **kwargs):
report = {
"nodes": self.get_nodes(),
"services": self.get_services(),
"vulnerabilities": self.get_vulnerabilities()
"vulnerabilities": self.get_vulnerabilities(),
}
if statistics:
report["hunter_statistics"] = self.get_hunter_statistics()
report["kburl"] = "https://aquasecurity.github.io/kube-hunter/kb/{vid}"
return report
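`get_nodes` above deduplicates services by host while keeping first-seen order, using a parallel list and set. The pattern on its own, with `hosts` standing in for the collected services' host fields:

```python
def get_nodes(hosts):
    # Order-preserving dedup by host, as in BaseReporter.get_nodes
    nodes, seen = [], set()
    for host in hosts:
        location = str(host)
        if location not in seen:
            nodes.append({"type": "Node/Master", "location": location})
            seen.add(location)
    return nodes
```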


@@ -1,46 +1,29 @@
import logging
import threading
from kube_hunter.conf import config
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Service, Vulnerability, HuntFinished, HuntStarted, ReportDispatched
from kube_hunter.core.events.types import (
Event,
Service,
Vulnerability,
HuntFinished,
HuntStarted,
ReportDispatched,
)
logger = logging.getLogger(__name__)
global services_lock
services_lock = threading.Lock()
services = list()
global vulnerabilities_lock
vulnerabilities_lock = threading.Lock()
vulnerabilities = list()
hunters = handler.all_hunters
def console_trim(text, prefix=' '):
a = text.split(" ")
b = a[:]
total_length = 0
count_of_inserts = 0
for index, value in enumerate(a):
if (total_length + (len(value) + len(prefix))) >= 80:
b.insert(index + count_of_inserts, '\n')
count_of_inserts += 1
total_length = 0
else:
total_length += len(value) + len(prefix)
return '\n'.join([prefix + line.strip(' ') for line in ' '.join(b).split('\n')])
def wrap_last_line(text, prefix='| ', suffix='|_'):
lines = text.split('\n')
lines[-1] = lines[-1].replace(prefix, suffix, 1)
return '\n'.join(lines)
@handler.subscribe(Service)
@handler.subscribe(Vulnerability)
class Collector(object):
class Collector:
def __init__(self, event=None):
self.event = event
@@ -52,21 +35,11 @@ class Collector(object):
if Service in bases:
with services_lock:
services.append(self.event)
import datetime
logging.info("|\n| {name}:\n| type: open service\n| service: {name}\n|_ location: {location}".format(
name=self.event.get_name(),
location=self.event.location(),
time=datetime.time()
))
logger.info(f'Found open service "{self.event.get_name()}" at {self.event.location()}')
elif Vulnerability in bases:
with vulnerabilities_lock:
vulnerabilities.append(self.event)
logging.info(
"|\n| {name}:\n| type: vulnerability\n| location: {location}\n| description: \n{desc}".format(
name=self.event.get_name(),
location=self.event.location(),
desc=wrap_last_line(console_trim(self.event.explain(), '| '))
))
logger.info(f'Found vulnerability "{self.event.get_name()}" in {self.event.location()}')
class TablesPrinted(Event):
@@ -74,11 +47,12 @@ class TablesPrinted(Event):
@handler.subscribe(HuntFinished)
class SendFullReport(object):
class SendFullReport:
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
report = config.reporter.get_report(statistics=config.statistics, mapping=config.mapping)
config.dispatcher.dispatch(report)
handler.publish_event(ReportDispatched())
@@ -86,10 +60,10 @@ class SendFullReport(object):
@handler.subscribe(HuntStarted)
class StartedInfo(object):
class StartedInfo:
def __init__(self, event):
self.event = event
def execute(self):
logging.info("~ Started")
logging.info("~ Discovering Open Kubernetes Services...")
logger.info("Started hunting")
logger.info("Discovering Open Kubernetes Services")


@@ -2,54 +2,32 @@ import logging
import os
import requests
from kube_hunter.conf import config
logger = logging.getLogger(__name__)
class HTTPDispatcher(object):
class HTTPDispatcher:
def dispatch(self, report):
logging.debug('Dispatching report via http')
dispatch_method = os.environ.get(
'KUBEHUNTER_HTTP_DISPATCH_METHOD',
'POST'
).upper()
dispatch_url = os.environ.get(
'KUBEHUNTER_HTTP_DISPATCH_URL',
'https://localhost/'
)
logger.debug("Dispatching report via HTTP")
dispatch_method = os.environ.get("KUBEHUNTER_HTTP_DISPATCH_METHOD", "POST").upper()
dispatch_url = os.environ.get("KUBEHUNTER_HTTP_DISPATCH_URL", "https://localhost/")
try:
r = requests.request(
dispatch_method,
dispatch_url,
json=report,
headers={'Content-Type': 'application/json'}
headers={"Content-Type": "application/json"},
)
r.raise_for_status()
logging.info('\nReport was dispatched to: {url}'.format(url=dispatch_url))
logging.debug(
"\tResponse Code: {status}\n\tResponse Data:\n{data}".format(
status=r.status_code,
data=r.text
)
)
except requests.HTTPError as e:
# specific http exceptions
logging.exception(
"\nCould not dispatch report using HTTP {method} to {url}\nResponse Code: {status}".format(
status=r.status_code,
url=dispatch_url,
method=dispatch_method
)
)
except Exception as e:
# default all exceptions
logging.exception("\nCould not dispatch report using HTTP {method} to {url}".format(
method=dispatch_method,
url=dispatch_url
))
logger.info(f"Report was dispatched to: {dispatch_url}")
logger.debug(f"Dispatch responded {r.status_code} with: {r.text}")
class STDOUTDispatcher(object):
except requests.HTTPError:
logger.exception(f"Failed making HTTP {dispatch_method} to {dispatch_url}, " f"status code {r.status_code}")
except Exception:
logger.exception(f"Could not dispatch report to {dispatch_url}")
class STDOUTDispatcher:
def dispatch(self, report):
logging.debug('Dispatching report via stdout')
if config.report == "plain":
logging.info("\n{div}".format(div="-" * 10))
logger.debug("Dispatching report via stdout")
print(report)
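`HTTPDispatcher` above reads its method and URL from environment variables with the defaults shown in the diff. The lookup logic, with an injectable mapping so it can be exercised without touching the real environment:

```python
import os

def dispatch_config(env=None):
    # The dispatcher's env lookups; defaults are the ones from the diff
    env = os.environ if env is None else env
    method = env.get("KUBEHUNTER_HTTP_DISPATCH_METHOD", "POST").upper()
    url = env.get("KUBEHUNTER_HTTP_DISPATCH_URL", "https://localhost/")
    return method, url
```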


@@ -1,20 +1,23 @@
import logging
from kube_hunter.modules.report.json import JSONReporter
from kube_hunter.modules.report.yaml import YAMLReporter
from kube_hunter.modules.report.plain import PlainReporter
from kube_hunter.modules.report.dispatchers import \
STDOUTDispatcher, HTTPDispatcher
from kube_hunter.modules.report.dispatchers import STDOUTDispatcher, HTTPDispatcher
import logging
logger = logging.getLogger(__name__)
DEFAULT_REPORTER = "plain"
reporters = {
'yaml': YAMLReporter,
'json': JSONReporter,
'plain': PlainReporter
"yaml": YAMLReporter,
"json": JSONReporter,
"plain": PlainReporter,
}
DEFAULT_DISPATCHER = "stdout"
dispatchers = {
'stdout': STDOUTDispatcher,
'http': HTTPDispatcher
"stdout": STDOUTDispatcher,
"http": HTTPDispatcher,
}
@@ -22,13 +25,13 @@ def get_reporter(name):
try:
return reporters[name.lower()]()
except KeyError:
logging.warning('Unknown reporter selected, using plain')
return reporters['plain']()
logger.warning(f'Unknown reporter "{name}", using {DEFAULT_REPORTER}')
return reporters[DEFAULT_REPORTER]()
def get_dispatcher(name):
try:
return dispatchers[name.lower()]()
except KeyError:
logging.warning('Unknown dispatcher selected, using stdout')
return dispatchers['stdout']()
logger.warning(f'Unknown dispatcher "{name}", using {DEFAULT_DISPATCHER}')
return dispatchers[DEFAULT_DISPATCHER]()

View File

@@ -1,4 +1,5 @@
import json
from kube_hunter.modules.report.base import BaseReporter

View File

@@ -1,18 +1,19 @@
from __future__ import print_function
from prettytable import ALL, PrettyTable
from kube_hunter.modules.report.base import BaseReporter
from kube_hunter.modules.report.collector import services, vulnerabilities, \
hunters, services_lock, vulnerabilities_lock
from kube_hunter.modules.report.base import BaseReporter, BASE_KB_LINK
from kube_hunter.modules.report.collector import (
services,
vulnerabilities,
hunters,
services_lock,
vulnerabilities_lock,
)
EVIDENCE_PREVIEW = 40
EVIDENCE_PREVIEW = 100
MAX_TABLE_WIDTH = 20
KB_LINK = "https://github.com/aquasecurity/kube-hunter/tree/master/docs/_kb"
class PlainReporter(BaseReporter):
def get_report(self, *, statistics=None, mapping=None, **kwargs):
"""generates report tables"""
output = ""
@@ -42,7 +43,6 @@ class PlainReporter(BaseReporter):
if vulnerabilities_len:
output += self.vulns_table()
output += "\nKube Hunter couldn't find any clusters"
# print("\nKube Hunter couldn't find any clusters. {}".format("Maybe try with --active?" if not config.active else ""))
return output
def nodes_table(self):
@@ -59,7 +59,7 @@ class PlainReporter(BaseReporter):
if service.event_id not in id_memory:
nodes_table.add_row(["Node/Master", service.host])
id_memory.add(service.event_id)
nodes_ret = "\nNodes\n{}\n".format(nodes_table)
nodes_ret = f"\nNodes\n{nodes_table}\n"
services_lock.release()
return nodes_ret
@@ -74,19 +74,20 @@ class PlainReporter(BaseReporter):
with services_lock:
for service in services:
services_table.add_row(
[service.get_name(),
f"{service.host}:"
f"{service.port}"
f"{service.get_path()}",
service.explain()])
detected_services_ret = "\nDetected Services\n" \
f"{services_table}\n"
[service.get_name(), f"{service.host}:{service.port}{service.get_path()}", service.explain()]
)
detected_services_ret = f"\nDetected Services\n{services_table}\n"
return detected_services_ret
def vulns_table(self):
column_names = ["ID", "Location",
"Category", "Vulnerability",
"Description", "Evidence"]
column_names = [
"ID",
"Location",
"Category",
"Vulnerability",
"Description",
"Evidence",
]
vuln_table = PrettyTable(column_names, hrules=ALL)
vuln_table.align = "l"
vuln_table.max_width = MAX_TABLE_WIDTH
@@ -97,10 +98,23 @@ class PlainReporter(BaseReporter):
with vulnerabilities_lock:
for vuln in vulnerabilities:
evidence = str(vuln.evidence)[:EVIDENCE_PREVIEW] + "..." if len(str(vuln.evidence)) > EVIDENCE_PREVIEW else str(vuln.evidence)
row = [vuln.get_vid(), vuln.location(), vuln.category.name, vuln.get_name(), vuln.explain(), evidence]
evidence = str(vuln.evidence)
if len(evidence) > EVIDENCE_PREVIEW:
evidence = evidence[:EVIDENCE_PREVIEW] + "..."
row = [
vuln.get_vid(),
vuln.location(),
vuln.category.name,
vuln.get_name(),
vuln.explain(),
evidence,
]
vuln_table.add_row(row)
return "\nVulnerabilities\nFor further information about a vulnerability, search its ID in: \n{}\n{}\n".format(KB_LINK, vuln_table)
return (
"\nVulnerabilities\n"
"For further information about a vulnerability, search its ID in: \n"
f"{BASE_KB_LINK}\n{vuln_table}\n"
)
def hunters_table(self):
column_names = ["Name", "Description", "Vulnerabilities"]
@@ -115,4 +129,4 @@ class PlainReporter(BaseReporter):
hunter_statistics = self.get_hunter_statistics()
for item in hunter_statistics:
hunters_table.add_row([item.get("name"), item.get("description"), item.get("vulnerabilities")])
return "\nHunter Statistics\n{}\n".format(hunters_table)
return f"\nHunter Statistics\n{hunters_table}\n"

View File

@@ -0,0 +1,23 @@
import pluggy
from kube_hunter.plugins import hookspecs
hookimpl = pluggy.HookimplMarker("kube-hunter")
def initialize_plugin_manager():
"""
Initializes the plugin manager and loads builtin and setuptools-registered plugin implementations
@return: initialized plugin manager
"""
pm = pluggy.PluginManager("kube-hunter")
pm.add_hookspecs(hookspecs)
pm.load_setuptools_entrypoints("kube_hunter")
# default registration of builtin implemented plugins
from kube_hunter.conf import parser
pm.register(parser)
return pm
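The new plugin manager builds on pluggy's hookspec/hookimpl pattern: specs declare the hook signatures, implementations are registered (here via setuptools entry points and a builtin `parser` module), and calling `pm.hook.<name>(...)` fans out to every registered impl. A self-contained sketch of that round trip, using a stand-in plugin (the `Specs`/`ExamplePlugin` names are illustrative, not from the repo):

```python
import pluggy

hookspec = pluggy.HookspecMarker("kube-hunter")
hookimpl = pluggy.HookimplMarker("kube-hunter")


class Specs:
    @hookspec
    def parser_add_arguments(self, parser):
        """Hook for plugins to contribute CLI arguments."""


class ExamplePlugin:
    @hookimpl
    def parser_add_arguments(self, parser):
        # Stand-in for parser.add_argument(...) on a real ArgumentParser
        parser.append("--example-flag")
        return "registered"


pm = pluggy.PluginManager("kube-hunter")
pm.add_hookspecs(Specs)
pm.register(ExamplePlugin())
collected = []
results = pm.hook.parser_add_arguments(parser=collected)
```

After the call, `collected` holds the contributed argument and `results` holds each impl's return value, mirroring how `initialize_plugin_manager` lets external packages extend the argument parser.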

View File

@@ -0,0 +1,24 @@
import pluggy
from argparse import ArgumentParser
hookspec = pluggy.HookspecMarker("kube-hunter")
@hookspec
def parser_add_arguments(parser: ArgumentParser):
"""Add arguments to the ArgumentParser.
If a plugin requires an additional argument, it should implement this hook
and add the argument to the Argument Parser
@param parser: an ArgumentParser, calls parser.add_argument on it
"""
@hookspec
def load_plugin(args):
"""Plugins that wish to execute code after the argument parsing
should implement this hook.
@param args: all parsed arguments passed to kube-hunter
"""

mypy.ini Normal file
View File

@@ -0,0 +1,2 @@
[mypy]
ignore_missing_imports = True

View File

@@ -1,14 +0,0 @@
# Plugins
This folder contains modules that are loaded before the kube-hunter main module parses any arguments.
An example for using a plugin to add an argument:
```python
# example.py
from kube_hunter.conf import config
config.parser.add_argument('--exampleflag', action="store_true", help="enables active hunting")
```
What we did here was just add a file to the `plugins/` folder, import the parser, and add an argument.
All plugins in this folder get imported right after the main arguments are added, and right before they are parsed, so you can add an argument that will later be used in your [hunting/discovery module](../kube_hunter/README.md).

View File

@@ -1,7 +0,0 @@
from os.path import dirname, basename, isfile
import glob
# dynamically importing all modules in folder
files = glob.glob(dirname(__file__)+"/*.py")
for module_name in (basename(f)[:-3] for f in files if isfile(f) and not f.endswith('__init__.py')):
exec('from .{} import *'.format(module_name))
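The deleted `plugins/__init__.py` above imports every sibling module with `exec`, which hides the imports from static tooling and runs arbitrary strings. A safer sketch of the same dynamic loading using `importlib` (this is an alternative pattern, not code from the repo — the new pluggy-based loader replaces this mechanism entirely):

```python
import importlib.util
from pathlib import Path


def load_plugin_modules(folder):
    """Import every non-dunder .py file in `folder` without exec()."""
    loaded = {}
    for path in sorted(Path(folder).glob("*.py")):
        if path.name.startswith("__"):
            continue
        # Build a module from its file location and execute it
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        loaded[path.stem] = module
    return loaded
```

Each module is returned by name, so callers can inspect what was loaded instead of relying on `import *` side effects.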

View File

@@ -1,5 +0,0 @@
import logging
# Suppress logging from scapy
logging.getLogger("scapy.runtime").setLevel(logging.CRITICAL)
logging.getLogger("scapy.loading").setLevel(logging.CRITICAL)

View File

@@ -0,0 +1,3 @@
from PyInstaller.utils.hooks import collect_all
datas, binaries, hiddenimports = collect_all("prettytable")

pyproject.toml Normal file
View File

@@ -0,0 +1,32 @@
[tool.black]
line-length = 120
target-version = ['py36']
include = '\.pyi?$'
exclude = '''
(
\.eggs
| \.git
| \.hg
| \.mypy_cache
| \.tox
| venv
| \.venv
| _build
| buck-out
| build
| dist
| \.vscode
| \.idea
| \.Python
| develop-eggs
| downloads
| eggs
| lib
| lib64
| parts
| sdist
| var
| .*\.egg-info
| \.DS_Store
)
'''

View File

@@ -2,7 +2,7 @@
flake8
pytest >= 2.9.1
requests-mock
requests-mock >= 1.8
coverage < 5.0
pytest-cov
setuptools >= 30.3.0
@@ -10,3 +10,8 @@ setuptools_scm
twine
pyinstaller
staticx
black
pre-commit
flake8-bugbear
flake8-mypy
pluggy

View File

@@ -1,12 +0,0 @@
#!/usr/bin/env python3
import argparse
import pytest
import tests
def main():
exit(pytest.main(['.']))
if __name__ == '__main__':
main()

View File

@@ -22,6 +22,8 @@ classifiers =
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Programming Language :: Python :: 3 :: Only
Topic :: Security
[options]
@@ -37,6 +39,8 @@ install_requires =
ruamel.yaml
future
packaging
dataclasses
pluggy
setup_requires =
setuptools>=30.3.0
setuptools_scm
@@ -85,6 +89,7 @@ exclude_lines =
if 0:
if __name__ == .__main__.:
# Don't complain about log messages not being tested
logger\.
logging\.
# Files to exclude from consideration

View File

@@ -1,6 +1,7 @@
from subprocess import check_call
from pkg_resources import parse_requirements
from configparser import ConfigParser
from pkg_resources import parse_requirements
from subprocess import check_call
from typing import Any, List
from setuptools import setup, Command
@@ -8,7 +9,7 @@ class ListDependenciesCommand(Command):
"""A custom command to list dependencies"""
description = "list package dependencies"
user_options = []
user_options: List[Any] = []
def initialize_options(self):
pass
@@ -27,7 +28,7 @@ class PyInstallerCommand(Command):
"""A custom command to run PyInstaller to build standalone executable."""
description = "run PyInstaller on kube-hunter entrypoint"
user_options = []
user_options: List[Any] = []
def initialize_options(self):
pass
@@ -40,6 +41,8 @@ class PyInstallerCommand(Command):
cfg.read("setup.cfg")
command = [
"pyinstaller",
"--additional-hooks-dir",
"pyinstaller_hooks",
"--clean",
"--onefile",
"--name",
@@ -50,16 +53,11 @@ class PyInstallerCommand(Command):
for r in requirements:
command.extend(["--hidden-import", r.key])
command.append("kube_hunter/__main__.py")
print(' '.join(command))
print(" ".join(command))
check_call(command)
setup(
use_scm_version={
"fallback_version": "noversion"
},
cmdclass={
"dependencies": ListDependenciesCommand,
"pyinstaller": PyInstallerCommand,
},
use_scm_version={"fallback_version": "noversion"},
cmdclass={"dependencies": ListDependenciesCommand, "pyinstaller": PyInstallerCommand},
)

View File

@@ -0,0 +1,23 @@
import logging
from kube_hunter.conf.logging import setup_logger
def test_setup_logger_level():
test_cases = [
("INFO", logging.INFO),
("Debug", logging.DEBUG),
("critical", logging.CRITICAL),
("NOTEXISTS", logging.INFO),
("BASIC_FORMAT", logging.INFO),
]
logFile = None
for level, expected in test_cases:
setup_logger(level, logFile)
actual = logging.getLogger().getEffectiveLevel()
assert actual == expected, f"{level} level should be {expected} (got {actual})"
def test_setup_logger_none():
setup_logger("NONE", None)
assert logging.getLogger().manager.disable == logging.CRITICAL
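The test table above pins down `setup_logger`'s contract: level names are matched case-insensitively, anything that is not a real level (`NOTEXISTS`, or a `logging` attribute that is not a level, like `BASIC_FORMAT`) falls back to `INFO`, and `NONE` disables logging entirely. A minimal sketch consistent with those assertions (the real `kube_hunter.conf.logging.setup_logger` also configures format and file output, which this omits):

```python
import logging


def setup_logger(level_name, logfile=None):
    """Sketch: apply a user-supplied level name, defaulting to INFO."""
    if level_name.upper() == "NONE":
        # Silence everything at CRITICAL and below
        logging.disable(logging.CRITICAL)
        return
    level = getattr(logging, level_name.upper(), None)
    if not isinstance(level, int):
        # Unknown names (or non-level attributes) fall back to INFO
        level = logging.INFO
    logging.getLogger().setLevel(level)
```

`getattr` plus the `isinstance` check is what makes `"BASIC_FORMAT"` resolve to `INFO`: it exists on the `logging` module but is a format string, not an integer level.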

tests/core/test_cloud.py Normal file
View File

@@ -0,0 +1,27 @@
import requests_mock
import json
from kube_hunter.conf import Config, set_config
from kube_hunter.core.events.types import NewHostEvent
set_config(Config())
def test_presetcloud():
"""Testing if it doesn't try to run get_cloud if the cloud type is already set.
get_cloud(1.2.3.4) will result with an error
"""
expcted = "AWS"
hostEvent = NewHostEvent(host="1.2.3.4", cloud=expcted)
assert expcted == hostEvent.cloud
def test_getcloud():
fake_host = "1.2.3.4"
expected_cloud = "Azure"
result = {"cloud": expected_cloud}
with requests_mock.mock() as m:
m.get(f"https://api.azurespeed.com/api/region?ipOrUrl={fake_host}", text=json.dumps(result))
hostEvent = NewHostEvent(host=fake_host)
assert hostEvent.cloud == expected_cloud

tests/core/test_handler.py Normal file
View File

@@ -0,0 +1,119 @@
# flake8: noqa: E402
from kube_hunter.conf import Config, set_config
set_config(Config(active=True))
from kube_hunter.core.events.handler import handler
from kube_hunter.modules.discovery.apiserver import ApiServiceDiscovery
from kube_hunter.modules.discovery.dashboard import KubeDashboard as KubeDashboardDiscovery
from kube_hunter.modules.discovery.etcd import EtcdRemoteAccess as EtcdRemoteAccessDiscovery
from kube_hunter.modules.discovery.hosts import FromPodHostDiscovery, HostDiscovery
from kube_hunter.modules.discovery.kubectl import KubectlClientDiscovery
from kube_hunter.modules.discovery.kubelet import KubeletDiscovery
from kube_hunter.modules.discovery.ports import PortDiscovery
from kube_hunter.modules.discovery.proxy import KubeProxy as KubeProxyDiscovery
from kube_hunter.modules.hunting.aks import AzureSpnHunter, ProveAzureSpnExposure
from kube_hunter.modules.hunting.apiserver import (
AccessApiServer,
ApiVersionHunter,
AccessApiServerActive,
AccessApiServerWithToken,
)
from kube_hunter.modules.hunting.arp import ArpSpoofHunter
from kube_hunter.modules.hunting.capabilities import PodCapabilitiesHunter
from kube_hunter.modules.hunting.certificates import CertificateDiscovery
from kube_hunter.modules.hunting.cves import K8sClusterCveHunter, KubectlCVEHunter
from kube_hunter.modules.hunting.dashboard import KubeDashboard
from kube_hunter.modules.hunting.dns import DnsSpoofHunter
from kube_hunter.modules.hunting.etcd import EtcdRemoteAccess, EtcdRemoteAccessActive
from kube_hunter.modules.hunting.kubelet import (
ProveAnonymousAuth,
MaliciousIntentViaSecureKubeletPort,
ProveContainerLogsHandler,
ProveRunHandler,
ProveSystemLogs,
ReadOnlyKubeletPortHunter,
SecureKubeletPortHunter,
)
from kube_hunter.modules.hunting.mounts import VarLogMountHunter, ProveVarLogMount
from kube_hunter.modules.hunting.proxy import KubeProxy, ProveProxyExposed, K8sVersionDisclosureProve
from kube_hunter.modules.hunting.secrets import AccessSecrets
PASSIVE_HUNTERS = {
ApiServiceDiscovery,
KubeDashboardDiscovery,
EtcdRemoteAccessDiscovery,
FromPodHostDiscovery,
HostDiscovery,
KubectlClientDiscovery,
KubeletDiscovery,
PortDiscovery,
KubeProxyDiscovery,
AzureSpnHunter,
AccessApiServer,
AccessApiServerWithToken,
ApiVersionHunter,
PodCapabilitiesHunter,
CertificateDiscovery,
K8sClusterCveHunter,
KubectlCVEHunter,
KubeDashboard,
EtcdRemoteAccess,
ReadOnlyKubeletPortHunter,
SecureKubeletPortHunter,
VarLogMountHunter,
KubeProxy,
AccessSecrets,
}
ACTIVE_HUNTERS = {
ProveAzureSpnExposure,
AccessApiServerActive,
ArpSpoofHunter,
DnsSpoofHunter,
EtcdRemoteAccessActive,
ProveRunHandler,
ProveContainerLogsHandler,
ProveSystemLogs,
ProveVarLogMount,
ProveProxyExposed,
K8sVersionDisclosureProve,
ProveAnonymousAuth,
MaliciousIntentViaSecureKubeletPort,
}
def remove_test_hunters(hunters):
return {hunter for hunter in hunters if not hunter.__module__.startswith("test")}
def test_passive_hunters_registered():
expected_missing = set()
expected_odd = set()
registered_passive = remove_test_hunters(handler.passive_hunters.keys())
actual_missing = PASSIVE_HUNTERS - registered_passive
actual_odd = registered_passive - PASSIVE_HUNTERS
assert expected_missing == actual_missing, "Passive hunters are missing"
assert expected_odd == actual_odd, "Unexpected passive hunters are registered"
def test_active_hunters_registered():
expected_missing = set()
expected_odd = set()
registered_active = remove_test_hunters(handler.active_hunters.keys())
actual_missing = ACTIVE_HUNTERS - registered_active
actual_odd = registered_active - ACTIVE_HUNTERS
assert expected_missing == actual_missing, "Active hunters are missing"
assert expected_odd == actual_odd, "Unexpected active hunters are registered"
def test_all_hunters_registered():
expected = PASSIVE_HUNTERS | ACTIVE_HUNTERS
actual = remove_test_hunters(handler.all_hunters.keys())
assert expected == actual

View File

@@ -1,25 +1,31 @@
import time
from kube_hunter.conf import Config, set_config
from kube_hunter.core.types import Hunter
from kube_hunter.core.events.types import Event, Service
from kube_hunter.core.events import handler
counter = 0
set_config(Config())
class OnceOnlyEvent(Service, Event):
def __init__(self):
Service.__init__(self, "Test Once Service")
class RegularEvent(Service, Event):
def __init__(self):
Service.__init__(self, "Test Service")
@handler.subscribe_once(OnceOnlyEvent)
class OnceHunter(Hunter):
def __init__(self, event):
global counter
counter += 1
@handler.subscribe(RegularEvent)
class RegularHunter(Hunter):
def __init__(self, event):

View File

@@ -1,50 +1,69 @@
# flake8: noqa: E402
import requests_mock
import time
from kube_hunter.conf import Config, set_config
set_config(Config())
from kube_hunter.modules.discovery.apiserver import ApiServer, ApiServiceDiscovery
from kube_hunter.core.events.types import Event
from kube_hunter.core.events import handler
counter = 0
def test_ApiServer():
global counter
counter = 0
with requests_mock.Mocker() as m:
m.get('https://mockOther:443', text='elephant')
m.get('https://mockKubernetes:443', text='{"code":403}', status_code=403)
m.get('https://mockKubernetes:443/version', text='{"major": "1.14.10"}', status_code=200)
m.get("https://mockOther:443", text="elephant")
m.get("https://mockKubernetes:443", text='{"code":403}', status_code=403)
m.get(
"https://mockKubernetes:443/version",
text='{"major": "1.14.10"}',
status_code=200,
)
e = Event()
e.protocol = "https"
e.port = 443
e.host = 'mockOther'
e.host = "mockOther"
a = ApiServiceDiscovery(e)
a.execute()
e.host = 'mockKubernetes'
e.host = "mockKubernetes"
a.execute()
# Allow the events to be processed. Only the one to mockKubernetes should trigger an event
time.sleep(1)
assert counter == 1
def test_ApiServerWithServiceAccountToken():
global counter
counter = 0
with requests_mock.Mocker() as m:
m.get('https://mockKubernetes:443', request_headers={'Authorization':'Bearer very_secret'}, text='{"code":200}')
m.get('https://mockKubernetes:443', text='{"code":403}', status_code=403)
m.get('https://mockKubernetes:443/version', text='{"major": "1.14.10"}', status_code=200)
m.get('https://mockOther:443', text='elephant')
m.get(
"https://mockKubernetes:443",
request_headers={"Authorization": "Bearer very_secret"},
text='{"code":200}',
)
m.get("https://mockKubernetes:443", text='{"code":403}', status_code=403)
m.get(
"https://mockKubernetes:443/version",
text='{"major": "1.14.10"}',
status_code=200,
)
m.get("https://mockOther:443", text="elephant")
e = Event()
e.protocol = "https"
e.port = 443
# We should discover an API Server regardless of whether we have a token
e.host = 'mockKubernetes'
e.host = "mockKubernetes"
a = ApiServiceDiscovery(e)
a.execute()
time.sleep(0.1)
@@ -57,7 +76,7 @@ def test_ApiServerWithServiceAccountToken():
assert counter == 2
# But we shouldn't generate an event if we don't see an error code or find the 'major' in /version
e.host = 'mockOther'
e.host = "mockOther"
a = ApiServiceDiscovery(e)
a.execute()
time.sleep(0.1)
@@ -65,11 +84,13 @@ def test_ApiServerWithServiceAccountToken():
def test_InsecureApiServer():
global counter
global counter
counter = 0
with requests_mock.Mocker() as m:
m.get('http://mockOther:8080', text='elephant')
m.get('http://mockKubernetes:8080', text="""{
m.get("http://mockOther:8080", text="elephant")
m.get(
"http://mockKubernetes:8080",
text="""{
"paths": [
"/api",
"/api/v1",
@@ -78,20 +99,21 @@ def test_InsecureApiServer():
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io"
]}""")
]}""",
)
m.get('http://mockKubernetes:8080/version', text='{"major": "1.14.10"}')
m.get('http://mockOther:8080/version', status_code=404)
m.get("http://mockKubernetes:8080/version", text='{"major": "1.14.10"}')
m.get("http://mockOther:8080/version", status_code=404)
e = Event()
e.protocol = "http"
e.port = 8080
e.host = 'mockOther'
e.host = "mockOther"
a = ApiServiceDiscovery(e)
a.execute()
e.host = 'mockKubernetes'
e.host = "mockKubernetes"
a.execute()
# Allow the events to be processed. Only the one to mockKubernetes should trigger an event
@@ -99,12 +121,11 @@ def test_InsecureApiServer():
assert counter == 1
# We should only generate an ApiServer event for a response that looks like it came from a Kubernetes node
@handler.subscribe(ApiServer)
class testApiServer(object):
class testApiServer:
def __init__(self, event):
print("Event")
assert event.host == 'mockKubernetes'
assert event.host == "mockKubernetes"
global counter
counter += 1
counter += 1

View File

@@ -1,62 +1,107 @@
# flake8: noqa: E402
import json
import requests_mock
import time
from queue import Empty
import pytest
from netaddr import IPNetwork, IPAddress
from typing import List
from kube_hunter.conf import Config, get_config, set_config
set_config(Config())
from kube_hunter.modules.discovery.hosts import FromPodHostDiscovery, RunningAsPodEvent, HostScanEvent, AzureMetadataApi
from kube_hunter.core.events.types import Event, NewHostEvent
from kube_hunter.core.events import handler
from kube_hunter.conf import config
from kube_hunter.core.types import Hunter
from kube_hunter.modules.discovery.hosts import (
FromPodHostDiscovery,
RunningAsPodEvent,
HostScanEvent,
HostDiscoveryHelpers,
)
def test_FromPodHostDiscovery():
with requests_mock.Mocker() as m:
e = RunningAsPodEvent()
class TestFromPodHostDiscovery:
@staticmethod
def _make_response(*subnets: List[tuple]) -> str:
return json.dumps(
{
"network": {
"interface": [
{"ipv4": {"subnet": [{"address": address, "prefix": prefix} for address, prefix in subnets]}}
]
}
}
)
config.azure = False
config.remote = None
config.cidr = None
m.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", status_code=404)
f = FromPodHostDiscovery(e)
assert not f.is_azure_pod()
# TODO For now we don't test the traceroute discovery version
# f.execute()
def test_is_azure_pod_request_fail(self):
f = FromPodHostDiscovery(RunningAsPodEvent())
# Test that we generate NewHostEvent for the addresses reported by the Azure Metadata API
config.azure = True
m.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", \
text='{"network":{"interface":[{"ipv4":{"subnet":[{"address": "3.4.5.6", "prefix": "255.255.255.252"}]}}]}}')
assert f.is_azure_pod()
with requests_mock.Mocker() as m:
m.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", status_code=404)
result = f.is_azure_pod()
assert not result
def test_is_azure_pod_success(self):
f = FromPodHostDiscovery(RunningAsPodEvent())
with requests_mock.Mocker() as m:
m.get(
"http://169.254.169.254/metadata/instance?api-version=2017-08-01",
text=TestFromPodHostDiscovery._make_response(("3.4.5.6", "255.255.255.252")),
)
result = f.is_azure_pod()
assert result
def test_execute_scan_cidr(self):
set_config(Config(cidr="1.2.3.4/30"))
f = FromPodHostDiscovery(RunningAsPodEvent())
f.execute()
# Test that we don't trigger a HostScanEvent unless either config.remote or config.cidr are configured
config.remote = "1.2.3.4"
f.execute()
config.azure = False
config.remote = None
config.cidr = "1.2.3.4/24"
def test_execute_scan_remote(self):
set_config(Config(remote="1.2.3.4"))
f = FromPodHostDiscovery(RunningAsPodEvent())
f.execute()
# In this set of tests we should only trigger HostScanEvent when remote or cidr are set
@handler.subscribe(HostScanEvent)
class testHostDiscovery(object):
def __init__(self, event):
assert config.remote is not None or config.cidr is not None
assert config.remote == "1.2.3.4" or config.cidr == "1.2.3.4/24"
# In this set of tests we should only get as far as finding a host if it's Azure
# because we're not running the code that would normally be triggered by a HostScanEvent
@handler.subscribe(NewHostEvent)
class testHostDiscoveryEvent(object):
def __init__(self, event):
assert config.azure
assert str(event.host).startswith("3.4.5.")
assert config.remote is None
assert config.cidr is None
class HunterTestHostDiscovery(Hunter):
"""TestHostDiscovery
In this set of tests we should only trigger HostScanEvent when remote or cidr are set
"""
# Test that we only report this event for Azure hosts
@handler.subscribe(AzureMetadataApi)
class testAzureMetadataApi(object):
def __init__(self, event):
assert config.azure
config = get_config()
assert config.remote is not None or config.cidr is not None
assert config.remote == "1.2.3.4" or config.cidr == "1.2.3.4/30"
class TestDiscoveryUtils:
@staticmethod
def test_generate_hosts_valid_cidr():
test_cidr = "192.168.0.0/24"
expected = set(IPNetwork(test_cidr))
actual = set(HostDiscoveryHelpers.generate_hosts([test_cidr]))
assert actual == expected
@staticmethod
def test_generate_hosts_valid_ignore():
remove = IPAddress("192.168.1.8")
scan = "192.168.1.0/24"
expected = {ip for ip in IPNetwork(scan) if ip != remove}
actual = set(HostDiscoveryHelpers.generate_hosts([scan, f"!{str(remove)}"]))
assert actual == expected
@staticmethod
def test_generate_hosts_invalid_cidr():
with pytest.raises(ValueError):
list(HostDiscoveryHelpers.generate_hosts(["192..2.3/24"]))
@staticmethod
def test_generate_hosts_invalid_ignore():
with pytest.raises(ValueError):
list(HostDiscoveryHelpers.generate_hosts(["192.168.1.8", "!29.2..1/24"]))
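The tests above define `HostDiscoveryHelpers.generate_hosts` by behavior: it expands each CIDR into individual addresses, a leading `!` marks a subnet or address to exclude, and malformed input raises `ValueError`. A minimal sketch consistent with those tests, using the stdlib `ipaddress` module rather than the `netaddr` library the tests import — the actual implementation in the repo may differ:

```python
from ipaddress import ip_network


def generate_hosts(cidrs):
    """Yield every address in the given CIDRs, honoring '!'-prefixed ignores."""
    ignore = set()
    networks = []
    for cidr in cidrs:
        if cidr.startswith("!"):
            # ip_network raises ValueError on malformed input,
            # matching test_generate_hosts_invalid_ignore
            ignore.update(ip_network(cidr[1:], strict=False))
        else:
            networks.append(ip_network(cidr, strict=False))
    for net in networks:
        for ip in net:
            if ip not in ignore:
                yield ip
```

Because the generator is lazy, the invalid-input tests must call `list(...)` to force iteration before the `ValueError` surfaces — which is exactly what they do.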

tests/hunting/test_aks.py Normal file
View File

@@ -0,0 +1,56 @@
# flake8: noqa: E402
import requests_mock
from kube_hunter.conf import Config, set_config
set_config(Config())
from kube_hunter.modules.hunting.kubelet import ExposedRunHandler
from kube_hunter.modules.hunting.aks import AzureSpnHunter
def test_AzureSpnHunter():
e = ExposedRunHandler()
e.host = "mockKubernetes"
e.port = 443
e.protocol = "https"
pod_template = '{{"items":[ {{"apiVersion":"v1","kind":"Pod","metadata":{{"name":"etc","namespace":"default"}},"spec":{{"containers":[{{"command":["sleep","99999"],"image":"ubuntu","name":"test","volumeMounts":[{{"mountPath":"/mp","name":"v"}}]}}],"volumes":[{{"hostPath":{{"path":"{}"}},"name":"v"}}]}}}} ]}}'
bad_paths = ["/", "/etc", "/etc/", "/etc/kubernetes", "/etc/kubernetes/azure.json"]
good_paths = ["/yo", "/etc/yo", "/etc/kubernetes/yo.json"]
for p in bad_paths:
with requests_mock.Mocker() as m:
m.get("https://mockKubernetes:443/pods", text=pod_template.format(p))
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c
for p in good_paths:
with requests_mock.Mocker() as m:
m.get("https://mockKubernetes:443/pods", text=pod_template.format(p))
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None
with requests_mock.Mocker() as m:
pod_no_volume_mounts = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test"}],"volumes":[{"hostPath":{"path":"/whatever"},"name":"v"}]}} ]}'
m.get("https://mockKubernetes:443/pods", text=pod_no_volume_mounts)
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None
with requests_mock.Mocker() as m:
pod_no_volumes = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test"}]}} ]}'
m.get("https://mockKubernetes:443/pods", text=pod_no_volumes)
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None
with requests_mock.Mocker() as m:
pod_other_volume = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test","volumeMounts":[{"mountPath":"/mp","name":"v"}]}],"volumes":[{"emptyDir":{},"name":"v"}]}} ]}'
m.get("https://mockKubernetes:443/pods", text=pod_other_volume)
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None

View File

@@ -1,17 +1,32 @@
# flake8: noqa: E402
import requests_mock
import time
from kube_hunter.modules.hunting.apiserver import AccessApiServer, AccessApiServerWithToken, ServerApiAccess, AccessApiServerActive
from kube_hunter.modules.hunting.apiserver import ListNamespaces, ListPodsAndNamespaces, ListRoles, ListClusterRoles
from kube_hunter.conf import Config, set_config
set_config(Config())
from kube_hunter.modules.hunting.apiserver import (
AccessApiServer,
AccessApiServerWithToken,
ServerApiAccess,
AccessApiServerActive,
)
from kube_hunter.modules.hunting.apiserver import (
ListNamespaces,
ListPodsAndNamespaces,
ListRoles,
ListClusterRoles,
)
from kube_hunter.modules.hunting.apiserver import ApiServerPassiveHunterFinished
from kube_hunter.modules.hunting.apiserver import CreateANamespace, DeleteANamespace
from kube_hunter.modules.discovery.apiserver import ApiServer
from kube_hunter.core.events.types import Event, K8sVersionDisclosure
from kube_hunter.core.types import UnauthenticatedAccess, InformationDisclosure
from kube_hunter.core.events import handler
counter = 0
def test_ApiServerToken():
global counter
counter = 0
@@ -28,6 +43,7 @@ def test_ApiServerToken():
time.sleep(0.01)
assert counter == 0
def test_AccessApiServer():
global counter
counter = 0
@@ -38,16 +54,32 @@ def test_AccessApiServer():
e.protocol = "https"
with requests_mock.Mocker() as m:
m.get('https://mockKubernetes:443/api', text='{}')
m.get('https://mockKubernetes:443/api/v1/namespaces', text='{"items":[{"metadata":{"name":"hello"}}]}')
m.get('https://mockKubernetes:443/api/v1/pods',
m.get("https://mockKubernetes:443/api", text="{}")
m.get(
"https://mockKubernetes:443/api/v1/namespaces",
text='{"items":[{"metadata":{"name":"hello"}}]}',
)
m.get(
"https://mockKubernetes:443/api/v1/pods",
text='{"items":[{"metadata":{"name":"podA", "namespace":"namespaceA"}}, \
{"metadata":{"name":"podB", "namespace":"namespaceB"}}]}')
m.get('https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/roles', status_code=403)
m.get('https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles', text='{"items":[]}')
m.get('https://mockkubernetes:443/version', text='{"major": "1","minor": "13+", "gitVersion": "v1.13.6-gke.13", \
"gitCommit": "fcbc1d20b6bca1936c0317743055ac75aef608ce", "gitTreeState": "clean", "buildDate": "2019-06-19T20:50:07Z", \
"goVersion": "go1.11.5b4", "compiler": "gc", "platform": "linux/amd64"}')
{"metadata":{"name":"podB", "namespace":"namespaceB"}}]}',
)
m.get(
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/roles",
status_code=403,
)
m.get(
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles",
text='{"items":[]}',
)
m.get(
"https://mockkubernetes:443/version",
text='{"major": "1","minor": "13+", "gitVersion": "v1.13.6-gke.13", \
"gitCommit": "fcbc1d20b6bca1936c0317743055ac75aef608ce", \
"gitTreeState": "clean", "buildDate": "2019-06-19T20:50:07Z", \
"goVersion": "go1.11.5b4", "compiler": "gc", \
"platform": "linux/amd64"}',
)
h = AccessApiServer(e)
h.execute()
@@ -60,17 +92,27 @@ def test_AccessApiServer():
counter = 0
with requests_mock.Mocker() as m:
# TODO check that these responses reflect what Kubernetes does
m.get('https://mockKubernetesToken:443/api', text='{}')
m.get('https://mockKubernetesToken:443/api/v1/namespaces', text='{"items":[{"metadata":{"name":"hello"}}]}')
m.get('https://mockKubernetesToken:443/api/v1/pods',
m.get("https://mocktoken:443/api", text="{}")
m.get(
"https://mocktoken:443/api/v1/namespaces",
text='{"items":[{"metadata":{"name":"hello"}}]}',
)
m.get(
"https://mocktoken:443/api/v1/pods",
text='{"items":[{"metadata":{"name":"podA", "namespace":"namespaceA"}}, \
{"metadata":{"name":"podB", "namespace":"namespaceB"}}]}')
m.get('https://mockkubernetesToken:443/apis/rbac.authorization.k8s.io/v1/roles', status_code=403)
m.get('https://mockkubernetesToken:443/apis/rbac.authorization.k8s.io/v1/clusterroles',
text='{"items":[{"metadata":{"name":"my-role"}}]}')
{"metadata":{"name":"podB", "namespace":"namespaceB"}}]}',
)
m.get(
"https://mocktoken:443/apis/rbac.authorization.k8s.io/v1/roles",
status_code=403,
)
m.get(
"https://mocktoken:443/apis/rbac.authorization.k8s.io/v1/clusterroles",
text='{"items":[{"metadata":{"name":"my-role"}}]}',
)
e.auth_token = "so-secret"
e.host = "mockKubernetesToken"
e.host = "mocktoken"
h = AccessApiServerWithToken(e)
h.execute()
@@ -78,12 +120,13 @@ def test_AccessApiServer():
time.sleep(0.01)
assert counter == 5
@handler.subscribe(ListNamespaces)
class test_ListNamespaces(object):
class test_ListNamespaces:
def __init__(self, event):
print("ListNamespaces")
assert event.evidence == ['hello']
if event.host == "mockKubernetesToken":
assert event.evidence == ["hello"]
if event.host == "mocktoken":
assert event.auth_token == "so-secret"
else:
assert event.auth_token is None
@@ -92,7 +135,7 @@ class test_ListNamespaces(object):
@handler.subscribe(ListPodsAndNamespaces)
class test_ListPodsAndNamespaces(object):
class test_ListPodsAndNamespaces:
def __init__(self, event):
print("ListPodsAndNamespaces")
assert len(event.evidence) == 2
@@ -101,7 +144,7 @@ class test_ListPodsAndNamespaces(object):
assert pod["namespace"] == "namespaceA"
if pod["name"] == "podB":
assert pod["namespace"] == "namespaceB"
if event.host == "mockKubernetesToken":
if event.host == "mocktoken":
assert event.auth_token == "so-secret"
assert "token" in event.name
assert "anon" not in event.name
@@ -112,27 +155,30 @@ class test_ListPodsAndNamespaces(object):
global counter
counter += 1
# Should never see this because the API call in the test returns 403 status code
@handler.subscribe(ListRoles)
class test_ListRoles:
def __init__(self, event):
print("ListRoles")
assert 0
global counter
counter += 1
# Should only see this when we have a token because the API call returns an empty list of items
# in the test where we have no token
@handler.subscribe(ListClusterRoles)
class test_ListClusterRoles:
def __init__(self, event):
print("ListClusterRoles")
assert event.auth_token == "so-secret"
global counter
counter += 1
@handler.subscribe(ServerApiAccess)
class test_ServerApiAccess:
def __init__(self, event):
print("ServerApiAccess")
if event.category == UnauthenticatedAccess:
@@ -143,14 +189,16 @@ class test_ServerApiAccess(object):
global counter
counter += 1
@handler.subscribe(ApiServerPassiveHunterFinished)
class test_PassiveHunterFinished:
def __init__(self, event):
print("PassiveHunterFinished")
assert event.namespaces == ["hello"]
global counter
counter += 1
def test_AccessApiServerActive():
e = ApiServerPassiveHunterFinished(namespaces=["hello-namespace"])
e.host = "mockKubernetes"
@@ -159,7 +207,9 @@ def test_AccessApiServerActive():
with requests_mock.Mocker() as m:
# TODO more tests here with real responses
m.post(
"https://mockKubernetes:443/api/v1/namespaces",
text="""
{
"kind": "Namespace",
"apiVersion": "v1",
@@ -179,14 +229,25 @@ def test_AccessApiServerActive():
"phase": "Active"
}
}
"""
)
m.post('https://mockKubernetes:443/api/v1/clusterroles', text='{}')
m.post('https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles', text='{}')
m.post('https://mockkubernetes:443/api/v1/namespaces/hello-namespace/pods', text='{}')
m.post('https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/namespaces/hello-namespace/roles', text='{}')
""",
)
m.post("https://mockKubernetes:443/api/v1/clusterroles", text="{}")
m.post(
"https://mockkubernetes:443/apis/rbac.authorization.k8s.io/v1/clusterroles",
text="{}",
)
m.post(
"https://mockkubernetes:443/api/v1/namespaces/hello-namespace/pods",
text="{}",
)
m.post(
"https://mockkubernetes:443" "/apis/rbac.authorization.k8s.io/v1/namespaces/hello-namespace/roles",
text="{}",
)
m.delete(
"https://mockKubernetes:443/api/v1/namespaces/abcde",
text="""
{
"kind": "Namespace",
"apiVersion": "v1",
@@ -207,17 +268,20 @@ def test_AccessApiServerActive():
"phase": "Terminating"
}
}
""")
""",
)
h = AccessApiServerActive(e)
h.execute()
@handler.subscribe(CreateANamespace)
class test_CreateANamespace:
def __init__(self, event):
assert "abcde" in event.evidence
@handler.subscribe(DeleteANamespace)
class test_DeleteANamespace:
def __init__(self, event):
assert "2019-02-26" in event.evidence


@@ -0,0 +1,42 @@
# flake8: noqa: E402
from kube_hunter.conf import Config, set_config
set_config(Config())
from kube_hunter.core.events.types import Event
from kube_hunter.modules.hunting.certificates import CertificateDiscovery, CertificateEmail
from kube_hunter.core.events import handler
def test_CertificateDiscovery():
cert = """
-----BEGIN CERTIFICATE-----
MIIDZDCCAkwCCQCAzfCLqrJvuTANBgkqhkiG9w0BAQsFADB0MQswCQYDVQQGEwJV
UzELMAkGA1UECAwCQ0ExEDAOBgNVBAoMB05vZGUuanMxETAPBgNVBAsMCG5vZGUt
Z3lwMRIwEAYDVQQDDAlsb2NhbGhvc3QxHzAdBgkqhkiG9w0BCQEWEGJ1aWxkQG5v
ZGVqcy5vcmcwHhcNMTkwNjIyMDYyMjMzWhcNMjIwNDExMDYyMjMzWjB0MQswCQYD
VQQGEwJVUzELMAkGA1UECAwCQ0ExEDAOBgNVBAoMB05vZGUuanMxETAPBgNVBAsM
CG5vZGUtZ3lwMRIwEAYDVQQDDAlsb2NhbGhvc3QxHzAdBgkqhkiG9w0BCQEWEGJ1
aWxkQG5vZGVqcy5vcmcwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDS
CHjvtVW4HdbbUwZ/ZV9s6U4x0KSoyNQrsCZjB8kRpFPe50DS5mfmu2SNBGYKRgzk
4QEEwFB9N2o8YTWsCefSRl6ti4ToPZqulU4hhRKYrEGtMJcRzi3IN7s200JaO3UH
01Su8ruO0NESb5zEU1Ykfh8Lub8TGEAINmgI61d/5d5Aq3kDjUHQJt1Ekw03Ylnu
juQyCGZxLxnngu0mIvwzyL/UeeUgsfQLzvppUk6In7tC1zzMjSPWo0c8qu6KvrW4
bKYnkZkzdQifzbpO5ERMEsh5HWq0uHa6+dgcVHFvlhdqF4Uat87ygNplVf0txsZB
MNVqbz1k6xkZYMnzDoydAgMBAAEwDQYJKoZIhvcNAQELBQADggEBADspZGtKpWxy
J1W3FA1aeQhMvequQTcMRz4avkm4K4HfTdV1iVD4CbvdezBphouBlyLVLDFJP7RZ
m7dBJVgBwnxufoFLne8cR2MGqDRoySbFT1AtDJdxabE6Fg+QGUpgOQfeBJ6ANlSB
+qJ+HG4QA+Ouh5hxz9mgYwkIsMUABHiwENdZ/kT8Edw4xKgd3uH0YP4iiePMD66c
rzW3uXH5J1jnKgBlpxtog4P6dHCcoq+PZJ17W5bdXNyqC1LPzQqniZ2BNcEZ4ix3
slAZAOWD1zLLGJhBPMV1fa0sHNBWc6oicr3YK/IDb0cp9kiLvnUu1pHy+LWQGqtC
rceJuGsnJEQ=
-----END CERTIFICATE-----
"""
c = CertificateDiscovery(Event())
c.examine_certificate(cert)
@handler.subscribe(CertificateEmail)
class test_CertificateEmail:
def __init__(self, event):
assert event.email == b"build@nodejs.org0"


@@ -1,12 +1,22 @@
# flake8: noqa: E402
import time
import requests_mock
from kube_hunter.conf import Config, set_config
set_config(Config())
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import K8sVersionDisclosure
from kube_hunter.modules.hunting.cves import (
K8sClusterCveHunter,
ServerApiVersionEndPointAccessPE,
ServerApiVersionEndPointAccessDos,
CveUtils,
)
cve_counter = 0
def test_K8sCveHunter():
global cve_counter
# because the hunter unregisters itself, we manually remove this option, so we can test it
@@ -31,50 +41,52 @@ def test_K8sCveHunter():
@handler.subscribe(ServerApiVersionEndPointAccessPE)
class test_CVE_2018_1002105:
def __init__(self, event):
global cve_counter
cve_counter += 1
@handler.subscribe(ServerApiVersionEndPointAccessDos)
class test_CVE_2019_1002100:
def __init__(self, event):
global cve_counter
cve_counter += 1
class TestCveUtils:
def test_is_downstream(self):
test_cases = (
("1", False),
("1.2", False),
("1.2-3", True),
("1.2-r3", True),
("1.2+3", True),
("1.2~3", True),
("1.2+a3f5cb2", True),
("1.2-9287543", True),
("v1", False),
("v1.2", False),
("v1.2-3", True),
("v1.2-r3", True),
("v1.2+3", True),
("v1.2~3", True),
("v1.2+a3f5cb2", True),
("v1.2-9287543", True),
("v1.13.9-gke.3", True),
)
for version, expected in test_cases:
actual = CveUtils.is_downstream_version(version)
assert actual == expected
def test_ignore_downstream(self):
test_cases = (
("v2.2-abcd", ["v1.1", "v2.3"], False),
("v2.2-abcd", ["v1.1", "v2.2"], False),
("v1.13.9-gke.3", ["v1.14.8"], False),
)
for check_version, fix_versions, expected in test_cases:
actual = CveUtils.is_vulnerable(fix_versions, check_version, True)
assert actual == expected
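The table above pins down `is_downstream_version` behaviorally: upstream releases are bare `major.minor[.patch]` strings (optionally `v`-prefixed), while vendor builds append a suffix introduced by `-`, `+`, or `~`. A minimal rule consistent with every row (the real CveUtils implementation is presumably a regex, so this is a behavioral sketch only):

```python
def is_downstream_version(version: str) -> bool:
    # Any of the three suffix separators marks a vendor (downstream) build,
    # e.g. "v1.13.9-gke.3"; a plain "v1.13.9" stays upstream.
    return any(sep in version for sep in "-+~")
```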


@@ -0,0 +1,41 @@
import json
from types import SimpleNamespace
from requests_mock import Mocker
from kube_hunter.conf import Config, set_config
set_config(Config())
from kube_hunter.modules.hunting.dashboard import KubeDashboard # noqa: E402
class TestKubeDashboard:
@staticmethod
def get_nodes_mock(result: dict, **kwargs):
with Mocker() as m:
m.get("http://mockdashboard:8000/api/v1/node", text=json.dumps(result), **kwargs)
hunter = KubeDashboard(SimpleNamespace(host="mockdashboard", port=8000))
return hunter.get_nodes()
@staticmethod
def test_get_nodes_with_result():
nodes = {"nodes": [{"objectMeta": {"name": "node1"}}]}
expected = ["node1"]
actual = TestKubeDashboard.get_nodes_mock(nodes)
assert expected == actual
@staticmethod
def test_get_nodes_without_result():
nodes = {"nodes": []}
expected = []
actual = TestKubeDashboard.get_nodes_mock(nodes)
assert expected == actual
@staticmethod
def test_get_nodes_invalid_result():
expected = None
actual = TestKubeDashboard.get_nodes_mock(dict(), status_code=404)
assert expected == actual


@@ -0,0 +1,721 @@
import requests
import requests_mock
import urllib.parse
import uuid
from kube_hunter.core.events import handler
from kube_hunter.modules.hunting.kubelet import (
AnonymousAuthEnabled,
ExposedExistingPrivilegedContainersViaSecureKubeletPort,
ProveAnonymousAuth,
MaliciousIntentViaSecureKubeletPort,
)
counter = 0
pod_list_with_privileged_container = """{
"kind": "PodList",
"apiVersion": "v1",
"metadata": {},
"items": [
{
"metadata": {
"name": "kube-hunter-privileged-deployment-86dc79f945-sjjps",
"namespace": "kube-hunter-privileged"
},
"spec": {
"containers": [
{
"name": "ubuntu",
"securityContext": {
{security_context_definition_to_test}
}
}
]
}
}
]
}
"""
service_account_token = "eyJhbGciOiJSUzI1NiIsImtpZCI6IlR0YmxoMXh..."
env = """PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=kube-hunter-privileged-deployment-86dc79f945-sjjps
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
HOME=/root"""
exposed_privileged_containers = [
{
"container_name": "ubuntu",
"environment_variables": env,
"pod_id": "kube-hunter-privileged-deployment-86dc79f945-sjjps",
"pod_namespace": "kube-hunter-privileged",
"service_account_token": service_account_token,
}
]
cat_proc_cmdline = "BOOT_IMAGE=/boot/bzImage root=LABEL=Mock loglevel=3 console=ttyS0"
number_of_rm_attempts = 1
number_of_umount_attempts = 1
number_of_rmdir_attempts = 1
def create_test_event_type_one():
anonymous_auth_enabled_event = AnonymousAuthEnabled()
anonymous_auth_enabled_event.host = "localhost"
anonymous_auth_enabled_event.session = requests.Session()
return anonymous_auth_enabled_event
def create_test_event_type_two():
exposed_existing_privileged_containers_via_secure_kubelet_port_event = (
ExposedExistingPrivilegedContainersViaSecureKubeletPort(exposed_privileged_containers)
)
exposed_existing_privileged_containers_via_secure_kubelet_port_event.host = "localhost"
exposed_existing_privileged_containers_via_secure_kubelet_port_event.session = requests.Session()
return exposed_existing_privileged_containers_via_secure_kubelet_port_event
def test_get_request_valid_url():
class_being_tested = ProveAnonymousAuth(create_test_event_type_one())
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/mock"
session_mock.get(url, text="mock")
return_value = class_being_tested.get_request(url)
assert return_value == "mock"
def test_get_request_invalid_url():
class_being_tested = ProveAnonymousAuth(create_test_event_type_one())
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/[mock]"
session_mock.get(url, exc=requests.exceptions.InvalidURL)
return_value = class_being_tested.get_request(url)
assert return_value.startswith("Exception: ")
def post_request(url, params, expected_return_value, exception=None):
class_being_tested_one = ProveAnonymousAuth(create_test_event_type_one())
with requests_mock.Mocker(session=class_being_tested_one.event.session) as session_mock:
mock_params = {"text": "mock"} if not exception else {"exc": exception}
session_mock.post(url, **mock_params)
return_value = class_being_tested_one.post_request(url, params)
assert return_value == expected_return_value
class_being_tested_two = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two())
with requests_mock.Mocker(session=class_being_tested_two.event.session) as session_mock:
mock_params = {"text": "mock"} if not exception else {"exc": exception}
session_mock.post(url, **mock_params)
return_value = class_being_tested_two.post_request(url, params)
assert return_value == expected_return_value
def test_post_request_valid_url_with_parameters():
url = "https://localhost:10250/mock?cmd=ls"
params = {"cmd": "ls"}
post_request(url, params, expected_return_value="mock")
def test_post_request_valid_url_without_parameters():
url = "https://localhost:10250/mock"
params = {}
post_request(url, params, expected_return_value="mock")
def test_post_request_invalid_url_with_parameters():
url = "https://localhost:10250/mock?cmd=ls"
params = {"cmd": "ls"}
post_request(url, params, expected_return_value="Exception: ", exception=requests.exceptions.InvalidURL)
def test_post_request_invalid_url_without_parameters():
url = "https://localhost:10250/mock"
params = {}
post_request(url, params, expected_return_value="Exception: ", exception=requests.exceptions.InvalidURL)
def test_has_no_exception_result_with_exception():
mock_result = "Exception: Mock."
return_value = ProveAnonymousAuth.has_no_exception(mock_result)
assert return_value is False
def test_has_no_exception_result_without_exception():
mock_result = "Mock."
return_value = ProveAnonymousAuth.has_no_exception(mock_result)
assert return_value is True
def test_has_no_error_result_with_error():
mock_result = "Mock exited with error."
return_value = ProveAnonymousAuth.has_no_error(mock_result)
assert return_value is False
def test_has_no_error_result_without_error():
mock_result = "Mock."
return_value = ProveAnonymousAuth.has_no_error(mock_result)
assert return_value is True
def test_has_no_error_nor_exception_result_without_exception_and_without_error():
mock_result = "Mock."
return_value = ProveAnonymousAuth.has_no_error_nor_exception(mock_result)
assert return_value is True
def test_has_no_error_nor_exception_result_with_exception_and_without_error():
mock_result = "Exception: Mock."
return_value = ProveAnonymousAuth.has_no_error_nor_exception(mock_result)
assert return_value is False
def test_has_no_error_nor_exception_result_without_exception_and_with_error():
mock_result = "Mock exited with error."
return_value = ProveAnonymousAuth.has_no_error_nor_exception(mock_result)
assert return_value is False
def test_has_no_error_nor_exception_result_with_exception_and_with_error():
mock_result = "Exception: Mock. Mock exited with error."
return_value = ProveAnonymousAuth.has_no_error_nor_exception(mock_result)
assert return_value is False
def proveanonymousauth_success(anonymous_auth_enabled_event, security_context_definition_to_test):
global counter
counter = 0
with requests_mock.Mocker(session=anonymous_auth_enabled_event.session) as session_mock:
url = "https://" + anonymous_auth_enabled_event.host + ":10250/"
listing_pods_url = url + "pods"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.get(
listing_pods_url,
text=pod_list_with_privileged_container.replace(
"{security_context_definition_to_test}", security_context_definition_to_test
),
)
session_mock.post(
run_url + urllib.parse.quote("cat /var/run/secrets/kubernetes.io/serviceaccount/token", safe=""),
text=service_account_token,
)
session_mock.post(run_url + "env", text=env)
class_being_tested = ProveAnonymousAuth(anonymous_auth_enabled_event)
class_being_tested.execute()
assert "The following containers have been successfully breached." in class_being_tested.event.evidence
assert counter == 1
def test_proveanonymousauth_success_with_privileged_container_via_privileged_setting():
proveanonymousauth_success(create_test_event_type_one(), '"privileged": true')
def test_proveanonymousauth_success_with_privileged_container_via_capabilities():
proveanonymousauth_success(create_test_event_type_one(), '"capabilities": { "add": ["SYS_ADMIN"] }')
def test_proveanonymousauth_connectivity_issues():
class_being_tested = ProveAnonymousAuth(create_test_event_type_one())
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://" + class_being_tested.event.host + ":10250/"
listing_pods_url = url + "pods"
session_mock.get(listing_pods_url, exc=requests.exceptions.ConnectionError)
class_being_tested.execute()
assert class_being_tested.event.evidence == ""
@handler.subscribe(ExposedExistingPrivilegedContainersViaSecureKubeletPort)
class ExposedPrivilegedContainersViaAnonymousAuthEnabledInSecureKubeletPortEventCounter:
def __init__(self, event):
global counter
counter += 1
def test_check_file_exists_existing_file():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.post(run_url + urllib.parse.quote("ls mock.txt", safe=""), text="mock.txt")
return_value = class_being_tested.check_file_exists(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu", "mock.txt"
)
assert return_value is True
def test_check_file_exists_non_existent_file():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.post(
run_url + urllib.parse.quote("ls nonexistentmock.txt", safe=""),
text="ls: nonexistentmock.txt: No such file or directory",
)
return_value = class_being_tested.check_file_exists(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
"nonexistentmock.txt",
)
assert return_value is False
rm_command_removed_successfully_callback_counter = 0
def rm_command_removed_successfully_callback(request, context):
global rm_command_removed_successfully_callback_counter
if rm_command_removed_successfully_callback_counter == 0:
rm_command_removed_successfully_callback_counter += 1
return "mock.txt"
else:
return "ls: mock.txt: No such file or directory"
def test_rm_command_removed_successfully():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.post(
run_url + urllib.parse.quote("ls mock.txt", safe=""), text=rm_command_removed_successfully_callback
)
session_mock.post(run_url + urllib.parse.quote("rm -f mock.txt", safe=""), text="")
return_value = class_being_tested.rm_command(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
"mock.txt",
number_of_rm_attempts=1,
seconds_to_wait_for_os_command=None,
)
assert return_value is True
def test_rm_command_removed_failed():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.post(run_url + urllib.parse.quote("ls mock.txt", safe=""), text="mock.txt")
session_mock.post(run_url + urllib.parse.quote("rm -f mock.txt", safe=""), text="Permission denied")
return_value = class_being_tested.rm_command(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
"mock.txt",
number_of_rm_attempts=1,
seconds_to_wait_for_os_command=None,
)
assert return_value is False
def test_attack_exposed_existing_privileged_container_success():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
directory_created = "/kube-hunter-mock_" + str(uuid.uuid1())
file_name = "kube-hunter-mock" + str(uuid.uuid1())
file_name_with_path = f"{directory_created}/etc/cron.daily/{file_name}"
session_mock.post(run_url + urllib.parse.quote(f"touch {file_name_with_path}", safe=""), text="")
session_mock.post(
run_url + urllib.parse.quote("chmod {} {}".format("755", file_name_with_path), safe=""), text=""
)
return_value = class_being_tested.attack_exposed_existing_privileged_container(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
directory_created,
number_of_rm_attempts,
None,
file_name,
)
assert return_value["result"] is True
def test_attack_exposed_existing_privileged_container_failure_when_touch():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
directory_created = "/kube-hunter-mock_" + str(uuid.uuid1())
file_name = "kube-hunter-mock" + str(uuid.uuid1())
file_name_with_path = f"{directory_created}/etc/cron.daily/{file_name}"
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.post(
run_url + urllib.parse.quote(f"touch {file_name_with_path}", safe=""),
text="Operation not permitted",
)
return_value = class_being_tested.attack_exposed_existing_privileged_container(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
directory_created,
number_of_rm_attempts,
None,
file_name,
)
assert return_value["result"] is False
def test_attack_exposed_existing_privileged_container_failure_when_chmod():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
directory_created = "/kube-hunter-mock_" + str(uuid.uuid1())
file_name = "kube-hunter-mock" + str(uuid.uuid1())
file_name_with_path = f"{directory_created}/etc/cron.daily/{file_name}"
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.post(run_url + urllib.parse.quote(f"touch {file_name_with_path}", safe=""), text="")
session_mock.post(
run_url + urllib.parse.quote("chmod {} {}".format("755", file_name_with_path), safe=""),
text="Permission denied",
)
return_value = class_being_tested.attack_exposed_existing_privileged_container(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
directory_created,
number_of_rm_attempts,
None,
file_name,
)
assert return_value["result"] is False
def test_check_directory_exists_existing_directory():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.post(run_url + urllib.parse.quote("ls Mock", safe=""), text="mock.txt")
return_value = class_being_tested.check_directory_exists(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu", "Mock"
)
assert return_value is True
def test_check_directory_exists_non_existent_directory():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.post(run_url + urllib.parse.quote("ls Mock", safe=""), text="ls: Mock: No such file or directory")
return_value = class_being_tested.check_directory_exists(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu", "Mock"
)
assert return_value is False
rmdir_command_removed_successfully_callback_counter = 0
def rmdir_command_removed_successfully_callback(request, context):
global rmdir_command_removed_successfully_callback_counter
if rmdir_command_removed_successfully_callback_counter == 0:
rmdir_command_removed_successfully_callback_counter += 1
return "mock.txt"
else:
return "ls: Mock: No such file or directory"
def test_rmdir_command_removed_successfully():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.post(
run_url + urllib.parse.quote("ls Mock", safe=""), text=rmdir_command_removed_successfully_callback
)
session_mock.post(run_url + urllib.parse.quote("rmdir Mock", safe=""), text="")
return_value = class_being_tested.rmdir_command(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
"Mock",
number_of_rmdir_attempts=1,
seconds_to_wait_for_os_command=None,
)
assert return_value is True
def test_rmdir_command_removed_failed():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
session_mock.post(run_url + urllib.parse.quote("ls Mock", safe=""), text="mock.txt")
session_mock.post(run_url + urllib.parse.quote("rmdir Mock", safe=""), text="Permission denied")
return_value = class_being_tested.rmdir_command(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
"Mock",
number_of_rmdir_attempts=1,
seconds_to_wait_for_os_command=None,
)
assert return_value is False
def test_get_root_values_success():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
root_value, root_value_type = class_being_tested.get_root_values(cat_proc_cmdline)
assert root_value == "Mock" and root_value_type == "LABEL="
def test_get_root_values_failure():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
root_value, root_value_type = class_being_tested.get_root_values("")
assert root_value is None and root_value_type is None
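Together these two assertions pin down `get_root_values`: it extracts the `root=` entry from a kernel command line and splits it into value and type prefix. A sketch consistent with both cases (the prefix list is an assumption; the real module may recognize more forms, e.g. bare device paths):

```python
def get_root_values(cmdline: str):
    # Find a "root=LABEL=Mock" or "root=UUID=..." token in a kernel
    # command line; return (value, prefix) or (None, None).
    for token in cmdline.split():
        if token.startswith("root="):
            value = token[len("root="):]
            for prefix in ("LABEL=", "UUID="):
                if value.startswith(prefix):
                    return value[len(prefix):], prefix
    return None, None
```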
def test_process_exposed_existing_privileged_container_success():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
directory_created = "/kube-hunter-mock_" + str(uuid.uuid1())
session_mock.post(run_url + urllib.parse.quote("cat /proc/cmdline", safe=""), text=cat_proc_cmdline)
session_mock.post(run_url + urllib.parse.quote("findfs LABEL=Mock", safe=""), text="/dev/mock_fs")
session_mock.post(run_url + urllib.parse.quote(f"mkdir {directory_created}", safe=""), text="")
session_mock.post(
run_url + urllib.parse.quote("mount {} {}".format("/dev/mock_fs", directory_created), safe=""), text=""
)
session_mock.post(
run_url + urllib.parse.quote(f"cat {directory_created}/etc/hostname", safe=""), text="mockhostname"
)
return_value = class_being_tested.process_exposed_existing_privileged_container(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
number_of_umount_attempts,
number_of_rmdir_attempts,
None,
directory_created,
)
assert return_value["result"] is True
def test_process_exposed_existing_privileged_container_failure_when_cat_cmdline():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
directory_created = "/kube-hunter-mock_" + str(uuid.uuid1())
session_mock.post(run_url + urllib.parse.quote("cat /proc/cmdline", safe=""), text="Permission denied")
return_value = class_being_tested.process_exposed_existing_privileged_container(
url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
number_of_umount_attempts,
number_of_rmdir_attempts,
None,
directory_created,
)
assert return_value["result"] is False
def test_process_exposed_existing_privileged_container_failure_when_findfs():
class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
url = "https://localhost:10250/"
run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
directory_created = "/kube-hunter-mock_" + str(uuid.uuid1())
session_mock.post(run_url + urllib.parse.quote("cat /proc/cmdline", safe=""), text=cat_proc_cmdline)
        session_mock.post(run_url + urllib.parse.quote("findfs LABEL=Mock", safe=""), text="Permission denied")
        return_value = class_being_tested.process_exposed_existing_privileged_container(
            url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
            number_of_umount_attempts,
            number_of_rmdir_attempts,
            None,
            directory_created,
        )
        assert return_value["result"] is False


def test_process_exposed_existing_privileged_container_failure_when_mkdir():
    class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
    with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
        url = "https://localhost:10250/"
        run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
        directory_created = "/kube-hunter-mock_" + str(uuid.uuid1())
        session_mock.post(run_url + urllib.parse.quote("cat /proc/cmdline", safe=""), text=cat_proc_cmdline)
        session_mock.post(run_url + urllib.parse.quote("findfs LABEL=Mock", safe=""), text="/dev/mock_fs")
        session_mock.post(run_url + urllib.parse.quote(f"mkdir {directory_created}", safe=""), text="Permission denied")
        return_value = class_being_tested.process_exposed_existing_privileged_container(
            url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
            number_of_umount_attempts,
            number_of_rmdir_attempts,
            None,
            directory_created,
        )
        assert return_value["result"] is False


def test_process_exposed_existing_privileged_container_failure_when_mount():
    class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
    with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
        url = "https://localhost:10250/"
        run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
        directory_created = "/kube-hunter-mock_" + str(uuid.uuid1())
        session_mock.post(run_url + urllib.parse.quote("cat /proc/cmdline", safe=""), text=cat_proc_cmdline)
        session_mock.post(run_url + urllib.parse.quote("findfs LABEL=Mock", safe=""), text="/dev/mock_fs")
        session_mock.post(run_url + urllib.parse.quote(f"mkdir {directory_created}", safe=""), text="")
        session_mock.post(
            run_url + urllib.parse.quote("mount {} {}".format("/dev/mock_fs", directory_created), safe=""),
            text="Permission denied",
        )
        return_value = class_being_tested.process_exposed_existing_privileged_container(
            url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
            number_of_umount_attempts,
            number_of_rmdir_attempts,
            None,
            directory_created,
        )
        assert return_value["result"] is False


def test_process_exposed_existing_privileged_container_failure_when_cat_hostname():
    class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
    with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
        url = "https://localhost:10250/"
        run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
        directory_created = "/kube-hunter-mock_" + str(uuid.uuid1())
        session_mock.post(run_url + urllib.parse.quote("cat /proc/cmdline", safe=""), text=cat_proc_cmdline)
        session_mock.post(run_url + urllib.parse.quote("findfs LABEL=Mock", safe=""), text="/dev/mock_fs")
        session_mock.post(run_url + urllib.parse.quote(f"mkdir {directory_created}", safe=""), text="")
        session_mock.post(
            run_url + urllib.parse.quote("mount {} {}".format("/dev/mock_fs", directory_created), safe=""), text=""
        )
        session_mock.post(
            run_url + urllib.parse.quote(f"cat {directory_created}/etc/hostname", safe=""),
            text="Permission denied",
        )
        return_value = class_being_tested.process_exposed_existing_privileged_container(
            url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu",
            number_of_umount_attempts,
            number_of_rmdir_attempts,
            None,
            directory_created,
        )
        assert return_value["result"] is False


def test_maliciousintentviasecurekubeletport_success():
    class_being_tested = MaliciousIntentViaSecureKubeletPort(create_test_event_type_two(), None)
    with requests_mock.Mocker(session=class_being_tested.event.session) as session_mock:
        url = "https://localhost:10250/"
        run_url = url + "run/kube-hunter-privileged/kube-hunter-privileged-deployment-86dc79f945-sjjps/ubuntu?cmd="
        directory_created = "/kube-hunter-mock_" + str(uuid.uuid1())
        file_name = "kube-hunter-mock" + str(uuid.uuid1())
        file_name_with_path = f"{directory_created}/etc/cron.daily/{file_name}"
        session_mock.post(run_url + urllib.parse.quote("cat /proc/cmdline", safe=""), text=cat_proc_cmdline)
        session_mock.post(run_url + urllib.parse.quote("findfs LABEL=Mock", safe=""), text="/dev/mock_fs")
        session_mock.post(run_url + urllib.parse.quote(f"mkdir {directory_created}", safe=""), text="")
        session_mock.post(
            run_url + urllib.parse.quote("mount {} {}".format("/dev/mock_fs", directory_created), safe=""), text=""
        )
        session_mock.post(
            run_url + urllib.parse.quote(f"cat {directory_created}/etc/hostname", safe=""), text="mockhostname"
        )
        session_mock.post(run_url + urllib.parse.quote(f"touch {file_name_with_path}", safe=""), text="")
        session_mock.post(
            run_url + urllib.parse.quote("chmod {} {}".format("755", file_name_with_path), safe=""), text=""
        )
        class_being_tested.execute(directory_created, file_name)
        message = "The following exposed existing privileged containers have been successfully"
        message += " abused by starting/modifying a process in the host."
        assert message in class_being_tested.event.evidence
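
The tests above build each mocked endpoint by URL-encoding the shell command into the kubelet run URL's `cmd` query parameter. A minimal standalone illustration of that encoding step (the base URL here is a hypothetical stand-in with the same shape as the tests'):

```python
import urllib.parse

# safe="" forces quote() to escape every reserved character, including "/",
# which quote() would otherwise leave untouched by default.
encoded = urllib.parse.quote("cat /proc/cmdline", safe="")
print(encoded)  # cat%20%2Fproc%2Fcmdline

# Hypothetical run URL matching the shape used in the tests above.
run_url = "https://localhost:10250/run/ns/pod/container?cmd="
print(run_url + encoded)
```

Because `safe=""` produces a deterministic encoding, the same expression in the test and in the code under test yields byte-identical URLs, which is what lets requests_mock match each registered response.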

View File

@@ -1,6 +1,16 @@
+# flake8: noqa: E402
+from kube_hunter.conf import Config, set_config
+
+set_config(Config())
+
 from kube_hunter.modules.report import get_reporter, get_dispatcher
-from kube_hunter.modules.report.factory import YAMLReporter, JSONReporter, \
-    PlainReporter, HTTPDispatcher, STDOUTDispatcher
+from kube_hunter.modules.report.factory import (
+    YAMLReporter,
+    JSONReporter,
+    PlainReporter,
+    HTTPDispatcher,
+    STDOUTDispatcher,
+)
 
 
 def test_reporters():
@@ -8,7 +18,7 @@ def test_reporters():
         ("plain", PlainReporter),
         ("json", JSONReporter),
         ("yaml", YAMLReporter),
-        ("notexists", PlainReporter)
+        ("notexists", PlainReporter),
     ]
 
     for report_type, expected in test_cases:
@@ -20,7 +30,7 @@ def test_dispatchers():
     test_cases = [
         ("stdout", STDOUTDispatcher),
         ("http", HTTPDispatcher),
-        ("notexists", STDOUTDispatcher)
+        ("notexists", STDOUTDispatcher),
     ]
 
     for dispatcher_type, expected in test_cases:
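
The `("notexists", …)` cases in these table-driven tests assert that an unknown type string falls back to a default class. A minimal sketch of the lookup pattern they imply — class names and the registry are illustrative, not kube-hunter's actual factory code:

```python
class PlainReporter:
    pass


class JSONReporter:
    pass


# Illustrative registry mapping a report-type string to a reporter class.
REPORTERS = {
    "plain": PlainReporter,
    "json": JSONReporter,
}


def get_reporter(name):
    # Unknown names fall back to the plain reporter, mirroring what the
    # ("notexists", PlainReporter) test case above asserts.
    return REPORTERS.get(name, PlainReporter)()


assert isinstance(get_reporter("json"), JSONReporter)
assert isinstance(get_reporter("notexists"), PlainReporter)
```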

View File

@@ -0,0 +1,13 @@
+from kube_hunter.plugins import hookimpl
+
+return_string = "return_string"
+
+
+@hookimpl
+def parser_add_arguments(parser):
+    return return_string
+
+
+@hookimpl
+def load_plugin(args):
+    return return_string
View File

@@ -0,0 +1,17 @@
+from argparse import ArgumentParser
+from tests.plugins import test_hooks
+from kube_hunter.plugins import initialize_plugin_manager
+
+
+def test_all_plugin_hooks():
+    pm = initialize_plugin_manager()
+    pm.register(test_hooks)
+
+    # Testing parser_add_arguments
+    parser = ArgumentParser("Test Argument Parser")
+    results = pm.hook.parser_add_arguments(parser=parser)
+    assert test_hooks.return_string in results
+
+    # Testing load_plugin
+    results = pm.hook.load_plugin(args=[])
+    assert test_hooks.return_string in results
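
The plugin test above registers a hook module and then invokes each hook through the manager, collecting one result per implementing plugin. kube-hunter's manager is built on pluggy; a rough stdlib-only sketch of the same register-then-call-all-hooks idea (simplified, not pluggy's actual API):

```python
class SimplePluginManager:
    """Stdlib-only sketch of a hook registry; pluggy's real API differs."""

    def __init__(self):
        self._plugins = []

    def register(self, plugin):
        self._plugins.append(plugin)

    def call(self, hook_name, **kwargs):
        # Invoke the hook on every registered plugin that implements it and
        # collect the results, like pluggy's pm.hook.<name>(...) returning a
        # list with one entry per implementation.
        return [
            getattr(plugin, hook_name)(**kwargs)
            for plugin in self._plugins
            if hasattr(plugin, hook_name)
        ]


class DemoPlugin:
    # Hypothetical plugin mirroring the test_hooks module above.
    def parser_add_arguments(self, parser):
        return "return_string"


pm = SimplePluginManager()
pm.register(DemoPlugin())
print(pm.call("parser_add_arguments", parser=None))  # ['return_string']
```

This shape explains the assertions in the test: each hook call returns a list of results, so the test checks membership (`in results`) rather than equality.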