Compare commits


37 Commits

Author SHA1 Message Date
Daniel Sagi
40ebaa6259 fixed bug for only one run 2021-06-17 19:42:35 +00:00
Daniel Sagi
c2d0efc6cd fixed kube-hunter logo 2021-06-17 19:34:26 +00:00
Daniel Sagi
348a288411 added auth store selection, and an impersonating: comment on top of the prompt to indicate the current auth data 2021-06-17 19:26:18 +00:00
Daniel Sagi
12f0d17c3a added everything hunt option under hunt subconsole 2021-06-17 19:06:53 +00:00
Daniel Sagi
23b048c6d2 completed implementation for delete and create methods in the auth subconsole 2021-06-17 17:21:26 +00:00
Daniel Sagi
1aab95085a removed imports from manager 2021-06-17 16:28:20 +00:00
Daniel Sagi
f706802eb9 improved ipdb interactive option 2021-06-17 16:27:42 +00:00
Daniel Sagi
17fd10120e added new base class for general implementation of CMD class 2021-06-17 16:20:41 +00:00
Daniel Sagi
2400ab5bb1 fixed imports 2021-06-17 15:15:36 +00:00
Daniel Sagi
f6e43e2bbb fixed adding __str__ to cloud type 2021-06-17 15:11:19 +00:00
Daniel Sagi
f7e73fe642 changed sub_command to sub_console 2021-06-17 15:10:58 +00:00
Daniel Sagi
6a8568173e removed old env.py file 2021-06-17 15:09:04 +00:00
Daniel Sagi
d5efef45e2 fixed bug in init of cmd classes 2021-06-17 15:08:40 +00:00
Daniel Sagi
46b7f9f5d9 added missing env module, and fixed auth subconsole to use self.env 2021-06-17 15:06:13 +00:00
Daniel Sagi
2f6badd32f changed module hierarchy, fixed exit problem and changed to using cmd2 2021-06-17 14:40:18 +00:00
Daniel Sagi
6a4e6040f7 merged main into feature 2021-06-17 10:25:05 +00:00
Daniel
6a3c462fde added whereami command and auth subcmd to control auth database. whereami stats the local files and updates to local auth store 2021-06-10 20:36:13 +00:00
Daniel
f1d4defcb6 added pod local discovery 2021-06-10 19:12:38 +00:00
Daniel Sagi
7601692d42 fixed import bug 2021-06-10 19:39:36 +03:00
Daniel Sagi
7fe047e512 Merge branch 'main' into feature/immersed 2021-06-05 16:28:51 +03:00
Mikolaj Pawlikowski
6689005544 K8s autodiscovery (#453)
* Add a new dependency on Kubernetes package

* Add and store a new flag about automatic nodes discovery from a pod

* Implement the listing of nodes

* Add tests to cover the k8s node listing

* Fix the k8s listing test to ensure the load incluster function is actually called

* Add more help to the k8s node discovery flags, and cross-reference them.

* Add a note on the Kubernetes auto-discovery in the main README file

* Move the kubernetes discovery from conf to modules/discovery

* When running with --pods, run the Kubernetes auto discovery

* Also mention that the auto discovery is always on when using --pod

Co-authored-by: Mikolaj Pawlikowski <mpawlikowsk1@bloomberg.net>
2021-06-05 15:53:07 +03:00
danielsagi
0b90e0e43d Bugfix - Aws metadata api discovery (#455)
* fixed aws metadata bug

* added new black reformatting
2021-05-27 21:41:43 +03:00
Daniel Sagi
d6d46527cd started adding nested cmd for discovery 2021-04-30 16:13:29 +03:00
Daniel Sagi
2fac45e42b removed test cmds 2021-04-30 15:06:22 +03:00
Daniel Sagi
5b36c9c06a started adding immersed console feature, implemented environment settings behaviour on prompt cli 2021-04-30 15:05:33 +03:00
danielsagi
65eefed721 Multiple Subscriptions Mechanism (#448)
* Add multiple subscription mechanism

* PR: address comments

* improved implementation, solved a couple of bugs, added documentation to almost the whole backend process

* added corresponding tests to the new method of the multiple subscription

* fixed linting issue

* fixed linting #2

Co-authored-by: Raito Bezarius <masterancpp@gmail.com>
2021-04-25 19:27:41 +03:00
danielsagi
599e9967e3 added pypi publish workflow (#450) 2021-04-23 14:37:31 +03:00
Tommy McCormick
5745f4a32b Add discovery for AWS metadata (#447) 2021-04-21 20:57:17 +03:00
danielsagi
1a26653007 Added Advanced Usage section to the readme, documenting azure quick scanning (#441) 2021-04-08 19:20:09 +03:00
miwithro
cdd9f9d432 Update KHV003.md (#439) 2021-03-16 17:17:55 +02:00
Simarpreet Singh
99678f3cac deps: Update github pages dependencies (#431)
Signed-off-by: Simarpreet Singh <simar@linux.com>
2021-01-17 16:03:04 +02:00
danielsagi
cdbc3dc12b Bug Fix: False Negative On AKS Hunting (#420)
* removed false negative in AzureSpnHunter when /run is disabled

* changed to use direct imported class

* fixed multiple bugs in azure spn hunting, and improved efficiency

* fixed bug in cloud identification. TODO: remove the outsourcing for cloud provider

* removed unused config variable

* fixed tests to use already parsed pods as the given previous event has changed
2021-01-07 19:46:00 +02:00
Carol Valencia
d208b43532 feat: github actions to publish ecr and docker (#429)
* feat: github actions to publish ecr and docker

* test: github actions to publish ecr and docker

* chore: yaml lint github actions

* chore: yaml lint github actions

* fix: secrets envs for github action

* chore: build and push action for ecr/docker

Co-authored-by: Carol Valencia <krol3@users.noreply.github.com>
2020-12-26 21:31:53 +02:00
Itay Shakury
42250d9f62 move from master branch to main (#427) 2020-12-17 16:16:16 +02:00
danielsagi
d94d86a4c1 Created a Vulnerability Disclosure README (#423)
* Created a vulnerability disclosure readme

* Update SECURITY.md

Co-authored-by: Liz Rice <liz@lizrice.com>

* Update SECURITY.md

Co-authored-by: Liz Rice <liz@lizrice.com>

* Update SECURITY.md

Co-authored-by: Liz Rice <liz@lizrice.com>

Co-authored-by: Liz Rice <liz@lizrice.com>
2020-12-17 15:16:28 +02:00
danielsagi
a1c2c3ee3e Updated kramdown (#424)
Updated kramdown to a newer patched version; the old version was not patched against CVE-2020-14001
2020-12-17 11:50:02 +00:00
danielsagi
6aeee7f49d Improvements and bug fixed in Release workflow (#425)
* changed ubuntu to an older version, for compatibility reasons with glibc on pyinstaller steps and added a step to parse the release tag

* removed parsing of release tag

* changed flow name

* removed 'release' from the release name
2020-12-08 21:46:24 +02:00
47 changed files with 1478 additions and 338 deletions

View File

@@ -7,7 +7,7 @@
Please include a summary of the change and which issue is fixed. Also include relevant motivation and context. List any dependencies that are required for this change.
## Contribution Guidelines
Please Read through the [Contribution Guidelines](https://github.com/aquasecurity/kube-hunter/blob/master/CONTRIBUTING.md).
Please Read through the [Contribution Guidelines](https://github.com/aquasecurity/kube-hunter/blob/main/CONTRIBUTING.md).
## Fixed Issues

View File

@@ -1,67 +0,0 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"

on:
  push:
    branches: [ master ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ master ]
  schedule:
    - cron: '16 3 * * 1'

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        language: [ 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
        # Learn more:
        # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed

    steps:
    - name: Checkout repository
      uses: actions/checkout@v2

    # Initializes the CodeQL tools for scanning.
    - name: Initialize CodeQL
      uses: github/codeql-action/init@v1
      with:
        languages: ${{ matrix.language }}
        # If you wish to specify custom queries, you can do so here or in a config file.
        # By default, queries listed here will override any specified in a config file.
        # Prefix the list here with "+" to use these queries and those in the config file.
        # queries: ./path/to/local/query, your-org/your-repo/queries@main

    # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
    # If this step fails, then you should remove it and run the build manually (see below)
    - name: Autobuild
      uses: github/codeql-action/autobuild@v1

    # Command-line programs to run using the OS shell.
    # 📚 https://git.io/JvXDl

    # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
    #    and modify them (or add more) to build your code if your project
    #    uses a compiled language

    #- run: |
    #   make bootstrap
    #   make release

    - name: Perform CodeQL Analysis
      uses: github/codeql-action/analyze@v1

View File

@@ -1,3 +1,4 @@
---
name: Lint
on: [push, pull_request]
@@ -10,3 +11,4 @@ jobs:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- uses: pre-commit/action@v2.0.0
- uses: ibiqlik/action-yamllint@v3

.github/workflows/publish.yml (vendored, new file, 94 lines)
View File

@@ -0,0 +1,94 @@
---
name: Publish
on:
  push:
    tags:
      - "v*"

env:
  ALIAS: aquasecurity
  REP: kube-hunter

jobs:
  dockerhub:
    name: Publish To Docker Hub
    runs-on: ubuntu-18.04
    steps:
      - name: Check Out Repo
        uses: actions/checkout@v2
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildxarch-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildxarch-
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USER }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Login to ECR
        uses: docker/login-action@v1
        with:
          registry: public.ecr.aws
          username: ${{ secrets.ECR_ACCESS_KEY_ID }}
          password: ${{ secrets.ECR_SECRET_ACCESS_KEY }}
      - name: Get version
        id: get_version
        uses: crazy-max/ghaction-docker-meta@v1
        with:
          images: ${{ env.REP }}
          tag-semver: |
            {{version}}
      - name: Build and push - Docker/ECR
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          context: .
          platforms: linux/amd64
          builder: ${{ steps.buildx.outputs.name }}
          push: true
          tags: |
            ${{ secrets.DOCKERHUB_USER }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}
            public.ecr.aws/${{ env.ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}
            ${{ secrets.DOCKERHUB_USER }}/${{ env.REP }}:latest
            public.ecr.aws/${{ env.ALIAS }}/${{ env.REP }}:latest
          cache-from: type=local,src=/tmp/.buildx-cache/release
          cache-to: type=local,mode=max,dest=/tmp/.buildx-cache/release
      - name: Image digest
        run: echo ${{ steps.docker_build.outputs.digest }}
  pypi:
    name: Publish To PyPI
    runs-on: ubuntu-18.04
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          python -m pip install -U pip
          python -m pip install -r requirements-dev.txt
      - name: Build project
        shell: bash
        run: |
          python -m pip install wheel
          make build
      - name: Publish distribution package to PyPI
        if: startsWith(github.ref, 'refs/tags')
        uses: pypa/gh-action-pypi-publish@master
        with:
          password: ${{ secrets.PYPI_API_TOKEN }}

View File

@@ -1,15 +1,16 @@
---
on:
push:
# Sequence of patterns matched against refs/tags
tags:
- 'v*' # Push events to matching v*, i.e. v1.0, v20.15.10
name: Upload Release Asset
- 'v*' # Push events to matching v*, i.e. v1.0, v20.15.10
name: Release
jobs:
build:
name: Upload Release Asset
runs-on: ubuntu-latest
runs-on: ubuntu-16.04
steps:
- name: Checkout code
uses: actions/checkout@v2
@@ -18,17 +19,17 @@ jobs:
uses: actions/setup-python@v2
with:
python-version: '3.9'
- name: Install dependencies
run: |
python -m pip install -U pip
python -m pip install -r requirements-dev.txt
- name: Build project
shell: bash
run: |
make pyinstaller
- name: Create Release
id: create_release
uses: actions/create-release@v1
@@ -36,12 +37,12 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
tag_name: ${{ github.ref }}
release_name: Release ${{ github.ref }}
release_name: ${{ github.ref }}
draft: false
prerelease: false
- name: Upload Release Asset
id: upload-release-asset
id: upload-release-asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -49,4 +50,4 @@ jobs:
upload_url: ${{ steps.create_release.outputs.upload_url }}
asset_path: ./dist/kube-hunter
asset_name: kube-hunter-linux-x86_64-${{ github.ref }}
asset_content_type: application/octet-stream
asset_content_type: application/octet-stream

View File

@@ -1,3 +1,4 @@
---
name: Test
on: [push, pull_request]

.gitignore (vendored, 1 addition)
View File

@@ -9,6 +9,7 @@ venv/
# Distribution / packaging
.Python
env/
!kube_hunter/console/env/
build/
develop-eggs/
dist/

View File

@@ -1,10 +1,11 @@
---
repos:
- repo: https://github.com/psf/black
rev: stable
hooks:
- id: black
- repo: https://gitlab.com/pycqa/flake8
rev: 3.7.9
hooks:
- id: flake8
additional_dependencies: [flake8-bugbear]
- repo: https://github.com/psf/black
rev: stable
hooks:
- id: black
- repo: https://gitlab.com/pycqa/flake8
rev: 3.7.9
hooks:
- id: flake8
additional_dependencies: [flake8-bugbear]

.yamllint (new file, 6 lines)
View File

@@ -0,0 +1,6 @@
---
extends: default

rules:
  line-length: disable
  truthy: disable

View File

@@ -1,12 +1,18 @@
![kube-hunter](https://github.com/aquasecurity/kube-hunter/blob/master/kube-hunter.png)
![kube-hunter](https://github.com/aquasecurity/kube-hunter/blob/main/kube-hunter.png)
[![GitHub Release][release-img]][release]
![Downloads][download]
![Docker Pulls][docker-pull]
[![Build Status](https://github.com/aquasecurity/kube-hunter/workflows/Test/badge.svg)](https://github.com/aquasecurity/kube-hunter/actions)
[![codecov](https://codecov.io/gh/aquasecurity/kube-hunter/branch/master/graph/badge.svg)](https://codecov.io/gh/aquasecurity/kube-hunter)
[![codecov](https://codecov.io/gh/aquasecurity/kube-hunter/branch/main/graph/badge.svg)](https://codecov.io/gh/aquasecurity/kube-hunter)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![License](https://img.shields.io/github/license/aquasecurity/kube-hunter)](https://github.com/aquasecurity/kube-hunter/blob/master/LICENSE)
[![License](https://img.shields.io/github/license/aquasecurity/kube-hunter)](https://github.com/aquasecurity/kube-hunter/blob/main/LICENSE)
[![Docker image](https://images.microbadger.com/badges/image/aquasec/kube-hunter.svg)](https://microbadger.com/images/aquasec/kube-hunter "Get your own image badge on microbadger.com")
[download]: https://img.shields.io/github/downloads/aquasecurity/kube-hunter/total?logo=github
[release-img]: https://img.shields.io/github/release/aquasecurity/kube-hunter.svg?logo=github
[release]: https://github.com/aquasecurity/kube-hunter/releases
[docker-pull]: https://img.shields.io/docker/pulls/aquasec/kube-hunter?logo=docker&label=docker%20pulls%20%2F%20kube-hunter
kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility for security issues in Kubernetes environments. **You should NOT run kube-hunter on a Kubernetes cluster that you don't own!**
@@ -14,9 +20,9 @@ kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was d
**Explore vulnerabilities**: The kube-hunter knowledge base includes articles about discoverable vulnerabilities and issues. When kube-hunter reports an issue, it will show its VID (Vulnerability ID) so you can look it up in the KB at https://aquasecurity.github.io/kube-hunter/
**Contribute**: We welcome contributions, especially new hunter modules that perform additional tests. If you would like to develop your modules please read [Guidelines For Developing Your First kube-hunter Module](https://github.com/aquasecurity/kube-hunter/blob/master/CONTRIBUTING.md).
**Contribute**: We welcome contributions, especially new hunter modules that perform additional tests. If you would like to develop your modules please read [Guidelines For Developing Your First kube-hunter Module](https://github.com/aquasecurity/kube-hunter/blob/main/CONTRIBUTING.md).
[![kube-hunter demo video](https://github.com/aquasecurity/kube-hunter/blob/master/kube-hunter-screenshot.png)](https://youtu.be/s2-6rTkH8a8?t=57s)
[![kube-hunter demo video](https://github.com/aquasecurity/kube-hunter/blob/main/kube-hunter-screenshot.png)](https://youtu.be/s2-6rTkH8a8?t=57s)
Table of Contents
=================
@@ -29,6 +35,7 @@ Table of Contents
* [Nodes Mapping](#nodes-mapping)
* [Output](#output)
* [Dispatching](#dispatching)
* [Advanced Usage](#advanced-usage)
* [Deployment](#deployment)
* [On Machine](#on-machine)
* [Prerequisites](#prerequisites)
@@ -69,6 +76,12 @@ To specify interface scanning, you can use the `--interface` option (this will s
To specify a specific CIDR to scan, use the `--cidr` option. Example:
`kube-hunter --cidr 192.168.0.0/24`
4. **Kubernetes node auto-discovery**
Set `--k8s-auto-discover-nodes` flag to query Kubernetes for all nodes in the cluster, and then attempt to scan them all. By default, it will use [in-cluster config](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) to connect to the Kubernetes API. If you'd like to use an explicit kubeconfig file, set `--kubeconfig /location/of/kubeconfig/file`.
Also note that this is always done when using `--pod` mode.
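The config-selection behaviour described above can be sketched as follows. This is illustrative only: the helper name and return values are ours, not kube-hunter's API; the comments name the real `kubernetes` client calls each branch corresponds to.

```python
# Illustrative sketch (not kube-hunter's actual code) of how node
# auto-discovery chooses its Kubernetes credentials.
def select_k8s_config(kubeconfig_path=None):
    """Decide how auto-discovery would authenticate to the API server."""
    if kubeconfig_path is None:
        # Default: kubernetes.config.load_incluster_config(), the strategy
        # available when running inside a pod.
        return "in-cluster"
    # --kubeconfig given: kubernetes.config.load_kube_config(config_file=...)
    return f"kubeconfig:{kubeconfig_path}"

print(select_k8s_config())             # in-cluster
print(select_k8s_config("/home/user/.kube/config"))
```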
### Active Hunting
Active hunting is an option in which kube-hunter will exploit vulnerabilities it finds, to explore for further vulnerabilities.
@@ -108,6 +121,11 @@ Available dispatch methods are:
* KUBEHUNTER_HTTP_DISPATCH_URL (defaults to: https://localhost)
* KUBEHUNTER_HTTP_DISPATCH_METHOD (defaults to: POST)
### Advanced Usage
#### Azure Quick Scanning
When running **as a Pod in an Azure or AWS environment**, kube-hunter will fetch subnets from the Instance Metadata Service. Naturally, this makes the discovery process take longer.
To hard-limit subnet scanning to a `/24` CIDR, use the `--quick` option.
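The clamping idea can be illustrated with the standard-library `ipaddress` module. This is a sketch of the behaviour the flag describes, not kube-hunter's actual implementation:

```python
import ipaddress

def clamp_to_quick(cidr, quick=True):
    """Never scan a network wider than /24 when quick scanning is on (sketch)."""
    net = ipaddress.ip_network(cidr, strict=False)
    if quick and net.prefixlen < 24:
        # Keep the network address, tighten the prefix to /24.
        net = ipaddress.ip_network(f"{net.network_address}/24", strict=False)
    return net

print(clamp_to_quick("172.16.0.0/16"))  # 172.16.0.0/24
```

A metadata-provided `/16` would otherwise mean scanning 65,536 addresses; the clamp reduces that to 256.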
## Deployment
There are three methods for deploying kube-hunter:
@@ -176,7 +194,7 @@ The example `job.yaml` file defines a Job that will run kube-hunter in a pod, us
* View the test results with `kubectl logs <pod name>`
## Contribution
To read the contribution guidelines, <a href="https://github.com/aquasecurity/kube-hunter/blob/master/CONTRIBUTING.md"> Click here </a>
To read the contribution guidelines, <a href="https://github.com/aquasecurity/kube-hunter/blob/main/CONTRIBUTING.md"> Click here </a>
## License
This repository is available under the [Apache License 2.0](https://github.com/aquasecurity/kube-hunter/blob/master/LICENSE).
This repository is available under the [Apache License 2.0](https://github.com/aquasecurity/kube-hunter/blob/main/LICENSE).

SECURITY.md (new file, 17 lines)
View File

@@ -0,0 +1,17 @@
# Security Policy
## Supported Versions
| Version | Supported |
| --------- | ------------------ |
| 0.4.x | :white_check_mark: |
| 0.3.x | :white_check_mark: |
## Reporting a Vulnerability
We encourage you to find vulnerabilities in kube-hunter.
The process is simple: just open a Bug issue, and we will take a look.
If you prefer to disclose privately, you can write to one of the security maintainers at:
| Name | Email |
| ----------- | ------------------ |
| Daniel Sagi | daniel.sagi@aquasec.com |

View File

@@ -1,11 +1,12 @@
GEM
remote: https://rubygems.org/
specs:
activesupport (4.2.11.1)
i18n (~> 0.7)
activesupport (6.0.3.4)
concurrent-ruby (~> 1.0, >= 1.0.2)
i18n (>= 0.7, < 2)
minitest (~> 5.1)
thread_safe (~> 0.3, >= 0.3.4)
tzinfo (~> 1.1)
zeitwerk (~> 2.2, >= 2.2.2)
addressable (2.7.0)
public_suffix (>= 2.0.2, < 5.0)
coffee-script (2.4.1)
@@ -15,65 +16,67 @@ GEM
colorator (1.1.0)
commonmarker (0.17.13)
ruby-enum (~> 0.5)
concurrent-ruby (1.1.5)
dnsruby (1.61.3)
addressable (~> 2.5)
em-websocket (0.5.1)
concurrent-ruby (1.1.7)
dnsruby (1.61.5)
simpleidn (~> 0.1)
em-websocket (0.5.2)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0.6.0)
ethon (0.12.0)
ffi (>= 1.3.0)
eventmachine (1.2.7)
execjs (2.7.0)
faraday (0.17.0)
faraday (1.3.0)
faraday-net_http (~> 1.0)
multipart-post (>= 1.2, < 3)
ffi (1.11.1)
ruby2_keywords
faraday-net_http (1.0.1)
ffi (1.14.2)
forwardable-extended (2.6.0)
gemoji (3.0.1)
github-pages (201)
activesupport (= 4.2.11.1)
github-pages (209)
github-pages-health-check (= 1.16.1)
jekyll (= 3.8.5)
jekyll-avatar (= 0.6.0)
jekyll (= 3.9.0)
jekyll-avatar (= 0.7.0)
jekyll-coffeescript (= 1.1.1)
jekyll-commonmark-ghpages (= 0.1.6)
jekyll-default-layout (= 0.1.4)
jekyll-feed (= 0.11.0)
jekyll-feed (= 0.15.1)
jekyll-gist (= 1.5.0)
jekyll-github-metadata (= 2.12.1)
jekyll-mentions (= 1.4.1)
jekyll-optional-front-matter (= 0.3.0)
jekyll-github-metadata (= 2.13.0)
jekyll-mentions (= 1.6.0)
jekyll-optional-front-matter (= 0.3.2)
jekyll-paginate (= 1.1.0)
jekyll-readme-index (= 0.2.0)
jekyll-redirect-from (= 0.14.0)
jekyll-relative-links (= 0.6.0)
jekyll-remote-theme (= 0.4.0)
jekyll-readme-index (= 0.3.0)
jekyll-redirect-from (= 0.16.0)
jekyll-relative-links (= 0.6.1)
jekyll-remote-theme (= 0.4.2)
jekyll-sass-converter (= 1.5.2)
jekyll-seo-tag (= 2.5.0)
jekyll-sitemap (= 1.2.0)
jekyll-swiss (= 0.4.0)
jekyll-seo-tag (= 2.6.1)
jekyll-sitemap (= 1.4.0)
jekyll-swiss (= 1.0.0)
jekyll-theme-architect (= 0.1.1)
jekyll-theme-cayman (= 0.1.1)
jekyll-theme-dinky (= 0.1.1)
jekyll-theme-hacker (= 0.1.1)
jekyll-theme-hacker (= 0.1.2)
jekyll-theme-leap-day (= 0.1.1)
jekyll-theme-merlot (= 0.1.1)
jekyll-theme-midnight (= 0.1.1)
jekyll-theme-minimal (= 0.1.1)
jekyll-theme-modernist (= 0.1.1)
jekyll-theme-primer (= 0.5.3)
jekyll-theme-primer (= 0.5.4)
jekyll-theme-slate (= 0.1.1)
jekyll-theme-tactile (= 0.1.1)
jekyll-theme-time-machine (= 0.1.1)
jekyll-titles-from-headings (= 0.5.1)
jemoji (= 0.10.2)
kramdown (= 1.17.0)
liquid (= 4.0.0)
listen (= 3.1.5)
jekyll-titles-from-headings (= 0.5.3)
jemoji (= 0.12.0)
kramdown (= 2.3.0)
kramdown-parser-gfm (= 1.1.0)
liquid (= 4.0.3)
mercenary (~> 0.3)
minima (= 2.5.0)
minima (= 2.5.1)
nokogiri (>= 1.10.4, < 2.0)
rouge (= 3.11.0)
rouge (= 3.23.0)
terminal-table (~> 1.4)
github-pages-health-check (1.16.1)
addressable (~> 2.3)
@@ -81,27 +84,27 @@ GEM
octokit (~> 4.0)
public_suffix (~> 3.0)
typhoeus (~> 1.3)
html-pipeline (2.12.0)
html-pipeline (2.14.0)
activesupport (>= 2)
nokogiri (>= 1.4)
http_parser.rb (0.6.0)
i18n (0.9.5)
concurrent-ruby (~> 1.0)
jekyll (3.8.5)
jekyll (3.9.0)
addressable (~> 2.4)
colorator (~> 1.0)
em-websocket (~> 0.5)
i18n (~> 0.7)
jekyll-sass-converter (~> 1.0)
jekyll-watch (~> 2.0)
kramdown (~> 1.14)
kramdown (>= 1.17, < 3)
liquid (~> 4.0)
mercenary (~> 0.3.3)
pathutil (~> 0.9)
rouge (>= 1.7, < 4)
safe_yaml (~> 1.0)
jekyll-avatar (0.6.0)
jekyll (~> 3.0)
jekyll-avatar (0.7.0)
jekyll (>= 3.0, < 5.0)
jekyll-coffeescript (1.1.1)
coffee-script (~> 2.2)
coffee-script-source (~> 1.11.1)
@@ -114,36 +117,37 @@ GEM
rouge (>= 2.0, < 4.0)
jekyll-default-layout (0.1.4)
jekyll (~> 3.0)
jekyll-feed (0.11.0)
jekyll (~> 3.3)
jekyll-feed (0.15.1)
jekyll (>= 3.7, < 5.0)
jekyll-gist (1.5.0)
octokit (~> 4.2)
jekyll-github-metadata (2.12.1)
jekyll (~> 3.4)
jekyll-github-metadata (2.13.0)
jekyll (>= 3.4, < 5.0)
octokit (~> 4.0, != 4.4.0)
jekyll-mentions (1.4.1)
jekyll-mentions (1.6.0)
html-pipeline (~> 2.3)
jekyll (~> 3.0)
jekyll-optional-front-matter (0.3.0)
jekyll (~> 3.0)
jekyll (>= 3.7, < 5.0)
jekyll-optional-front-matter (0.3.2)
jekyll (>= 3.0, < 5.0)
jekyll-paginate (1.1.0)
jekyll-readme-index (0.2.0)
jekyll (~> 3.0)
jekyll-redirect-from (0.14.0)
jekyll (~> 3.3)
jekyll-relative-links (0.6.0)
jekyll (~> 3.3)
jekyll-remote-theme (0.4.0)
jekyll-readme-index (0.3.0)
jekyll (>= 3.0, < 5.0)
jekyll-redirect-from (0.16.0)
jekyll (>= 3.3, < 5.0)
jekyll-relative-links (0.6.1)
jekyll (>= 3.3, < 5.0)
jekyll-remote-theme (0.4.2)
addressable (~> 2.0)
jekyll (~> 3.5)
rubyzip (>= 1.2.1, < 3.0)
jekyll (>= 3.5, < 5.0)
jekyll-sass-converter (>= 1.0, <= 3.0.0, != 2.0.0)
rubyzip (>= 1.3.0, < 3.0)
jekyll-sass-converter (1.5.2)
sass (~> 3.4)
jekyll-seo-tag (2.5.0)
jekyll (~> 3.3)
jekyll-sitemap (1.2.0)
jekyll (~> 3.3)
jekyll-swiss (0.4.0)
jekyll-seo-tag (2.6.1)
jekyll (>= 3.3, < 5.0)
jekyll-sitemap (1.4.0)
jekyll (>= 3.7, < 5.0)
jekyll-swiss (1.0.0)
jekyll-theme-architect (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
@@ -153,8 +157,8 @@ GEM
jekyll-theme-dinky (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-hacker (0.1.1)
jekyll (~> 3.5)
jekyll-theme-hacker (0.1.2)
jekyll (> 3.5, < 5.0)
jekyll-seo-tag (~> 2.0)
jekyll-theme-leap-day (0.1.1)
jekyll (~> 3.5)
@@ -171,8 +175,8 @@ GEM
jekyll-theme-modernist (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-primer (0.5.3)
jekyll (~> 3.5)
jekyll-theme-primer (0.5.4)
jekyll (> 3.5, < 5.0)
jekyll-github-metadata (~> 2.9)
jekyll-seo-tag (~> 2.0)
jekyll-theme-slate (0.1.1)
@@ -184,43 +188,49 @@ GEM
jekyll-theme-time-machine (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-titles-from-headings (0.5.1)
jekyll (~> 3.3)
jekyll-titles-from-headings (0.5.3)
jekyll (>= 3.3, < 5.0)
jekyll-watch (2.2.1)
listen (~> 3.0)
jemoji (0.10.2)
jemoji (0.12.0)
gemoji (~> 3.0)
html-pipeline (~> 2.2)
jekyll (~> 3.0)
kramdown (1.17.0)
liquid (4.0.0)
listen (3.1.5)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
ruby_dep (~> 1.2)
jekyll (>= 3.0, < 5.0)
kramdown (2.3.0)
rexml
kramdown-parser-gfm (1.1.0)
kramdown (~> 2.0)
liquid (4.0.3)
listen (3.4.0)
rb-fsevent (~> 0.10, >= 0.10.3)
rb-inotify (~> 0.9, >= 0.9.10)
mercenary (0.3.6)
mini_portile2 (2.4.0)
minima (2.5.0)
jekyll (~> 3.5)
mini_portile2 (2.5.0)
minima (2.5.1)
jekyll (>= 3.5, < 5.0)
jekyll-feed (~> 0.9)
jekyll-seo-tag (~> 2.1)
minitest (5.12.2)
minitest (5.14.3)
multipart-post (2.1.1)
nokogiri (1.10.8)
mini_portile2 (~> 2.4.0)
octokit (4.14.0)
nokogiri (1.11.1)
mini_portile2 (~> 2.5.0)
racc (~> 1.4)
octokit (4.20.0)
faraday (>= 0.9)
sawyer (~> 0.8.0, >= 0.5.3)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
public_suffix (3.1.1)
rb-fsevent (0.10.3)
rb-inotify (0.10.0)
racc (1.5.2)
rb-fsevent (0.10.4)
rb-inotify (0.10.1)
ffi (~> 1.0)
rouge (3.11.0)
ruby-enum (0.7.2)
rexml (3.2.4)
rouge (3.23.0)
ruby-enum (0.8.0)
i18n
ruby_dep (1.5.0)
rubyzip (2.0.0)
ruby2_keywords (0.0.2)
rubyzip (2.3.0)
safe_yaml (1.0.5)
sass (3.7.4)
sass-listen (~> 4.0.0)
@@ -230,14 +240,20 @@ GEM
sawyer (0.8.2)
addressable (>= 2.3.5)
faraday (> 0.8, < 2.0)
simpleidn (0.1.1)
unf (~> 0.1.4)
terminal-table (1.8.0)
unicode-display_width (~> 1.1, >= 1.1.1)
thread_safe (0.3.6)
typhoeus (1.3.1)
typhoeus (1.4.0)
ethon (>= 0.9.0)
tzinfo (1.2.5)
tzinfo (1.2.9)
thread_safe (~> 0.1)
unicode-display_width (1.6.0)
unf (0.1.4)
unf_ext
unf_ext (0.0.7.7)
unicode-display_width (1.7.0)
zeitwerk (2.4.2)
PLATFORMS
ruby
@@ -247,4 +263,4 @@ DEPENDENCIES
jekyll-sitemap
BUNDLED WITH
1.17.2
2.2.5

View File

@@ -1,6 +1,7 @@
---
title: kube-hunter
description: Kube-hunter hunts for security weaknesses in Kubernetes clusters
logo: https://raw.githubusercontent.com/aquasecurity/kube-hunter/master/kube-hunter.png
logo: https://raw.githubusercontent.com/aquasecurity/kube-hunter/main/kube-hunter.png
show_downloads: false
google_analytics: UA-63272154-1
theme: jekyll-theme-minimal
@@ -10,7 +11,7 @@ collections:
defaults:
-
scope:
path: "" # an empty string here means all files in the project
path: "" # an empty string here means all files in the project
values:
layout: "default"

View File

@@ -12,7 +12,10 @@ Microsoft Azure provides an internal HTTP endpoint that exposes information from
## Remediation
Consider using AAD Pod Identity. A Microsoft project that allows scoping the identity of workloads to Kubernetes Pods instead of VMs (instances).
Starting with the 2020.10.15 Azure VHD release, AKS restricts access from the pod CIDR to that internal HTTP endpoint.
[CVE-2021-27075](https://github.com/Azure/AKS/issues/2168)
## References

docs/_kb/KHV053.md (new file, 24 lines)
View File

@@ -0,0 +1,24 @@
---
vid: KHV053
title: AWS Metadata Exposure
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
AWS EC2 provides an internal HTTP endpoint that exposes information from the cloud platform to workloads running in an instance. The endpoint is accessible to every workload running in the instance. An attacker that is able to execute a pod in the cluster may be able to query the metadata service and discover additional information about the environment.
## Remediation
* Limit access to the instance metadata service. Consider using a local firewall such as `iptables` to disable access from some or all processes/users to the instance metadata service.
* Disable the metadata service (via instance metadata options or IAM), or at a minimum enforce the use of IMDSv2 on an instance to require token-based access to the service.
* Modify the HTTP PUT response hop limit on the instance to 1. This will only allow access to the service from the instance itself rather than from within a pod.
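For reference, the IMDSv2 flow the second bullet enforces starts with an HTTP PUT for a session token; the sketch below only constructs that request with the standard library (nothing is sent):

```python
import urllib.request

# Build (but do not send) the IMDSv2 session-token request. With IMDSv2
# enforced, metadata reads require the token returned by this PUT; with the
# hop limit set to 1, the PUT response cannot reach a pod on the node.
token_request = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
# A follow-up GET to e.g. /latest/meta-data/ must then carry the returned
# token in the "X-aws-ec2-metadata-token" header.
```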
## References
- [AWS Instance Metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html)
- [EC2 Instance Profiles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html)

View File

@@ -1,3 +1,4 @@
---
apiVersion: batch/v1
kind: Job
metadata:
@@ -6,9 +7,9 @@ spec:
template:
spec:
containers:
- name: kube-hunter
image: aquasec/kube-hunter
command: ["kube-hunter"]
args: ["--pod"]
- name: kube-hunter
image: aquasec/kube-hunter
command: ["kube-hunter"]
args: ["--pod"]
restartPolicy: Never
backoffLimit: 4

View File

@@ -25,6 +25,8 @@ config = Config(
quick=args.quick,
remote=args.remote,
statistics=args.statistics,
k8s_auto_discover_nodes=args.k8s_auto_discover_nodes,
kubeconfig=args.kubeconfig,
)
setup_logger(args.log, args.log_file)
set_config(config)
@@ -32,6 +34,7 @@ set_config(config)
# Running all other registered plugins before execution
pm.hook.load_plugin(args=args)
from kube_hunter.console.manager import start_console
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import HuntFinished, HuntStarted
from kube_hunter.modules.discovery.hosts import RunningAsPodEvent, HostScanEvent
@@ -88,13 +91,17 @@ hunt_started = False
def main():
global hunt_started
scan_options = [config.pod, config.cidr, config.remote, config.interface]
scan_options = [config.pod, config.cidr, config.remote, config.interface, config.k8s_auto_discover_nodes]
try:
if args.list:
if args.console:
start_console()
return
elif args.list:
list_hunters()
return
if not any(scan_options):
elif not any(scan_options):
if not interactive_set_config():
return

View File

@@ -36,6 +36,8 @@ class Config:
remote: Optional[str] = None
reporter: Optional[Any] = None
statistics: bool = False
k8s_auto_discover_nodes: bool = False
kubeconfig: Optional[str] = None
_config: Optional[Config] = None

View File

@@ -1,6 +1,6 @@
from argparse import ArgumentParser
from kube_hunter.plugins import hookimpl
from colorama import init, Fore, Style
@hookimpl
def parser_add_arguments(parser):
@@ -8,6 +8,15 @@ def parser_add_arguments(parser):
This is the default hook implementation for parse_add_argument
Contains initialization for all default arguments
"""
# Initializes colorama
init()
parser.add_argument(
"-c", "--console",
action="store_true",
help=f"Starts kube-hunter's {Fore.GREEN}Immersed Console{Style.RESET_ALL}"
)
parser.add_argument(
"--list",
action="store_true",
@@ -46,6 +55,26 @@ def parser_add_arguments(parser):
help="One or more remote ip/dns to hunt",
)
parser.add_argument(
"--k8s-auto-discover-nodes",
action="store_true",
help="Enables automatic detection of all nodes in a Kubernetes cluster "
"by querying the Kubernetes API server. "
"It supports both in-cluster config (when running as a pod), "
"and a specific kubectl config file (use --kubeconfig to set this). "
"By default, when this flag is set, it will use in-cluster config. "
"NOTE: this is automatically switched on in --pod mode."
)
parser.add_argument(
"--kubeconfig",
type=str,
metavar="KUBECONFIG",
default=None,
help="Specify the kubeconfig file to use for Kubernetes nodes auto discovery "
"(to be used in conjunction with the --k8s-auto-discover-nodes flag)."
)
parser.add_argument("--active", action="store_true", help="Enables active hunting")
parser.add_argument(

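As a side note, the two new flags behave like a standard argparse `store_true`/`str` pair. A minimal standalone sketch (the real parser is assembled through the plugin hook, so the wiring here is simplified):

```python
import argparse

# Simplified stand-in for the plugin-built parser, using the flags from the diff.
parser = argparse.ArgumentParser()
parser.add_argument("--k8s-auto-discover-nodes", action="store_true")
parser.add_argument("--kubeconfig", type=str, metavar="KUBECONFIG", default=None)

# argparse converts dashes to underscores for the attribute name.
args = parser.parse_args(["--k8s-auto-discover-nodes", "--kubeconfig", "~/.kube/config"])
```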

@@ -0,0 +1,4 @@
from . import general
from . import manager
from . import env
from . import auth


@@ -0,0 +1 @@
from .auth import AuthSubConsole


@@ -0,0 +1,62 @@
import argparse
from cmd2 import with_argparser
from kube_hunter.console.general import BaseKubeHunterCmd
class AuthSubConsole(BaseKubeHunterCmd):
"""
Manages the underlying AuthStore database.
Implements three methods to manage the db:
delete - removes an auth entry by index
create - creates a new entry in the db
select - selects the auth to use in the environment
"""
def __init__(self, env):
super(AuthSubConsole, self).__init__()
self.env = env
self.sub_console_name = "env/auth"
delete_parser = argparse.ArgumentParser()
delete_parser.add_argument("index", type=int, help="index of the auth entry for deletion")
@with_argparser(delete_parser)
def do_delete(self, opts):
    """Deletes an auth entry from the auth database by its index"""
    if opts.index is not None:
        if self.env.current_auth.get_auths_count() > opts.index:
            self.env.current_auth.delete_auth(opts.index)
        else:
            self.perror("Index too large")
create_parser = argparse.ArgumentParser()
create_parser.add_argument("jwt_token", help="A raw jwt_token of the new auth entry")
@with_argparser(create_parser)
def do_create(self, opts):
"""Creates a new entry in the auth database based on a given raw jwt token"""
if opts.jwt_token:
self.env.current_auth.new_auth(opts.jwt_token.strip())
show_parser = argparse.ArgumentParser()
show_parser.add_argument("-i", "--index", type=int, help="set index to print raw data for a specific auth entry")
@with_argparser(show_parser)
def do_show(self, opts):
"""Show current collected auths"""
if opts.index is not None:
if self.env.current_auth.get_auths_count() > opts.index:
self.poutput(f"Token:\n{self.env.current_auth.get_auth(opts.index).raw_token}")
else:
self.perror("Index too large")
else:
self.poutput(self.env.current_auth.get_table())
select_parser = argparse.ArgumentParser()
select_parser.add_argument("index", type=int, help="index of auth entry to set for environment")
@with_argparser(select_parser)
def do_select(self, opts):
"""Sets the auth entry for the environment"""
if opts.index is not None:
if self.env.current_auth.get_auths_count() > opts.index:
self.env.current_auth.set_select_auth(opts.index)
else:
self.perror("Index too large")


@@ -0,0 +1,76 @@
import json
import base64
from prettytable import PrettyTable, ALL
""" Auth models"""
class Auth:
def parse_token(self, token):
""" Extracting data from token file """
# adding maximum base64 padding to parse correctly
self.raw_token = token
token_json = base64.b64decode(f"{token.split('.')[1]}==")
token_data = json.loads(token_json)
self.iss = token_data.get("iss")
self.namespace = token_data.get("kubernetes.io/serviceaccount/namespace")
self.name = token_data.get("kubernetes.io/serviceaccount/secret.name")
self.service_account_name = token_data.get("kubernetes.io/serviceaccount/service-account.name")
self.uid = token_data.get("kubernetes.io/serviceaccount/service-account.uid")
self.sub = token_data.get("sub")
def __init__(self, token=None):
if token:
self.parse_token(token)
class AuthStore:
auths = []
selected_auth = None
def new_auth(self, token):
""" Initializes new Auth object and adds it to the auth db """
new_auth = Auth(token)
if not self.is_exists(new_auth):
self.auths.append(new_auth)
# if it's the only auth, selecting it
if not self.selected_auth:
self.selected_auth = 0
def get_auths_count(self):
return len(self.auths)
def delete_auth(self, index):
return self.auths.pop(index)
def is_exists(self, check_auth):
""" Checks whether an equivalent auth already exists in the store """
for auth in self.auths:
if auth.sub == check_auth.sub:
return True
return False
def get_current_auth(self):
return self.auths[self.selected_auth]
def get_auth(self, index):
return self.auths[index]
def set_select_auth(self, index):
self.selected_auth = index
def get_table(self):
auth_table = PrettyTable(["index", "Name", "Selected"], hrules=ALL)
auth_table.align = "l"
auth_table.padding_width = 1
auth_table.header_style = "upper"
# building auth token table, showing selected auths
for i, auth in enumerate(self.auths):
selected_mark = ""
if i == self.selected_auth:
selected_mark = "*"
auth_table.add_row([i, auth.sub, selected_mark])
return auth_table

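The padding trick in `parse_token` can be exercised on its own. A sketch (function name is mine; it computes exact padding and uses base64url, the alphabet service-account JWTs are encoded with, where `parse_token` appends a blanket `==` which suffices for typical tokens):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    # A JWT is three dot-separated base64url segments; the middle one is the
    # JSON payload. Senders strip '=' padding, so restore it before decoding.
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))
```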

@@ -0,0 +1,67 @@
from kube_hunter.console.auth import AuthSubConsole
from kube_hunter.console.general import BaseKubeHunterCmd
from kube_hunter.conf import Config, set_config
from kube_hunter.conf.logging import setup_logger
from kube_hunter.core.events import handler
from kube_hunter.modules.discovery.hosts import RunningAsPodEvent, HostScanEvent
from kube_hunter.modules.report import get_reporter, get_dispatcher
from kube_hunter.core.events.types import HuntFinished, HuntStarted
import time
from cmd2 import ansi
from progressbar import FormatLabel, RotatingMarker, UnknownLength, ProgressBar, Timer
class HuntSubConsole(BaseKubeHunterCmd):
"""HuntSubConsole
In charge of managing and running kube-hunter's hunting modules
"""
def __init__(self, env):
super(HuntSubConsole, self).__init__()
self.env = env
self.sub_console_name = "hunt"
@staticmethod
def progress_bar():
"""Displays animated progress bar
Integrates with handler object, to properly block until hunt finish
"""
# Logger
widgets = ['[', Timer(), ']', ': ', FormatLabel(''), ' ', RotatingMarker()]
bar = ProgressBar(max_value=UnknownLength, widgets=widgets)
while handler.unfinished_tasks > 0:
widgets[4] = FormatLabel(f'Tasks Left To Process: {handler.unfinished_tasks}')
bar.update(handler.unfinished_tasks)
time.sleep(0.1)
bar.finish()
def do_everything(self, arg):
"""Wraps running of kube-hunter's hunting
Uses the current environment to specify data to start_event when starting a scan
"""
# TODO: display output
current_auth = self.env.current_auth.get_current_auth()
start_event = None
if self.env.is_inside_pod:
self.pfeedback(ansi.style(f"Hunting Started (as {current_auth.sub})", fg="green"))
start_event = RunningAsPodEvent()
start_event.auth_token = current_auth.raw_token
# setting basic stuff for output methods
setup_logger("none", None)
config = Config()
config.dispatcher = get_dispatcher("stdout")
config.reporter = get_reporter("plain")
set_config(config)
# trigger hunting
handler.publish_event(start_event)
handler.publish_event(HuntStarted())
self.progress_bar()
self.pfeedback(ansi.style(f"Finished hunting. found {0} services and {0} vulnerabilities", fg="green"))
handler.join()
handler.publish_event(HuntFinished())
handler.free()

kube_hunter/console/env/__init__.py

@@ -0,0 +1,2 @@
from .env import EnvSubConsole
from .types import ImmersedEnvironment

kube_hunter/console/env/env.py

@@ -0,0 +1,17 @@
from kube_hunter.console.auth import AuthSubConsole
from kube_hunter.console.general import BaseKubeHunterCmd
class EnvSubConsole(BaseKubeHunterCmd):
"""EnvSubConsole
In charge of managing and viewing the entire current environment state
Includes: the auth database.
"""
def __init__(self, env):
super(EnvSubConsole, self).__init__()
self.env = env
self.sub_console_name = "env"
# self.prompt = self.env.get_prompt(sub_console="env")
def do_auth(self, arg):
AuthSubConsole(self.env).cmdloop()

kube_hunter/console/env/types.py

@@ -0,0 +1,55 @@
from colorama import init, Fore, Style
from kube_hunter.console.general import types as GeneralTypes
from kube_hunter.console.auth import types as AuthTypes
# initializes colorama
init()
class ImmersedEnvironment:
"""
ImmersedEnvironment keeps track of the current console run state.
"""
auths = []
pods = []
is_inside_cloud = False
is_inside_container = False
is_inside_pod = False
current_cloud = GeneralTypes.UnknownCloud()
current_pod = GeneralTypes.Pod()
current_container = GeneralTypes.Container()
current_auth = AuthTypes.AuthStore()
def get_prompt(self, sub_console=""):
"""
Parses the current env state into a short description of where we are right now
General format is `(cloud) -> (run_unit) kube-hunter $`
"""
arrow = "->"
prompt_prefix = f" kube-hunter{' [' + sub_console + ']' if sub_console else ''} $ "
# add colors to the prompt components
cloud = f"({Fore.BLUE}{self.current_cloud}{Style.RESET_ALL})"
pod = f"({Fore.MAGENTA}{self.current_pod}{Style.RESET_ALL})"
container = f"(container: {Fore.CYAN}{self.current_container}{Style.RESET_ALL})"
container_in_pod = f"({Fore.MAGENTA}{self.current_pod}/{{}}{Style.RESET_ALL})"
env_description = ""
if self.current_auth.get_auths_count():
auth = self.current_auth.get_current_auth()
env_description += f" {Fore.LIGHTRED_EX}[Impersonating {auth.sub}]{Style.RESET_ALL}\n"
env_description += cloud
if self.is_inside_pod:
if len(self.current_pod.containers):
env_description += f" {arrow} {container_in_pod.format(self.current_pod.containers[0])}"
else:
env_description += f" {arrow} {pod}"
elif self.is_inside_container:
env_description += f" {arrow} {container}"
return f"{env_description}{prompt_prefix}"

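Stripped of the colorama escape codes, the prompt assembly above reduces to simple string building. A sketch (helper name and plain-string inputs are mine):

```python
def build_prompt(cloud, run_unit=None, sub_console=""):
    # Plain-text sketch of the format documented above:
    #   (cloud) -> (run_unit) kube-hunter [sub_console] $
    description = f"({cloud})"
    if run_unit:
        description += f" -> ({run_unit})"
    suffix = f" [{sub_console}]" if sub_console else ""
    return f"{description} kube-hunter{suffix} $ "
```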

@@ -0,0 +1 @@
from .console import BaseKubeHunterCmd


@@ -0,0 +1,16 @@
import cmd2
class BaseKubeHunterCmd(cmd2.Cmd):
sub_console_name = ""
def postcmd(self, stop, line):
self.prompt = self.env.get_prompt(self.sub_console_name)
if stop:
return True
def do_exit(self, arg):
'Exits the shell'
return True
def emptyline(self):
pass


@@ -0,0 +1,38 @@
import socket
""" General Types """
class Container:
""" Basic model for Container objects """
name = ""
def __str__(self):
return self.name
class Pod:
""" Basic model for Pod objects """
ip_address = ""
name = ""
namespace = ""
containers = []
def __str__(self):
return f"{self.namespace}/{self.name}"
def incluster_update(self, pod_event):
"""
uses pod_event and other techniques to gather full data about the in-cluster pod environment
"""
self.namespace = pod_event.namespace
# the hostname will almost always be the pod's name
self.name = socket.gethostname()
class Cloud:
def __init__(self, name):
self.name = name
def __str__(self):
return self.name
class UnknownCloud(Cloud):
def __init__(self):
super(UnknownCloud, self).__init__("Unknown Cloud")


@@ -0,0 +1,59 @@
from kube_hunter.console.discover.discover import HuntSubConsole
from kube_hunter.console.general import BaseKubeHunterCmd
from kube_hunter.console.env import EnvSubConsole, ImmersedEnvironment
from kube_hunter.modules.discovery.hosts import RunningAsPodEvent
from colorama import (
Back,
Fore,
Style,
)
from cmd2 import ansi
class KubeHunterMainConsole(BaseKubeHunterCmd):
def __init__(self, env):
super(KubeHunterMainConsole, self).__init__()
kube_hunter_logo = r"""
_ __ __ __
/\ \ /\ \ /\ \ /\ \__
\ \ \/'\ __ _\ \ \____ __ \ \ \___ __ __ ___\ \ ,_\ __ _ __
\ \ , < /\ \/\ \ \ '__`\ /'__`\ ______\ \ _ `\/\ \/\ \/' _ `\ \ \/ /'__`/\`'__\
\ \ \\`\\ \ \_\ \ \ \L\ /\ __//\______\ \ \ \ \ \ \_\ /\ \/\ \ \ \_/\ __\ \ \/
\ \_\ \_\ \____/\ \_,__\ \____\/______/\ \_\ \_\ \____\ \_\ \_\ \__\ \____\ \_\
\/_/\/_/\/___/ \/___/ \/____/ \/_/\/_/\/___/ \/_/\/_/\/__/\/____/\/_/
"""
self.intro = f'{kube_hunter_logo}\n\nWelcome to kube-hunter Immersed Console. Type help or ? to list commands.\n'
self.env = env
def do_hunt(self, arg):
'hunt using specified environment'
HuntSubConsole(self.env).cmdloop()
def do_env(self, arg):
'Show your environment data collected so far'
EnvSubConsole(self.env).cmdloop()
def do_interactive(self, arg):
"""Start an interactive ipdb session"""
environment = self.env
self.poutput("\n\tStarted an interactive python session. use `environment` to manage the populated environment object")
import ipdb
ipdb.set_trace()
def do_whereami(self, arg):
"""Try to determine where you are based on local files and mounts"""
self.pfeedback("Trying to find out where you are...")
pod_event = RunningAsPodEvent()
if pod_event.auth_token:
self.pfeedback(ansi.style("Found running inside a kubernetes pod", fg="green"))
self.env.current_auth.new_auth(pod_event.auth_token)
self.pfeedback(ansi.style("Loaded a new auth entry: (hint: env/auth/show)", fg="green"))
self.env.current_pod.incluster_update(pod_event)
self.env.is_inside_pod = True
self.pfeedback("Updated environment with locally found data")
def start_console():
environment = ImmersedEnvironment()
a = KubeHunterMainConsole(environment)
a.cmdloop()


@@ -6,7 +6,7 @@ from threading import Thread
from kube_hunter.conf import get_config
from kube_hunter.core.types import ActiveHunter, HunterBase
from kube_hunter.core.events.types import Vulnerability, EventFilterBase
from kube_hunter.core.events.types import Vulnerability, EventFilterBase, MultipleEventsContainer
logger = logging.getLogger(__name__)
@@ -19,11 +19,33 @@ class EventQueue(Queue):
self.active_hunters = dict()
self.all_hunters = dict()
self.hooks = defaultdict(list)
self.filters = defaultdict(list)
self.running = True
self.workers = list()
# -- Regular Subscription --
# Structure: key: Event Class, value: tuple(Registered Hunter, Predicate Function)
self.hooks = defaultdict(list)
self.filters = defaultdict(list)
# --------------------------
# -- Multiple Subscription --
# Structure: key: Event Class, value: tuple(Registered Hunter, Predicate Function)
self.multi_hooks = defaultdict(list)
# When subscribing to multiple events, this gets populated with required event classes
# Structure: key: Hunter Class, value: set(RequiredEventClass1, RequiredEventClass2)
self.hook_dependencies = defaultdict(set)
# To keep track of fulfilled dependencies, we need a structure which saves historically instantiated
# events mapped to a registered hunter.
# We use a 2-dimensional dictionary in order to fulfill two demands:
# * correctly count published required events
# * save historical events fired, easily sorted by their type
#
# Structure: hook_fulfilled_deps[hunter_class] -> fulfilled_events_for_hunter[event_class] -> [EventObject, EventObject2]
self.hook_fulfilled_deps = defaultdict(lambda: defaultdict(list))
# ---------------------------
for _ in range(num_worker):
t = Thread(target=self.worker)
t.daemon = True
@@ -34,16 +56,66 @@ class EventQueue(Queue):
t.daemon = True
t.start()
# decorator wrapping for easy subscription
"""
######################################################
+ ----------------- Public Methods ----------------- +
######################################################
"""
def subscribe(self, event, hook=None, predicate=None):
"""
The Subscribe Decorator - For Regular Registration
Use this to register for one event only. Your hunter will execute each time this event is published
@param event - Event class to subscribe to
@param predicate - Optional: Function that will be called with the published event as a parameter before trigger.
If its return value is False, the Hunter will not run (default=None).
@param hook - Hunter class to register for (ignore when using as a decorator)
"""
def wrapper(hook):
self.subscribe_event(event, hook=hook, predicate=predicate)
return hook
return wrapper
# wrapper takes care of the subscribe once mechanism
def subscribe_many(self, events, hook=None, predicates=None):
"""
The Subscribe Many Decorator - For Multiple Registration,
When your attack needs several prerequisites to exist in the cluster, you need to register for multiple events.
Your hunter will execute once for every new combination of required events.
For example:
1. event A was published 3 times
2. event B was published once.
3. event B was published again
Your hunter will execute 2 times:
* (on step 2) with the newest version of A
* (on step 3) with the newest version of A and newest version of B
@param events - List of event classes to subscribe to
@param predicates - Optional: List of functions that will be called with the published event as a parameter before trigger.
If its return value is False, the Hunter will not run (default=None).
@param hook - Hunter class to register for (ignore when using as a decorator)
"""
def wrapper(hook):
self.subscribe_events(events, hook=hook, predicates=predicates)
return hook
return wrapper
def subscribe_once(self, event, hook=None, predicate=None):
"""
The Subscribe Once Decorator - For Single Trigger Registration,
Use this when you want your hunter to execute only once in your entire program run.
Wraps the subscribe_event method.
@param event - Event class to subscribe to
@param predicate - Optional: Function that will be called with the published event as a parameter before trigger.
If its return value is False, the Hunter will not run (default=None).
@param hook - Hunter class to register for (ignore when using as a decorator)
"""
def wrapper(hook):
# installing a __new__ magic method on the hunter
# which will remove the hunter from the list upon creation
@@ -58,29 +130,160 @@ class EventQueue(Queue):
return wrapper
# getting uninstantiated event object
def subscribe_event(self, event, hook=None, predicate=None):
def publish_event(self, event, caller=None):
"""
The Publish Event Method - For Publishing Events To Kube-Hunter's Queue
"""
# Document that the hunter published a vulnerability (if it's indeed a vulnerability)
# For statistics options
self._increase_vuln_count(event, caller)
# sets the event's parent to be it's publisher hunter.
self._set_event_chain(event, caller)
# applying filters on the event, before publishing it to subscribers.
# if filter returned None, not proceeding to publish
event = self.apply_filters(event)
if event:
# If event was rewritten, make sure it's linked again
self._set_event_chain(event, caller)
# Regular Hunter registrations - publish logic
# Here we iterate over all the registered-to events:
for hooked_event in self.hooks.keys():
# We check if the event we want to publish is an inherited class of the current registered-to iterated event
# Meaning - if this is a relevant event:
if hooked_event in event.__class__.__mro__:
# If so, we want to publish to all registered hunters.
for hook, predicate in self.hooks[hooked_event]:
if predicate and not predicate(event):
continue
self.put(hook(event))
logger.debug(f"Event {event.__class__} got published to hunter - {hook} with {event}")
# Multiple Hunter registrations - publish logic
# Here we iterate over all the registered-to events:
for hooked_event in self.multi_hooks.keys():
# We check if the event we want to publish is an inherited class of the current registered-to iterated event
# Meaning - if this is a relevant event:
if hooked_event in event.__class__.__mro__:
# now we iterate over the corresponding registered hunters.
for hook, predicate in self.multi_hooks[hooked_event]:
if predicate and not predicate(event):
continue
self._update_multi_hooks(hook, event)
if self._is_all_fulfilled_for_hunter(hook):
events_container = MultipleEventsContainer(self._get_latest_events_from_multi_hooks(hook))
self.put(hook(events_container))
logger.debug(
f"Multiple subscription requirements were met for hunter {hook}. events container was \
published with {self.hook_fulfilled_deps[hook].keys()}"
)
"""
######################################################
+ ---------------- Private Methods ----------------- +
+ ---------------- (Backend Logic) ----------------- +
######################################################
"""
def _get_latest_events_from_multi_hooks(self, hook):
"""
Iterates over fulfilled deps for the hunter, fetching the latest appended event of each class from history
"""
latest_events = list()
for event_class in self.hook_fulfilled_deps[hook].keys():
latest_events.append(self.hook_fulfilled_deps[hook][event_class][-1])
return latest_events
def _update_multi_hooks(self, hook, event):
"""
Updates published events in the multi hooks fulfilled store.
"""
self.hook_fulfilled_deps[hook][event.__class__].append(event)
def _is_all_fulfilled_for_hunter(self, hook):
"""
Returns True if all of the hunter's multi-hook dependencies are fulfilled, False otherwise
"""
# Check if the first dimension already contains all necessary event classes
return len(self.hook_fulfilled_deps[hook].keys()) == len(self.hook_dependencies[hook])
def _set_event_chain(self, event, caller):
"""
Sets the event's chain attributes.
Here we link the event with its publisher (Hunter),
so the next hunter that catches this event can access the previous one's attributes.
@param event: the event object to be chained
@param caller: the Hunter object that published this event.
"""
if caller:
event.previous = caller.event
event.hunter = caller.__class__
def _register_hunters(self, hook=None):
"""
This method is called when a Hunter registers itself to the handler.
This is done in order to track and correctly configure the current run of the program.
passive_hunters, active_hunters, all_hunters
"""
config = get_config()
if ActiveHunter in hook.__mro__:
if not config.active:
return
self.active_hunters[hook] = hook.__doc__
return False
else:
self.active_hunters[hook] = hook.__doc__
elif HunterBase in hook.__mro__:
self.passive_hunters[hook] = hook.__doc__
if HunterBase in hook.__mro__:
self.all_hunters[hook] = hook.__doc__
return True
def _register_filter(self, event, hook=None, predicate=None):
if hook not in self.filters[event]:
self.filters[event].append((hook, predicate))
logging.debug("{} filter subscribed to {}".format(hook, event))
def _register_hook(self, event, hook=None, predicate=None):
if hook not in self.hooks[event]:
self.hooks[event].append((hook, predicate))
logging.debug("{} subscribed to {}".format(hook, event))
def subscribe_event(self, event, hook=None, predicate=None):
if not self._register_hunters(hook):
return
# registering filters
if EventFilterBase in hook.__mro__:
if hook not in self.filters[event]:
self.filters[event].append((hook, predicate))
logger.debug(f"{hook} filter subscribed to {event}")
self._register_filter(event, hook, predicate)
# registering hunters
elif hook not in self.hooks[event]:
self.hooks[event].append((hook, predicate))
logger.debug(f"{hook} subscribed to {event}")
else:
self._register_hook(event, hook, predicate)
def subscribe_events(self, events, hook=None, predicates=None):
if not self._register_hunters(hook):
return False
if predicates is None:
predicates = [None] * len(events)
# registering filters.
if EventFilterBase in hook.__mro__:
for event, predicate in zip(events, predicates):
self._register_filter(event, hook, predicate)
# registering hunters.
else:
for event, predicate in zip(events, predicates):
self.multi_hooks[event].append((hook, predicate))
self.hook_dependencies[hook] = frozenset(events)
def apply_filters(self, event):
# if filters are subscribed, apply them on the event
@@ -97,36 +300,11 @@ class EventQueue(Queue):
return None
return event
# getting instantiated event object
def publish_event(self, event, caller=None):
def _increase_vuln_count(self, event, caller):
config = get_config()
# setting event chain
if caller:
event.previous = caller.event
event.hunter = caller.__class__
# applying filters on the event, before publishing it to subscribers.
# if filter returned None, not proceeding to publish
event = self.apply_filters(event)
if event:
# If event was rewritten, make sure it's linked to its parent ('previous') event
if caller:
event.previous = caller.event
event.hunter = caller.__class__
for hooked_event in self.hooks.keys():
if hooked_event in event.__class__.__mro__:
for hook, predicate in self.hooks[hooked_event]:
if predicate and not predicate(event):
continue
if config.statistics and caller:
if Vulnerability in event.__class__.__mro__:
caller.__class__.publishedVulnerabilities += 1
logger.debug(f"Event {event.__class__} got published with {event}")
self.put(hook(event))
if config.statistics and caller:
if Vulnerability in event.__class__.__mro__:
caller.__class__.publishedVulnerabilities += 1
# executes callbacks on dedicated thread as a daemon
def worker(self):

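The docstring scenario above (A published three times, then B twice, firing the hunter twice) comes down to the fulfilled-dependency bookkeeping. A minimal sketch of that counting logic, with hypothetical event classes and a string standing in for the hunter class:

```python
from collections import defaultdict

class EventA: ...
class EventB: ...

# hook_fulfilled_deps[hunter][event_class] -> historical events of that class,
# mirroring the 2-dimensional dictionary described in the comments above.
hook_fulfilled_deps = defaultdict(lambda: defaultdict(list))
hook_dependencies = {"MyHunter": frozenset({EventA, EventB})}

def publish(hunter, event):
    hook_fulfilled_deps[hunter][event.__class__].append(event)
    # The hunter fires whenever every required class has been seen at least once;
    # the newest instance of each class is what would be handed to it.
    return len(hook_fulfilled_deps[hunter]) == len(hook_dependencies[hunter])
```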

@@ -62,6 +62,20 @@ class Event:
return history
class MultipleEventsContainer(Event):
"""
This is the class of the object a hunter receives when it is registered to multiple events.
"""
def __init__(self, events):
self.events = events
def get_by_class(self, event_class):
for event in self.events:
if event.__class__ == event_class:
return event
class Service:
def __init__(self, name, path="", secure=True):
self.name = name
@@ -191,7 +205,7 @@ class ReportDispatched(Event):
class K8sVersionDisclosure(Vulnerability, Event):
"""The kubernetes version could be obtained from the {} endpoint """
"""The kubernetes version could be obtained from the {} endpoint"""
def __init__(self, version, from_endpoint, extra_info=""):
Vulnerability.__init__(


@@ -50,6 +50,12 @@ class Kubelet(KubernetesCluster):
name = "Kubelet"
class AWS(KubernetesCluster):
"""AWS Cluster"""
name = "AWS"
class Azure(KubernetesCluster):
"""Azure Cluster"""


@@ -8,9 +8,10 @@ from netaddr import IPNetwork, IPAddress, AddrFormatError
from netifaces import AF_INET, ifaddresses, interfaces, gateways
from kube_hunter.conf import get_config
from kube_hunter.modules.discovery.kubernetes_client import list_all_k8s_cluster_nodes
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, NewHostEvent, Vulnerability
from kube_hunter.core.types import Discovery, InformationDisclosure, Azure
from kube_hunter.core.types import Discovery, InformationDisclosure, AWS, Azure
logger = logging.getLogger(__name__)
@@ -40,6 +41,21 @@ class RunningAsPodEvent(Event):
pass
class AWSMetadataApi(Vulnerability, Event):
"""Access to the AWS Metadata API exposes information about the machines associated with the cluster"""
def __init__(self, cidr):
Vulnerability.__init__(
self,
AWS,
"AWS Metadata Exposure",
category=InformationDisclosure,
vid="KHV053",
)
self.cidr = cidr
self.evidence = f"cidr: {cidr}"
class AzureMetadataApi(Vulnerability, Event):
"""Access to the Azure Metadata API exposes information about the machines associated with the cluster"""
@@ -99,6 +115,9 @@ class FromPodHostDiscovery(Discovery):
def execute(self):
config = get_config()
# Attempt to read all hosts from the Kubernetes API
for host in list_all_k8s_cluster_nodes(config.kubeconfig):
self.publish_event(NewHostEvent(host=host))
# Scan any hosts that the user specified
if config.remote or config.cidr:
self.publish_event(HostScanEvent())
@@ -107,6 +126,10 @@ class FromPodHostDiscovery(Discovery):
cloud = None
if self.is_azure_pod():
subnets, cloud = self.azure_metadata_discovery()
elif self.is_aws_pod_v1():
subnets, cloud = self.aws_metadata_v1_discovery()
elif self.is_aws_pod_v2():
subnets, cloud = self.aws_metadata_v2_discovery()
else:
subnets = self.gateway_discovery()
@@ -122,6 +145,46 @@ class FromPodHostDiscovery(Discovery):
if should_scan_apiserver:
self.publish_event(NewHostEvent(host=IPAddress(self.event.kubeservicehost), cloud=cloud))
def is_aws_pod_v1(self):
config = get_config()
try:
# Instance Metadata Service v1
logger.debug("From pod attempting to access AWS Metadata v1 API")
if (
requests.get(
"http://169.254.169.254/latest/meta-data/",
timeout=config.network_timeout,
).status_code
== 200
):
return True
except requests.exceptions.ConnectionError:
logger.debug("Failed to connect AWS metadata server v1")
return False
def is_aws_pod_v2(self):
config = get_config()
try:
# Instance Metadata Service v2
logger.debug("From pod attempting to access AWS Metadata v2 API")
token = requests.put(
"http://169.254.169.254/latest/api/token",
headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
timeout=config.network_timeout,
).text
if (
requests.get(
"http://169.254.169.254/latest/meta-data/",
headers={"X-aws-ec2-metadata-token": token},
timeout=config.network_timeout,
).status_code
== 200
):
return True
except requests.exceptions.ConnectionError:
logger.debug("Failed to connect AWS metadata server v2")
return False
def is_azure_pod(self):
config = get_config()
try:
@@ -141,9 +204,60 @@ class FromPodHostDiscovery(Discovery):
# for pod scanning
def gateway_discovery(self):
""" Retrieving default gateway of pod, which is usually also a contact point with the host """
"""Retrieving default gateway of pod, which is usually also a contact point with the host"""
return [[gateways()["default"][AF_INET][0], "24"]]
# querying AWS's interface metadata api v1 | works only from a pod
def aws_metadata_v1_discovery(self):
config = get_config()
logger.debug("From pod attempting to access aws's metadata v1")
mac_address = requests.get(
"http://169.254.169.254/latest/meta-data/mac",
timeout=config.network_timeout,
).text
cidr = requests.get(
f"http://169.254.169.254/latest/meta-data/network/interfaces/macs/{mac_address}/subnet-ipv4-cidr-block",
timeout=config.network_timeout,
).text.split("/")
address, subnet = (cidr[0], cidr[1])
subnet = subnet if not config.quick else "24"
cidr = f"{address}/{subnet}"
logger.debug(f"From pod discovered subnet {cidr}")
self.publish_event(AWSMetadataApi(cidr=cidr))
return [(address, subnet)], "AWS"
# querying AWS's interface metadata api v2 | works only from a pod
def aws_metadata_v2_discovery(self):
config = get_config()
logger.debug("From pod attempting to access aws's metadata v2")
token = requests.put(
"http://169.254.169.254/latest/api/token",
headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
timeout=config.network_timeout,
).text
mac_address = requests.get(
"http://169.254.169.254/latest/meta-data/mac",
headers={"X-aws-ec2-metadata-token": token},
timeout=config.network_timeout,
).text
cidr = requests.get(
f"http://169.254.169.254/latest/meta-data/network/interfaces/macs/{mac_address}/subnet-ipv4-cidr-block",
headers={"X-aws-ec2-metadata-token": token},
timeout=config.network_timeout,
).text.split("/")
address, subnet = (cidr[0], cidr[1])
subnet = subnet if not config.quick else "24"
cidr = f"{address}/{subnet}"
logger.debug(f"From pod discovered subnet {cidr}")
self.publish_event(AWSMetadataApi(cidr=cidr))
return [(address, subnet)], "AWS"
# querying azure's interface metadata api | works only from a pod
def azure_metadata_discovery(self):
config = get_config()
@@ -188,6 +302,9 @@ class HostDiscovery(Discovery):
elif len(config.remote) > 0:
for host in config.remote:
self.publish_event(NewHostEvent(host=host))
elif config.k8s_auto_discover_nodes:
for host in list_all_k8s_cluster_nodes(config.kubeconfig):
self.publish_event(NewHostEvent(host=host))
# for normal scanning
def scan_interfaces(self):

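The subnet handling repeated in both AWS discovery methods above can be factored into one helper. A sketch (helper name is mine; in `--quick` mode the discovered mask is replaced with /24 to keep the scan range small):

```python
def normalize_subnet(cidr_text: str, quick: bool = False) -> str:
    # Split "address/mask" as the metadata API returns it, then optionally
    # override the mask with /24 for quick scans, as the code above does.
    address, subnet = cidr_text.split("/")
    subnet = "24" if quick else subnet
    return f"{address}/{subnet}"
```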

@@ -0,0 +1,27 @@
import logging
import kubernetes
def list_all_k8s_cluster_nodes(kube_config=None, client=None):
logger = logging.getLogger(__name__)
try:
if kube_config:
logger.info("Attempting to use kubeconfig file: %s", kube_config)
kubernetes.config.load_kube_config(config_file=kube_config)
else:
logger.info("Attempting to use in cluster Kubernetes config")
kubernetes.config.load_incluster_config()
except kubernetes.config.config_exception.ConfigException:
logger.exception("Failed to initiate Kubernetes client")
return
try:
if client is None:
client = kubernetes.client.CoreV1Api()
ret = client.list_node(watch=False)
logger.info("Listed %d nodes in the cluster", len(ret.items))
for item in ret.items:
for addr in item.status.addresses:
yield addr.address
except Exception:
logger.exception("Failed to list nodes from Kubernetes")

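Since `list_all_k8s_cluster_nodes` accepts an injectable client, the yield loop is easy to exercise without a cluster. A sketch with a hypothetical stub shaped like `kubernetes.client.CoreV1Api`'s `list_node` result:

```python
from types import SimpleNamespace

def iter_node_addresses(client):
    # Mirrors the yield loop above: every address of every node is emitted.
    for item in client.list_node(watch=False).items:
        for addr in item.status.addresses:
            yield addr.address

# Hypothetical stub: one node with an InternalIP and a Hostname address.
stub = SimpleNamespace(
    list_node=lambda watch: SimpleNamespace(
        items=[
            SimpleNamespace(
                status=SimpleNamespace(
                    addresses=[
                        SimpleNamespace(address="10.0.0.1"),
                        SimpleNamespace(address="node-1"),
                    ]
                )
            )
        ]
    )
)
```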

@@ -1,9 +1,10 @@
import os
import json
import logging
import requests
from kube_hunter.conf import get_config
from kube_hunter.modules.hunting.kubelet import ExposedRunHandler
from kube_hunter.modules.hunting.kubelet import ExposedPodsHandler, SecureKubeletPortHunter
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import Hunter, ActiveHunter, IdentityTheft, Azure
@@ -14,7 +15,7 @@ logger = logging.getLogger(__name__)
class AzureSpnExposure(Vulnerability, Event):
"""The SPN is exposed, potentially allowing an attacker to gain access to the Azure subscription"""
def __init__(self, container):
def __init__(self, container, evidence=""):
Vulnerability.__init__(
self,
Azure,
@@ -23,9 +24,10 @@ class AzureSpnExposure(Vulnerability, Event):
vid="KHV004",
)
self.container = container
self.evidence = evidence
@handler.subscribe(ExposedRunHandler, predicate=lambda x: x.cloud == "Azure")
@handler.subscribe(ExposedPodsHandler, predicate=lambda x: x.cloud_type == "Azure")
class AzureSpnHunter(Hunter):
"""AKS Hunting
Hunting Azure cluster deployments using specific known configurations
@@ -37,35 +39,33 @@ class AzureSpnHunter(Hunter):
# getting a container that has access to the azure.json file
def get_key_container(self):
config = get_config()
endpoint = f"{self.base_url}/pods"
logger.debug("Trying to find container with access to azure.json file")
try:
r = requests.get(endpoint, verify=False, timeout=config.network_timeout)
except requests.Timeout:
logger.debug("failed getting pod info")
else:
pods_data = r.json().get("items", [])
suspicious_volume_names = []
for pod_data in pods_data:
for volume in pod_data["spec"].get("volumes", []):
if volume.get("hostPath"):
path = volume["hostPath"]["path"]
if "/etc/kubernetes/azure.json".startswith(path):
suspicious_volume_names.append(volume["name"])
for container in pod_data["spec"]["containers"]:
for mount in container.get("volumeMounts", []):
if mount["name"] in suspicious_volume_names:
return {
"name": container["name"],
"pod": pod_data["metadata"]["name"],
"namespace": pod_data["metadata"]["namespace"],
}
# pods are saved in the previous event object
pods_data = self.event.pods
suspicious_volume_names = []
for pod_data in pods_data:
for volume in pod_data["spec"].get("volumes", []):
if volume.get("hostPath"):
path = volume["hostPath"]["path"]
if "/etc/kubernetes/azure.json".startswith(path):
suspicious_volume_names.append(volume["name"])
for container in pod_data["spec"]["containers"]:
for mount in container.get("volumeMounts", []):
if mount["name"] in suspicious_volume_names:
return {
"name": container["name"],
"pod": pod_data["metadata"]["name"],
"namespace": pod_data["metadata"]["namespace"],
"mount": mount,
}
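The hostPath check above (`"/etc/kubernetes/azure.json".startswith(path)`) treats a mount as suspicious when it is the azure.json file itself or any path prefix of it. As a standalone sketch (hypothetical helper name, same logic):

```python
AZURE_JSON = "/etc/kubernetes/azure.json"

def is_suspicious_host_path(host_path: str) -> bool:
    # Suspicious if the hostPath is azure.json itself or any prefix of its
    # path (e.g. "/", "/etc", "/etc/kubernetes"): mounting a parent
    # directory exposes the file as well.
    return AZURE_JSON.startswith(host_path)
```

Note this is a plain string-prefix test, so it would also match partial names like `/etc/kub`; the tests later in this diff only exercise whole-directory prefixes.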
def execute(self):
container = self.get_key_container()
if container:
self.publish_event(AzureSpnExposure(container=container))
evidence = f"pod: {container['pod']}, namespace: {container['namespace']}"
self.publish_event(AzureSpnExposure(container=container, evidence=evidence))
@handler.subscribe(AzureSpnExposure)
@@ -78,14 +78,42 @@ class ProveAzureSpnExposure(ActiveHunter):
self.event = event
self.base_url = f"https://{self.event.host}:{self.event.port}"
def test_run_capability(self):
"""
Uses SecureKubeletPortHunter to test the /run handler
TODO: when multiple event subscription is implemented, use this here to make sure /run is accessible
"""
debug_handlers = SecureKubeletPortHunter.DebugHandlers(path=self.base_url, session=self.event.session, pod=None)
return debug_handlers.test_run_container()
def run(self, command, container):
config = get_config()
run_url = "/".join(self.base_url, "run", container["namespace"], container["pod"], container["name"])
return requests.post(run_url, verify=False, params={"cmd": command}, timeout=config.network_timeout)
run_url = f"{self.base_url}/run/{container['namespace']}/{container['pod']}/{container['name']}"
return self.event.session.post(run_url, verify=False, params={"cmd": command}, timeout=config.network_timeout)
def get_full_path_to_azure_file(self):
"""
Returns a full path to /etc/kubernetes/azure.json
Taking into consideration that the mount may map to a different folder inside the container.
TODO: implement the edge case where the mount is to parent /etc folder.
"""
azure_file_path = self.event.container["mount"]["mountPath"]
# taking care of cases where a subPath is added to map the specific file
if not azure_file_path.endswith("azure.json"):
azure_file_path = os.path.join(azure_file_path, "azure.json")
return azure_file_path
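The subPath handling above can be isolated into a small sketch (hypothetical helper mirroring the logic of `get_full_path_to_azure_file`):

```python
import os

def resolve_azure_json_path(mount_path: str) -> str:
    # If the volumeMount used a subPath to map the file itself, the mount
    # path already ends with azure.json; otherwise the mount is a directory
    # and the file lives inside it.
    if mount_path.endswith("azure.json"):
        return mount_path
    return os.path.join(mount_path, "azure.json")
```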
def execute(self):
if not self.test_run_capability():
logger.debug("Not proving AzureSpnExposure because /run debug handler is disabled")
return
try:
subscription = self.run("cat /etc/kubernetes/azure.json", container=self.event.container).json()
azure_file_path = self.get_full_path_to_azure_file()
logger.debug(f"trying to access the azure.json at the resolved path: {azure_file_path}")
subscription = self.run(f"cat {azure_file_path}", container=self.event.container).json()
except requests.Timeout:
logger.debug("failed to run command in container", exc_info=True)
except json.decoder.JSONDecodeError:

View File

@@ -75,28 +75,28 @@ class ApiInfoDisclosure(Vulnerability, Event):
class ListPodsAndNamespaces(ApiInfoDisclosure):
""" Accessing pods might give an attacker valuable information"""
"""Accessing pods might give an attacker valuable information"""
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing pods")
class ListNamespaces(ApiInfoDisclosure):
""" Accessing namespaces might give an attacker valuable information """
"""Accessing namespaces might give an attacker valuable information"""
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing namespaces")
class ListRoles(ApiInfoDisclosure):
""" Accessing roles might give an attacker valuable information """
"""Accessing roles might give an attacker valuable information"""
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing roles")
class ListClusterRoles(ApiInfoDisclosure):
""" Accessing cluster roles might give an attacker valuable information """
"""Accessing cluster roles might give an attacker valuable information"""
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing cluster roles")
@@ -118,7 +118,7 @@ class CreateANamespace(Vulnerability, Event):
class DeleteANamespace(Vulnerability, Event):
""" Deleting a namespace might give an attacker the option to affect application behavior """
"""Deleting a namespace might give an attacker the option to affect application behavior"""
def __init__(self, evidence):
Vulnerability.__init__(
@@ -186,7 +186,7 @@ class PatchAClusterRole(Vulnerability, Event):
class DeleteARole(Vulnerability, Event):
""" Deleting a role might allow an attacker to affect access to resources in the namespace"""
"""Deleting a role might allow an attacker to affect access to resources in the namespace"""
def __init__(self, evidence):
Vulnerability.__init__(
@@ -199,7 +199,7 @@ class DeleteARole(Vulnerability, Event):
class DeleteAClusterRole(Vulnerability, Event):
""" Deleting a cluster role might allow an attacker to affect access to resources in the cluster"""
"""Deleting a cluster role might allow an attacker to affect access to resources in the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(
@@ -212,7 +212,7 @@ class DeleteAClusterRole(Vulnerability, Event):
class CreateAPod(Vulnerability, Event):
""" Creating a new pod allows an attacker to run custom code"""
"""Creating a new pod allows an attacker to run custom code"""
def __init__(self, evidence):
Vulnerability.__init__(
@@ -225,7 +225,7 @@ class CreateAPod(Vulnerability, Event):
class CreateAPrivilegedPod(Vulnerability, Event):
""" Creating a new PRIVILEGED pod would gain an attacker FULL CONTROL over the cluster"""
"""Creating a new PRIVILEGED pod would gain an attacker FULL CONTROL over the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(
@@ -238,7 +238,7 @@ class CreateAPrivilegedPod(Vulnerability, Event):
class PatchAPod(Vulnerability, Event):
""" Patching a pod allows an attacker to compromise and control it """
"""Patching a pod allows an attacker to compromise and control it"""
def __init__(self, evidence):
Vulnerability.__init__(
@@ -251,7 +251,7 @@ class PatchAPod(Vulnerability, Event):
class DeleteAPod(Vulnerability, Event):
""" Deleting a pod allows an attacker to disturb applications on the cluster """
"""Deleting a pod allows an attacker to disturb applications on the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(

View File

@@ -41,7 +41,7 @@ class ArpSpoofHunter(ActiveHunter):
return ans[ARP].hwsrc if ans else None
def detect_l3_on_host(self, arp_responses):
""" returns True for an existence of an L3 network plugin """
"""returns True for an existence of an L3 network plugin"""
logger.debug("Attempting to detect L3 network plugin using ARP")
unique_macs = list({response[ARP].hwsrc for _, response in arp_responses})

View File

@@ -303,7 +303,7 @@ class SecureKubeletPortHunter(Hunter):
"""
class DebugHandlers:
""" all methods will return the handler name if successful """
"""all methods will return the handler name if successful"""
def __init__(self, path, pod, session=None):
self.path = path + ("/" if not path.endswith("/") else "")

View File

@@ -10,7 +10,7 @@ logger = logging.getLogger(__name__)
class ServiceAccountTokenAccess(Vulnerability, Event):
""" Accessing the pod service account token gives an attacker the option to use the server API """
"""Accessing the pod service account token gives an attacker the option to use the server API"""
def __init__(self, evidence):
Vulnerability.__init__(
@@ -24,7 +24,7 @@ class ServiceAccountTokenAccess(Vulnerability, Event):
class SecretsAccess(Vulnerability, Event):
""" Accessing the pod's secrets within a compromised pod might disclose valuable data to a potential attacker"""
"""Accessing the pod's secrets within a compromised pod might disclose valuable data to a potential attacker"""
def __init__(self, evidence):
Vulnerability.__init__(

View File

@@ -41,6 +41,7 @@ install_requires =
packaging
dataclasses
pluggy
kubernetes==12.0.1
setup_requires =
setuptools>=30.3.0
setuptools_scm

View File

@@ -6,6 +6,8 @@ from kube_hunter.core.events.types import Event, Service
from kube_hunter.core.events import handler
counter = 0
first_run = True
set_config(Config())
@@ -19,6 +21,16 @@ class RegularEvent(Service, Event):
Service.__init__(self, "Test Service")
class AnotherRegularEvent(Service, Event):
def __init__(self):
Service.__init__(self, "Test Service (another)")
class DifferentRegularEvent(Service, Event):
def __init__(self):
Service.__init__(self, "Test Service (different)")
@handler.subscribe_once(OnceOnlyEvent)
class OnceHunter(Hunter):
def __init__(self, event):
@@ -33,8 +45,36 @@ class RegularHunter(Hunter):
counter += 1
@handler.subscribe_many([DifferentRegularEvent, AnotherRegularEvent])
class SmartHunter(Hunter):
def __init__(self, events):
global counter, first_run
counter += 1
# we add an attribute on the second scan.
# here we test that we get the latest event
different_event = events.get_by_class(DifferentRegularEvent)
if first_run:
first_run = False
assert not different_event.new_value
else:
assert different_event.new_value
@handler.subscribe_many([DifferentRegularEvent, AnotherRegularEvent])
class SmartHunter2(Hunter):
def __init__(self, events):
global counter
counter += 1
# check if we can access the events
assert events.get_by_class(DifferentRegularEvent).__class__ == DifferentRegularEvent
assert events.get_by_class(AnotherRegularEvent).__class__ == AnotherRegularEvent
def test_subscribe_mechanism():
global counter
counter = 0
# first test normal subscribe and publish works
handler.publish_event(RegularEvent())
@@ -43,13 +83,47 @@ def test_subscribe_mechanism():
time.sleep(0.02)
assert counter == 3
def test_subscribe_once_mechanism():
global counter
counter = 0
# testing the subscribe_once mechanism
handler.publish_event(OnceOnlyEvent())
handler.publish_event(OnceOnlyEvent())
# testing the multiple subscription mechanism
handler.publish_event(OnceOnlyEvent())
time.sleep(0.02)
# should have been triggered once
assert counter == 1
counter = 0
handler.publish_event(OnceOnlyEvent())
handler.publish_event(OnceOnlyEvent())
handler.publish_event(OnceOnlyEvent())
time.sleep(0.02)
assert counter == 0
def test_subscribe_many_mechanism():
global counter
counter = 0
# testing the multiple subscription mechanism
handler.publish_event(DifferentRegularEvent())
handler.publish_event(DifferentRegularEvent())
handler.publish_event(DifferentRegularEvent())
handler.publish_event(DifferentRegularEvent())
handler.publish_event(DifferentRegularEvent())
handler.publish_event(AnotherRegularEvent())
time.sleep(0.02)
# We expect SmartHunter and SmartHunter2 to each be executed once, hence the counter should be 2
assert counter == 2
counter = 0
# Test using most recent event
newer_version_event = DifferentRegularEvent()
newer_version_event.new_value = True
handler.publish_event(newer_version_event)
assert counter == 2
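The `subscribe_many` semantics these tests rely on — fire once all subscribed event classes have been seen, always handing back the most recent instance of each — can be sketched with a minimal standalone dispatcher (a toy model, not kube-hunter's actual handler):

```python
class MiniDispatcher:
    """Toy event bus illustrating subscribe_many semantics."""

    def __init__(self):
        # each subscription: {"classes": tuple, "callback": fn, "latest": dict}
        self._subs = []

    def subscribe_many(self, event_classes, callback):
        self._subs.append(
            {"classes": tuple(event_classes), "callback": callback, "latest": {}}
        )

    def publish(self, event):
        for sub in self._subs:
            if type(event) in sub["classes"]:
                # remember only the newest instance of each class
                sub["latest"][type(event)] = event
                if len(sub["latest"]) == len(sub["classes"]):
                    sub["callback"](dict(sub["latest"]))
```

Because `latest` is never cleared, a later publish of just one of the classes re-fires the callback with the stored instances of the others — the behavior the `counter == 2` assertion after `newer_version_event` exercises above.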

View File

@@ -1,4 +1,12 @@
# flake8: noqa: E402
from kube_hunter.modules.discovery.hosts import (
FromPodHostDiscovery,
RunningAsPodEvent,
HostScanEvent,
HostDiscoveryHelpers,
)
from kube_hunter.core.types import Hunter
from kube_hunter.core.events import handler
import json
import requests_mock
import pytest
@@ -9,19 +17,10 @@ from kube_hunter.conf import Config, get_config, set_config
set_config(Config())
from kube_hunter.core.events import handler
from kube_hunter.core.types import Hunter
from kube_hunter.modules.discovery.hosts import (
FromPodHostDiscovery,
RunningAsPodEvent,
HostScanEvent,
HostDiscoveryHelpers,
)
class TestFromPodHostDiscovery:
@staticmethod
def _make_response(*subnets: List[tuple]) -> str:
def _make_azure_response(*subnets: List[tuple]) -> str:
return json.dumps(
{
"network": {
@@ -32,6 +31,10 @@ class TestFromPodHostDiscovery:
}
)
@staticmethod
def _make_aws_response(*data: List[str]) -> str:
return "\n".join(data)
def test_is_azure_pod_request_fail(self):
f = FromPodHostDiscovery(RunningAsPodEvent())
@@ -47,12 +50,125 @@ class TestFromPodHostDiscovery:
with requests_mock.Mocker() as m:
m.get(
"http://169.254.169.254/metadata/instance?api-version=2017-08-01",
text=TestFromPodHostDiscovery._make_response(("3.4.5.6", "255.255.255.252")),
text=TestFromPodHostDiscovery._make_azure_response(("3.4.5.6", "255.255.255.252")),
)
result = f.is_azure_pod()
assert result
def test_is_aws_pod_v1_request_fail(self):
f = FromPodHostDiscovery(RunningAsPodEvent())
with requests_mock.Mocker() as m:
m.get("http://169.254.169.254/latest/meta-data/", status_code=404)
result = f.is_aws_pod_v1()
assert not result
def test_is_aws_pod_v1_success(self):
f = FromPodHostDiscovery(RunningAsPodEvent())
with requests_mock.Mocker() as m:
m.get(
"http://169.254.169.254/latest/meta-data/",
text=TestFromPodHostDiscovery._make_aws_response(
"\n".join(
(
"ami-id",
"ami-launch-index",
"ami-manifest-path",
"block-device-mapping/",
"events/",
"hostname",
"iam/",
"instance-action",
"instance-id",
"instance-type",
"local-hostname",
"local-ipv4",
"mac",
"metrics/",
"network/",
"placement/",
"profile",
"public-hostname",
"public-ipv4",
"public-keys/",
"reservation-id",
"security-groups",
"services/",
)
),
),
)
result = f.is_aws_pod_v1()
assert result
def test_is_aws_pod_v2_request_fail(self):
f = FromPodHostDiscovery(RunningAsPodEvent())
with requests_mock.Mocker() as m:
m.put(
"http://169.254.169.254/latest/api/token/",
headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
status_code=404,
)
m.get(
"http://169.254.169.254/latest/meta-data/",
headers={"X-aws-ec2-metadata-token": "token"},
status_code=404,
)
result = f.is_aws_pod_v2()
assert not result
def test_is_aws_pod_v2_success(self):
f = FromPodHostDiscovery(RunningAsPodEvent())
with requests_mock.Mocker() as m:
m.put(
"http://169.254.169.254/latest/api/token/",
headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
text=TestFromPodHostDiscovery._make_aws_response("token"),
)
m.get(
"http://169.254.169.254/latest/meta-data/",
headers={"X-aws-ec2-metadata-token": "token"},
text=TestFromPodHostDiscovery._make_aws_response(
"\n".join(
(
"ami-id",
"ami-launch-index",
"ami-manifest-path",
"block-device-mapping/",
"events/",
"hostname",
"iam/",
"instance-action",
"instance-id",
"instance-type",
"local-hostname",
"local-ipv4",
"mac",
"metrics/",
"network/",
"placement/",
"profile",
"public-hostname",
"public-ipv4",
"public-keys/",
"reservation-id",
"security-groups",
"services/",
)
),
),
)
result = f.is_aws_pod_v2()
assert result
def test_execute_scan_cidr(self):
set_config(Config(cidr="1.2.3.4/30"))
f = FromPodHostDiscovery(RunningAsPodEvent())

View File

@@ -0,0 +1,31 @@
from kube_hunter.conf import Config, set_config
set_config(Config())
from kube_hunter.modules.discovery.kubernetes_client import list_all_k8s_cluster_nodes
from unittest.mock import MagicMock, patch
def test_client_yields_ips():
client = MagicMock()
response = MagicMock()
client.list_node.return_value = response
response.items = [MagicMock(), MagicMock()]
response.items[0].status.addresses = [MagicMock(), MagicMock()]
response.items[0].status.addresses[0].address = "127.0.0.1"
response.items[0].status.addresses[1].address = "127.0.0.2"
response.items[1].status.addresses = [MagicMock()]
response.items[1].status.addresses[0].address = "127.0.0.3"
with patch('kubernetes.config.load_incluster_config') as m:
output = list(list_all_k8s_cluster_nodes(client=client))
m.assert_called_once()
assert output == ["127.0.0.1", "127.0.0.2", "127.0.0.3"]
def test_client_uses_kubeconfig():
with patch('kubernetes.config.load_kube_config') as m:
list(list_all_k8s_cluster_nodes(kube_config="/location", client=MagicMock()))
m.assert_called_once_with(config_file="/location")

View File

@@ -3,54 +3,47 @@ import requests_mock
from kube_hunter.conf import Config, set_config
import json
set_config(Config())
from kube_hunter.modules.hunting.kubelet import ExposedRunHandler
from kube_hunter.modules.hunting.kubelet import ExposedPodsHandler
from kube_hunter.modules.hunting.aks import AzureSpnHunter
def test_AzureSpnHunter():
e = ExposedRunHandler()
e.host = "mockKubernetes"
e.port = 443
e.protocol = "https"
e = ExposedPodsHandler(pods=[])
pod_template = '{{"items":[ {{"apiVersion":"v1","kind":"Pod","metadata":{{"name":"etc","namespace":"default"}},"spec":{{"containers":[{{"command":["sleep","99999"],"image":"ubuntu","name":"test","volumeMounts":[{{"mountPath":"/mp","name":"v"}}]}}],"volumes":[{{"hostPath":{{"path":"{}"}},"name":"v"}}]}}}} ]}}'
bad_paths = ["/", "/etc", "/etc/", "/etc/kubernetes", "/etc/kubernetes/azure.json"]
good_paths = ["/yo", "/etc/yo", "/etc/kubernetes/yo.json"]
for p in bad_paths:
with requests_mock.Mocker() as m:
m.get("https://mockKubernetes:443/pods", text=pod_template.format(p))
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c
e.pods = json.loads(pod_template.format(p))["items"]
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c
for p in good_paths:
with requests_mock.Mocker() as m:
m.get("https://mockKubernetes:443/pods", text=pod_template.format(p))
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None
with requests_mock.Mocker() as m:
pod_no_volume_mounts = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test"}],"volumes":[{"hostPath":{"path":"/whatever"},"name":"v"}]}} ]}'
m.get("https://mockKubernetes:443/pods", text=pod_no_volume_mounts)
e.pods = json.loads(pod_template.format(p))["items"]
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None
with requests_mock.Mocker() as m:
pod_no_volumes = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test"}]}} ]}'
m.get("https://mockKubernetes:443/pods", text=pod_no_volumes)
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None
pod_no_volume_mounts = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test"}],"volumes":[{"hostPath":{"path":"/whatever"},"name":"v"}]}} ]}'
e.pods = json.loads(pod_no_volume_mounts)["items"]
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None
with requests_mock.Mocker() as m:
pod_other_volume = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test","volumeMounts":[{"mountPath":"/mp","name":"v"}]}],"volumes":[{"emptyDir":{},"name":"v"}]}} ]}'
m.get("https://mockKubernetes:443/pods", text=pod_other_volume)
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None
pod_no_volumes = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test"}]}} ]}'
e.pods = json.loads(pod_no_volumes)["items"]
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None
pod_other_volume = '{"items":[ {"apiVersion":"v1","kind":"Pod","metadata":{"name":"etc","namespace":"default"},"spec":{"containers":[{"command":["sleep","99999"],"image":"ubuntu","name":"test","volumeMounts":[{"mountPath":"/mp","name":"v"}]}],"volumes":[{"emptyDir":{},"name":"v"}]}} ]}'
e.pods = json.loads(pod_other_volume)["items"]
h = AzureSpnHunter(e)
c = h.get_key_container()
assert c is None