Compare commits


33 Commits

Author SHA1 Message Date
danielsagi
17513d2c6f Merge branch 'master' into refactor_host_discovery 2019-12-01 00:45:07 +02:00
Oleg Butuzov
fd4d79a853 adding codecoverage (#198) 2019-11-30 08:45:33 +00:00
Liz Rice
3326171c7a docs: clarify that job.yaml is an example (#279) 2019-11-27 18:42:35 +02:00
Liz Rice
4c82b68f48 Merges #225 (#278)
* Fix typos

* Fix review comments
2019-11-26 21:11:33 +02:00
danielsagi
7c2ec7c03c Merge branch 'master' into refactor_host_discovery 2019-11-15 19:55:51 +02:00
Daniel Sagi
ea5e17116d removed unecessary host checking in get_cloud 2019-09-10 16:10:27 +03:00
danielsagi
79a7882cba Merge branch 'master' into refactor_host_discovery 2019-09-08 12:49:22 +03:00
Daniel Sagi
2672124169 changed minor check for truthness of config.remote 2019-09-08 12:44:55 +03:00
Daniel Sagi
023a5d6640 changed pod_subnet_discovery to return tuple 2019-09-08 12:43:52 +03:00
Daniel Sagi
f5e76f7c1e removed unecessary loop in hosts discoevry 2019-09-08 12:42:54 +03:00
Daniel Sagi
710aa63dc2 added unpacking in aks discovery 2019-09-08 12:28:23 +03:00
Daniel Sagi
cc7f708c7e changed predicate from wrong azure cloud type 2019-08-19 17:30:25 +03:00
Daniel Sagi
0da9c97031 added tests 2019-08-19 16:39:37 +03:00
Daniel Sagi
06f73244a5 changed canhazip to https 2019-08-19 16:12:04 +03:00
Daniel Sagi
cb3c1dd3b7 fixed merge, new apiserver duplicate evasion 2019-08-19 16:11:01 +03:00
Daniel Sagi
c40547e387 Merge branch 'refactor_host_discovery' of ssh://github.com/aquasecurity/kube-hunter into refactor_host_discovery 2019-08-19 15:58:21 +03:00
Daniel Sagi
676b23e68a moved hostscan event to top 2019-08-19 15:58:09 +03:00
Daniel Sagi
6a7ba489f5 removed localhost exclusion 2019-08-19 15:49:16 +03:00
Daniel Sagi
014b472c0c simplfied generate_subnet 2019-08-04 14:50:37 +03:00
Daniel Sagi
4f01667a6b changed classmethod to staticmethod on get_cloud 2019-07-03 18:50:46 +03:00
Daniel Sagi
82982183fd changed comment for default clud discovery 2019-07-03 18:44:59 +03:00
Liz Rice
127029f46c Merge branch 'master' into refactor_host_discovery 2019-07-03 09:25:05 +01:00
Daniel Sagi
9dae39eb54 changed azure checks on tests 2019-07-01 21:01:57 +03:00
Daniel Sagi
ceeb70c5f4 added test for canhazip.com 2019-07-01 20:17:07 +03:00
Daniel Sagi
903c0689f0 moved aks metadata discovery to own module, from hosts 2019-07-01 20:16:22 +03:00
Daniel Sagi
5f39ee0963 changed aks hunting to use the new CloudTypes Enum 2019-07-01 20:15:44 +03:00
Daniel Sagi
2c8e8467d2 refactored most, added RunningPodOnCloud event, triggered on cloud detection, also added support for the new CloudTypes 2019-07-01 20:14:45 +03:00
Daniel Sagi
34be06eaa4 added CloudTypes Enum, contains the cloud typs 2019-07-01 20:13:23 +03:00
Daniel Sagi
f6782b9ffc Merge branch 'refactor_host_discovery' of ssh://github.com/aquasecurity/kube-hunter into refactor_host_discovery 2019-07-01 12:44:46 +03:00
Daniel Sagi
6b2a382ada changed is_azure_pod to is_azure_api on tests 2019-07-01 12:44:33 +03:00
danielsagi
391faffe49 Merge branch 'master' into refactor_host_discovery 2019-07-01 12:38:17 +03:00
Daniel Sagi
b3155fcdb0 fixed wrong kwarg in get_cloud 2019-06-30 23:07:13 +03:00
Daniel Sagi
5db3f057a8 replaced traceroute discovery, and refactored the code a bit 2019-06-30 21:57:13 +03:00
22 changed files with 258 additions and 145 deletions

.gitignore vendored

@@ -4,6 +4,7 @@
*aqua*
venv/
.vscode
.coverage
.idea
# Directory Cache Files


@@ -14,8 +14,11 @@ before_script:
- flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
- flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- pip install pytest coverage pytest-cov
script:
- python runtest.py
after_success:
- bash <(curl -s https://codecov.io/bash)
notifications:
on_success: change
on_failure: change # `always` will be the setting once code changes slow down


@@ -1,10 +1,12 @@
![kube-hunter](https://github.com/aquasecurity/kube-hunter/blob/master/kube-hunter.png)
[![Build Status](https://travis-ci.org/aquasecurity/kube-hunter.svg?branch=master)](https://travis-ci.org/aquasecurity/kube-hunter)
[![codecov](https://codecov.io/gh/aquasecurity/kube-hunter/branch/master/graph/badge.svg)](https://codecov.io/gh/aquasecurity/kube-hunter)
[![License](https://img.shields.io/github/license/aquasecurity/kube-hunter)](https://github.com/aquasecurity/kube-hunter/blob/master/LICENSE)
[![Docker image](https://images.microbadger.com/badges/image/aquasec/kube-hunter.svg)](https://microbadger.com/images/aquasec/kube-hunter "Get your own image badge on microbadger.com")
kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility for security issues in Kubernetes environments. **You should NOT run kube-hunter on a Kubernetes cluster that you don't own!**
**Run kube-hunter**: kube-hunter is available as a container (aquasec/kube-hunter), and we also offer a web site at [kube-hunter.aquasec.com](https://kube-hunter.aquasec.com) where you can register online to receive a token allowing you to see and share the results online. You can also run the Python code yourself as described below.
@@ -148,7 +150,8 @@ By default, kube-hunter runs in interactive mode. You can also specify the scann
### Pod
This option lets you discover what running a malicious container can do/discover on your cluster. This gives a perspective on what an attacker could do if they were able to compromise a pod, perhaps through a software vulnerability. This may reveal significantly more vulnerabilities.
The `job.yaml` file defines a Job that will run kube-hunter in a pod, using default Kubernetes pod access settings.
The example `job.yaml` file defines a Job that will run kube-hunter in a pod, using default Kubernetes pod access settings. (You may wish to modify this definition, for example to run as a non-root user, or to run in a different namespace.)
* Run the job with `kubectl create -f ./job.yaml`
* Find the pod name with `kubectl describe job kube-hunter`
* View the test results with `kubectl logs <pod name>`


@@ -8,7 +8,7 @@ categories: [Information Disclosure]
## Issue description
The kubelet is is leaking container logs via the `/containerLogs` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
The kubelet is leaking container logs via the `/containerLogs` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation


@@ -8,7 +8,7 @@ categories: [Information Disclosure]
## Issue description
The kubelet is is leaking information about runnig pods via the `/runningpods` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
The kubelet is leaking information about running pods via the `/runningpods` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation


@@ -8,7 +8,7 @@ categories: [Remote Code Execution]
## Issue description
An attacker could run arbitrary commands on a container via the the kubelet's `/exec` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
An attacker could run arbitrary commands on a container via the kubelet's `/exec` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation


@@ -8,7 +8,7 @@ categories: [Remote Code Execution]
## Issue description
An attacker could run arbitrary commands on a container via the the kubelet's `/run` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
An attacker could run arbitrary commands on a container via the kubelet's `/run` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation


@@ -8,7 +8,7 @@ categories: [Remote Code Execution]
## Issue description
An attacker could read and write data from a pod via the the kubelet's `/portForward` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
An attacker could read and write data from a pod via the kubelet's `/portForward` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation


@@ -8,7 +8,7 @@ categories: [Remote Code Execution]
## Issue description
An attacker could attach to a running container via a websocket on the the the kubelet's `/attach` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
An attacker could attach to a running container via a websocket on the kubelet's `/attach` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation


@@ -8,7 +8,7 @@ categories: [Information Disclosure]
## Issue description
The kubelet is is leaking it's health information, which may contain sensitive information, via the `/healthz` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
The kubelet is leaking it's health information, which may contain sensitive information, via the `/healthz` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation


@@ -8,7 +8,7 @@ categories: [Information Disclosure]
## Issue description
The kubelet is is leaking system logs via the `/logs` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
The kubelet is leaking system logs via the `/logs` endpoint. This endpoint is exposed as part of the kubelet's debug handlers.
## Remediation


@@ -8,7 +8,7 @@ categories: [Information Disclosure]
## Issue description
An open kubectl proxy was detected. `kubectl proxy` is a convenient tool to connent from a local machine into an application running in Kubernetes or to the Kubernetes API. This is common practice to browse for example the Kubernetes dashboard. Leaving an open proxy can be exploited by an attacker to gain access into your entire cluster.
An open kubectl proxy was detected. `kubectl proxy` is a convenient tool to connect from a local machine into an application running in Kubernetes or to the Kubernetes API. This is common practice to browse for example the Kubernetes dashboard. Leaving an open proxy can be exploited by an attacker to gain access into your entire cluster.
## Remediation


@@ -1,5 +1,5 @@
import logging
# Supress logging from scapy
# Suppress logging from scapy
logging.getLogger("scapy.runtime").setLevel(logging.CRITICAL)
logging.getLogger("scapy.loading").setLevel(logging.CRITICAL)
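Raising a noisy third-party logger to CRITICAL this way means only CRITICAL records get through; a minimal check of that behavior:

```python
import logging

# same suppression pattern as the hunk above
logging.getLogger("scapy.runtime").setLevel(logging.CRITICAL)

noisy = logging.getLogger("scapy.runtime")
print(noisy.isEnabledFor(logging.WARNING))   # False
print(noisy.isEnabledFor(logging.CRITICAL))  # True
```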

setup.cfg Normal file

@@ -0,0 +1,33 @@
[aliases]
test=pytest
# PyTest
[tool:pytest]
minversion = 2.9.1
norecursedirs = .venv .vscode
addopts = --cov=src
testpaths = tests
console_output_style = progress
python_classes = Test*
python_files = test_*.py
python_functions = test_*
filterwarnings = ignore::DeprecationWarning
# Coverage
[coverage:report]
# show missing lines numbers
show_missing = True
# Regexes for lines to exclude from consideration
exclude_lines =
# Have to re-enable the standard pragma
pragma: no cover
# Don't complain about missing debug-only code:
def __repr__
if self\.debug
# Don't complain if tests don't hit defensive
# assertion code:
raise AssertionError
raise NotImplementedError
# Don't complain if non-runnable code isn't run:
if 0:
if __name__ == .__main__.:
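The `exclude_lines` regexes above drop matching blocks from the coverage report; a tiny illustration (class and names are made up, not from kube-hunter) of code these patterns would exclude:

```python
class Probe:
    """Hypothetical example class used only to demonstrate the patterns."""
    def __init__(self, host):
        self.host = host

    def __repr__(self):  # matched by the "def __repr__" exclude pattern
        return "Probe({!r})".format(self.host)

if __name__ == "__main__":  # matched by the __main__ exclude pattern
    print(Probe("10.0.0.1"))
```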


@@ -25,7 +25,7 @@ When you write your module, you can decide on which Event to subscribe to, meani
-----------------------
### Hunter Types
There are three hunter types which you can implement: a `Hunter`, `ActiveHunter` and `Discovery`. Hunters just probe the state of a cluster, whereas ActiveHunter modules can attempt operations that could change the state of the cluster. Discovery is Hunter for discovery purposes only.
There are three hunter types which you can implement: a `Hunter`, `ActiveHunter` and `Discovery`. Hunters just probe the state of a cluster, whereas ActiveHunter modules can attempt operations that could change the state of the cluster. Discovery is Hunter for discovery purposes only.
##### Hunter
Example:
~~~python


@@ -27,7 +27,7 @@ class Event(object):
# Event's logical location to be used mainly for reports.
# If event don't implement it check previous event
# This is because events are composed (previous -> previous ...)
# and not inheritted
# and not inherited
def location(self):
location = None
if self.previous:
@@ -77,7 +77,7 @@ class Vulnerability(object):
UnauthenticatedAccess: "low"
})
# TODO: make vid mandatry once migration is done
# TODO: make vid mandatory once migration is done
def __init__(self, component, name, category=None, vid=None):
self.vid = vid
self.component = component
@@ -154,7 +154,7 @@ class ReportDispatched(Event):
pass
""" Core Vulnerabilites """
""" Core Vulnerabilities """
class K8sVersionDisclosure(Vulnerability, Event):
"""The kubernetes version could be obtained from the {} endpoint """
def __init__(self, version, from_endpoint, extra_info=""):


@@ -1,3 +1,5 @@
from enum import Enum
class HunterBase(object):
publishedVulnerabilities = 0
@@ -32,6 +34,19 @@ class Discovery(HunterBase):
pass
""" Clouds Enum """
class CloudTypes(Enum):
"""Values are as defined in azurespeed"""
AKS = "Azure"
EKS = "AWS"
ACK = "AliCloud"
NO_CLOUD = "No Cloud"
@classmethod
def get_enum(cls, value):
return {item.value: item for item in cls}.get(value, cls.NO_CLOUD)
"""Kubernetes Components"""
class KubernetesCluster():
"""Kubernetes Cluster"""
@@ -78,4 +93,5 @@ class PrivilegeEscalation(KubernetesCluster):
class DenialOfService(object):
name = "Denial of Service"
from .events import handler # import is in the bottom to break import loops
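The new `CloudTypes.get_enum` is a reverse lookup from the string azurespeed returns back to an enum member, defaulting to `NO_CLOUD`; a self-contained sketch of that pattern, taken from the hunk above:

```python
from enum import Enum

class CloudTypes(Enum):
    """Values are as defined in azurespeed"""
    AKS = "Azure"
    EKS = "AWS"
    ACK = "AliCloud"
    NO_CLOUD = "No Cloud"

    @classmethod
    def get_enum(cls, value):
        # map each member's value back to the member,
        # falling back to NO_CLOUD for unknown (or empty) strings
        return {item.value: item for item in cls}.get(value, cls.NO_CLOUD)

print(CloudTypes.get_enum("Azure"))  # CloudTypes.AKS
print(CloudTypes.get_enum(""))       # CloudTypes.NO_CLOUD
```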


@@ -0,0 +1,59 @@
import os
import json
import logging
import sys
import requests
from netaddr import IPNetwork
from __main__ import config
from ...core.events import handler
from ...core.events.types import Event, NewHostEvent, Vulnerability
from ...core.types import Discovery, InformationDisclosure, Azure, CloudTypes
from .hosts import RunningPodOnCloud, HostDiscoveryUtils
class AzureMetadataApi(Vulnerability, Event):
"""Access to the Azure Metadata API exposes information about the machines associated with the cluster"""
def __init__(self, cidr):
Vulnerability.__init__(self, Azure, "Azure Metadata Exposure", category=InformationDisclosure)
self.cidr = cidr
self.evidence = "cidr: {}".format(cidr)
@handler.subscribe(RunningPodOnCloud, predicate=lambda x: x.cloud == CloudTypes.AKS)
class AzureHostDiscovery(Discovery):
"""Azure Host Discovery
Discovers AKS specific nodes when running as a pod in Azure
"""
def __init__(self, event):
self.event = event
def is_azure_api(self):
try:
logging.debug("From pod attempting to access Azure Metadata API")
if requests.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", headers={"Metadata":"true"}, timeout=5).status_code == 200:
return True
except requests.exceptions.ConnectionError:
return False
# querying azure's interface metadata api | works only from a pod
def azure_metadata_subnets_discovery(self):
logging.debug("From pod attempting to access azure's metadata")
machine_metadata = json.loads(requests.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", headers={"Metadata":"true"}).text)
subnets = list()
for interface in machine_metadata["network"]["interface"]:
address, subnet = interface["ipv4"]["subnet"][0]["address"], interface["ipv4"]["subnet"][0]["prefix"]
logging.debug("From pod discovered subnet {0}/{1}".format(address, subnet if not config.quick else "24"))
subnets.append([address,subnet if not config.quick else "24"])
self.publish_event(AzureMetadataApi(cidr="{}/{}".format(address, subnet)))
return subnets
def execute(self):
if self.is_azure_api():
for address, cidr in self.azure_metadata_subnets_discovery():
logging.debug("Azure subnet scanning {0}/{1}".format(address, cidr))
for ip in HostDiscoveryUtils.generate_subnet(ip=address, sn=cidr):
self.publish_event(NewHostEvent(host=ip, cloud=CloudTypes.AKS))
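`azure_metadata_subnets_discovery` walks the instance-metadata JSON to collect (address, prefix) pairs; an offline sketch against the same response shape the tests below mock (the sample address and prefix are illustrative, not real values):

```python
import json

# Trimmed shape of the http://169.254.169.254/metadata/instance response;
# address/prefix values here are made up for illustration.
raw = ('{"network": {"interface": [{"ipv4": {"subnet": '
       '[{"address": "3.4.5.6", "prefix": "255.255.255.252"}]}}]}}')
machine_metadata = json.loads(raw)

subnets = []
for interface in machine_metadata["network"]["interface"]:
    subnet = interface["ipv4"]["subnet"][0]
    subnets.append((subnet["address"], subnet["prefix"]))

print(subnets)  # [('3.4.5.6', '255.255.255.252')]
```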


@@ -4,17 +4,23 @@ import logging
import socket
import sys
import time
from enum import Enum
import requests
from netaddr import IPNetwork, IPAddress
from __main__ import config
from netifaces import AF_INET, ifaddresses, interfaces
from netifaces import AF_INET, ifaddresses, interfaces, gateways
from ...core.events import handler
from ...core.events.types import Event, NewHostEvent, Vulnerability
from ...core.types import Discovery, InformationDisclosure, Azure
from ...core.types import Discovery, InformationDisclosure, Azure, CloudTypes
class RunningPodOnCloud(Event):
def __init__(self, cloud):
self.cloud = cloud
class HostScanEvent(Event):
pass
class RunningAsPodEvent(Event):
def __init__(self):
@@ -29,7 +35,6 @@ class RunningAsPodEvent(Event):
location = "Local to Pod"
if 'HOSTNAME' in os.environ:
location += "(" + os.environ['HOSTNAME'] + ")"
return location
def get_service_account_file(self, file):
@@ -39,43 +44,55 @@ class RunningAsPodEvent(Event):
except IOError:
pass
class AzureMetadataApi(Vulnerability, Event):
"""Access to the Azure Metadata API exposes information about the machines associated with the cluster"""
def __init__(self, cidr):
Vulnerability.__init__(self, Azure, "Azure Metadata Exposure", category=InformationDisclosure, vid="KHV003")
self.cidr = cidr
self.evidence = "cidr: {}".format(cidr)
class HostScanEvent(Event):
def __init__(self, pod=False, active=False, predefined_hosts=list()):
self.active = active # flag to specify whether to get actual data from vulnerabilities
self.predefined_hosts = predefined_hosts
class HostDiscoveryHelpers:
class HostDiscoveryUtils:
""" Static class containing util functions for host discovery processes """
@staticmethod
def get_cloud(host):
""" Returns cloud for a given ip address, defaults to NO_CLOUD """
cloud = ""
try:
logging.debug("Checking whether the cluster is deployed on azure's cloud")
logging.debug("Checking if {} is deployed on a cloud".format(host))
# azurespeed.com provides its API via HTTP only; the service can be queried over
# HTTPS, but doesn't present a proper certificate. Since no encryption is worse than
# any encryption, we go with the verify=False option for the time being. At least
# this prevents leaking internal IP addresses to passive eavesdropping.
# TODO: find a more secure service to detect cloud IPs
metadata = requests.get("https://www.azurespeed.com/api/region?ipOrUrl={ip}".format(ip=host), verify=False).text
if "cloud" in metadata:
cloud = json.loads(metadata)["cloud"]
except requests.ConnectionError as e:
logging.info("- unable to check cloud: {0}".format(e))
return
if "cloud" in metadata:
return json.loads(metadata)["cloud"]
# generator, generating a subnet by given a cidr
return CloudTypes.get_enum(cloud)
@staticmethod
def get_default_gateway():
return gateways()['default'][AF_INET][0]
@staticmethod
def get_external_ip():
external_ip = None
try:
logging.debug("HostDiscovery hunter attempting to get external IP address")
external_ip = requests.get("https://canhazip.com", verify=False).text # getting external ip, to determine if cloud cluster
except requests.ConnectionError as e:
logging.debug("unable to determine external IP address: {0}".format(e))
return external_ip
# generator, generating ip addresses from a given cidr
@staticmethod
def generate_subnet(ip, sn="24"):
logging.debug("HostDiscoveryHelpers.generate_subnet {0}/{1}".format(ip, sn))
subnet = IPNetwork('{ip}/{sn}'.format(ip=ip, sn=sn))
for ip in IPNetwork(subnet):
logging.debug("HostDiscoveryHelpers.generate_subnet yielding {0}".format(ip))
yield ip
logging.debug("HostDiscoveryUtils.generate_subnet {0}/{1}".format(ip, sn))
return IPNetwork('{ip}/{sn}'.format(ip=ip, sn=sn))
# generate ip addresses from all internal network interfaces
@staticmethod
def generate_interfaces_subnet(sn='24'):
for ifaceName in interfaces():
for ip in [i['addr'] for i in ifaddresses(ifaceName).setdefault(AF_INET, [])]:
for ip in HostDiscoveryUtils.generate_subnet(ip, sn):
yield ip
@handler.subscribe(RunningAsPodEvent)
@@ -87,59 +104,36 @@ class FromPodHostDiscovery(Discovery):
self.event = event
def execute(self):
# Scan any hosts that the user specified
# If the user has specified specific remotes, scan only them
if config.remote or config.cidr:
self.publish_event(HostScanEvent())
else:
# Discover cluster subnets, we'll scan all these hosts
if self.is_azure_pod():
subnets, cloud = self.azure_metadata_discovery()
else:
subnets, cloud = self.traceroute_discovery()
# figuring out the cloud from the external ip, default to CloudTypes.NO_CLOUD
external_ip = HostDiscoveryUtils.get_external_ip()
cloud = HostDiscoveryUtils.get_cloud(external_ip)
should_scan_apiserver = False
# specific cloud discoveries should subscribe to RunningPodOnCloud
if cloud != CloudTypes.NO_CLOUD:
self.publish_event(RunningPodOnCloud(cloud=cloud))
# normal pod discovery
pod_subnet = self.pod_subnet_discovery()
logging.debug("From pod scanning subnet {0}/{1}".format(pod_subnet[0], pod_subnet[1]))
for ip in HostDiscoveryUtils.generate_subnet(ip=pod_subnet[0], sn=pod_subnet[1]):
self.publish_event(NewHostEvent(host=ip, cloud=cloud))
# manually publishing the Api server host if outside the subnet
if self.event.kubeservicehost:
should_scan_apiserver = True
for subnet in subnets:
if self.event.kubeservicehost and self.event.kubeservicehost in IPNetwork("{}/{}".format(subnet[0], subnet[1])):
should_scan_apiserver = False
logging.debug("From pod scanning subnet {0}/{1}".format(subnet[0], subnet[1]))
for ip in HostDiscoveryHelpers.generate_subnet(ip=subnet[0], sn=subnet[1]):
self.publish_event(NewHostEvent(host=ip, cloud=cloud))
if should_scan_apiserver:
self.publish_event(NewHostEvent(host=IPAddress(self.event.kubeservicehost), cloud=cloud))
if self.event.kubeservicehost not in IPNetwork("{}/{}".format(pod_subnet[0], pod_subnet[1])):
self.publish_event(NewHostEvent(host=IPAddress(self.event.kubeservicehost), cloud=cloud))
def is_azure_pod(self):
try:
logging.debug("From pod attempting to access Azure Metadata API")
if requests.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", headers={"Metadata":"true"}, timeout=5).status_code == 200:
return True
except requests.exceptions.ConnectionError:
return False
def pod_subnet_discovery(self):
# normal option when running as a pod is to scan its own subnet
# The gateway connects us to the host, and we can discover the
# kubelet from there, other ip's are pods that are running
# next to us,
return HostDiscoveryUtils.get_default_gateway(), "24"
# for pod scanning
def traceroute_discovery(self):
external_ip = requests.get("http://canhazip.com").text # getting external ip, to determine if cloud cluster
from scapy.all import ICMP, IP, Ether, srp1
node_internal_ip = srp1(Ether() / IP(dst="google.com" , ttl=1) / ICMP(), verbose=0)[IP].src
return [ [node_internal_ip,"24"], ], external_ip
# querying azure's interface metadata api | works only from a pod
def azure_metadata_discovery(self):
logging.debug("From pod attempting to access azure's metadata")
machine_metadata = json.loads(requests.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", headers={"Metadata":"true"}).text)
address, subnet = "", ""
subnets = list()
for interface in machine_metadata["network"]["interface"]:
address, subnet = interface["ipv4"]["subnet"][0]["address"], interface["ipv4"]["subnet"][0]["prefix"]
logging.debug("From pod discovered subnet {0}/{1}".format(address, subnet if not config.quick else "24"))
subnets.append([address,subnet if not config.quick else "24"])
self.publish_event(AzureMetadataApi(cidr="{}/{}".format(address, subnet)))
return subnets, "Azure"
@handler.subscribe(HostScanEvent)
class HostDiscovery(Discovery):
@@ -150,43 +144,25 @@ class HostDiscovery(Discovery):
self.event = event
def execute(self):
# handling multiple scan options
if config.cidr:
try:
ip, sn = config.cidr.split('/')
except ValueError as e:
logging.error("unable to parse cidr: {0}".format(e))
return
cloud = HostDiscoveryHelpers.get_cloud(ip)
for ip in HostDiscoveryHelpers.generate_subnet(ip, sn=sn):
cloud = HostDiscoveryUtils.get_cloud(ip)
for ip in HostDiscoveryUtils.generate_subnet(ip, sn=sn):
self.publish_event(NewHostEvent(host=ip, cloud=cloud))
elif config.interface:
if config.interface:
self.scan_interfaces()
elif len(config.remote) > 0:
if config.remote:
for host in config.remote:
self.publish_event(NewHostEvent(host=host, cloud=HostDiscoveryHelpers.get_cloud(host)))
self.publish_event(NewHostEvent(host=host, cloud=HostDiscoveryUtils.get_cloud(host)))
# for normal scanning
def scan_interfaces(self):
try:
logging.debug("HostDiscovery hunter attempting to get external IP address")
external_ip = requests.get("http://canhazip.com").text # getting external ip, to determine if cloud cluster
except requests.ConnectionError as e:
logging.debug("unable to determine local IP address: {0}".format(e))
logging.info("~ default to 127.0.0.1")
external_ip = "127.0.0.1"
cloud = HostDiscoveryHelpers.get_cloud(external_ip)
for ip in self.generate_interfaces_subnet():
external_ip = HostDiscoveryUtils.get_external_ip()
cloud = HostDiscoveryUtils.get_cloud(host=external_ip)
for ip in HostDiscoveryUtils.generate_interfaces_subnet():
handler.publish_event(NewHostEvent(host=ip, cloud=cloud))
# generate all subnets from all internal network interfaces
def generate_interfaces_subnet(self, sn='24'):
for ifaceName in interfaces():
for ip in [i['addr'] for i in ifaddresses(ifaceName).setdefault(AF_INET, [])]:
if not self.event.localhost and InterfaceTypes.LOCALHOST.value in ip.__str__():
continue
for ip in HostDiscoveryHelpers.generate_subnet(ip, sn):
yield ip
# for comparing prefixes
class InterfaceTypes(Enum):
LOCALHOST = "127"
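The refactor changes `generate_subnet` from a generator that yields each address into a function returning an `IPNetwork`, which callers iterate directly. A sketch of the same behavior using the stdlib `ipaddress` module instead of the `netaddr` library the PR uses:

```python
import ipaddress

def generate_subnet(ip, sn="24"):
    # like the refactored helper: return an iterable network object
    # instead of yielding every address ourselves
    return ipaddress.ip_network("{}/{}".format(ip, sn), strict=False)

# iterating the network yields every address in the CIDR range
hosts = [str(ip) for ip in generate_subnet("10.0.0.1", "30")]
print(hosts)  # ['10.0.0.0', '10.0.0.1', '10.0.0.2', '10.0.0.3']
```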


@@ -7,7 +7,7 @@ from .kubelet import ExposedRunHandler
from ...core.events import handler
from ...core.events.types import Event, Vulnerability
from ...core.types import Hunter, ActiveHunter, IdentityTheft, Azure
from ...core.types import Hunter, ActiveHunter, IdentityTheft, Azure, CloudTypes
class AzureSpnExposure(Vulnerability, Event):
@@ -16,7 +16,7 @@ class AzureSpnExposure(Vulnerability, Event):
Vulnerability.__init__(self, Azure, "Azure SPN Exposure", category=IdentityTheft, vid="KHV004")
self.container = container
@handler.subscribe(ExposedRunHandler, predicate=lambda x: x.cloud=="Azure")
@handler.subscribe(ExposedRunHandler, predicate=lambda x: x.cloud==CloudTypes.AKS)
class AzureSpnHunter(Hunter):
"""AKS Hunting
Hunting Azure cluster deployments using specific known configurations


@@ -26,7 +26,7 @@ class ArpSpoofHunter(ActiveHunter):
return ans[ARP].hwsrc if ans else None
def detect_l3_on_host(self, arp_responses):
""" returns True for an existance of an L3 network plugin """
""" returns True for an existence of an L3 network plugin """
logging.debug("Attempting to detect L3 network plugin using ARP")
unique_macs = list(set(response[ARP].hwsrc for _, response in arp_responses))


@@ -2,41 +2,65 @@ import requests_mock
import time
from queue import Empty
from src.modules.discovery.hosts import FromPodHostDiscovery, RunningAsPodEvent, HostScanEvent, AzureMetadataApi
from src.modules.discovery.aks import AzureHostDiscovery, AzureMetadataApi
from src.modules.discovery.hosts import HostScanEvent, RunningPodOnCloud, FromPodHostDiscovery
from src.core.events.types import Event, NewHostEvent
from src.core.events import handler
from src.core.types import CloudTypes
from __main__ import config
def test_FromPodHostDiscovery():
# global variables for cloud discovery check
aws_triggered = False
azure_triggered = False
def test_AzureHostDiscovery():
config.remote = None
config.cidr = None
config.pod = True
with requests_mock.Mocker() as m:
e = RunningAsPodEvent()
config.azure = False
config.remote = None
config.cidr = None
m.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", status_code=404)
f = FromPodHostDiscovery(e)
assert not f.is_azure_pod()
# TODO For now we don't test the traceroute discovery version
# f.execute()
f = FromPodHostDiscovery(HostScanEvent())
m.get("https://canhazip.com", text="1.2.3.4")
m.get("https://www.azurespeed.com/api/region?ipOrUrl=1.2.3.4", text="""{
"cloud": "Azure",
"regionId": null,
"region": null,
"location": null,
"ipAddress": "1.2.3.4"}""")
# Test that we generate NewHostEvent for the addresses reported by the Azure Metadata API
config.azure = True
m.get("http://169.254.169.254/metadata/instance?api-version=2017-08-01", \
text='{"network":{"interface":[{"ipv4":{"subnet":[{"address": "3.4.5.6", "prefix": "255.255.255.252"}]}}]}}')
assert f.is_azure_pod()
f.execute()
time.sleep(0.1)
assert azure_triggered
# Test that we don't trigger a HostScanEvent unless either config.remote or config.cidr are configured
config.remote = "1.2.3.4"
def test_AWSPodDiscovery():
with requests_mock.Mocker() as m:
e = HostScanEvent()
config.remote = None
config.cidr = None
config.pod = True
f = FromPodHostDiscovery(e)
m.get("https://canhazip.com", text="1.2.3.4")
m.get("https://www.azurespeed.com/api/region?ipOrUrl=1.2.3.4", text="""{
"cloud": "AWS",
"regionId": null,
"region": null,
"location": null,
"ipAddress": "1.2.3.4"}""")
f.execute()
time.sleep(0.1)
assert aws_triggered
config.azure = False
config.remote = None
config.cidr = "1.2.3.4/24"
f.execute()
@handler.subscribe(RunningPodOnCloud, predicate = lambda x: x.cloud == CloudTypes.EKS)
class testAWSCloud(object):
def __init__(self, event):
global aws_triggered
aws_triggered = True
# In this set of tests we should only trigger HostScanEvent when remote or cidr are set
@handler.subscribe(HostScanEvent)
@@ -44,16 +68,14 @@ class testHostDiscovery(object):
def __init__(self, event):
assert config.remote is not None or config.cidr is not None
assert config.remote == "1.2.3.4" or config.cidr == "1.2.3.4/24"
# In this set of tests we should only get as far as finding a host if it's Azure
# because we're not running the code that would normally be triggered by a HostScanEvent
@handler.subscribe(NewHostEvent)
class testHostDiscoveryEvent(object):
class testNewHostEvent(object):
def __init__(self, event):
assert config.azure
assert str(event.host).startswith("3.4.5.")
assert config.remote is None
assert config.cidr is None
if event.cloud == CloudTypes.AKS:
global azure_triggered
azure_triggered = True
assert not str(event.host).startswith("3.4.5")
# Test that we only report this event for Azure hosts
@handler.subscribe(AzureMetadataApi)