Compare commits

231 Commits
v0.7.2 ... main

Author SHA1 Message Date
Andy Pitcher
f1807bb192 feat: add CIS-1.12 support (#2035)
- Update master to 1.2.29 and purge CBC ciphers: https://workbench.cisecurity.org/tickets/24968
- Remove TLS_RSA_WITH_AES_256_GCM_SHA384 & TLS_RSA_WITH_AES_128_GCM_SHA256 (CBC support) for node 4.2.12: https://workbench.cisecurity.org/tickets/24968
- Remove node check 4.2.15: https://workbench.cisecurity.org/tickets/24915
- Remove policy 5.2.9 "Minimize the admission of containers with added capabilities" (Manual): https://workbench.cisecurity.org/benchmarks/21709/tickets/25337
- Update "Minimize the admission of containers with capabilities assigned" policies to remove PodSecurityPolicy (PSP) references

Signed-off-by: Andy Pitcher <andy.pitcher@suse.com>
2026-02-12 11:34:08 +06:00
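
For context, kube-bench expresses each recommendation as an entry in a controls YAML file, so the cipher purge above amounts to trimming the allowed list in the node 4.2.12 check. Below is an illustrative sketch only, assuming kube-bench's usual test_items/valid_elements form; it is not the literal cis-1.12 node.yaml text, and the id, wording, and allowed cipher list are examples that simply omit the removed TLS_RSA_* suites.

```yaml
# Illustrative sketch, not the shipped cis-1.12 entry.
- id: 4.2.12
  text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
  audit: "/bin/ps -fC $kubeletbin"
  audit_config: "/bin/cat $kubeletconf"
  tests:
    test_items:
      - flag: "--tls-cipher-suites"
        path: '{range .tlsCipherSuites[:]}{}{","}{end}'
        compare:
          op: valid_elements
          # TLS_RSA_WITH_AES_128_GCM_SHA256 and TLS_RSA_WITH_AES_256_GCM_SHA384
          # are intentionally absent from the allowed set.
          value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
  remediation: |
    Set tlsCipherSuites in the kubelet config file (or --tls-cipher-suites on
    the command line) to a subset of the values listed above.
  scored: false
```
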
dependabot[bot]
c1bee59a02 build(deps): bump k8s.io/client-go from 0.34.2 to 0.35.0 (#2022)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.34.2 to 0.35.0.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.34.2...v0.35.0)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-version: 0.35.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-11 16:30:49 +06:00
dependabot[bot]
8e7777928f build(deps): bump k8s.io/apimachinery from 0.34.3 to 0.35.0 (#2023)
Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.34.3 to 0.35.0.
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.34.3...v0.35.0)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.35.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-11 16:16:30 +06:00
dependabot[bot]
43a9034b2f build(deps): bump github.com/aws/aws-sdk-go-v2/config (#2029)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.32.6 to 1.32.7.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.32.6...v1.32.7)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.32.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-11 13:34:56 +06:00
dependabot[bot]
5848e5df6c build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#2030)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.67.2 to 1.67.3.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/ecs/v1.67.2...service/ecs/v1.67.3)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.67.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-11 13:17:45 +06:00
dependabot[bot]
651df2d4ce build(deps): bump golang from 1.25.6 to 1.25.7 (#2038)
Bumps golang from 1.25.6 to 1.25.7.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.25.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-10 12:44:18 +06:00
dependabot[bot]
1aa87bbebb build(deps): bump actions/setup-go from 6.1.0 to 6.2.0 (#2033)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 6.1.0 to 6.2.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](4dc6199c7b...7a3fe6cf4c)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: 6.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-06 19:09:11 +06:00
dependabot[bot]
06159c6edb build(deps): bump alpine from 3.23.2 to 3.23.3 (#2037)
Bumps alpine from 3.23.2 to 3.23.3.

---
updated-dependencies:
- dependency-name: alpine
  dependency-version: 3.23.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-03 13:31:36 +06:00
dependabot[bot]
22f5a1c559 build(deps): bump golang from 1.25.5 to 1.25.6 (#2034)
Bumps golang from 1.25.5 to 1.25.6.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.25.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-30 13:26:36 +06:00
dependabot[bot]
0ec876e1bc build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.41.0 to 1.41.1 (#2031)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.41.0 to 1.41.1.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.41.0...v1.41.1)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-version: 1.41.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-13 11:55:32 +06:00
dependabot[bot]
9e8c6b2c7d build(deps): bump actions/setup-go from 6.0.0 to 6.1.0 (#2027)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 6.0.0 to 6.1.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](4469467582...4dc6199c7b)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: 6.1.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-01-12 11:37:28 +06:00
LaibaBareera
581b68d985 Add CIS Benchmark for EKS-1.8 (#2020)
* Add CIS Benchmark for EKS-1.8

* fix linter error

* fix the mentioned issue

---------

Co-authored-by: afdesk <work@afdesk.com>
2025-12-29 17:30:13 +06:00
LaibaBareera
379f11996f ci: bump up Go version in pipeline (#2026) 2025-12-29 16:43:17 +06:00
dependabot[bot]
659b0c1cad build(deps): bump alpine from 3.23.0 to 3.23.2 (#2021)
Bumps alpine from 3.23.0 to 3.23.2.

---
updated-dependencies:
- dependency-name: alpine
  dependency-version: 3.23.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-23 13:35:10 +06:00
LaibaBareera
8afa78abaf release: prepare-0.14.1 (#2019) 2025-12-22 15:58:50 +06:00
LaibaBareera
2938a24924 chore: bump up kubectl version to 1.35.0 (#2018) 2025-12-22 15:34:07 +06:00
LaibaBareera
462a50341a fix: Checks of rke2-1.8 (#2010)
* fix: Checks of rke2-1.8

* fix the check 1.1.7 and 1.1.8 in all rke2 versions

* fix the mentioned issues

* fix the check 1.1.11

---------
2025-12-22 14:00:43 +06:00
dependabot[bot]
60eb8104ad build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#2012)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.66.1 to 1.67.2.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.66.1...service/ecs/v1.67.2)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.67.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 16:04:49 +06:00
dependabot[bot]
6eb894633a build(deps): bump github.com/aws/aws-sdk-go-v2/config (#2013)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.32.2 to 1.32.5.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.32.2...v1.32.5)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.32.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 15:24:21 +06:00
dependabot[bot]
428f433fae build(deps): bump actions/cache from 4 to 5 (#2011)
Bumps [actions/cache](https://github.com/actions/cache) from 4 to 5.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 14:50:50 +06:00
dependabot[bot]
8a3701577b build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.40.0 to 1.41.0 (#2014)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.40.0 to 1.41.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.40.0...v1.41.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-version: 1.41.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 14:26:19 +06:00
dependabot[bot]
e3e9e7d390 build(deps): bump github.com/spf13/cobra from 1.10.1 to 1.10.2 (#2005)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.10.1 to 1.10.2.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.10.1...v1.10.2)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-version: 1.10.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 14:12:47 +06:00
dependabot[bot]
e25d283dd1 build(deps): bump k8s.io/apimachinery from 0.34.2 to 0.34.3 (#2015)
Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.34.2 to 0.34.3.
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.34.2...v0.34.3)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.34.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-18 10:16:19 +06:00
jdesouza
b48ee8511f CVE-2025-61729: (#2003) 2025-12-11 18:02:09 +06:00
dependabot[bot]
315817617b build(deps): bump golang from 1.25.4 to 1.25.5 (#2004)
Bumps golang from 1.25.4 to 1.25.5.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.25.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-11 17:21:47 +06:00
dependabot[bot]
227665c9e8 build(deps): bump alpine from 3.22.2 to 3.23.0 (#2006)
Bumps alpine from 3.22.2 to 3.23.0.

---
updated-dependencies:
- dependency-name: alpine
  dependency-version: 3.23.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-11 14:04:35 +06:00
dependabot[bot]
4fc9c0e7a8 build(deps): bump gorm.io/gorm from 1.31.0 to 1.31.1 (#1995)
Bumps [gorm.io/gorm](https://github.com/go-gorm/gorm) from 1.31.0 to 1.31.1.
- [Release notes](https://github.com/go-gorm/gorm/releases)
- [Commits](https://github.com/go-gorm/gorm/compare/v1.31.0...v1.31.1)

---
updated-dependencies:
- dependency-name: gorm.io/gorm
  dependency-version: 1.31.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-27 17:41:00 +06:00
dependabot[bot]
1f401b1a50 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1997)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.31.17 to 1.31.20.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.31.17...config/v1.31.20)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.31.20
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-27 17:27:08 +06:00
dependabot[bot]
10e0a78701 build(deps): bump golang from 1.25.3 to 1.25.4 (#1994)
Bumps golang from 1.25.3 to 1.25.4.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.25.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-27 17:10:32 +06:00
dependabot[bot]
182cbaa71d build(deps): bump actions/checkout from 5 to 6 (#2001)
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-27 16:32:01 +06:00
dependabot[bot]
7793925b22 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1998)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.65.2 to 1.65.4.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.65.2...service/ecs/v1.65.4)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.65.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-27 16:19:28 +06:00
dependabot[bot]
1cf0f8cd92 build(deps): bump k8s.io/client-go from 0.34.1 to 0.34.2 (#1999)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.34.1 to 0.34.2.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.34.1...v0.34.2)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-version: 0.34.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-27 15:57:25 +06:00
dependabot[bot]
c9382e4e96 build(deps): bump golang.org/x/crypto from 0.36.0 to 0.45.0 (#2000)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.36.0 to 0.45.0.
- [Commits](https://github.com/golang/crypto/compare/v0.36.0...v0.45.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.45.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-26 10:57:01 +06:00
LaibaBareera
ec1005509f release: prepare-0.14.0 (#1990)
Co-authored-by: afdesk <work@afdesk.com>
2025-11-05 13:50:58 +06:00
LaibaBareera
5678009fae chore: bump up kubectl version to 1.35.0-alpha.2 (#1991) 2025-11-05 13:37:29 +06:00
dependabot[bot]
25f773b279 build(deps): bump k8s.io/client-go from 0.33.4 to 0.34.1 (#1967)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.33.4 to 0.34.1.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.33.4...v0.34.1)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-version: 0.34.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-05 12:56:21 +06:00
dependabot[bot]
e044dcaffb build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1985)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.31.11 to 1.31.15.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.31.11...config/v1.31.15)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.31.15
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-05 12:36:25 +06:00
dependabot[bot]
6a39a2e516 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1987)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.64.5 to 1.65.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/iot/v1.64.5...service/s3/v1.65.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.65.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-05 12:19:10 +06:00
LaibaBareera
496ec149bc fix: update checks 5.1.1, 5.1.2 and 5.1.4 for CIS 1.9 / CIS 1.10 (#1989)
* Fix the issue 1982

* remove the type manual and revert changes of test in each check

* fix linter error

* changed scored to false for check 5.1.3, 5.1.5, 5.1.6
2025-11-04 20:05:33 +06:00
LaibaBareera
c7d9863e57 add cis benchmark for rke2-cis-1.8 (#1983)
* add cis benchmark for rke2-cis-1.8

* fix check 1.1.11, 1.1.7, 1.1.8, 4.1.9 and 4.1.10

* fix the issue in all rke2 versions

---------

Co-authored-by: afdesk <work@afdesk.com>
2025-11-03 13:18:29 +06:00
dependabot[bot]
0990df031b build(deps): bump actions/setup-python from 5 to 6 (#1951)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5 to 6.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 18:53:29 +06:00
dependabot[bot]
c64cf3d19d build(deps): bump actions/checkout from 4 to 5 (#1925)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 18:37:54 +06:00
dependabot[bot]
a983f0c9de build(deps): bump alpine from 3.22.1 to 3.22.2 (#1974)
Bumps alpine from 3.22.1 to 3.22.2.

---
updated-dependencies:
- dependency-name: alpine
  dependency-version: 3.22.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 16:31:22 +06:00
dependabot[bot]
691afc028c build(deps): bump github.com/spf13/cobra from 1.9.1 to 1.10.1 (#1968)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.9.1 to 1.10.1.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.9.1...v1.10.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-version: 1.10.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 12:33:50 +06:00
dependabot[bot]
02305b2e7a build(deps): bump github.com/spf13/viper from 1.20.1 to 1.21.0 (#1969)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.20.1 to 1.21.0.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.20.1...v1.21.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-version: 1.21.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 12:19:37 +06:00
dependabot[bot]
6d234c5155 build(deps): bump golang from 1.25.1 to 1.25.3 (#1980)
Bumps golang from 1.25.1 to 1.25.3.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.25.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 12:02:19 +06:00
dependabot[bot]
506198ce97 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1981)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.64.2 to 1.64.5.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/ecs/v1.64.2...service/iot/v1.64.5)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.64.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-21 11:01:42 +06:00
Tyler Auerbeck
fd531a75a7 Update golangci lint to v2 (#1972)
* chore: migrate golangcilint to v2 config

Signed-off-by: Tyler Auerbeck <tylerauerbeck@users.noreply.github.com>

* chore: update golangci-lint version in build workflow

Signed-off-by: Tyler Auerbeck <tylerauerbeck@users.noreply.github.com>

* bump golangci-lint action to v8

Signed-off-by: Tyler Auerbeck <tylerauerbeck@users.noreply.github.com>

---------

Signed-off-by: Tyler Auerbeck <tylerauerbeck@users.noreply.github.com>
Co-authored-by: Tyler Auerbeck <tylerauerbeck@users.noreply.github.com>
2025-10-07 15:11:10 +06:00
LaibaBareera
295b5e6aa9 release: prepare-0.13 (#1965) 2025-09-29 17:25:11 +06:00
dependabot[bot]
6943f0690a build(deps): bump github.com/stretchr/testify from 1.10.0 to 1.11.1 (#1939)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.10.0 to 1.11.1.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.10.0...v1.11.1)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-version: 1.11.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 16:32:53 +06:00
dependabot[bot]
f52c5acbe6 build(deps): bump golang from 1.25.0 to 1.25.1 (#1946)
Bumps golang from 1.25.0 to 1.25.1.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.25.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 15:11:46 +06:00
dependabot[bot]
0fd581935e build(deps): bump gorm.io/gorm from 1.30.1 to 1.31.0 (#1956)
Bumps [gorm.io/gorm](https://github.com/go-gorm/gorm) from 1.30.1 to 1.31.0.
- [Release notes](https://github.com/go-gorm/gorm/releases)
- [Commits](https://github.com/go-gorm/gorm/compare/v1.30.1...v1.31.0)

---
updated-dependencies:
- dependency-name: gorm.io/gorm
  dependency-version: 1.31.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 14:39:18 +06:00
dependabot[bot]
c4dc17c96c build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1960)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.31.0 to 1.31.9.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.31.0...config/v1.31.9)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.31.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 12:13:22 +06:00
LaibaBareera
76804bf7fa feat: add cis benchmark for gke v1.8.0 (#1958)
* add cis benchmark for gke v1.8.0

* fix linter error

* fix checks for managed services
2025-09-26 12:18:40 +06:00
Markus Boehme
014ac455b5 eks-1.7.0: allow default value for eventRecordQPS rule (#1954)
The CIS Benchmark for Amazon EKS v1.7.0, recommendation 3.2.7 asks to
"Ensure that the --eventRecordQPS argument is set to 0 or a level which
ensures appropriate event capture". The --event-qps option on the
command line and the eventRecordQPS option in the configuration file
both have the same default value of 5, but differ in how they treat
an explicitly set value of 0:

  - The --event-qps command line option treats 0 as the default
    value of 5 QPS.
  - The eventRecordQPS configuration file option treats 0 as unlimited
    (and the absence of the option as the default value of 5 QPS).

Since setting --event-qps=0 (which resolves to the default value) is acceptable for
the command line option, using the default value for eventRecordQPS by
not explicitly setting the option should be allowed as well. Note that
this is already the case in the configuration for the generic Kubernetes
CIS Benchmark.
2025-09-26 12:06:18 +06:00
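
The commit above is about accepting both an explicit value and the absent-default case. A hedged sketch of how that can be expressed in kube-bench's controls YAML follows; ids, wording, and audit commands are illustrative, not the verbatim eks-1.7.0 entry.

```yaml
# Illustrative sketch only, assuming kube-bench's usual controls format.
- id: 3.2.7
  text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Manual)"
  audit: "/bin/ps -fC $kubeletbin"
  audit_config: "/bin/cat $kubeletconf"
  tests:
    bin_op: or
    test_items:
      - flag: "--event-qps"
        path: '{.eventRecordQPS}'
        compare:
          op: gte
          value: 0
      # Option absent from both command line and config file: the kubelet
      # falls back to its default of 5 QPS, which is now accepted.
      - flag: "--event-qps"
        path: '{.eventRecordQPS}'
        set: false
  remediation: |
    Leave eventRecordQPS unset to keep the kubelet default of 5 QPS, or set it
    explicitly to a level that ensures appropriate event capture.
  scored: false
```
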
dependabot[bot]
844a28b3fd build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1959)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.62.0 to 1.64.2.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.62.0...service/iot/v1.64.2)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.64.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-24 11:37:14 +06:00
LaibaBareera
21dd168736 add checks for cis benchmarks of rh-1.8 (#1945)
Co-authored-by: afdesk <work@afdesk.com>
2025-09-16 14:00:14 +06:00
Andy Pitcher
e3becc9f19 Create cis-1.11 (#1944)
First yamls and Update info
	- Modify yaml versions from 1.10 to 1.11
	- Adapt configmap to cover cis-1.11
	- Adapt docs and cmd files
	- Fix version_mapping in global configMap and common_test.go: Kuberversion for cis-1.11
	- doc: improve version mapping in platforms
Adapt master.yaml
	- modify: 1.1.20 https://workbench.cisecurity.org/benchmarks/19519/tickets/24017 permissions changed from 600 to 644
	- create: 1.2.30 Ensure that the --service-account-extend-token-expiration parameter is set to false (Automated)
Adapt node.yaml
	- Add: 4.2.14 Ensure that the --seccomp-default parameter is set to true (Manual)
	- Add: 4.2.15 Ensure that the --IPAddressDeny is set to any (Manual) - this check is to be removed in CIS-1.12, I suggest we discard it.
	- Modify: 4.1.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual) - (changed from 600 to 644) https://workbench.cisecurity.org/community/43/discussions/11786
	- Modify: 4.2.4 Verify that if defined, readOnlyPort is set to 0 (Manual) - Added "if defined"
Adapt policies.yaml
	- Modify: 5.1.1 to 5.1.6 from (Automated) to (Manual)
	- Modify: section titled "General Policies" was renumbered from 5.7 in v1.10 to 5.6
2025-09-09 15:00:43 +06:00
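
One of the recurring changes above is relaxing file-permission checks from 600 to 644. As a hypothetical sketch (the shipped cis-1.11 node.yaml entry differs in its exact audit command and wording), such a check typically greps the CA file path from the kubelet command line, stats it, and compares the result with kube-bench's bitmask operator:

```yaml
# Hypothetical sketch; not the literal cis-1.11 entry.
- id: 4.1.7
  text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)"
  audit: |
    CAFILE=$(ps -ef | grep kubelet | grep -v grep | grep -- --client-ca-file= | awk -F'--client-ca-file=' '{print $2}' | awk '{print $1}')
    if test -e "$CAFILE"; then stat -c permissions=%a "$CAFILE"; fi
  tests:
    test_items:
      - flag: "permissions"
        compare:
          op: bitmask
          value: "644"   # relaxed from 600 in earlier benchmark versions
  remediation: |
    chmod 644 <client-ca-file>
  scored: false
```
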
LaibaBareera
52a646c2a3 Add rh 1.4 (#1922)
* add CIS Benchmark for eks-v1.7

* fix failed test cases

* added eks 1.7 for supported kubernetes version

* added eks 1.7 for supported kubernetes version

* fix failed test cases

* add test cases for it

* fix

* add test case for eks 1.5

* change methodology

* fix the issue mentioned in pr

* fix linter error

* Update cmd/util.go

Co-authored-by: afdesk <work@afdesk.com>

* fix the failed test

* add cis benchmark for red hat openshift container v1.4

* fix failed test cases

* fix checks for rh-1.4

* mark scored true to manual test if they have test cases

* fix check 1.2.4

* rebase the changes in go.sum

---------

Co-authored-by: afdesk <work@afdesk.com>
2025-09-02 22:28:03 +06:00
dependabot[bot]
0333e55b63 build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.38.0 to 1.38.1 (#1934)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.38.0 to 1.38.1.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.38.0...v1.38.1)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-version: 1.38.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-26 15:05:54 +06:00
LaibaBareera
abca29520b release: prepare-0.12 (#1929)
* release: prepare-0.12

* fix
2025-08-22 13:56:21 +06:00
LaibaBareera
858c15c999 update kubectl version (#1933) 2025-08-22 13:38:37 +06:00
dependabot[bot]
2df3826789 build(deps): bump github.com/go-viper/mapstructure/v2 (#1932)
Bumps [github.com/go-viper/mapstructure/v2](https://github.com/go-viper/mapstructure) from 2.3.0 to 2.4.0.
- [Release notes](https://github.com/go-viper/mapstructure/releases)
- [Changelog](https://github.com/go-viper/mapstructure/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-viper/mapstructure/compare/v2.3.0...v2.4.0)

---
updated-dependencies:
- dependency-name: github.com/go-viper/mapstructure/v2
  dependency-version: 2.4.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-22 12:59:06 +06:00
dependabot[bot]
e9c0f3c8a6 build(deps): bump k8s.io/client-go from 0.33.3 to 0.33.4 (#1931)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.33.3 to 0.33.4.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.33.3...v0.33.4)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-version: 0.33.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-19 17:50:58 +06:00
dependabot[bot]
77a5aba051 build(deps): bump gorm.io/gorm from 1.30.0 to 1.30.1 (#1921)
Bumps [gorm.io/gorm](https://github.com/go-gorm/gorm) from 1.30.0 to 1.30.1.
- [Release notes](https://github.com/go-gorm/gorm/releases)
- [Commits](https://github.com/go-gorm/gorm/compare/v1.30.0...v1.30.1)

---
updated-dependencies:
- dependency-name: gorm.io/gorm
  dependency-version: 1.30.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-18 14:39:02 +06:00
dependabot[bot]
47b782d4d5 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1924)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.58.2 to 1.62.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.58.2...service/s3/v1.62.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.62.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-18 13:18:57 +06:00
dependabot[bot]
2731f9d9f2 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1927)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.29.18 to 1.31.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.29.18...v1.31.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.31.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-18 12:44:12 +06:00
dependabot[bot]
7a18d0bddf build(deps): bump golang from 1.24.5 to 1.24.6 (#1926)
Bumps golang from 1.24.5 to 1.24.6.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.24.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-18 12:21:48 +06:00
LaibaBareera
9c01682a92 add CIS Benchmark for eks-v1.7 (#1916)
* add CIS Benchmark for eks-v1.7

* fix failed test cases

* added eks 1.7 for supported kubernetes version

* added eks 1.7 for supported kubernetes version

* fix failed test cases

* add test cases for it

* fix

* add test case for eks 1.5

* change methodology

* fix the issue mentioned in pr

* fix linter error

* Update cmd/util.go

Co-authored-by: afdesk <work@afdesk.com>

* fix the failed test

---------

Co-authored-by: afdesk <work@afdesk.com>
2025-08-18 12:05:16 +06:00
mjshastha
a02030bd51 Release pr -- fixed vulns. (#1915) 2025-07-30 12:33:53 +06:00
dependabot[bot]
9fa5c45321 build(deps): bump k8s.io/client-go from 0.33.2 to 0.33.3 (#1910)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.33.2 to 0.33.3.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.33.2...v0.33.3)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-version: 0.33.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 23:22:39 +06:00
dependabot[bot]
329e0d27f8 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1908)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.29.17 to 1.29.18.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.29.17...config/v1.29.18)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.29.18
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 22:45:56 +06:00
dependabot[bot]
77eff279ce build(deps): bump golang from 1.24.4 to 1.24.5 (#1906)
Bumps golang from 1.24.4 to 1.24.5.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.24.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 21:59:34 +06:00
dependabot[bot]
d509b7b23b build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1909)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.58.0 to 1.58.2.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.58.0...service/s3/v1.58.2)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.58.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 21:25:17 +06:00
dependabot[bot]
1f3f282b2b build(deps): bump k8s.io/apimachinery from 0.33.2 to 0.33.3 (#1911)
Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.33.2 to 0.33.3.
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.33.2...v0.33.3)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.33.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 20:24:05 +06:00
dependabot[bot]
308fa068ba build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.36.5 to 1.36.6 (#1912)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.36.5 to 1.36.6.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.36.5...v1.36.6)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-version: 1.36.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 19:55:19 +06:00
dependabot[bot]
29f3e436a7 build(deps): bump alpine from 3.22.0 to 3.22.1 (#1913)
Bumps alpine from 3.22.0 to 3.22.1.

---
updated-dependencies:
- dependency-name: alpine
  dependency-version: 3.22.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-28 17:35:00 +06:00
dependabot[bot]
20d5858fc5 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1903)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.29.16 to 1.29.17.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.29.16...config/v1.29.17)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.29.17
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-07 16:28:25 +06:00
dependabot[bot]
79d120609d build(deps): bump k8s.io/client-go from 0.33.1 to 0.33.2 (#1902)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.33.1 to 0.33.2.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.33.1...v0.33.2)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-version: 0.33.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-07 15:55:30 +06:00
dependabot[bot]
d2cb29d66d build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1900)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.57.5 to 1.58.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/ecs/v1.57.5...service/s3/v1.58.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.58.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-07 15:39:09 +06:00
dependabot[bot]
d483671e70 build(deps): bump k8s.io/apimachinery from 0.33.1 to 0.33.2 (#1899)
Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.33.1 to 0.33.2.
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.33.1...v0.33.2)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-version: 0.33.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-07 15:09:32 +06:00
dependabot[bot]
95909f4f25 build(deps): bump github.com/go-viper/mapstructure/v2 (#1904)
Bumps [github.com/go-viper/mapstructure/v2](https://github.com/go-viper/mapstructure) from 2.2.1 to 2.3.0.
- [Release notes](https://github.com/go-viper/mapstructure/releases)
- [Changelog](https://github.com/go-viper/mapstructure/blob/main/CHANGELOG.md)
- [Commits](https://github.com/go-viper/mapstructure/compare/v2.2.1...v2.3.0)

---
updated-dependencies:
- dependency-name: github.com/go-viper/mapstructure/v2
  dependency-version: 2.3.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-07 14:44:10 +06:00
Josh-aqua
93b0cfa256 Update platforms.md (#1896)
Fix typos and improve some wording
2025-06-25 12:40:49 +06:00
afdesk
b8c5fbb44b release: prepare v0.11.1 (#1895) 2025-06-17 19:48:43 +06:00
afdesk
20a26a02b2 chore: use kubectl 1.34.0-alpha.1 (#1894) 2025-06-17 19:26:44 +06:00
afdesk
4542f75dce release: prepare v0.11.0 (#1893) 2025-06-17 15:00:52 +06:00
LaibaBareera
a3a8544a1d Add AKS-1.7 version (#1874)
* Add AKS-1.7 version

* resolve linter error

* add aks-1.7 as a default plateform aks version

* add alternative method to identify AKS specific cluster

* fix alternative method

* combine logic of label and providerId in isAKS function

* fix checks of aks-1.7

* fix the mentioned issues

* fix test cases
2025-06-17 14:43:21 +06:00
afdesk
4e2651fda0 chore: bump up Go version to 1.24.4 (#1892) 2025-06-17 14:11:15 +06:00
dependabot[bot]
98aa7bbfce build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1890)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.29.14 to 1.29.16.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.29.14...config/v1.29.16)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-version: 1.29.16
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 13:01:40 +06:00
dependabot[bot]
666e0aae71 build(deps): bump golang from 1.24.3 to 1.24.4 (#1888)
Bumps golang from 1.24.3 to 1.24.4.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.24.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 12:48:00 +06:00
dependabot[bot]
665c95239a build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1891)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.57.4 to 1.57.5.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/ecs/v1.57.4...service/ecs/v1.57.5)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.57.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-17 11:10:02 +06:00
mjshastha
271241f926 release: prepare v0.10.7 (#1886) 2025-06-05 16:12:01 +06:00
mjshastha
74872845a2 fix(audit): improve etcd, controller, and scheduler audits (#1883)
- Updated 1.1.11 to wrap etcd data directory stat in a conditional check.
- Updated 1.3.7 and 1.4.2 to conditionally check if the controller manager and scheduler binaries exist before running ps/grep.
2025-06-04 19:14:21 +06:00
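
The conditional pattern described above can be sketched roughly as follows (a hedged illustration, not the exact audit strings changed by this commit): the audit only stats the etcd data directory if the path was actually resolved, and the same guard-first idea applies to 1.3.7 and 1.4.2, where the controller manager and scheduler binaries are checked for existence before running ps/grep.

```yaml
# Illustrative only; the real entries differ in wording and exact commands.
- id: 1.1.11
  text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
  audit: |
    DATA_DIR=$(ps -ef | grep etcd | grep -v grep | grep -- --data-dir | sed 's/.*--data-dir[= ]\([^ ]*\).*/\1/')
    # Only stat the directory if etcd is running and the path resolved,
    # so the check does not error out on nodes without etcd.
    if test -d "$DATA_DIR"; then
      stat -c permissions=%a "$DATA_DIR"
    fi
  tests:
    test_items:
      - flag: "permissions"
        compare:
          op: bitmask
          value: "700"
  scored: true
```
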
dependabot[bot]
51e849d9f7 build(deps): bump gorm.io/driver/postgres from 1.5.11 to 1.6.0 (#1880)
Bumps [gorm.io/driver/postgres](https://github.com/go-gorm/postgres) from 1.5.11 to 1.6.0.
- [Commits](https://github.com/go-gorm/postgres/compare/v1.5.11...v1.6.0)

---
updated-dependencies:
- dependency-name: gorm.io/driver/postgres
  dependency-version: 1.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-03 13:01:00 +06:00
dependabot[bot]
a882850f2b build(deps): bump alpine from 3.21.3 to 3.22.0 (#1879)
Bumps alpine from 3.21.3 to 3.22.0.

---
updated-dependencies:
- dependency-name: alpine
  dependency-version: 3.22.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-03 11:48:49 +06:00
dependabot[bot]
2077fcf1e0 build(deps): bump k8s.io/client-go from 0.33.0 to 0.33.1 (#1875)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.33.0 to 0.33.1.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.33.0...v0.33.1)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-version: 0.33.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-27 12:34:21 +06:00
dependabot[bot]
dd4ddb59ea build(deps): bump gorm.io/gorm from 1.26.1 to 1.30.0 (#1878)
Bumps [gorm.io/gorm](https://github.com/go-gorm/gorm) from 1.26.1 to 1.30.0.
- [Release notes](https://github.com/go-gorm/gorm/releases)
- [Commits](https://github.com/go-gorm/gorm/compare/v1.26.1...v1.30.0)

---
updated-dependencies:
- dependency-name: gorm.io/gorm
  dependency-version: 1.30.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-27 12:08:32 +06:00
dependabot[bot]
6ecc6a2066 build(deps): bump github.com/golang/glog from 1.2.4 to 1.2.5 (#1871)
Bumps [github.com/golang/glog](https://github.com/golang/glog) from 1.2.4 to 1.2.5.
- [Release notes](https://github.com/golang/glog/releases)
- [Commits](https://github.com/golang/glog/compare/v1.2.4...v1.2.5)

---
updated-dependencies:
- dependency-name: github.com/golang/glog
  dependency-version: 1.2.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-13 16:27:15 +06:00
dependabot[bot]
eb8ffc613e build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1872)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.57.3 to 1.57.4.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/eks/v1.57.3...service/eks/v1.57.4)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.57.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-13 14:47:29 +06:00
dependabot[bot]
ad826cd83c build(deps): bump golang from 1.24.2 to 1.24.3 (#1873)
Bumps golang from 1.24.2 to 1.24.3.

---
updated-dependencies:
- dependency-name: golang
  dependency-version: 1.24.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-13 09:58:58 +06:00
dependabot[bot]
f0c648d16e build(deps): bump gorm.io/gorm from 1.25.12 to 1.26.0 (#1865)
Bumps [gorm.io/gorm](https://github.com/go-gorm/gorm) from 1.25.12 to 1.26.0.
- [Release notes](https://github.com/go-gorm/gorm/releases)
- [Commits](https://github.com/go-gorm/gorm/compare/v1.25.12...v1.26.0)

---
updated-dependencies:
- dependency-name: gorm.io/gorm
  dependency-version: 1.26.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-12 14:59:34 +06:00
dependabot[bot]
23b2a1aa7d build(deps): bump github.com/spf13/viper from 1.19.0 to 1.20.1 (#1848)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.19.0 to 1.20.1.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.19.0...v1.20.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-06 16:13:04 +06:00
dependabot[bot]
60110935cb build(deps): bump k8s.io/client-go from 0.32.3 to 0.33.0 (#1866)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.32.3 to 0.33.0.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.32.3...v0.33.0)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-version: 0.33.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-06 11:33:58 +06:00
dependabot[bot]
190548035d build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1867)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.57.2 to 1.57.3.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/eks/v1.57.2...service/eks/v1.57.3)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.57.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-06 11:13:02 +06:00
afdesk
9815f99e2c release: prepare v0.10.6 (#1863) 2025-04-24 21:59:01 +06:00
afdesk
56bb426fce fix: update kubectl to v1.33.0 (#1861) 2025-04-24 20:05:35 +06:00
afdesk
5feae8a80d release: prepare v0.10.5 (#1860) 2025-04-23 18:02:04 +06:00
dependabot[bot]
00cd12ef19 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1855)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.57.0 to 1.57.2.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.57.0...service/eks/v1.57.2)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-version: 1.57.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-23 14:28:26 +06:00
dependabot[bot]
d1e948acd3 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1847)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.29.9 to 1.29.12.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.29.9...config/v1.29.12)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-23 14:14:56 +06:00
dependabot[bot]
c4c5317f44 build(deps): bump github.com/magiconair/properties from 1.8.9 to 1.8.10 (#1854)
Bumps [github.com/magiconair/properties](https://github.com/magiconair/properties) from 1.8.9 to 1.8.10.
- [Release notes](https://github.com/magiconair/properties/releases)
- [Commits](https://github.com/magiconair/properties/compare/v1.8.9...v1.8.10)

---
updated-dependencies:
- dependency-name: github.com/magiconair/properties
  dependency-version: 1.8.10
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-04-23 13:18:08 +06:00
Masashi Honma
6e454a1dd5 Fix CVEs (#1857)
Resolves #1852.

usr/local/bin/kube-bench (gobinary)

Total: 2 (UNKNOWN: 0, LOW: 0, MEDIUM: 2, HIGH: 0, CRITICAL: 0)

┌──────────────────┬────────────────┬──────────┬────────┬───────────────────┬────────────────┬──────────────────────────────────────────────────────────┐
│     Library      │ Vulnerability  │ Severity │ Status │ Installed Version │ Fixed Version  │                          Title                           │
├──────────────────┼────────────────┼──────────┼────────┼───────────────────┼────────────────┼──────────────────────────────────────────────────────────┤
│ golang.org/x/net │ CVE-2025-22872 │ MEDIUM   │ fixed  │ v0.36.0           │ 0.38.0         │ The tokenizer incorrectly interprets tags with unquoted  │
│                  │                │          │        │                   │                │ attribute valu ...                                       │
│                  │                │          │        │                   │                │ https://avd.aquasec.com/nvd/cve-2025-22872               │
├──────────────────┼────────────────┤          │        ├───────────────────┼────────────────┼──────────────────────────────────────────────────────────┤
│ stdlib           │ CVE-2025-22871 │          │        │ v1.24.1           │ 1.23.8, 1.24.2 │ net/http: Request smuggling due to acceptance of invalid │
│                  │                │          │        │                   │                │ chunked data in net/http...                              │
│                  │                │          │        │                   │                │ https://avd.aquasec.com/nvd/cve-2025-22871               │
└──────────────────┴────────────────┴──────────┴────────┴───────────────────┴────────────────┴──────────────────────────────────────────────────────────┘

Signed-off-by: Masashi Honma <masashi.honma@gmail.com>
2025-04-23 12:54:40 +06:00
afdesk
cd322c587c chore(ci): bump up golangci version to v1.64 (#1849) 2025-04-04 15:44:59 +06:00
Bastian Nutzinger
d28ea670c8 add necessary mounts for /var/vcap/data/jobs & sys (#1841) 2025-04-03 14:47:07 +06:00
Masashi Honma
6a46d64538 1.1.15, 1.1.17 of rke2-cis-1.7 fails (#1844)
Resolves #1843.

This PR adds paths to schedulerkubeconfig and controllermanagerkubeconfig to
fix the failures, and replaces hard-coded values with variables.

Signed-off-by: Masashi Honma <masashi.honma@gmail.com>
2025-04-02 14:52:03 +06:00
dependabot[bot]
6edf7e590c build(deps): bump k8s.io/client-go from 0.32.2 to 0.32.3 (#1833)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.32.2 to 0.32.3.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.32.2...v0.32.3)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-18 17:13:33 +06:00
dependabot[bot]
a686691252 build(deps): bump k8s.io/apimachinery from 0.32.2 to 0.32.3 (#1834)
Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.32.2 to 0.32.3.
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.32.2...v0.32.3)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-18 16:55:05 +06:00
dependabot[bot]
486272f8db build(deps): bump golang.org/x/net from 0.33.0 to 0.36.0 (#1830)
Bumps [golang.org/x/net](https://github.com/golang/net) from 0.33.0 to 0.36.0.
- [Commits](https://github.com/golang/net/compare/v0.33.0...v0.36.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-18 13:51:13 +06:00
afdesk
152d0e7528 release: prepare v0.10.4 (#1829) 2025-03-11 22:33:47 +06:00
mjshastha
c74ce3a813 fix: address vulnerabilities in kubectl (#1828)
kubectl has vulnerabilities in the stable version; they were fixed in 1.33.0-alpha.3
2025-03-11 18:06:25 +06:00
dependabot[bot]
594eb2cf18 build(deps): bump golang from 1.23.6 to 1.24.0 (#1805)
Bumps golang from 1.23.6 to 1.24.0.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: afdesk <work@afdesk.com>
2025-03-11 12:14:58 +06:00
dependabot[bot]
eb375f4d9d build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1826)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.56.1 to 1.57.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.56.1...service/s3/v1.57.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-11 11:43:18 +06:00
dependabot[bot]
8c385ffb08 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1827)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.29.8 to 1.29.9.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.29.8...config/v1.29.9)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-11 11:24:39 +06:00
afdesk
b6a88e8282 release: prepare v0.10.3 (#1825) 2025-03-05 16:27:10 +06:00
dependabot[bot]
01afe91352 build(deps): bump github.com/spf13/cobra from 1.8.1 to 1.9.1 (#1809)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.8.1 to 1.9.1.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.8.1...v1.9.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-05 14:49:20 +06:00
dependabot[bot]
d85a765a00 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1824)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.55.8 to 1.56.1.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/wafv2/v1.55.8...service/s3/v1.56.1)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-05 14:28:05 +06:00
dependabot[bot]
99d3eb6a9e build(deps): bump k8s.io/client-go from 0.32.1 to 0.32.2 (#1811)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.32.1 to 0.32.2.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.32.1...v0.32.2)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-05 14:16:31 +06:00
dependabot[bot]
bd6eae0c97 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1821)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.29.4 to 1.29.8.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.29.4...config/v1.29.8)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: afdesk <work@afdesk.com>
2025-03-05 13:54:22 +06:00
mjshastha
b0cb472482 fix: Add default case to CIS benchmark version selection (#1823)
This commit adds a default case to the switch statements for both "rancher" and "rke2" platforms. This ensures that a fallback CIS benchmark version ("rke-cis-1.7" and "rke2-cis-1.7" respectively) is returned when the Kubernetes version does not match any of the explicitly defined cases.
2025-03-05 13:43:06 +06:00
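As a rough, hedged illustration of the change described in the commit above (this is not kube-bench's actual code; the function name and the version-to-benchmark mappings are assumptions), a selection switch with a default fallback might look like this in Go:

```go
// Illustrative sketch only: a benchmark-selection switch with a default
// branch, so an unrecognised Kubernetes version still maps to a fallback
// CIS benchmark instead of producing no result.
package main

import "fmt"

// benchmarkVersionFor is a hypothetical helper; the real mapping lives in
// kube-bench's own code and covers more versions.
func benchmarkVersionFor(platform, k8sVersion string) string {
	switch platform {
	case "rke2":
		switch k8sVersion {
		case "1.23":
			return "rke2-cis-1.23"
		case "1.24":
			return "rke2-cis-1.24"
		default:
			return "rke2-cis-1.7" // fallback added by the commit above
		}
	case "rancher":
		switch k8sVersion {
		case "1.23":
			return "rke-cis-1.23"
		case "1.24":
			return "rke-cis-1.24"
		default:
			return "rke-cis-1.7" // fallback added by the commit above
		}
	}
	return ""
}

func main() {
	fmt.Println(benchmarkVersionFor("rke2", "1.31")) // prints: rke2-cis-1.7
}
```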
dependabot[bot]
7f2f0f3227 build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.36.0 to 1.36.3 (#1822)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.36.0 to 1.36.3.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.36.0...v1.36.3)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-04 17:58:09 +06:00
dependabot[bot]
c8d80e6354 build(deps): bump alpine from 3.21.2 to 3.21.3 (#1806)
Bumps alpine from 3.21.2 to 3.21.3.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: afdesk <work@afdesk.com>
2025-03-04 13:32:03 +06:00
Simon Alexander Alsing
c40b2a72e2 fix: typo of applicaions which should have been applications (#1819) 2025-03-04 12:27:13 +06:00
Lihiz
949999145e DEVOPS-934: Fix UBI image labels in order to be able to pass Red Hat pre-flight checks (#1812)
* DEVOPS-934: Fix UBI image labels in order to be able to pass Red Hat pre-flight checks
2025-02-19 15:25:31 +02:00
afdesk
422a7fc5b1 release: prepare v0.10.2 (#1803) 2025-02-12 20:41:39 +06:00
afdesk
18e7e35919 fix: suppress vulnerabilities in kubectl (#1802) 2025-02-12 20:26:50 +06:00
afdesk
f9e2c77967 ci: bump up Go version to 1.23.6 in Github workflows (#1801) 2025-02-12 20:13:43 +06:00
Grischa Ekart
2de22f84fc Updated version in documentation and using a version var (#1799) 2025-02-12 12:15:11 +06:00
Masashi Honma
fcb6517b8b Bump golang from 1.23.5 to 1.23.6 to fix CVE-2025-22866 (#1800)
This is the scan result of Trivy.

usr/local/bin/kube-bench (gobinary)
===================================
Total: 1 (UNKNOWN: 1, LOW: 0, MEDIUM: 0, HIGH: 0, CRITICAL: 0)

┌─────────┬────────────────┬──────────┬────────┬───────────────────┬──────────────────────────────┬────────────────────────────────────────────┐
│ Library │ Vulnerability  │ Severity │ Status │ Installed Version │        Fixed Version         │                   Title                    │
├─────────┼────────────────┼──────────┼────────┼───────────────────┼──────────────────────────────┼────────────────────────────────────────────┤
│ stdlib  │ CVE-2025-22866 │ UNKNOWN  │ fixed  │ 1.23.5            │ 1.22.12, 1.23.6, 1.24.0-rc.3 │ Timing sidechannel for P-256 on ppc64le in │
│         │                │          │        │                   │                              │ crypto/internal/nistec                     │
│         │                │          │        │                   │                              │ https://avd.aquasec.com/nvd/cve-2025-22866 │
└─────────┴────────────────┴──────────┴────────┴───────────────────┴──────────────────────────────┴────────────────────────────────────────────┘

Signed-off-by: Masashi Honma <masashi.honma@gmail.com>
2025-02-10 12:11:21 +06:00
afdesk
60d1842d0d release: prepare v0.10.1 (#1797) 2025-02-04 22:08:07 +06:00
Abubakr-Sadik Nii Nai Davis
26aaeecc0f fix: required fixes for rke-cis 1.7 / 1.28 / 1.29 (#1792) 2025-02-04 18:19:05 +06:00
dependabot[bot]
c04b700d8a build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1794)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.55.3 to 1.55.8.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/iot/v1.55.3...service/wafv2/v1.55.8)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-04 18:01:51 +06:00
dependabot[bot]
4d82ee7f9a build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1795)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.28.10 to 1.29.4.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.28.10...config/v1.29.4)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-04 17:43:45 +06:00
dependabot[bot]
62a54424cb build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.32.8 to 1.36.0 (#1796)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.32.8 to 1.36.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.32.8...v1.36.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-04 17:30:37 +06:00
dependabot[bot]
a7bd33cd02 build(deps): bump k8s.io/client-go from 0.32.0 to 0.32.1 (#1785)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.32.0 to 0.32.1.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.32.0...v0.32.1)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-04 17:18:32 +06:00
Masashi Honma
c9985a6e9b Bump golang from 1.23.4 to 1.23.5 to fix vulnerabilities. (#1793)
This is the scan result of Trivy.

usr/local/bin/kube-bench (gobinary)

Total: 2 (UNKNOWN: 0, LOW: 0, MEDIUM: 2, HIGH: 0, CRITICAL: 0)

┌─────────┬────────────────┬──────────┬────────┬───────────────────┬─────────────────────────────┬──────────────────────────────────────────────────────────────┐
│ Library │ Vulnerability  │ Severity │ Status │ Installed Version │        Fixed Version        │                            Title                             │
├─────────┼────────────────┼──────────┼────────┼───────────────────┼─────────────────────────────┼──────────────────────────────────────────────────────────────┤
│ stdlib  │ CVE-2024-45336 │ MEDIUM   │ fixed  │ v1.23.4           │ 1.22.11, 1.23.5, 1.24.0-rc2 │ golang: net/http: net/http: sensitive headers incorrectly    │
│         │                │          │        │                   │                             │ sent after cross-domain redirect                             │
│         │                │          │        │                   │                             │ https://avd.aquasec.com/nvd/cve-2024-45336                   │
│         ├────────────────┤          │        │                   │                             ├──────────────────────────────────────────────────────────────┤
│         │ CVE-2024-45341 │          │        │                   │                             │ golang: crypto/x509: crypto/x509: usage of IPv6 zone IDs can │
│         │                │          │        │                   │                             │ bypass URI name...                                           │
│         │                │          │        │                   │                             │ https://avd.aquasec.com/nvd/cve-2024-45341                   │
└─────────┴────────────────┴──────────┴────────┴───────────────────┴─────────────────────────────┴──────────────────────────────────────────────────────────────┘

Signed-off-by: Masashi Honma <masashi.honma@gmail.com>
2025-02-04 17:06:14 +06:00
dependabot[bot]
368a8b5017 build(deps): bump k8s.io/apimachinery from 0.32.0 to 0.32.1 (#1782)
Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.32.0 to 0.32.1.
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.32.0...v0.32.1)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-04 16:55:04 +06:00
dependabot[bot]
b1547c0b6b build(deps): bump golang from 1.23.4 to 1.23.5 (#1787)
Bumps golang from 1.23.4 to 1.23.5.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-04 16:42:00 +06:00
afdesk
fcfb537ac1 fix(ci): add missed args for building docker images (#1788) 2025-01-21 22:36:54 +06:00
Lihiz
5557601134 DEVOPS-788: in order to pass RedHat operator certification, labels mu… (#1781)
* DEVOPS-788: in order to pass RedHat operator certification, labels must be set on images

---------

Co-authored-by: Lihi Zitzer <lihi.zitzer@aquasec.com>
2025-01-20 17:47:55 +02:00
afdesk
5967494a5e release: prepare v0.10.0 (#1777) 2025-01-16 09:50:56 +06:00
afdesk
08574d779f chore: bump up Go version to 1.23.4 (#1776)
* chore: bump up Go version to 1.23.4

* chore(ci): set up a timeout for go linter

* chore: remove deprecated linter checks

* chore: bump up golinter timeout to 10sec

* chore: bump up golinter action version to v1.61

* chore: fix linter errors

* chore: set up a timeout for golinter in Github action
2025-01-15 23:02:16 +06:00
dependabot[bot]
4e70640598 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1770)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.55.0 to 1.55.3.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.55.0...service/iot/v1.55.3)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-15 15:46:30 +06:00
dependabot[bot]
362e95a219 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1769)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.28.6 to 1.28.10.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.28.6...config/v1.28.10)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-15 15:01:02 +06:00
dependabot[bot]
9e526e6c5f build(deps): bump github.com/golang/glog from 1.2.3 to 1.2.4 (#1774)
Bumps [github.com/golang/glog](https://github.com/golang/glog) from 1.2.3 to 1.2.4.
- [Release notes](https://github.com/golang/glog/releases)
- [Commits](https://github.com/golang/glog/compare/v1.2.3...v1.2.4)

---
updated-dependencies:
- dependency-name: github.com/golang/glog
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-14 15:50:02 +06:00
dependabot[bot]
438dc0922c build(deps): bump alpine from 3.21.0 to 3.21.2 (#1773)
Bumps alpine from 3.21.0 to 3.21.2.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-14 11:04:55 +06:00
Andy Pitcher
3a2348eba7 Add CIS Kubernetes CIS-1.10 for k8s v1.28 - v1.31 (#1753)
* Create cis-1.10 yamls and Update info
	- Modify yaml versions from 1.9 to 1.10
	- Adapt configmap to cover cis-1.10
	- Adapt docs and cmd files

* Adapt master.yaml
	- 1.2.29 update cipher list to remove the following insecure ones (RC4-Based, 3DES-Based, RSA-Based AES CBC):
          TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,
          TLS_RSA_WITH_3DES_EDE_CBC_SHA,
          TLS_RSA_WITH_AES_128_CBC_SHA256,
          TLS_RSA_WITH_AES_128_CBC_SHA,
          TLS_RSA_WITH_AES_256_CBC_SHA,
          TLS_RSA_WITH_RC4_128_SHA,
          TLS_ECDHE_RSA_WITH_RC4_128_SHA
          ticket: https://workbench.cisecurity.org/community/43/tickets/21760

* Adapt policies.yaml
	- 5.1.11 typo in sub-resource name 'certificatesigningrequest' https://workbench.cisecurity.org/tickets/21352
	- 5.2.2 new audit to verify if a container is privileged or not. https://workbench.cisecurity.org/tickets/20919
	- 5.2.3 new audit to verify the presence of hostPID opt-in across all pods. https://workbench.cisecurity.org/tickets/20919
	- 5.2.4 new audit to verify the presence of hostIPC opt-in across all pods. https://workbench.cisecurity.org/tickets/20923
	- 5.2.5 new audit to verify the presence of hostNetwork opt-in across all pods. https://workbench.cisecurity.org/tickets/20921
	- 5.2.6 new audit to verify the presence of 'allowPrivilegeEscalation' to true across all pods' container(s)
	- 5.2.6 the 'allowPrivilegeEscalation' setting is moved from 'spec' to 'securityContext' https://workbench.cisecurity.org/tickets/20922
	- 5.2.9 new audit to verify the presence of added capabilities across all pods' container(s)

* Fix 5.2.6 remediation
2025-01-13 11:18:15 +06:00
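The 5.2.x audits listed above are defined in kube-bench's policies.yaml as audit commands; purely as a hedged illustration of what those audits verify (not the project's implementation), the corresponding fields of the Kubernetes Pod API are sketched below:

```go
// Illustrative sketch: the pod-level and container-level opt-ins that the
// 5.2.x checks look for (privileged, hostPID, hostIPC, hostNetwork,
// allowPrivilegeEscalation), expressed against the core/v1 API types.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func podOptIns(pod corev1.Pod) []string {
	var findings []string
	if pod.Spec.HostPID {
		findings = append(findings, "hostPID")
	}
	if pod.Spec.HostIPC {
		findings = append(findings, "hostIPC")
	}
	if pod.Spec.HostNetwork {
		findings = append(findings, "hostNetwork")
	}
	for _, c := range pod.Spec.Containers {
		sc := c.SecurityContext
		if sc == nil {
			continue
		}
		if sc.Privileged != nil && *sc.Privileged {
			findings = append(findings, c.Name+": privileged")
		}
		if sc.AllowPrivilegeEscalation != nil && *sc.AllowPrivilegeEscalation {
			findings = append(findings, c.Name+": allowPrivilegeEscalation")
		}
	}
	return findings
}

func main() {
	priv := true
	pod := corev1.Pod{Spec: corev1.PodSpec{
		HostPID: true,
		Containers: []corev1.Container{
			{Name: "app", SecurityContext: &corev1.SecurityContext{Privileged: &priv}},
		},
	}}
	fmt.Println(podOptIns(pod)) // [hostPID app: privileged]
}
```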
dependabot[bot]
2cab7f9ecb build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.32.6 to 1.32.8 (#1771)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.32.6 to 1.32.8.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.32.6...v1.32.8)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-01-13 10:53:57 +06:00
jdesouza
acab94ea50 CVE-2024-45338: Inefficient Regular Expression Complexity (#1766) 2025-01-13 10:36:52 +06:00
Guilherme Macedo
fd5e2c17c2 VEX related improvements in the build process (#1768)
Signed-off-by: Guilherme Macedo <guilherme@gmacedo.com>
2025-01-13 10:18:02 +06:00
Peter Balogh
a38a3c5bbc feat: CIS EKS 1.5.0 (#1653)
* feat(cfg): add EKS 1.5.0

* fix(cfg): target map

* fix: update eks job

* fix: target mapping

* feat: use CIS EKS 1.5.0 by default

* fix: scored in node.yaml

Signed-off-by: Peter Balogh <p.balogh.sa@gmail.com>

* doc: add CIS EKS 1.5.0

Signed-off-by: Peter Balogh <p.balogh.sa@gmail.com>

---------

Signed-off-by: Peter Balogh <p.balogh.sa@gmail.com>
2025-01-10 15:18:50 +06:00
afdesk
df48da449c release: prepare v0.9.4 (#1756) 2024-12-16 12:12:37 +06:00
Abubakr-Sadik Nii Nai Davis
f0f89b2707 fix: change the folder name for certificate files in rke-1.23 and rke-1.24, fixes #1747 (#1749) 2024-12-16 11:44:08 +06:00
Abubakr-Sadik Nii Nai Davis
fbb674c450 fix: check_files_owner_in_dir.sh script not found error, fixes #1746 (#1755) 2024-12-16 11:33:06 +06:00
dependabot[bot]
e26eff019c build(deps): bump golang.org/x/crypto from 0.24.0 to 0.31.0 (#1754)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.24.0 to 0.31.0.
- [Commits](https://github.com/golang/crypto/compare/v0.24.0...v0.31.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-12 14:53:40 +06:00
dependabot[bot]
cce6b9d24f build(deps): bump github.com/magiconair/properties from 1.8.7 to 1.8.9 (#1750)
Bumps [github.com/magiconair/properties](https://github.com/magiconair/properties) from 1.8.7 to 1.8.9.
- [Release notes](https://github.com/magiconair/properties/releases)
- [Commits](https://github.com/magiconair/properties/compare/v1.8.7...v1.8.9)

---
updated-dependencies:
- dependency-name: github.com/magiconair/properties
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-12 13:39:04 +06:00
dependabot[bot]
84fb69d65e build(deps): bump golang from 1.23.3 to 1.23.4 (#1752)
Bumps golang from 1.23.3 to 1.23.4.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-12 13:23:48 +06:00
dependabot[bot]
2fa813e790 build(deps): bump alpine from 3.20.3 to 3.21.0 (#1751)
Bumps alpine from 3.20.3 to 3.21.0.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-11 12:34:47 +06:00
afdesk
b6687c1b79 release: prepare v0.9.3 (#1748) 2024-12-09 14:38:35 +06:00
Abubakr-Sadik Nii Nai Davis
20604a5f86 fix: change the folder name for certificate files in rke-cis-1.7 2024-12-09 11:16:04 +06:00
lizhang96
64bc05354b fix: k3s-cis-*- CHECK 4.2.1-4.2.3 (#1739)
* fix the node kubelet related tests

* update the tests
2024-12-06 13:29:34 +06:00
dependabot[bot]
3ee8299bc4 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1743)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.28.4 to 1.28.6.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.28.4...config/v1.28.6)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-05 15:56:34 +06:00
dependabot[bot]
6aa242e2e5 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1745)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.54.6 to 1.55.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/kendra/v1.54.6...service/s3/v1.55.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-05 15:43:04 +06:00
dependabot[bot]
6da5ff4026 build(deps): bump gorm.io/driver/postgres from 1.5.9 to 1.5.11 (#1742)
Bumps [gorm.io/driver/postgres](https://github.com/go-gorm/postgres) from 1.5.9 to 1.5.11.
- [Commits](https://github.com/go-gorm/postgres/compare/v1.5.9...v1.5.11)

---
updated-dependencies:
- dependency-name: gorm.io/driver/postgres
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-05 15:25:34 +06:00
dependabot[bot]
09aa59e0cc build(deps): bump github.com/stretchr/testify from 1.9.0 to 1.10.0 (#1736)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.9.0 to 1.10.0.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.9.0...v1.10.0)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-29 13:20:56 +06:00
dependabot[bot]
2500ceed5b build(deps): bump k8s.io/client-go from 0.31.2 to 0.31.3 (#1738)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.31.2 to 0.31.3.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.31.2...v0.31.3)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-29 12:41:36 +06:00
dependabot[bot]
0eae00cf44 build(deps): bump codecov/codecov-action from 4 to 5 (#1733)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4 to 5.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-19 12:14:09 +06:00
Konstantinos Tsakalozos
39dfe93b68 Ensure 127.0.0.1 for the --bind-address parameter (#1723) 2024-11-18 09:56:28 +06:00
afdesk
4de7b2095a release: prepare v0.9.2 (#1730) 2024-11-16 16:05:57 +06:00
Saurabh Misra
5eccb498c1 FIX| RKE-CIS-1.24- CHECK 1.1.19 (#1722)
We have added the missing script required for check 1.1.19 in rke-cis-1.24 and made it available to the kube-bench file system (https://github.com/rancher/security-scan/blob/master/package/helper_scripts/check_files_owner_in_dir.sh).
2024-11-15 18:32:24 +06:00
dependabot[bot]
7ce327f1db build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1728)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.27.37 to 1.28.4.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.27.37...config/v1.28.4)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 18:21:14 +06:00
dependabot[bot]
8656945200 build(deps): bump github.com/golang/glog from 1.2.2 to 1.2.3 (#1726)
Bumps [github.com/golang/glog](https://github.com/golang/glog) from 1.2.2 to 1.2.3.
- [Release notes](https://github.com/golang/glog/releases)
- [Commits](https://github.com/golang/glog/compare/v1.2.2...v1.2.3)

---
updated-dependencies:
- dependency-name: github.com/golang/glog
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 17:53:09 +06:00
dependabot[bot]
702107daff build(deps): bump github.com/spf13/viper from 1.18.2 to 1.19.0 (#1720)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.18.2 to 1.19.0.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.18.2...v1.19.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 17:34:30 +06:00
dependabot[bot]
5fac7f626b build(deps): bump github.com/fatih/color from 1.16.0 to 1.18.0 (#1719)
Bumps [github.com/fatih/color](https://github.com/fatih/color) from 1.16.0 to 1.18.0.
- [Release notes](https://github.com/fatih/color/releases)
- [Commits](https://github.com/fatih/color/compare/v1.16.0...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/fatih/color
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 16:14:30 +06:00
dependabot[bot]
27a1942bcc build(deps): bump golang from 1.23.2 to 1.23.3 (#1727)
Bumps golang from 1.23.2 to 1.23.3.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 15:39:05 +06:00
dependabot[bot]
9f0f5567ae build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1724)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.54.4 to 1.54.6.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.54.4...service/lambda/v1.54.6)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 11:32:36 +06:00
dependabot[bot]
ea24d0e240 build(deps): bump engineerd/setup-kind from 0.5.0 to 0.6.2 (#1721)
Bumps [engineerd/setup-kind](https://github.com/engineerd/setup-kind) from 0.5.0 to 0.6.2.
- [Release notes](https://github.com/engineerd/setup-kind/releases)
- [Commits](https://github.com/engineerd/setup-kind/compare/v0.5.0...v0.6.2)

---
updated-dependencies:
- dependency-name: engineerd/setup-kind
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-06 10:15:05 +06:00
dependabot[bot]
74f5c8b800 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1716)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.53.3 to 1.54.4.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/iot/v1.53.3...service/s3/v1.54.4)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-01 11:14:40 +06:00
dependabot[bot]
e2a97f49f5 build(deps): bump github.com/spf13/cobra from 1.8.0 to 1.8.1 (#1718)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.8.0 to 1.8.1.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.8.0...v1.8.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-01 10:31:03 +06:00
dependabot[bot]
b4000f677b build(deps): bump gorm.io/gorm from 1.25.10 to 1.25.12 (#1714)
Bumps [gorm.io/gorm](https://github.com/go-gorm/gorm) from 1.25.10 to 1.25.12.
- [Release notes](https://github.com/go-gorm/gorm/releases)
- [Commits](https://github.com/go-gorm/gorm/compare/v1.25.10...v1.25.12)

---
updated-dependencies:
- dependency-name: gorm.io/gorm
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 15:37:35 +06:00
dependabot[bot]
86c6a27cc4 build(deps): bump golang from 1.22.7 to 1.23.2 (#1697)
Bumps golang from 1.22.7 to 1.23.2.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 14:08:26 +06:00
dependabot[bot]
8a695eb8d1 build(deps): bump k8s.io/client-go from 0.29.3 to 0.31.2 (#1712)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.29.3 to 0.31.2.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.29.3...v0.31.2)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:26:08 +06:00
dependabot[bot]
e48c3dd7b5 build(deps): bump golangci/golangci-lint-action from 5 to 6 (#1707)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 5 to 6.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:05:00 +06:00
dependabot[bot]
ddb586d441 build(deps): bump k8s.io/apimachinery from 0.29.3 to 0.31.1 (#1681)
* build(deps): bump k8s.io/apimachinery from 0.29.3 to 0.31.1

Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.29.3 to 0.31.1.
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.29.3...v0.31.1)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* skip go toolchain

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-24 12:51:14 +06:00
afdesk
5568895095 chore: add go toolchain version (#1710)
* chore: add go toolchain version

* bump up toolchain to 1.22.7
2024-10-24 12:40:41 +06:00
dependabot[bot]
d5ba5edca0 build(deps): bump actions/setup-python from 4 to 5 (#1536)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4 to 5.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 11:30:08 +06:00
dependabot[bot]
0e3dbfa985 build(deps): bump docker/build-push-action from 5 to 6 (#1631)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-21 23:30:31 +06:00
dependabot[bot]
e9ea1dbb74 build(deps): bump golangci/golangci-lint-action from 4 to 5 (#1604)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 4 to 5.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-21 15:47:43 +06:00
afdesk
c5dc28ee6f release: prepare v0.9.1 (#1705) 2024-10-16 19:48:17 +06:00
Omar kamoun
fa478ce238 fix: correct TLSCipherSuites to tlsCipherSuites (#1703) 2024-10-16 11:50:10 +06:00
dependabot[bot]
1d8f80e846 build(deps): bump github.com/golang/glog from 1.2.0 to 1.2.2 (#1702)
Bumps [github.com/golang/glog](https://github.com/golang/glog) from 1.2.0 to 1.2.2.
- [Release notes](https://github.com/golang/glog/releases)
- [Commits](https://github.com/golang/glog/compare/v1.2.0...v1.2.2)

---
updated-dependencies:
- dependency-name: github.com/golang/glog
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-15 11:14:14 +06:00
Abubakr-Sadik Nii Nai Davis
a15e8acaa3 Add GKE 1.6 CIS benchmark for GCP environment (#1672)
* Add config entries for GKE 1.6 controls

* Add gke1.6 control plane recommendations

* Add gke-1.6.0 worker node recommendations

* Add gke-1.6.0 policy recommendations

* Add managed services and policy recommendation

* Add master recommendations

* Fix formatting across gke-1.6.0 files

* Add gke-1.6.0 benchmark selection based on k8s version

* Workaround: hardcode kubelet config path for gke-1.6.0

* Fix tests for makeIPTablesUtilChaings

* Change scored field for all node tests to true

* Fix kubelet file permission to check for

---------

Co-authored-by: afdesk <work@afdesk.com>
2024-10-11 10:49:35 +06:00
dependabot[bot]
e47725299e build(deps): bump gorm.io/driver/postgres from 1.5.6 to 1.5.9 (#1698)
Bumps [gorm.io/driver/postgres](https://github.com/go-gorm/postgres) from 1.5.6 to 1.5.9.
- [Commits](https://github.com/go-gorm/postgres/compare/v1.5.6...v1.5.9)

---
updated-dependencies:
- dependency-name: gorm.io/driver/postgres
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-10 10:37:41 +06:00
Matthias Muth
e8562f2944 Extend default kubelet configlist to fit AWS EKS (#1637)
- The latest default Kubernetes setup on AWS has its kubelet config path in
  the added location. Proposing to extend the list of scanned paths in order
  to make kube-bench execution more painless and "quick start"-like in
  default setups.
2024-10-04 14:08:03 +06:00
Arano-kai
3a0ccc440c fix: rh-1.0 check 4.1.3 typo (#1652)
Co-authored-by: Arano-kai <captcha.is(dot)evil(meov)gmail.com>
2024-10-04 13:42:56 +06:00
dependabot[bot]
c683e93968 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1696)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.53.1 to 1.53.3.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.53.1...service/iot/v1.53.3)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-04 12:21:07 +06:00
jdesouza
e75cd6bbc8 Updated KUBECTL_VERSION to 1.31.0 for fixing vulnerabilities (#1690)
* Bumped Go to 1.22.7 for fixing Critical/High vulnerabilities

* Bumped Go to 1.22.7 for fixing Critical/High vulnerabilities

* Bumped kubectl version for fixing vulnerabilities

* Fixed kubectl version

* Update go.mod
2024-10-03 22:43:01 +06:00
dependabot[bot]
d8f041a826 build(deps): bump alpine from 3.20.0 to 3.20.3 (#1676)
Bumps alpine from 3.20.0 to 3.20.3.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-03 09:20:12 +06:00
Winnerson Kharsunai
7ea1d59bb1 update audit script for cis-1.9 kubernetes policies id 5.1.6 (#1655) 2024-10-01 11:48:02 +06:00
Winnerson Kharsunai
89842dcaaa update dockerfile to add package findutils (#1657) 2024-10-01 11:32:23 +06:00
za
674d8e8bb7 Update command to build docker to run in EKS cluster (#1648)
because the previous command was unable to get the argument.

Issue: https://github.com/aquasecurity/kube-bench/issues/1647

Co-authored-by: za <za@noreply.users.github.com>
2024-09-30 12:13:10 +06:00
Andy Pitcher
4b4c1ce709 Modify 1.2.3 Ensure that the DenyServiceExternalIPs is set in CIS-1.7/1.8 (#1607)
* Modify 1.2.3 Ensure that the DenyServiceExternalIPs is set
 - op changed from `have` to `has` and removed bin_op: or
 - remediation description changed to only include --enable-admission-plugins

* Apply changes for CIS-1.9
2024-09-30 10:30:59 +06:00
Andy Pitcher
b85ec78a84 Fix CIS-1.9 policies 5.1.1/5.1.5 typos (#1658)
* Fix CIS-1.9 policies 5.1.1 typo

* Fix typo CIS-1.9 5.1.5

* Add new lines to CIS-1.9
2024-09-30 09:54:45 +06:00
Wolfgang Reichert
f6877e3c17 Fix issue 1595: failed to output to ASFF (#1691)
A breaking change was introduced in aws-sdk-go-v2.
See https://github.com/aws/aws-sdk-go-v2/issues/2370#issuecomment-1953308268.

Mixing aws-sdk-go-v2 packages from versions before and after the breaking change causes kube-bench to fail. This issue occurs when it attempts to access AWS Security Hub.

Addressed issue: https://github.com/aquasecurity/kube-bench/issues/1595

Supersedes bot PR: https://github.com/aquasecurity/kube-bench/pull/1689
Besides upgrading to the latest SDK version, some variable types need to be adapted.
2024-09-28 13:36:44 +06:00
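The fix above keeps the aws-sdk-go-v2 modules on mutually compatible versions. As a hedged sketch only (not kube-bench's actual ASFF writer; account ID, product ARN, and all finding fields are placeholders), submitting a finding to Security Hub with the current SDK generation looks roughly like this:

```go
// Hedged, illustrative sketch of sending a finding to AWS Security Hub with
// aws-sdk-go-v2; all field values are placeholders.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/securityhub"
	"github.com/aws/aws-sdk-go-v2/service/securityhub/types"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := securityhub.NewFromConfig(cfg)

	finding := types.AwsSecurityFinding{
		AwsAccountId:  aws.String("111122223333"),            // placeholder
		Id:            aws.String("kube-bench/example-check"), // placeholder
		ProductArn:    aws.String("arn:aws:securityhub:example:placeholder"), // placeholder
		GeneratorId:   aws.String("kube-bench"),
		SchemaVersion: aws.String("2018-10-08"),
		Title:         aws.String("Example check failed"),
		Description:   aws.String("Illustrative finding only"),
		CreatedAt:     aws.String("2024-09-28T00:00:00Z"),
		UpdatedAt:     aws.String("2024-09-28T00:00:00Z"),
		Severity:      &types.Severity{Label: types.SeverityLabelInformational},
		Resources: []types.Resource{
			{Id: aws.String("example-node"), Type: aws.String("Other")},
		},
	}

	if _, err := client.BatchImportFindings(ctx, &securityhub.BatchImportFindingsInput{
		Findings: []types.AwsSecurityFinding{finding},
	}); err != nil {
		log.Fatal(err)
	}
}
```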
Andy Pitcher
2751f87034 Fix audit and remediation for CIS-1.9 master 1.1.13/1.1.14 (#1649)
* Fix audit and remediation for CIS-1.9 master 1.1.13/1.1.14

* Fix loop syntax for file paths

---------

Co-authored-by: afdesk <work@afdesk.com>
2024-09-26 10:45:48 +06:00
Derek Nola
a9422a6623 Overhaul of K3s scans (#1659)
* Overhaul K3s 1.X checks

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Overhaul K3s 2.X Checks

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Overhaul K3s 4.X checks

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Overhaul K3s 5.X checks

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Add K3s cis-1.8 scan

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Fix K3s 1.1.10 check

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Merge journalctl checks for K3s

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Matched Manual/Automated to correct scoring (false/true)

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Remove incorrect use of check_for_default_sa.sh script

Signed-off-by: Derek Nola <derek.nola@suse.com>

---------

Signed-off-by: Derek Nola <derek.nola@suse.com>
Co-authored-by: afdesk <work@afdesk.com>
2024-09-25 13:12:02 +06:00
mjshastha
f8b6f2fc19 chore: fixed vulns - bump Go version (#1687) 2024-09-24 12:12:40 +06:00
Saurabh Misra
c533d68bad FIXING RKE-2-CIS-1.24 Checks (#1688)
MASTER:
  Checks 1.1.10 and 1.1.20 are manual.
NODE:
  Check 4.2.12 is the node-level equivalent of the master-level check 1.3.6 and is treated the same way.
2024-09-24 11:56:58 +06:00
dependabot[bot]
5a3fd1d896 build(deps): bump golang from 1.22.2 to 1.22.4 (#1629)
Bumps golang from 1.22.2 to 1.22.4.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-04 08:46:34 +03:00
chenk
366e79ddda release: prepare v0.8.0 (#1639)
Signed-off-by: chenk <hen.keinan@gmail.com>
2024-07-02 10:35:09 +03:00
dependabot[bot]
871027447f build(deps): bump goreleaser/goreleaser-action from 5 to 6 (#1628)
Bumps [goreleaser/goreleaser-action](https://github.com/goreleaser/goreleaser-action) from 5 to 6.
- [Release notes](https://github.com/goreleaser/goreleaser-action/releases)
- [Commits](https://github.com/goreleaser/goreleaser-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: goreleaser/goreleaser-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-29 15:53:49 +03:00
Andy Pitcher
7027b6b2ec Add CIS kubernetes CIS-1.9 for k8s v1.27 - v1.29 (#1617)
* Create cis-1.9 yamls and Update info
      - policies.yaml
          - 5.1.1 to 5.1.6 were adapted from Manual to Automated
          - 5.1.3 got broken down into 5.1.3.1 and 5.1.3.2
          - 5.1.6 got broken down into 5.1.6.1 and 5.1.6.2
          - version was set to cis-1.9
       - node.yaml master.yaml controlplane.yaml etcd.yaml
          - version was set to cis-1.9

* Adapt master.yaml
    - Expand 1.1.13/1.1.14 checks by adding super-admin.conf to the permission and ownership verification
    - Remove 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
    - Adjust numbering from 1.2.12 to 1.2.29

* Adjust policies.yaml
   - Check 5.2.3 to 5.2.9 Title Automated to Manual

* Append node.yaml
   - Create 4.3 kube-config group
   - Create 4.3.1 Ensure that the kube-proxy metrics service is bound to localhost (Automated)

* Adjust policies 5.1.3 and 5.1.6

   - Merge 5.1.3.1 and 5.1.3.2 into 5.1.3 (use role_is_compliant and clusterrole_is_compliant)
   - Remove 5.1.6.1 and promote 5.1.6.2 to 5.1.6 since it natively covered 5.1.6.1 artifacts

* Add kubectl dependency and update publish
   - Download kubectl (build stage) based on version and architecture
   - Add binary checksum verification
   - Use go env GOARCH for ARCH
2024-06-26 15:53:57 +03:00
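The 4.3.1 check added above verifies that kube-proxy serves its metrics endpoint only on localhost. A rough sketch of what such a check can look like in kube-bench's format follows; the audit commands, config path, and comparison are assumptions for illustration, not the exact wording shipped in cis-1.9.

- id: 4.3.1
  text: "Ensure that the kube-proxy metrics service is bound to localhost (Automated)"
  audit: "/bin/ps -fC kube-proxy"
  audit_config: "/bin/cat /var/lib/kube-proxy/config.conf"  # path assumed for illustration
  tests:
    test_items:
      - flag: "--metrics-bind-address"
        path: '{.metricsBindAddress}'
        compare:
          op: has
          value: "127.0.0.1"
  remediation: |
    Configure kube-proxy (metricsBindAddress in its config file, or the
    --metrics-bind-address argument) to serve metrics on 127.0.0.1:10249
    instead of 0.0.0.0.
  scored: true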
dependabot[bot]
d8fc37649a build(deps): bump alpine from 3.19.1 to 3.20.0 (#1621)
Bumps alpine from 3.19.1 to 3.20.0.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-31 17:28:56 +03:00
Paulo Gomes
0f8dfaf115 Statically link binaries and remove debug information (#1615)
Signed-off-by: Paulo Gomes <pjbgf@linux.com>
2024-05-22 08:37:36 +03:00
Derek Nola
ed51191d7c Replace custom k3s etcd script checks with vanilla grep checks (#1601)
* Replace custom k3s etcd script checks with vanilla grep checks

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Rework etcd grep, remove etcd ENV checks (no-op), add correct k3s etcddatadir

Signed-off-by: Derek Nola <derek.nola@suse.com>

* chore: update go-linter version

Signed-off-by: chenk <hen.keinan@gmail.com>

* Use etcddatadir variable

Signed-off-by: Derek Nola <derek.nola@suse.com>

---------

Signed-off-by: Derek Nola <derek.nola@suse.com>
Signed-off-by: chenk <hen.keinan@gmail.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-05-20 13:47:15 +03:00
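The commit above drops the custom etcd helper scripts in favour of plain command-based audits and wires in an etcddatadir variable for K3s's embedded etcd. The sketch below shows the general shape of a check built on that variable; the id, command, and values follow kube-bench conventions but are assumptions, not the literal checks from #1601.

- id: 1.1.11
  text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
  audit: "stat -c permissions=%a $etcddatadir"
  tests:
    test_items:
      - flag: "permissions"
        compare:
          op: bitmask
          value: "700"
  remediation: |
    On the node running embedded etcd, run:
    chmod 700 $etcddatadir
  scored: true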
dependabot[bot]
2a8615befd build(deps): bump golang from 1.22.1 to 1.22.2 (#1596)
Bumps golang from 1.22.1 to 1.22.2.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-03 19:35:58 +03:00
chenk
ff9341a5d0 release: prepare-v0.7.3 (#1599)
Signed-off-by: chenk <hen.keinan@gmail.com>
2024-04-18 09:58:44 +03:00
dependabot[bot]
65c484e85a build(deps): bump k8s.io/client-go from 0.29.1 to 0.29.3 (#1587)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.29.1 to 0.29.3.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.29.1...v0.29.3)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-04-18 09:54:55 +03:00
mjshastha
d2d3e72271 Currently, certain commands involve retrieving all node names or pods and then executing additional commands in a loop, resulting in a time complexity linearly proportional to the number of nodes. (#1597)
This approach becomes time-consuming for larger clusters.

As kube-bench is executed as a job on every node in the cluster, the commands were streamlined to run directly on the current node where kube-bench operates.
This change ensures that the time complexity remains constant regardless of the cluster size.
By running the necessary commands only once per node, no matter how many nodes are in the cluster, this approach significantly boosts performance and efficiency (a hypothetical before/after sketch follows this entry).
2024-04-18 09:01:17 +03:00
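A hypothetical before/after illustration of the pattern described in the entry above: the looping variant costs O(number of nodes) on every node it runs on, while the streamlined variant only inspects the node the kube-bench job is scheduled on. Neither audit string is taken from the actual #1597 diff.

# Hypothetical check, shown only to illustrate the O(n) -> O(1) change.
- id: x.y.z
  text: "Example check rewritten to run against the local node only (illustrative)"
  # Before: loop over every node in the cluster (cost grows with cluster size)
  # audit: |
  #   for n in $(kubectl get nodes -o name); do
  #     kubectl get --raw "/api/v1/nodes/${n#node/}/proxy/configz"
  #   done
  # After: read the local kubelet config on the node kube-bench is running on
  audit: "/bin/cat $kubeletconf"
  tests:
    test_items:
      - flag: "readOnlyPort"
        path: '{.readOnlyPort}'
        compare:
          op: eq
          value: 0
  scored: true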
dependabot[bot]
73e1377ce0 build(deps): bump github.com/jackc/pgx/v5 from 5.4.3 to 5.5.4 (#1586)
Bumps [github.com/jackc/pgx/v5](https://github.com/jackc/pgx) from 5.4.3 to 5.5.4.
- [Changelog](https://github.com/jackc/pgx/blob/master/CHANGELOG.md)
- [Commits](https://github.com/jackc/pgx/compare/v5.4.3...v5.5.4)

---
updated-dependencies:
- dependency-name: github.com/jackc/pgx/v5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-06 08:59:45 +03:00
dependabot[bot]
dc8f4d37f0 build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.25.2 to 1.26.0 (#1589)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.25.2 to 1.26.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.25.2...v1.26.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-03-30 12:41:07 +03:00
dependabot[bot]
dc7441620f build(deps): bump golang from 1.22.0 to 1.22.1 (#1583)
Bumps golang from 1.22.0 to 1.22.1.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-29 14:10:34 +03:00
dependabot[bot]
45afbd76c2 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1577)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.26.6 to 1.27.4.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.26.6...config/v1.27.4)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-03-08 15:36:33 +02:00
178 changed files with 26869 additions and 1931 deletions

View File

@@ -14,7 +14,6 @@ on:
- "LICENSE"
- "NOTICE"
env:
GO_VERSION: "1.21"
KIND_VERSION: "v0.11.1"
KIND_IMAGE: "kindest/node:v1.21.1@sha256:69860bda5563ac81e3c0057d654b5253219618a22ec3a346306239bba8cfa1a6"
@@ -23,47 +22,47 @@ jobs:
name: Lint
runs-on: ubuntu-latest
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Go
uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
with:
go-version-file: go.mod
- name: yaml-lint
uses: ibiqlik/action-yamllint@v3
- name: Setup golangci-lint
uses: golangci/golangci-lint-action@v4
uses: golangci/golangci-lint-action@v8
with:
version: latest
args: --verbose
version: v2.5.0
args: --verbose --timeout 2m
unit:
name: Unit tests
runs-on: ubuntu-latest
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Go
uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
with:
go-version-file: go.mod
- name: Run unit tests
run: make tests
- name: Upload code coverage
uses: codecov/codecov-action@v4
uses: codecov/codecov-action@v5
with:
file: ./coverage.txt
e2e:
name: E2e tests
runs-on: ubuntu-latest
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Setup Go
uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
with:
go-version-file: go.mod
- name: Setup Kubernetes cluster (KIND)
uses: engineerd/setup-kind@v0.5.0
uses: engineerd/setup-kind@v0.6.2
with:
version: ${{ env.KIND_VERSION }}
image: ${{ env.KIND_IMAGE }}
@@ -86,16 +85,16 @@ jobs:
runs-on: ubuntu-latest
needs: [e2e, unit]
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Setup Go
uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
with:
go-version-file: go.mod
- name: Dry-run release snapshot
uses: goreleaser/goreleaser-action@v5
uses: goreleaser/goreleaser-action@v6
with:
distribution: goreleaser
version: v1.7.0

View File

@@ -16,11 +16,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout main
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 0
persist-credentials: true
- uses: actions/setup-python@v4
- uses: actions/setup-python@v6
with:
python-version: 3.x
- run: |

View File

@@ -9,20 +9,21 @@ env:
ALIAS: aquasecurity
DOCKERHUB_ALIAS: aquasec
REP: kube-bench
jobs:
publish:
name: Publish
runs-on: ubuntu-latest
steps:
- name: Check Out Repo
uses: actions/checkout@v4
uses: actions/checkout@v6
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
- name: Cache Docker layers
uses: actions/cache@v4
uses: actions/cache@v5
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildxarch-${{ github.sha }}
@@ -46,10 +47,13 @@ jobs:
images: ${{ env.REP }}
tag-semver: |
{{version}}
- name: Extract variables from makefile (kubectl)
id: extract_vars
run: |
echo "KUBECTL_VERSION=$(grep -oP '^KUBECTL_VERSION\s*\?=\s*\K.*' makefile)" >> $GITHUB_ENV
- name: Build and push - Docker/ECR
id: docker_build
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64,linux/ppc64le,linux/s390x
@@ -57,6 +61,7 @@ jobs:
push: true
build-args: |
KUBEBENCH_VERSION=${{ steps.get_version.outputs.version }}
KUBECTL_VERSION=${{ env.KUBECTL_VERSION }}
tags: |
${{ env.DOCKERHUB_ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}
public.ecr.aws/${{ env.ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}
@@ -67,7 +72,7 @@ jobs:
- name: Build and push ubi image - Docker/ECR
id: docker_build_ubi
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64,linux/ppc64le,linux/s390x
@@ -76,6 +81,7 @@ jobs:
file: Dockerfile.ubi
build-args: |
KUBEBENCH_VERSION=${{ steps.get_version.outputs.version }}
KUBECTL_VERSION=${{ env.KUBECTL_VERSION }}
tags: |
${{ env.DOCKERHUB_ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}-ubi
public.ecr.aws/${{ env.ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}-ubi
@@ -86,7 +92,7 @@ jobs:
- name: Build and push fips ubi image - Docker/ECR
id: docker_build_fips_ubi
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64,linux/ppc64le,linux/s390x
@@ -95,6 +101,7 @@ jobs:
file: Dockerfile.fips.ubi
build-args: |
KUBEBENCH_VERSION=${{ steps.get_version.outputs.version }}
KUBECTL_VERSION=${{ env.KUBECTL_VERSION }}
tags: |
${{ env.DOCKERHUB_ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}-ubi-fips
public.ecr.aws/${{ env.ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}-ubi-fips

View File

@@ -5,7 +5,6 @@ on:
tags:
- "v*"
env:
GO_VERSION: "1.21"
KIND_VERSION: "v0.11.1"
KIND_IMAGE: "kindest/node:v1.21.1@sha256:69860bda5563ac81e3c0057d654b5253219618a22ec3a346306239bba8cfa1a6"
@@ -14,18 +13,18 @@ jobs:
name: Release
runs-on: ubuntu-latest
steps:
- name: Setup Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout code
uses: actions/checkout@v4
uses: actions/checkout@v6
with:
fetch-depth: 0
- name: Setup Go
uses: actions/setup-go@7a3fe6cf4cb3a834922a1244abfce67bcef6a0c5 # v6.2.0
with:
go-version-file: go.mod
- name: Run unit tests
run: make tests
- name: Setup Kubernetes cluster (KIND)
uses: engineerd/setup-kind@v0.5.0
uses: engineerd/setup-kind@v0.6.2
with:
version: ${{ env.KIND_VERSION }}
image: ${{ env.KIND_IMAGE }}
@@ -44,7 +43,7 @@ jobs:
second_file_path: integration/testdata/Expected_output.data
expected_result: PASSED
- name: Release
uses: goreleaser/goreleaser-action@v5
uses: goreleaser/goreleaser-action@v6
with:
distribution: goreleaser
version: v1.7.0

View File

@@ -1,12 +1,28 @@
---
version: "2"
linters:
disable-all: true
default: none
enable:
- deadcode
- gocyclo
- gofmt
- goimports
- govet
- misspell
- typecheck
- varcheck
exclusions:
generated: lax
presets:
- comments
- common-false-positives
- legacy
- std-error-handling
paths:
- third_party$
- builtin$
- examples$
formatters:
enable:
- gofmt
- goimports
exclusions:
generated: lax
paths:
- third_party$
- builtin$
- examples$

View File

@@ -2,10 +2,15 @@
project_name: kube-bench
env:
- GO111MODULE=on
- CGO_ENABLED=0
- KUBEBENCH_CFG=/etc/kube-bench/cfg
builds:
- main: main.go
- main: .
binary: kube-bench
tags:
- osusergo
- netgo
- static_build
goos:
- linux
- darwin
@@ -19,6 +24,9 @@ builds:
- 6
- 7
ldflags:
- "-s"
- "-w"
- "-extldflags '-static'"
- "-X github.com/aquasecurity/kube-bench/cmd.KubeBenchVersion={{.Version}}"
- "-X github.com/aquasecurity/kube-bench/cmd.cfgDir={{.Env.KUBEBENCH_CFG}}"
# Archive customization

View File

@@ -1,4 +1,4 @@
FROM golang:1.22.0 AS build
FROM golang:1.25.7 AS build
WORKDIR /go/src/github.com/aquasecurity/kube-bench/
COPY makefile makefile
COPY go.mod go.sum ./
@@ -9,11 +9,22 @@ COPY internal/ internal/
ARG KUBEBENCH_VERSION
RUN make build && cp kube-bench /go/bin/kube-bench
FROM alpine:3.19.1 AS run
# Add kubectl to run policies checks
ARG KUBECTL_VERSION TARGETARCH
RUN wget -O /usr/local/bin/kubectl "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl"
RUN wget -O kubectl.sha256 "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl.sha256"
# Verify kubectl sha256sum
RUN /bin/bash -c 'echo "$(<kubectl.sha256) /usr/local/bin/kubectl" | sha256sum -c -'
RUN chmod +x /usr/local/bin/kubectl
FROM alpine:3.23.3 AS run
WORKDIR /opt/kube-bench/
# add GNU ps for -C, -o cmd, and --no-headers support
# add GNU ps for -C, -o cmd, --no-headers support and add findutils to get GNU xargs
# https://github.com/aquasecurity/kube-bench/issues/109
RUN apk --no-cache add procps
# https://github.com/aquasecurity/kube-bench/issues/1656
RUN apk --no-cache add procps findutils
# Upgrading apk-tools to remediate CVE-2021-36159 - https://snyk.io/vuln/SNYK-ALPINE314-APKTOOLS-1533752
# https://github.com/aquasecurity/kube-bench/issues/943
@@ -29,21 +40,34 @@ RUN wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/s
RUN apk add gcompat
RUN apk add jq
ENV PATH=$PATH:/usr/local/mount-from-host/bin
# Add bash for running helper scripts
RUN apk add bash
ENV PATH=$PATH:/usr/local/mount-from-host/bin:/go/bin
COPY --from=build /go/bin/kube-bench /usr/local/bin/kube-bench
COPY --from=build /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY entrypoint.sh .
COPY cfg/ cfg/
COPY helper_scripts/check_files_owner_in_dir.sh /go/bin/
RUN chmod a+x /go/bin/check_files_owner_in_dir.sh
ENTRYPOINT ["./entrypoint.sh"]
CMD ["install"]
# Build-time metadata as defined at http://label-schema.org
ARG BUILD_DATE
ARG VCS_REF
ARG KUBEBENCH_VERSION
LABEL org.label-schema.build-date=$BUILD_DATE \
org.label-schema.name="kube-bench" \
org.label-schema.description="Run the CIS Kubernetes Benchmark tests" \
org.label-schema.url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.schema-version="1.0"
org.label-schema.name="kube-bench" \
org.label-schema.vendor="Aqua Security Software Ltd." \
org.label-schema.version=$KUBEBENCH_VERSION \
org.label-schema.release=$KUBEBENCH_VERSION \
org.label-schema.summary="Aqua security server" \
org.label-schema.maintainer="admin@aquasec.com" \
org.label-schema.description="Run the CIS Kubernetes Benchmark tests" \
org.label-schema.url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.schema-version="1.0"

View File

@@ -1,4 +1,4 @@
FROM golang:1.22.0 AS build
FROM golang:1.25.7 AS build
WORKDIR /go/src/github.com/aquasecurity/kube-bench/
COPY makefile makefile
COPY go.mod go.sum ./
@@ -9,6 +9,14 @@ COPY internal/ internal/
ARG KUBEBENCH_VERSION
RUN make build-fips && cp kube-bench /go/bin/kube-bench
# Add kubectl to run policies checks
ARG KUBECTL_VERSION TARGETARCH
RUN wget -O /usr/local/bin/kubectl "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl"
RUN wget -O kubectl.sha256 "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl.sha256"
# Verify kubectl sha256sum
RUN /bin/bash -c 'echo "$(<kubectl.sha256) /usr/local/bin/kubectl" | sha256sum -c -'
RUN chmod +x /usr/local/bin/kubectl
# ubi8-minimal base image for build with ubi standards
FROM registry.access.redhat.com/ubi9/ubi-minimal as run
@@ -31,8 +39,10 @@ ENV PATH=$PATH:/usr/local/mount-from-host/bin
COPY LICENSE /licenses/LICENSE
COPY --from=build /go/bin/kube-bench /usr/local/bin/kube-bench
COPY --from=build /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY entrypoint.sh .
COPY cfg/ cfg/
COPY helper_scripts/check_files_owner_in_dir.sh /go/bin
ENTRYPOINT ["./entrypoint.sh"]
CMD ["install"]
@@ -40,10 +50,17 @@ CMD ["install"]
# Build-time metadata as defined at http://label-schema.org
ARG BUILD_DATE
ARG VCS_REF
ARG KUBEBENCH_VERSION
LABEL org.label-schema.build-date=$BUILD_DATE \
org.label-schema.name="kube-bench" \
org.label-schema.description="Run the CIS Kubernetes Benchmark tests" \
org.label-schema.url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.schema-version="1.0"
org.label-schema.name="kube-bench" \
org.label-schema.vendor="Aqua Security Software Ltd." \
org.label-schema.version=$KUBEBENCH_VERSION \
org.label-schema.release=$KUBEBENCH_VERSION \
org.label-schema.summary="Aqua security server" \
org.label-schema.maintainer="admin@aquasec.com" \
org.label-schema.description="Run the CIS Kubernetes Benchmark tests" \
org.label-schema.url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.schema-version="1.0"

View File

@@ -1,4 +1,4 @@
FROM golang:1.22.0 AS build
FROM golang:1.25.7 AS build
WORKDIR /go/src/github.com/aquasecurity/kube-bench/
COPY makefile makefile
COPY go.mod go.sum ./
@@ -9,6 +9,14 @@ COPY internal/ internal/
ARG KUBEBENCH_VERSION
RUN make build && cp kube-bench /go/bin/kube-bench
# Add kubectl to run policies checks
ARG KUBECTL_VERSION TARGETARCH
RUN wget -O /usr/local/bin/kubectl "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl"
RUN wget -O kubectl.sha256 "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl.sha256"
# Verify kubectl sha256sum
RUN /bin/bash -c 'echo "$(<kubectl.sha256) /usr/local/bin/kubectl" | sha256sum -c -'
RUN chmod +x /usr/local/bin/kubectl
# ubi8-minimal base image for build with ubi standards
FROM registry.access.redhat.com/ubi9/ubi-minimal as run
@@ -27,12 +35,14 @@ RUN microdnf install -y yum findutils openssl \
WORKDIR /opt/kube-bench/
ENV PATH=$PATH:/usr/local/mount-from-host/bin
ENV PATH=$PATH:/usr/local/mount-from-host/bin
COPY LICENSE /licenses/LICENSE
COPY --from=build /go/bin/kube-bench /usr/local/bin/kube-bench
COPY --from=build /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY entrypoint.sh .
COPY cfg/ cfg/
COPY helper_scripts/check_files_owner_in_dir.sh /go/bin
ENTRYPOINT ["./entrypoint.sh"]
CMD ["install"]
@@ -40,10 +50,17 @@ CMD ["install"]
# Build-time metadata as defined at http://label-schema.org
ARG BUILD_DATE
ARG VCS_REF
ARG KUBEBENCH_VERSION
LABEL org.label-schema.build-date=$BUILD_DATE \
org.label-schema.name="kube-bench" \
org.label-schema.description="Run the CIS Kubernetes Benchmark tests" \
org.label-schema.url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.schema-version="1.0"
org.label-schema.description="Run the CIS Kubernetes Benchmark tests" \
org.label-schema.url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.schema-version="1.0" \
vendor="Aqua Security Software Ltd." \
maintainer="Aqua Security Software Ltd." \
version=$KUBEBENCH_VERSION \
release=$KUBEBENCH_VERSION \
summary="Aqua Security Kube-bench." \
description="Run the CIS Kubernetes Benchmark tests"

View File

@@ -377,7 +377,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to
If using a Kubelet config file, edit the file to set tlsCipherSuites: to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

View File

@@ -132,7 +132,7 @@ groups:
type: "manual"
remediation: |
Review the use of capabilites in applications runnning on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
contains applications which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false

2
cfg/aks-1.7/config.yaml Normal file
View File

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml

View File

@@ -0,0 +1,31 @@
---
controls:
version: "aks-1.7"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 2.1
text: "Logging"
checks:
- id: 2.1.1
text: "Enable audit Logs"
type: "manual"
remediation: |
Azure audit logs are enabled and managed in the Azure portal. To enable log collection for
the Kubernetes master components in your AKS cluster, open the Azure portal in a web
browser and complete the following steps:
1. Select the resource group for your AKS cluster, such as myResourceGroup. Don't
select the resource group that contains your individual AKS cluster resources, such
as MC_myResourceGroup_myAKSCluster_eastus.
2. On the left-hand side, choose Diagnostic settings.
3. Select your AKS cluster, such as myAKSCluster, then choose to Add diagnostic setting.
4. Enter a name, such as myAKSClusterLogs, then select the option to Send to Log Analytics.
5. Select an existing workspace or create a new one. If you create a workspace, provide
a workspace name, a resource group, and a location.
6. In the list of available logs, select the logs you wish to enable. For this example,
enable the kube-audit and kube-audit-admin logs. Common logs include the kube-
apiserver, kube-controller-manager, and kube-scheduler. You can return and change
the collected logs once Log Analytics workspaces are enabled.
7. When ready, select Save to enable collection of the selected logs.
scored: false

View File

@@ -0,0 +1,169 @@
---
controls:
version: "aks-1.7"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning using Microsoft Defender for Cloud (MDC) image scanning or a third party provider (Manual)"
type: "manual"
remediation: |
Enable MDC for Container Registries by running the following Azure CLI command:
az security pricing create --name ContainerRegistry --tier Standard
Alternatively, use the following command to enable image scanning for your container registry:
az resource update --ids /subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.ContainerRegistry/registries/{registry-name} --set properties.enabled=true
Replace `subscription-id`, `resource-group-name`, and `registry-name` with the correct values for your environment.
Please note that enabling MDC for Container Registries will incur additional costs, so be sure to review the pricing information provided in the Azure documentation before enabling it.
scored: false
- id: 5.1.2
text: "Minimize user access to Azure Container Registry (ACR) (Manual)"
type: "manual"
remediation: |
Azure Container Registry
If you use Azure Container Registry (ACR) as your container image store, you need to grant
permissions to the service principal for your AKS cluster to read and pull images. Currently,
the recommended configuration is to use the az aks create or az aks update command to
integrate with a registry and assign the appropriate role for the service principal. For
detailed steps, see Authenticate with Azure Container Registry from Azure Kubernetes
Service.
To avoid needing an Owner or Azure account administrator role, you can configure a
service principal manually or use an existing service principal to authenticate ACR from
AKS. For more information, see ACR authentication with service principals or Authenticate
from Kubernetes with a pull secret.
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Azure Container Registry (ACR) (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.1.4
text: "Minimize Container Registries to only those approved (Manual)"
type: "manual"
remediation: |
If you are using **Azure Container Registry**, you can restrict access using firewall rules as described in the official documentation:
https://docs.microsoft.com/en-us/azure/container-registry/container-registry-firewall-access-rules
For other non-AKS repositories, you can use **admission controllers** or **Azure Policy** to enforce registry access restrictions.
Limiting or locking down egress traffic to specific container registries is also recommended. For more information, refer to:
https://docs.microsoft.com/en-us/azure/aks/limit-egress-traffic
scored: false
- id: 5.2
text: "Access and identity options for Azure Kubernetes Service (AKS)"
checks:
- id: 5.2.1
text: "Prefer using dedicated AKS Service Accounts (Manual)"
type: "manual"
remediation: |
Azure Active Directory integration
The security of AKS clusters can be enhanced with the integration of Azure Active Directory
(AD). Built on decades of enterprise identity management, Azure AD is a multi-tenant,
cloud-based directory, and identity management service that combines core directory
services, application access management, and identity protection. With Azure AD, you can
integrate on-premises identities into AKS clusters to provide a single source for account
management and security.
Azure Active Directory integration with AKS clusters
With Azure AD-integrated AKS clusters, you can grant users or groups access to Kubernetes
resources within a namespace or across the cluster. To obtain a kubectl configuration
context, a user can run the az aks get-credentials command. When a user then interacts
with the AKS cluster with kubectl, they're prompted to sign in with their Azure AD
credentials. This approach provides a single source for user account management and
password credentials. The user can only access the resources as defined by the cluster
administrator.
Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect
is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID
Connect, see the Open ID connect documentation. From inside of the Kubernetes cluster,
Webhook Token Authentication is used to verify authentication tokens. Webhook token
authentication is configured and managed as part of the AKS cluster.
scored: false
- id: 5.3
text: "Key Management Service (KMS)"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.4
text: "Cluster Networking"
checks:
- id: 5.4.1
text: "Restrict Access to the Control Plane Endpoint (Manual)"
type: "manual"
remediation: |
By enabling private endpoint access to the Kubernetes API server, all communication between your nodes and the API server stays within your VPC. You can also limit the IP addresses that can access your API server from the internet, or completely disable internet access to the API server.
With this in mind, you can update your cluster accordingly using the AKS CLI to ensure that Private Endpoint Access is enabled.
If you choose to also enable Public Endpoint Access then you should also configure a list of allowable CIDR blocks, resulting in restricted access from the internet. If you specify no CIDR blocks, then the public API server endpoint is able to receive and process requests from all IP addresses by defaulting to ['0.0.0.0/0'].
Example:
az aks update --name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP} --api-server-access-profile enablePrivateCluster=true --api-server-access-profile authorizedIpRanges=192.168.1.0/24
scored: false
- id: 5.4.2
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
type: "manual"
remediation: |
To use a private endpoint, create a new private endpoint in your virtual network, then create a link between your virtual network and a new private DNS zone.
You can also restrict access to the public endpoint by enabling only specific CIDR blocks to access it. For example:
az aks update --name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP} --api-server-access-profile enablePublicFqdn=false
This command disables the public API endpoint for your AKS cluster.
scored: false
- id: 5.4.3
text: "Ensure clusters are created with Private Nodes (Manual)"
type: "manual"
remediation: |
To create a private cluster, use the following command:
az aks create \
--resource-group <private-cluster-resource-group> \
--name <private-cluster-name> \
--load-balancer-sku standard \
--enable-private-cluster \
--network-plugin azure \
--vnet-subnet-id <subnet-id> \
--docker-bridge-address <docker-bridge-address> \
--dns-service-ip <dns-service-ip> \
--service-cidr <service-cidr>
Ensure that --enable-private-cluster flag is set to enable private nodes in your cluster.
scored: false
- id: 5.4.4
text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
type: "manual"
remediation: |
Utilize Calico or another network policy engine to segment and isolate your traffic.
Enable network policies on your AKS cluster by following the Azure documentation or using the `az aks` CLI to enable the network policy add-on.
scored: false
- id: 5.4.5
text: "Encrypt traffic to HTTPS load balancers with TLS certificates (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.5
text: "Authentication and Authorization"
checks:
- id: 5.5.1
text: "Manage Kubernetes RBAC users with Azure AD (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.5.2
text: "Use Azure RBAC for Kubernetes Authorization (Manual)"
type: "manual"
remediation: "No remediation"
scored: false

6
cfg/aks-1.7/master.yaml Normal file
View File

@@ -0,0 +1,6 @@
---
controls:
version: "aks-1.7"
id: 1
text: "Control Plane Components"
type: "master"

283
cfg/aks-1.7/node.yaml Normal file
View File

@@ -0,0 +1,283 @@
---
controls:
version: "aks-1.7"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the each worker node.
For example,
chmod 644 $kubeletkubeconfig
scored: true
- id: 3.1.2
text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on the each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 3.1.3
text: "Ensure that the azure.json file has permissions set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e /etc/kubernetes/azure.json; then stat -c permissions=%a /etc/kubernetes/azure.json; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 $kubeletconf
scored: true
- id: 3.1.4
text: "Ensure that the azure.json file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
false.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.4
text: "Ensure that the --read-only-port is secured (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set readOnlyPort to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: true
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated) "
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: true
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.7
text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set the 'eventRecordQPS' value to an appropriate level (e.g., 5).
If using executable arguments, check the Kubelet service file `$kubeletsvc` on each worker node, and add the following parameter to the `KUBELET_ARGS` variable:
--eventRecordQPS=5
Ensure that there is no conflicting `--eventRecordQPS` setting in the service file that overrides the config file.
After making the changes, restart the Kubelet service:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.8
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: true
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If modifying the Kubelet config file, edit the `kubelet-config.json` file located at `/etc/kubernetes/kubelet/kubelet-config.json` and set the following parameter to `true`:
"rotateCertificates": true
Ensure that the Kubelet service file located at `/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf` does not define the `--rotate-certificates` argument as `false`, as this would override the config file.
If using executable arguments, add the following line to the `KUBELET_CERTIFICATE_ARGS` variable:
--rotate-certificates=true
After making the necessary changes, restart the Kubelet service:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.9
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: true
compare:
op: eq
value: true
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true

417
cfg/aks-1.7/policies.yaml Normal file
View File

@@ -0,0 +1,417 @@
---
controls:
version: "aks-1.7"
id: 4
text: "Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
audit: |
kubectl get clusterrolebindings -o json | jq -r '
.items[]
| select(.roleRef.name == "cluster-admin")
| .subjects[]?
| select(.kind != "Group" or (.name != "system:masters" and .name != "system:nodes"))
| "FOUND_CLUSTER_ADMIN_BINDING"
' || echo "NO_CLUSTER_ADMIN_BINDINGS"
tests:
test_items:
- flag: "NO_CLUSTER_ADMIN_BINDINGS"
set: true
compare:
op: eq
value: "NO_CLUSTER_ADMIN_BINDINGS"
remediation: |
Identify all clusterrolebindings to the cluster-admin role using:
kubectl get clusterrolebindings --no-headers | grep cluster-admin
Review if each of them actually needs this role. If not, remove the binding:
kubectl delete clusterrolebinding <binding-name>
Where possible, assign a less privileged ClusterRole.
scored: true
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
audit: |
count=$(kubectl get roles --all-namespaces -o json | jq '
.items[]
| select(.rules[]?
| (.resources[]? == "secrets")
and ((.verbs[]? == "get") or (.verbs[]? == "list") or (.verbs[]? == "watch"))
)' | wc -l)
if [ "$count" -gt 0 ]; then
echo "SECRETS_ACCESS_FOUND"
fi
tests:
test_items:
- flag: "SECRETS_ACCESS_FOUND"
set: false
remediation: |
Identify all roles that grant access to secrets via get/list/watch verbs.
Use `kubectl edit role -n <namespace> <name>` to remove these permissions.
Alternatively, create a new least-privileged role that excludes secret access.
scored: true
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
audit: |
wildcards=$(kubectl get roles --all-namespaces -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
wildcards_clusterroles=$(kubectl get clusterroles -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
total=$((wildcards + wildcards_clusterroles))
if [ "$total" -gt 0 ]; then
echo "wildcards_present"
fi
tests:
test_items:
- flag: wildcards_present
set: false
remediation: |
Identify roles and clusterroles using wildcards (*) in 'verbs', 'resources', or 'apiGroups'.
Replace wildcards with specific values to enforce least privilege access.
Use `kubectl edit role -n <namespace> <name>` or `kubectl edit clusterrole <name>` to update.
scored: true
- id: 4.1.4
text: "Minimize access to create pods (Automated)"
audit: |
echo "🔹 Roles and ClusterRoles with 'create' access on 'pods':"
access=$(kubectl get roles,clusterroles -A -o json | jq '
[.items[] |
select(
.rules[]? |
(.resources[]? == "pods" and .verbs[]? == "create")
)
] | length')
if [ "$access" -gt 0 ]; then
echo "pods_create_access"
fi
tests:
test_items:
- flag: pods_create_access
set: false
remediation: |
Review all roles and clusterroles that have "create" permission on "pods".
🔒 Where possible, remove or restrict this permission to only required service accounts.
🛠 Use:
- `kubectl edit role -n <namespace> <role>`
- `kubectl edit clusterrole <name>`
✅ Apply least privilege principle across the cluster.
scored: true
- id: 4.1.5
text: "Ensure that default service accounts are not actively used (Automated)"
audit: |
echo "🔹 Default Service Accounts with automountServiceAccountToken enabled:"
default_sa_count=$(kubectl get serviceaccounts --all-namespaces -o json | jq '
[.items[] | select(.metadata.name == "default" and (.automountServiceAccountToken != false))] | length')
if [ "$default_sa_count" -gt 0 ]; then
echo "default_sa_not_auto_mounted"
fi
echo "\n🔹 Pods using default ServiceAccount:"
pods_using_default_sa=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.serviceAccountName == "default")] | length')
if [ "$pods_using_default_sa" -gt 0 ]; then
echo "default_sa_used_in_pods"
fi
tests:
test_items:
- flag: default_sa_not_auto_mounted
set: false
- flag: default_sa_used_in_pods
set: false
remediation: |
1. Avoid using default service accounts for workloads.
2. Set `automountServiceAccountToken: false` on all default SAs:
kubectl patch serviceaccount default -n <namespace> -p '{"automountServiceAccountToken": false}'
3. Use custom service accounts with only the necessary permissions.
scored: true
- id: 4.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
audit: |
echo "🔹 Pods with automountServiceAccountToken enabled:"
pods_with_token_mount=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.automountServiceAccountToken != false)] | length')
if [ "$pods_with_token_mount" -gt 0 ]; then
echo "automountServiceAccountToken"
fi
tests:
test_items:
- flag: automountServiceAccountToken
set: false
remediation: |
Pods that do not need access to the Kubernetes API should not mount service account tokens.
✅ To disable token mounting in a pod definition:
```yaml
spec:
automountServiceAccountToken: false
```
✅ Or patch an existing pod's spec (recommended via workload template):
Patch not possible for running pods — update the deployment YAML or recreate pods with updated spec.
scored: true
- id: 4.2
text: "Pod Security Policies"
checks:
- id: 4.2.1
text: "Minimize the admission of privileged containers (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?.spec.containers[]?; .securityContext?.privileged == true) then "PRIVILEGED_FOUND" else "NO_PRIVILEGED" end'
tests:
test_items:
- flag: "NO_PRIVILEGED"
set: true
compare:
op: eq
value: "NO_PRIVILEGED"
remediation: |
Add a Pod Security Admission (PSA) policy to each namespace in the cluster to restrict the admission of privileged containers.
To enforce a restricted policy for a specific namespace, use the following command:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also enforce PSA for all namespaces:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
Additionally, review the namespaces that should be excluded (e.g., `kube-system`, `gatekeeper-system`, `azure-arc`, `azure-extensions-usage-system`) and adjust your filtering if necessary.
To enable Pod Security Policies, refer to the detailed documentation for Kubernetes and Azure integration at:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: true
- id: 4.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?; .spec.hostPID == true) then "HOSTPID_FOUND" else "NO_HOSTPID" end'
tests:
test_items:
- flag: "NO_HOSTPID"
set: true
compare:
op: eq
value: "NO_HOSTPID"
remediation: |
Add a policy to each namespace in the cluster that restricts the admission of containers with hostPID. For namespaces that need it, ensure RBAC controls limit access to a specific service account.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: true
- id: 4.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r 'if any(.items[]?; .spec.hostIPC == true) then "HOSTIPC_FOUND" else "NO_HOSTIPC" end'
tests:
test_items:
- flag: "NO_HOSTIPC"
set: true
compare:
op: eq
value: "NO_HOSTIPC"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of hostIPC containers.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: true
- id: 4.2.4
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r 'if any(.items[]?; .spec.hostNetwork == true) then "HOSTNETWORK_FOUND" else "NO_HOSTNETWORK" end'
tests:
test_items:
- flag: "NO_HOSTNETWORK"
set: true
compare:
op: eq
value: "NO_HOSTNETWORK"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of hostNetwork containers.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: true
- id: 4.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?.spec.containers[]?; .securityContext?.allowPrivilegeEscalation == true) then "ALLOWPRIVILEGEESCALTION_FOUND" else "NO_ALLOWPRIVILEGEESCALTION" end'
tests:
test_items:
- flag: "NO_ALLOWPRIVILEGEESCALTION"
set: true
compare:
op: eq
value: "NO_ALLOWPRIVILEGEESCALTION"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of containers with .spec.allowPrivilegeEscalation set to true.
You can label your namespaces as follows to restrict or enforce the policy:
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
You can also use the following to warn about policies:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
For more information, refer to the official Kubernetes and Azure documentation on policies:
https://learn.microsoft.com/en-us/azure/governance/policy/concepts/policy-for-kubernetes
scored: true
- id: 4.3
text: "Azure Policy / OPA"
checks: []
- id: 4.4
text: "CNI Plugin"
checks:
- id: 4.4.1
text: "Ensure latest CNI version is used (Manual)"
type: "manual"
remediation: |
Review the documentation of AWS CNI plugin, and ensure latest CNI version is used.
scored: false
- id: 4.4.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
audit: |
ns_without_np=$(comm -23 \
<(kubectl get ns -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | sort) \
<(kubectl get networkpolicy --all-namespaces -o jsonpath='{.items[*].metadata.namespace}' | tr ' ' '\n' | sort))
if [ -z "$ns_without_np" ]; then echo "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"; else echo "MISSING_NETWORKPOLICIES"; fi
tests:
test_items:
- flag: "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"
set: true
compare:
op: eq
value: "ALL_NAMESPACES_HAVE_NETWORKPOLICIES"
remediation: |
Define at least one NetworkPolicy in each namespace to control pod-level traffic. Example:
kubectl apply -n <namespace> -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
EOF
This denies all traffic unless explicitly allowed. Review and adjust policies per namespace as needed.
scored: true
- id: 4.5
text: "Secrets Management"
checks:
- id: 4.5.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
audit: |
output=$(kubectl get all --all-namespaces -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}')
if [ -z "$output" ]; then echo "NO_ENV_SECRET_REFERENCES"; else echo "ENV_SECRET_REFERENCES_FOUND"; fi
tests:
test_items:
- flag: "NO_ENV_SECRET_REFERENCES"
set: true
compare:
op: eq
value: "NO_ENV_SECRET_REFERENCES"
remediation: |
Refactor application deployments to mount secrets as files instead of passing them as environment variables.
Avoid using `envFrom` or `env` with `secretKeyRef` in container specs.
scored: true
- id: 4.5.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.6
text: "General Policies"
checks:
- id: 4.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.6.2
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 4.6.3
text: "The default namespace should not be used (Automated)"
audit: |
output=$(kubectl get all -n default --no-headers 2>/dev/null | grep -v '^service\s\+kubernetes\s' || true)
if [ -z "$output" ]; then echo "DEFAULT_NAMESPACE_UNUSED"; else echo "DEFAULT_NAMESPACE_IN_USE"; fi
tests:
test_items:
- flag: "DEFAULT_NAMESPACE_UNUSED"
set: true
compare:
op: eq
value: "DEFAULT_NAMESPACE_UNUSED"
remediation: |
Avoid using the default namespace for user workloads.
- Create separate namespaces for your applications and infrastructure components.
- Move any user-defined resources out of the default namespace.
Example to create a namespace:
kubectl create namespace my-namespace
Example to move resources:
kubectl get deployment my-app -n default -o yaml | sed 's/namespace: default/namespace: my-namespace/' | kubectl apply -f -
kubectl delete deployment my-app -n default
scored: true

2
cfg/cis-1.10/config.yaml Normal file
View File

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml

View File

@@ -0,0 +1,62 @@
---
controls:
version: "cis-1.10"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.1.2
text: "Service account token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.
scored: false
- id: 3.1.3
text: "Bootstrap token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
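# Illustrative sketch (not part of the benchmark file): a minimal audit policy file that logs
# every request at the Metadata level; pass its path to the API server via --audit-policy-file.
# apiVersion: audit.k8s.io/v1
# kind: Policy
# rules:
#   - level: Metadata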
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false
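# Illustrative sketch (not part of the benchmark file): audit policy rules covering the areas
# listed above, keeping Secrets, ConfigMaps and TokenReviews at Metadata so sensitive payloads
# are not logged. Tune the levels to your environment.
# apiVersion: audit.k8s.io/v1
# kind: Policy
# rules:
#   - level: Metadata
#     resources:
#       - group: ""
#         resources: ["secrets", "configmaps"]
#       - group: "authentication.k8s.io"
#         resources: ["tokenreviews"]
#   - level: Metadata
#     resources:
#       - group: ""
#         resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
#   - level: Metadata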

cfg/cis-1.10/etcd.yaml Normal file

@@ -0,0 +1,135 @@
---
controls:
version: "cis-1.10"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
- flag: "--key-file"
env: "ETCD_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/ca-file>
--key-file=</path/to/key-file>
scored: true
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
env: "ETCD_PEER_CERT_FILE"
- flag: "--peer-key-file"
env: "ETCD_PEER_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--peer-client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: false

cfg/cis-1.10/master.yaml Normal file

@@ -0,0 +1,917 @@
---
controls:
version: "cis-1.10"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 600 $apiserverconf
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $apiserverconf
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $controllermanagerconf
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $controllermanagerconf
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $schedulerconf
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $schedulerconf
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c permissions=%a; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $etcdconf
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c %U:%G; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $etcdconf
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root <path/to/cni/files>
scored: false
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c permissions=%a "$DATA_DIR"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "700"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/etcd
scored: true
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c %U:%G "$DATA_DIR"
tests:
test_items:
- flag: "etcd:etcd"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
scored: true
- id: 1.1.13
text: "Ensure that the default administrative credential file permissions are set to 600 (Automated)"
audit: |
for adminconf in /etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf; do if test -e $adminconf; then stat -c "permissions=%a %n" $adminconf; fi; done
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chmod 600 /etc/kubernetes/super-admin.conf
scored: true
- id: 1.1.14
text: "Ensure that the default administrative credential file ownership is set to root:root (Automated)"
audit: |
for adminconf in /etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf; do if test -e $adminconf; then stat -c "ownership=%U:%G %n" $adminconf; fi; done
use_multiple_values: true
tests:
test_items:
- flag: "ownership"
compare:
op: eq
value: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chown root:root /etc/kubernetes/super-admin.conf
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $schedulerkubeconfig
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $controllermanagerkubeconfig
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c %U:%G $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "find /etc/kubernetes/pki/ | xargs stat -c %U:%G"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /etc/kubernetes/pki/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.crt' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.key' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.key
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--anonymous-auth=false
scored: false
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the control plane node and remove the --token-auth-file=<filename> parameter.
scored: true
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "DenyServiceExternalIPs"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and add the `DenyServiceExternalIPs` plugin
to the enabled admission plugins, for example --enable-admission-plugins=DenyServiceExternalIPs.
scored: false
- id: 1.2.4
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the control plane node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the control plane node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
scored: true
- id: 1.2.6
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.
scored: true
- id: 1.2.9
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
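# Illustrative sketch (not part of the benchmark file): the admission control configuration
# referenced by --admission-control-config-file, enabling EventRateLimit. The file name and
# limit values are hypothetical.
# apiVersion: apiserver.config.k8s.io/v1
# kind: AdmissionConfiguration
# plugins:
#   - name: EventRateLimit
#     path: eventconfig.yaml
# ---
# # eventconfig.yaml (referenced above)
# apiVersion: eventratelimit.admission.k8s.io/v1alpha1
# kind: Configuration
# limits:
#   - type: Namespace
#     qps: 50
#     burst: 100
#     cacheSize: 2000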
- id: 1.2.10
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.12
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.13
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.14
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure the NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.15
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.16
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log
scored: true
- id: 1.2.17
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30
scored: true
- id: 1.2.18
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10
scored: true
- id: 1.2.19
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100
scored: true
- id: 1.2.20
text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
type: manual
remediation: |
Edit the API server pod specification file $apiserverconf
and set the below parameter as appropriate, if needed.
For example, --request-timeout=300s
scored: false
- id: 1.2.21
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.22
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=<filename>
scored: true
- id: 1.2.23
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
- flag: "--etcd-keyfile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: true
- id: 1.2.24
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
- flag: "--tls-private-key-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.25
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.26
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: true
- id: 1.2.27
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=</path/to/EncryptionConfig/File>
scored: false
- id: 1.2.28
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: |
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep $apiserverbin | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
tests:
test_items:
- flag: "provider"
compare:
op: valid_elements
value: "aescbc,kms,secretbox"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
scored: false
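# Illustrative sketch (not part of the benchmark file): an EncryptionConfiguration using aescbc
# for Secrets, with identity as a fallback so pre-existing unencrypted data remains readable.
# The key name and the base64 placeholder are hypothetical.
# apiVersion: apiserver.config.k8s.io/v1
# kind: EncryptionConfiguration
# resources:
#   - resources: ["secrets"]
#     providers:
#       - aescbc:
#           keys:
#             - name: key1
#               secret: <base64-encoded 32-byte key>
#       - identity: {}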
- id: 1.2.29
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
scored: false
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value for the --bind-address parameter.
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf file
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value for the --bind-address parameter.
scored: true

cfg/cis-1.10/node.yaml Normal file

@@ -0,0 +1,478 @@
---
controls:
version: "cis-1.10"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: false
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command to modify the file permissions of the
--client-ca-file chmod 600 <filename>
scored: false
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: false
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 600 $kubeletconf
scored: true
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
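# Illustrative sketch (not part of the benchmark file): the kubelet config file fields checked
# by 4.2.1-4.2.3. The clientCAFile path shown is the usual kubeadm location and may differ on
# your nodes.
# apiVersion: kubelet.config.k8s.io/v1beta1
# kind: KubeletConfiguration
# authentication:
#   anonymous:
#     enabled: false
#   x509:
#     clientCAFile: /etc/kubernetes/pki/ca.crt
# authorization:
#   mode: Webhook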
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.7
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead reading the Kubelet Configuration file.
audit: "/bin/ps -fC $kubeletbin"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.8
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.11
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.12
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.13
text: "Ensure that a limit is set on pod PIDs (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --pod-max-pids
path: '{.podPidsLimit}'
remediation: |
Decide on an appropriate level for this parameter and set it,
either via the --pod-max-pids command line parameter or the podPidsLimit configuration file setting.
scored: false
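# Illustrative sketch (not part of the benchmark file): setting a pod PIDs limit through the
# kubelet config file; the value is hypothetical and should match your workload profile.
# apiVersion: kubelet.config.k8s.io/v1beta1
# kind: KubeletConfiguration
# podPidsLimit: 4096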
- id: 4.3
text: "kube-proxy"
checks:
- id: 4.3.1
text: "Ensure that the kube-proxy metrics service is bound to localhost (Automated)"
audit: "/bin/ps -fC $proxybin"
audit_config: "/bin/sh -c 'if test -e $proxykubeconfig; then cat $proxykubeconfig; fi'"
tests:
bin_op: or
test_items:
- flag: "--metrics-bind-address"
path: '{.metricsBindAddress}'
compare:
op: has
value: "127.0.0.1"
- flag: "--metrics-bind-address"
path: '{.metricsBindAddress}'
set: false
remediation: |
Modify or remove any values which bind the metrics service to a non-localhost address.
The default value is 127.0.0.1:10249.
scored: true
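# Illustrative sketch (not part of the benchmark file): binding the kube-proxy metrics endpoint
# to localhost through the kube-proxy configuration file.
# apiVersion: kubeproxy.config.k8s.io/v1alpha1
# kind: KubeProxyConfiguration
# metricsBindAddress: 127.0.0.1:10249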

cfg/cis-1.10/policies.yaml Normal file

@@ -0,0 +1,559 @@
---
controls:
version: "cis-1.10"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
audit: |
kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name --no-headers | while read -r role_name role_binding subject
do
if [[ "${role_name}" != "cluster-admin" && "${role_binding}" == "cluster-admin" ]]; then
is_compliant="false"
else
is_compliant="true"
fi;
echo "**role_name: ${role_name} role_binding: ${role_binding} subject: ${subject} is_compliant: ${is_compliant}"
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role: kubectl delete clusterrolebinding [name]
Condition: is_compliant is false if role_name is not cluster-admin and role_binding is cluster-admin.
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Automated)"
audit: "echo \"canGetListWatchSecretsAsSystemAuthenticated: $(kubectl auth can-i get,list,watch secrets --all-namespaces --as=system:authenticated)\""
tests:
test_items:
- flag: "canGetListWatchSecretsAsSystemAuthenticated"
compare:
op: eq
value: no
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
audit: |
# Check Roles
kubectl get roles --all-namespaces -o custom-columns=ROLE_NAMESPACE:.metadata.namespace,ROLE_NAME:.metadata.name --no-headers | while read -r role_namespace role_name
do
role_rules=$(kubectl get role -n "${role_namespace}" "${role_name}" -o=json | jq -c '.rules')
if echo "${role_rules}" | grep -q "\[\"\*\"\]"; then
role_is_compliant="false"
else
role_is_compliant="true"
fi;
echo "**role_name: ${role_name} role_namespace: ${role_namespace} role_rules: ${role_rules} role_is_compliant: ${role_is_compliant}"
done
# Check ClusterRoles
kubectl get clusterroles -o custom-columns=CLUSTERROLE_NAME:.metadata.name --no-headers | while read -r clusterrole_name
do
clusterrole_rules=$(kubectl get clusterrole "${clusterrole_name}" -o=json | jq -c '.rules')
if echo "${clusterrole_rules}" | grep -q "\[\"\*\"\]"; then
clusterrole_is_compliant="false"
else
clusterrole_is_compliant="true"
fi;
echo "**clusterrole_name: ${clusterrole_name} clusterrole_rules: ${clusterrole_rules} clusterrole_is_compliant: ${clusterrole_is_compliant}"
done
use_multiple_values: true
tests:
bin_op: or
test_items:
- flag: "role_is_compliant"
compare:
op: eq
value: true
set: true
- flag: "clusterrole_is_compliant"
compare:
op: eq
value: true
set: true
remediation: |
Where possible replace any use of wildcards ["*"] in roles and clusterroles with specific
objects or actions.
Condition: role_is_compliant is false if ["*"] is found in rules.
Condition: clusterrole_is_compliant is false if ["*"] is found in rules.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Automated)"
audit: |
echo "canCreatePodsAsSystemAuthenticated: $(kubectl auth can-i create pods --all-namespaces --as=system:authenticated)"
tests:
test_items:
- flag: "canCreatePodsAsSystemAuthenticated"
compare:
op: eq
value: no
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used (Automated)"
audit: |
kubectl get serviceaccount --all-namespaces --field-selector metadata.name=default -o=json | jq -r '.items[] | " namespace: \(.metadata.namespace), kind: \(.kind), name: \(.metadata.name), automountServiceAccountToken: \(.automountServiceAccountToken | if . == null then "notset" else . end )"' | xargs -L 1
use_multiple_values: true
tests:
test_items:
- flag: "automountServiceAccountToken"
compare:
op: eq
value: false
set: true
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
`automountServiceAccountToken: false`.
scored: false
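# Illustrative sketch (not part of the benchmark file): disabling token auto-mount on the
# default ServiceAccount of a namespace ("my-namespace" is hypothetical); the same field can
# also be set on individual Pod specs.
# apiVersion: v1
# kind: ServiceAccount
# metadata:
#   name: default
#   namespace: my-namespace
# automountServiceAccountToken: false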
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAMESPACE:.metadata.namespace,POD_NAME:.metadata.name,POD_SERVICE_ACCOUNT:.spec.serviceAccount,POD_IS_AUTOMOUNTSERVICEACCOUNTTOKEN:.spec.automountServiceAccountToken --no-headers | while read -r pod_namespace pod_name pod_service_account pod_is_automountserviceaccounttoken
do
# Retrieve automountServiceAccountToken's value for ServiceAccount and Pod, set to notset if null or <none>.
svacc_is_automountserviceaccounttoken=$(kubectl get serviceaccount -n "${pod_namespace}" "${pod_service_account}" -o json | jq -r '.automountServiceAccountToken' | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
pod_is_automountserviceaccounttoken=$(echo "${pod_is_automountserviceaccounttoken}" | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
if [ "${svacc_is_automountserviceaccounttoken}" = "false" ] && ( [ "${pod_is_automountserviceaccounttoken}" = "false" ] || [ "${pod_is_automountserviceaccounttoken}" = "notset" ] ); then
is_compliant="true"
elif [ "${svacc_is_automountserviceaccounttoken}" = "true" ] && [ "${pod_is_automountserviceaccounttoken}" = "false" ]; then
is_compliant="true"
else
is_compliant="false"
fi
echo "**namespace: ${pod_namespace} pod_name: ${pod_name} service_account: ${pod_service_account} pod_is_automountserviceaccounttoken: ${pod_is_automountserviceaccounttoken} svacc_is_automountServiceAccountToken: ${svacc_is_automountserviceaccounttoken} is_compliant: ${is_compliant}"
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Modify the definition of ServiceAccounts and Pods which do not need to mount service
account tokens to disable it, with `automountServiceAccountToken: false`.
If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.
Condition: Pod is_compliant to true when
- ServiceAccount is automountServiceAccountToken: false and Pod is automountServiceAccountToken: false or notset
- ServiceAccount is automountServiceAccountToken: true or notset and Pod is automountServiceAccountToken: false
scored: false
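# Illustrative example (not part of the benchmark file): a Pod that explicitly disables token
# mounting; the Pod-level setting takes precedence over the ServiceAccount. Names and image
# are placeholders.
#   apiVersion: v1
#   kind: Pod
#   metadata:
#     name: example-app
#   spec:
#     serviceAccountName: app-sa
#     automountServiceAccountToken: false
#     containers:
#       - name: app
#         image: registry.example.com/app:1.0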
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.1.9
text: "Minimize access to create persistent volumes (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to PersistentVolume objects in the cluster.
scored: false
- id: 5.1.10
text: "Minimize access to the proxy sub-resource of nodes (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the proxy sub-resource of node objects.
scored: false
- id: 5.1.11
text: "Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the approval sub-resource of certificatesigningrequests objects.
scored: false
- id: 5.1.12
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
scored: false
- id: 5.1.13
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the token sub-resource of serviceaccount objects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
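# Illustrative example (not part of the benchmark file): enforcing the "restricted" Pod
# Security Standard on a namespace via Pod Security Admission labels; "my-namespace" is a placeholder.
#   kubectl label --overwrite namespace my-namespace \
#     pod-security.kubernetes.io/enforce=restricted \
#     pod-security.kubernetes.io/enforce-version=latest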
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve container(s) for each Pod.
kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
do
# Retrieve container's name.
container_name=$(echo ${container} | jq -r '.name')
# Retrieve container's .securityContext.privileged value.
container_privileged=$(echo ${container} | jq -r '.securityContext.privileged' | sed -e 's/null/notset/g')
if [ "${container_privileged}" = "false" ] || [ "${container_privileged}" = "notset" ] ; then
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_privileged: ${container_privileged} is_compliant: true"
else
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_privileged: ${container_privileged} is_compliant: false"
fi
done
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
Audit: the audit lists all Pods' containers to retrieve their .securityContext.privileged value.
Condition: is_compliant is false if container's `.securityContext.privileged` is set to `true`.
Default: by default, there are no restrictions on the creation of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve spec.hostPID for each pod.
pod_hostpid=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostPID}' 2>/dev/null)
if [ -z "${pod_hostpid}" ]; then
pod_hostpid="false"
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostpid: ${pod_hostpid} is_compliant: true"
else
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostpid: ${pod_hostpid} is_compliant: false"
fi
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
Audit: the audit retrieves each Pod's spec.hostPID.
Condition: is_compliant is false if Pod's spec.hostPID is set to `true`.
Default: by default, there are no restrictions on the creation of hostPID containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve spec.hostIPC for each pod.
pod_hostipc=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostIPC}' 2>/dev/null)
if [ -z "${pod_hostipc}" ]; then
pod_hostipc="false"
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostipc: ${pod_hostipc} is_compliant: true"
else
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostipc: ${pod_hostipc} is_compliant: false"
fi
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
Audit: the audit retrieves each Pod's spec.hostIPC.
Condition: is_compliant is false if Pod's spec.hostIPC is set to `true`.
Default: by default, there are no restrictions on the creation of hostIPC containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve spec.hostNetwork for each pod.
pod_hostnetwork=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostNetwork}' 2>/dev/null)
if [ -z "${pod_hostnetwork}" ]; then
pod_hostnetwork="false"
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostnetwork: ${pod_hostnetwork} is_compliant: true"
else
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostnetwork: ${pod_hostnetwork} is_compliant: false"
fi
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
Audit: the audit retrieves each Pod's spec.hostNetwork.
Condition: is_compliant is false if Pod's spec.hostNetwork is set to `true`.
Default: by default, there are no restrictions on the creation of hostNetwork containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve container(s) for each Pod.
kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
do
# Retrieve container's name
container_name=$(echo ${container} | jq -r '.name')
# Retrieve container's .securityContext.allowPrivilegeEscalation
container_allowprivesc=$(echo ${container} | jq -r '.securityContext.allowPrivilegeEscalation' | sed -e 's/null/notset/g')
if [ "${container_allowprivesc}" = "false" ] || [ "${container_allowprivesc}" = "notset" ]; then
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_allowprivesc: ${container_allowprivesc} is_compliant: true"
else
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_allowprivesc: ${container_allowprivesc} is_compliant: false"
fi
done
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.securityContext.allowPrivilegeEscalation` set to `true`.
Audit: the audit retrieves each Pod's container(s) `.securityContext.allowPrivilegeEscalation`.
Condition: is_compliant is false if container's `.securityContext.allowPrivilegeEscalation` is set to `true`.
Default: If notset, privilege escalation is allowed (it defaults to true). However, if PSP/PSA is used with a `restricted` profile,
privilege escalation is explicitly disallowed unless configured otherwise.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Manual)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
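# Illustrative example (not part of the benchmark file): a Pod-level securityContext that
# meets the intent of this check; the UID is a placeholder non-zero value.
#   securityContext:
#     runAsNonRoot: true
#     runAsUser: 10001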
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
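# Illustrative example (not part of the benchmark file): a container securityContext that
# drops NET_RAW so the workload is admitted under such a policy.
#   securityContext:
#     capabilities:
#       drop:
#         - NET_RAW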
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve container(s) for each Pod.
kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
do
# Retrieve container's name
container_name=$(echo ${container} | jq -r '.name')
# Retrieve container's added capabilities
container_caps_add=$(echo ${container} | jq -r '.securityContext.capabilities.add' | sed -e 's/null/notset/g')
# Set is_compliant to true by default.
is_compliant=true
caps_list=""
if [ "${container_caps_add}" != "notset" ]; then
# Loop through all caps and append caps_list, then set is_compliant to false.
for cap in $(echo "${container_caps_add}" | jq -r '.[]'); do
caps_list="${caps_list}${cap},"
is_compliant=false
done
# Remove trailing comma for the last list member.
caps_list=${caps_list%,}
fi
if [ "${is_compliant}" = true ]; then
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} container_caps_add: ${container_caps_add} is_compliant: true"
else
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} container_caps_add: ${caps_list} is_compliant: false"
fi
done
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
Audit: the audit retrieves each Pod's container(s) added capabilities.
Condition: is_compliant is false if added capabilities are added for a given container.
Default: Containers run with a default set of capabilities as assigned by the Container Runtime.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
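# Illustrative example (not part of the benchmark file): a default-deny ingress NetworkPolicy,
# typically applied per namespace before adding specific allow rules; "my-namespace" is a placeholder.
#   apiVersion: networking.k8s.io/v1
#   kind: NetworkPolicy
#   metadata:
#     name: default-deny-ingress
#     namespace: my-namespace
#   spec:
#     podSelector: {}
#     policyTypes:
#       - Ingress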
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
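# Illustrative example (not part of the benchmark file): mounting a Secret as files instead of
# exposing it through environment variables; names, image and mount path are placeholders.
#   spec:
#     containers:
#       - name: app
#         image: registry.example.com/app:1.0
#         volumeMounts:
#           - name: creds
#             mountPath: /etc/app/creds
#             readOnly: true
#     volumes:
#       - name: creds
#         secret:
#           secretName: app-secret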
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
scored: false
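# Illustrative sketch (not part of the benchmark file): an AdmissionConfiguration enabling
# ImagePolicyWebhook, referenced from the API server via --admission-control-config-file
# (with ImagePolicyWebhook added to --enable-admission-plugins); paths and TTL values are placeholders.
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: AdmissionConfiguration
#   plugins:
#     - name: ImagePolicyWebhook
#       configuration:
#         imagePolicy:
#           kubeConfigFile: /etc/kubernetes/image-policy/kubeconfig.yaml
#           allowTTL: 50
#           denyTTL: 50
#           retryBackoff: 500
#           defaultAllow: false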
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false
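# Illustrative example (not part of the benchmark file): a quick manual check that only the
# built-in "kubernetes" Service remains in the default namespace.
#   kubectl get all --namespace default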

cfg/cis-1.11/config.yaml (new file, 2 lines)

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml

cfg/cis-1.11/controlplane.yaml (new file, 62 lines)

@@ -0,0 +1,62 @@
---
controls:
version: "cis-1.11"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.1.2
text: "Service account token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.
scored: false
- id: 3.1.3
text: "Bootstrap token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false
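# Illustrative sketch (not part of the benchmark file): a minimal audit policy covering the
# areas listed above; resource groups and levels should be adapted to the environment.
#   apiVersion: audit.k8s.io/v1
#   kind: Policy
#   rules:
#     - level: Metadata
#       resources:
#         - group: ""
#           resources: ["secrets", "configmaps"]
#         - group: "authentication.k8s.io"
#           resources: ["tokenreviews"]
#     - level: Metadata
#       resources:
#         - group: ""
#           resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
#     - level: RequestResponse
#       resources:
#         - group: ""
#           resources: ["pods"]
#         - group: "apps"
#           resources: ["deployments"]
#     - level: Metadata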

cfg/cis-1.11/etcd.yaml (new file, 135 lines)

@@ -0,0 +1,135 @@
---
controls:
version: "cis-1.11"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
- flag: "--key-file"
env: "ETCD_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/ca-file>
--key-file=</path/to/key-file>
scored: true
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
env: "ETCD_PEER_CERT_FILE"
- flag: "--peer-key-file"
env: "ETCD_PEER_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--peer-client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: false

cfg/cis-1.11/master.yaml (new file, 932 lines)

@@ -0,0 +1,932 @@
---
controls:
version: "cis-1.11"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 600 $apiserverconf
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $apiserverconf
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $controllermanagerconf
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $controllermanagerconf
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $schedulerconf
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $schedulerconf
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c permissions=%a; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $etcdconf
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c %U:%G; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $etcdconf
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root <path/to/cni/files>
scored: false
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c permissions=%a "$DATA_DIR"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "700"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/etcd
scored: true
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c %U:%G "$DATA_DIR"
tests:
test_items:
- flag: "etcd:etcd"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
scored: true
- id: 1.1.13
text: "Ensure that the default administrative credential file permissions are set to 600 (Automated)"
audit: |
for adminconf in /etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf; do if test -e $adminconf; then stat -c "permissions=%a %n" $adminconf; fi; done
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chmod 600 /etc/kubernetes/super-admin.conf
scored: true
- id: 1.1.14
text: "Ensure that the default administrative credential file ownership is set to root:root (Automated)"
audit: |
for adminconf in /etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf; do if test -e $adminconf; then stat -c "ownership=%U:%G %n" $adminconf; fi; done
use_multiple_values: true
tests:
test_items:
- flag: "ownership"
compare:
op: eq
value: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chown root:root /etc/kubernetes/super-admin.conf
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $schedulerkubeconfig
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $controllermanagerkubeconfig
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c %U:%G $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "find /etc/kubernetes/pki/ | xargs stat -c %U:%G"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /etc/kubernetes/pki/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.crt' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 644 /etc/kubernetes/pki/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.key' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.key
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--anonymous-auth=false
scored: false
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the control plane node and remove the --token-auth-file=<filename> parameter.
scored: true
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "DenyServiceExternalIPs"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and add the `DenyServiceExternalIPs` plugin
to the enabled admission plugins, for example --enable-admission-plugins=DenyServiceExternalIPs.
scored: false
- id: 1.2.4
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the control plane node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
Follow the Kubernetes documentation and setup the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the control plane node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
scored: true
- id: 1.2.6
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.
scored: true
- id: 1.2.9
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
- id: 1.2.10
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.12
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.13
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.14
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.15
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.16
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log
scored: true
- id: 1.2.17
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30
scored: true
- id: 1.2.18
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10
scored: true
- id: 1.2.19
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100
scored: true
- id: 1.2.20
text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
type: manual
remediation: |
Edit the API server pod specification file $apiserverconf
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s
scored: false
- id: 1.2.21
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.22
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=<filename>
scored: true
- id: 1.2.23
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
- flag: "--etcd-keyfile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: true
- id: 1.2.24
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
- flag: "--tls-private-key-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.25
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.26
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: true
- id: 1.2.27
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=</path/to/EncryptionConfig/File>
scored: false
- id: 1.2.28
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: |
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep $apiserverbin | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
tests:
test_items:
- flag: "provider"
compare:
op: valid_elements
value: "aescbc,kms,secretbox"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
scored: false
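# Illustrative sketch (not part of the benchmark file): an EncryptionConfiguration using the
# aescbc provider; the key name and base64 value are placeholders and must be generated locally.
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: EncryptionConfiguration
#   resources:
#     - resources: ["secrets"]
#       providers:
#         - aescbc:
#             keys:
#               - name: key1
#                 secret: <BASE64_ENCODED_32_BYTE_KEY>
#         - identity: {}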
- id: 1.2.29
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
scored: false
- id: 1.2.30
text: "Ensure that the --service-account-extend-token-expiration parameter is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-extend-token-expiration"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the Control Plane node and set the --service-account-extend-token-expiration parameter to false.
`--service-account-extend-token-expiration=false`
By default, this parameter is set to true.
scored: true
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf file
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true

cfg/cis-1.11/node.yaml (new file, 509 lines)

@@ -0,0 +1,509 @@
---
controls:
version: "cis-1.11"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: false
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command to modify the file permissions of the
--client-ca-file.
chmod 644 <filename>
scored: false
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: false
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 600 $kubeletconf
scored: true
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Verify that if defined, the --read-only-port argument is set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.7
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
audit: "/bin/ps -fC $kubeletbin"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.8
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --event-qps parameter to an appropriate level in the KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.11
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.12
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.13
text: "Ensure that a limit is set on pod PIDs (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --pod-max-pids
path: '{.podPidsLimit}'
remediation: |
Decide on an appropriate level for this parameter and set it,
either via the --pod-max-pids command line parameter or the podPidsLimit configuration file setting.
scored: false
- id: 4.2.14
text: "Ensure that the --seccomp-default parameter is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --seccomp-default
path: '{.seccompDefault}'
remediation: |
Set the parameter, either via the --seccomp-default command line parameter or the
seccompDefault configuration file setting.
By default the seccomp profile is not enabled.
scored: false
- id: 4.2.15
text: "Ensure that the --IPAddressDeny is set to any (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --IPAddressDeny
path: '{.IPAddressDeny}'
remediation: |
Configuring IPAddressDeny=any will deny service to any IP address not specified in the complementary IPAddressAllow configuration parameter. For example,
IPAddressDeny=any
IPAddressAllow={{ kubelet_secure_addresses }}
Note:
kubelet_secure_addresses: "localhost link-local {{ kube_pods_subnets | regex_replace(',', ' ') }} {{ kube_node_addresses }} {{ loadbalancer_apiserver.address | default('') }}"
By default IPAddressDeny is not enabled.
scored: false
- id: 4.3
text: "kube-proxy"
checks:
- id: 4.3.1
text: "Ensure that the kube-proxy metrics service is bound to localhost (Automated)"
audit: "/bin/ps -fC $proxybin"
audit_config: "/bin/sh -c 'if test -e $proxykubeconfig; then cat $proxykubeconfig; fi'"
tests:
bin_op: or
test_items:
- flag: "--metrics-bind-address"
path: '{.metricsBindAddress}'
compare:
op: has
value: "127.0.0.1"
- flag: "--metrics-bind-address"
path: '{.metricsBindAddress}'
set: false
remediation: |
Modify or remove any values which bind the metrics service to a non-localhost address.
The default value is 127.0.0.1:10249.
scored: true
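Most of the 4.2.x remediations above refer either to kubelet command-line flags or to the kubelet config file. As a minimal sketch, assuming the kubeadm default path /var/lib/kubelet/config.yaml and an illustrative CA path, a KubeletConfiguration that lines up with those checks could look like this:

# /var/lib/kubelet/config.yaml (assumed path)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false                              # 4.2.1
  webhook:
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt    # 4.2.3, illustrative path
authorization:
  mode: Webhook                                 # 4.2.2
readOnlyPort: 0                                 # 4.2.4
streamingConnectionIdleTimeout: 5m              # 4.2.5
makeIPTablesUtilChains: true                    # 4.2.6
eventRecordQPS: 5                               # 4.2.8, example level
rotateCertificates: true                        # 4.2.10
featureGates:
  RotateKubeletServerCertificate: true          # 4.2.11
tlsCipherSuites:                                # 4.2.12, a subset of the allowed list
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
podPidsLimit: 4096                              # 4.2.13, example limit
seccompDefault: true                            # 4.2.14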

cfg/cis-1.11/policies.yaml Normal file

@@ -0,0 +1,560 @@
---
controls:
version: "cis-1.11"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
audit: |
kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name --no-headers | while read -r role_name role_binding subject
do
if [[ "${role_name}" != "cluster-admin" && "${role_binding}" == "cluster-admin" ]]; then
is_compliant="false"
else
is_compliant="true"
fi;
echo "**role_name: ${role_name} role_binding: ${role_binding} subject: ${subject} is_compliant: ${is_compliant}"
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role: kubectl delete clusterrolebinding [name]
Condition: is_compliant is false if rolename is not cluster-admin and rolebinding is cluster-admin.
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
audit: "echo \"canGetListWatchSecretsAsSystemAuthenticated: $(kubectl auth can-i get,list,watch secrets --all-namespaces --as=system:authenticated)\""
tests:
test_items:
- flag: "canGetListWatchSecretsAsSystemAuthenticated"
compare:
op: eq
value: no
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
audit: |
# Check Roles
kubectl get roles --all-namespaces -o custom-columns=ROLE_NAMESPACE:.metadata.namespace,ROLE_NAME:.metadata.name --no-headers | while read -r role_namespace role_name
do
role_rules=$(kubectl get role -n "${role_namespace}" "${role_name}" -o=json | jq -c '.rules')
if echo "${role_rules}" | grep -q "\[\"\*\"\]"; then
role_is_compliant="false"
else
role_is_compliant="true"
fi;
echo "**role_name: ${role_name} role_namespace: ${role_namespace} role_rules: ${role_rules} role_is_compliant: ${role_is_compliant}"
done
# Check ClusterRoles
kubectl get clusterroles -o custom-columns=CLUSTERROLE_NAME:.metadata.name --no-headers | while read -r clusterrole_name
do
clusterrole_rules=$(kubectl get clusterrole "${clusterrole_name}" -o=json | jq -c '.rules')
if echo "${clusterrole_rules}" | grep -q "\[\"\*\"\]"; then
clusterrole_is_compliant="false"
else
clusterrole_is_compliant="true"
fi;
echo "**clusterrole_name: ${clusterrole_name} clusterrole_rules: ${clusterrole_rules} clusterrole_is_compliant: ${clusterrole_is_compliant}"
done
use_multiple_values: true
tests:
bin_op: or
test_items:
- flag: "role_is_compliant"
compare:
op: eq
value: true
set: true
- flag: "clusterrole_is_compliant"
compare:
op: eq
value: true
set: true
remediation: |
Where possible replace any use of wildcards ["*"] in roles and clusterroles with specific
objects or actions.
Condition: role_is_compliant is false if ["*"] is found in rules.
Condition: clusterrole_is_compliant is false if ["*"] is found in rules.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
audit: |
echo "canCreatePodsAsSystemAuthenticated: $(kubectl auth can-i create pods --all-namespaces --as=system:authenticated)"
tests:
test_items:
- flag: "canCreatePodsAsSystemAuthenticated"
compare:
op: eq
value: no
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used (Manual)"
audit: |
kubectl get serviceaccount --all-namespaces --field-selector metadata.name=default -o=json | jq -r '.items[] | " namespace: \(.metadata.namespace), kind: \(.kind), name: \(.metadata.name), automountServiceAccountToken: \(.automountServiceAccountToken | if . == null then "notset" else . end )"' | xargs -L 1
use_multiple_values: true
tests:
test_items:
- flag: "automountServiceAccountToken"
compare:
op: eq
value: false
set: true
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
`automountServiceAccountToken: false`.
scored: false
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAMESPACE:.metadata.namespace,POD_NAME:.metadata.name,POD_SERVICE_ACCOUNT:.spec.serviceAccount,POD_IS_AUTOMOUNTSERVICEACCOUNTTOKEN:.spec.automountServiceAccountToken --no-headers | while read -r pod_namespace pod_name pod_service_account pod_is_automountserviceaccounttoken
do
# Retrieve automountServiceAccountToken's value for ServiceAccount and Pod, set to notset if null or <none>.
svacc_is_automountserviceaccounttoken=$(kubectl get serviceaccount -n "${pod_namespace}" "${pod_service_account}" -o json | jq -r '.automountServiceAccountToken' | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
pod_is_automountserviceaccounttoken=$(echo "${pod_is_automountserviceaccounttoken}" | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
if [ "${svacc_is_automountserviceaccounttoken}" = "false" ] && ( [ "${pod_is_automountserviceaccounttoken}" = "false" ] || [ "${pod_is_automountserviceaccounttoken}" = "notset" ] ); then
is_compliant="true"
elif [ "${svacc_is_automountserviceaccounttoken}" = "true" ] && [ "${pod_is_automountserviceaccounttoken}" = "false" ]; then
is_compliant="true"
else
is_compliant="false"
fi
echo "**namespace: ${pod_namespace} pod_name: ${pod_name} service_account: ${pod_service_account} pod_is_automountserviceaccounttoken: ${pod_is_automountserviceaccounttoken} svacc_is_automountServiceAccountToken: ${svacc_is_automountserviceaccounttoken} is_compliant: ${is_compliant}"
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Modify the definition of ServiceAccounts and Pods which do not need to mount service
account tokens to disable it, with `automountServiceAccountToken: false`.
If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.
Condition: Pod is_compliant to true when
- ServiceAccount is automountServiceAccountToken: false and Pod is automountServiceAccountToken: false or notset
- ServiceAccount is automountServiceAccountToken: true and Pod is automountServiceAccountToken: false
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.1.9
text: "Minimize access to create persistent volumes (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to PersistentVolume objects in the cluster.
scored: false
- id: 5.1.10
text: "Minimize access to the proxy sub-resource of nodes (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the proxy sub-resource of node objects.
scored: false
- id: 5.1.11
text: "Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the approval sub-resource of certificatesigningrequests objects.
scored: false
- id: 5.1.12
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects
scored: false
- id: 5.1.13
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the token sub-resource of serviceaccount objects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve container(s) for each Pod.
kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
do
# Retrieve container's name.
container_name=$(echo ${container} | jq -r '.name')
# Retrieve container's .securityContext.privileged value.
container_privileged=$(echo ${container} | jq -r '.securityContext.privileged' | sed -e 's/null/notset/g')
if [ "${container_privileged}" = "false" ] || [ "${container_privileged}" = "notset" ] ; then
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_privileged: ${container_privileged} is_compliant: true"
else
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_privileged: ${container_privileged} is_compliant: false"
fi
done
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
Audit: the audit lists all Pods' containers to retrieve their .securityContext.privileged value.
Condition: is_compliant is false if container's `.securityContext.privileged` is set to `true`.
Default: by default, there are no restrictions on the creation of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve spec.hostPID for each pod.
pod_hostpid=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostPID}' 2>/dev/null)
if [ -z "${pod_hostpid}" ]; then
pod_hostpid="false"
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostpid: ${pod_hostpid} is_compliant: true"
else
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostpid: ${pod_hostpid} is_compliant: false"
fi
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
Audit: the audit retrieves each Pod's spec.hostPID.
Condition: is_compliant is false if Pod's spec.hostPID is set to `true`.
Default: by default, there are no restrictions on the creation of hostPID containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve spec.hostIPC for each pod.
pod_hostipc=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostIPC}' 2>/dev/null)
if [ -z "${pod_hostipc}" ]; then
pod_hostipc="false"
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostipc: ${pod_hostipc} is_compliant: true"
else
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostipc: ${pod_hostipc} is_compliant: false"
fi
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
Audit: the audit retrieves each Pod's spec.hostIPC.
Condition: is_compliant is false if Pod's spec.hostIPC is set to `true`.
Default: by default, there are no restrictions on the creation of hostIPC containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve spec.hostNetwork for each pod.
pod_hostnetwork=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostNetwork}' 2>/dev/null)
if [ -z "${pod_hostnetwork}" ]; then
pod_hostnetwork="false"
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostnetwork: ${pod_hostnetwork} is_compliant: true"
else
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostnetwork: ${pod_hostnetwork} is_compliant: false"
fi
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
Audit: the audit retrieves each Pod's spec.hostNetwork.
Condition: is_compliant is false if Pod's spec.hostNetwork is set to `true`.
Default: by default, there are no restrictions on the creation of hostNetwork containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve container(s) for each Pod.
kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
do
# Retrieve container's name
container_name=$(echo ${container} | jq -r '.name')
# Retrieve container's .securityContext.allowPrivilegeEscalation
container_allowprivesc=$(echo ${container} | jq -r '.securityContext.allowPrivilegeEscalation' | sed -e 's/null/notset/g')
if [ "${container_allowprivesc}" = "false" ] || [ "${container_allowprivesc}" = "notset" ]; then
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_allowprivesc: ${container_allowprivesc} is_compliant: true"
else
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_allowprivesc: ${container_allowprivesc} is_compliant: false"
fi
done
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.securityContext.allowPrivilegeEscalation` set to `true`.
Audit: the audit retrieves each Pod's container(s) `.securityContext.allowPrivilegeEscalation`.
Condition: is_compliant is false if container's `.securityContext.allowPrivilegeEscalation` is set to `true`.
Default: if notset, privilege escalation is allowed (it defaults to true). However, if PSP/PSA is used with a `restricted` profile,
privilege escalation is explicitly disallowed unless configured otherwise.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Manual)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve container(s) for each Pod.
kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
do
# Retrieve container's name
container_name=$(echo ${container} | jq -r '.name')
# Retrieve container's added capabilities
container_caps_add=$(echo ${container} | jq -r '.securityContext.capabilities.add' | sed -e 's/null/notset/g')
# Set is_compliant to true by default.
is_compliant=true
caps_list=""
if [ "${container_caps_add}" != "notset" ]; then
# Loop through all caps and append caps_list, then set is_compliant to false.
for cap in $(echo "${container_caps_add}" | jq -r '.[]'); do
caps_list+="${cap},"
is_compliant=false
done
# Remove trailing comma for the last list member.
caps_list=${caps_list%,}
fi
if [ "${is_compliant}" = true ]; then
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} container_caps_add: ${container_caps_add} is_compliant: true"
else
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} container_caps_add: ${caps_list} is_compliant: false"
fi
done
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
Audit: the audit retrieves each Pod's container(s) added capabilities.
Condition: is_compliant is false if any capabilities are added for a given container.
Default: Containers run with a default set of capabilities as assigned by the Container Runtime.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
scored: false
- id: 5.6
text: "General Policies"
checks:
- id: 5.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.6.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.6.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.6.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false
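Several of the 5.2.x checks above presuppose an active policy control mechanism (5.2.1), and 5.3.2 asks for NetworkPolicies in every namespace. One possible sketch of both, using Pod Security Admission labels plus a default-deny NetworkPolicy, is shown below; the namespace name is an assumption for illustration only.

apiVersion: v1
kind: Namespace
metadata:
  name: example-app                                   # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted    # covers checks such as 5.2.2-5.2.9
    pod-security.kubernetes.io/warn: restricted
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: example-app
spec:
  podSelector: {}            # selects every Pod in the namespace
  policyTypes:
    - Ingress
    - Egress                 # no rules listed, so all traffic is denied (5.3.2)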

cfg/cis-1.12/config.yaml Normal file

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml

cfg/cis-1.12/controlplane.yaml Normal file

@@ -0,0 +1,62 @@
---
controls:
version: "cis-1.12"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.1.2
text: "Service account token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.
scored: false
- id: 3.1.3
text: "Bootstrap token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false
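Check 3.2.1 only verifies that --audit-policy-file is set, while 3.2.2 describes the areas the policy should cover. A minimal sketch of such a policy, with resource and verb selections assumed from the wording of 3.2.2 rather than prescribed by the benchmark, might look like this; the file would then be passed to the API server via --audit-policy-file.

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Only Metadata for Secrets, ConfigMaps and TokenReviews, to avoid logging sensitive data.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  # Modifications of Pod and Deployment objects.
  - level: Metadata
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["pods"]
      - group: "apps"
        resources: ["deployments"]
  # Use of exec, port-forward and proxy sub-resources.
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
  # Everything else at the most basic level.
  - level: Metadata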

cfg/cis-1.12/etcd.yaml Normal file

@@ -0,0 +1,135 @@
---
controls:
version: "cis-1.12"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
- flag: "--key-file"
env: "ETCD_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/cert-file>
--key-file=</path/to/key-file>
scored: true
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
env: "ETCD_PEER_CERT_FILE"
- flag: "--peer-key-file"
env: "ETCD_PEER_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--peer-client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: false
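Checks 2.1 through 2.7 all map onto flags of the etcd process itself. Purely as an illustration, the command section of an etcd static pod that would pass them could look like the following; certificate paths and the data directory are assumptions, not values taken from the benchmark.

command:
  - etcd
  - --data-dir=/var/lib/etcd                              # see 1.1.11 / 1.1.12 for permissions and ownership
  - --cert-file=/etc/kubernetes/pki/etcd/server.crt       # 2.1
  - --key-file=/etc/kubernetes/pki/etcd/server.key        # 2.1
  - --client-cert-auth=true                               # 2.2
  - --auto-tls=false                                      # 2.3
  - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt    # 2.4
  - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key     # 2.4
  - --peer-client-cert-auth=true                          # 2.5
  - --peer-auto-tls=false                                 # 2.6
  - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt     # 2.7, CA dedicated to etcd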

cfg/cis-1.12/master.yaml Normal file

@@ -0,0 +1,930 @@
---
controls:
version: "cis-1.12"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 600 $apiserverconf
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $apiserverconf
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $controllermanagerconf
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $controllermanagerconf
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $schedulerconf
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $schedulerconf
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c permissions=%a; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $etcdconf
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c %U:%G; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $etcdconf
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root <path/to/cni/files>
scored: false
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c permissions=%a "$DATA_DIR"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "700"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/etcd
scored: true
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c %U:%G "$DATA_DIR"
tests:
test_items:
- flag: "etcd:etcd"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
scored: true
- id: 1.1.13
text: "Ensure that the default administrative credential file permissions are set to 600 (Automated)"
audit: |
for adminconf in /etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf; do if test -e $adminconf; then stat -c "permissions=%a %n" $adminconf; fi; done
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chmod 600 /etc/kubernetes/super-admin.conf
scored: true
- id: 1.1.14
text: "Ensure that the default administrative credential file ownership is set to root:root (Automated)"
audit: |
for adminconf in /etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf; do if test -e $adminconf; then stat -c "ownership=%U:%G %n" $adminconf; fi; done
use_multiple_values: true
tests:
test_items:
- flag: "ownership"
compare:
op: eq
value: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chown root:root /etc/kubernetes/super-admin.conf
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $schedulerkubeconfig
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $controllermanagerkubeconfig
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c %U:%G $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "find /etc/kubernetes/pki/ | xargs stat -c %U:%G"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /etc/kubernetes/pki/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.crt' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 644 /etc/kubernetes/pki/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.key' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.key
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--anonymous-auth=false
scored: false
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the control plane node and remove the --token-auth-file=<filename> parameter.
scored: true
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "DenyServiceExternalIPs"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and add the `DenyServiceExternalIPs` plugin
to the enabled admission plugins, for example --enable-admission-plugins=DenyServiceExternalIPs.
scored: false
- id: 1.2.4
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the control plane node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
Follow the Kubernetes documentation and setup the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the control plane node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
scored: true
- id: 1.2.6
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.
scored: true
- id: 1.2.9
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
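# Illustrative only, not part of the benchmark: a minimal sketch of the two files
# referenced by check 1.2.9. File names and limit values are assumptions; tune them
# for your cluster.
#
# admission-control-config.yaml (passed via --admission-control-config-file):
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: AdmissionConfiguration
#   plugins:
#     - name: EventRateLimit
#       path: eventconfig.yaml
#
# eventconfig.yaml:
#   apiVersion: eventratelimit.admission.k8s.io/v1alpha1
#   kind: Configuration
#   limits:
#     - type: Server
#       qps: 50
#       burst: 100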
- id: 1.2.10
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.12
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.13
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.14
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.15
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.16
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log
scored: true
- id: 1.2.17
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30
scored: true
- id: 1.2.18
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10
scored: true
- id: 1.2.19
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100
scored: true
- id: 1.2.20
text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
type: manual
remediation: |
Edit the API server pod specification file $apiserverconf
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s
scored: false
- id: 1.2.21
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.22
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=<filename>
scored: true
- id: 1.2.23
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
- flag: "--etcd-keyfile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: true
- id: 1.2.24
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
- flag: "--tls-private-key-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.25
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.26
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: true
- id: 1.2.27
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=</path/to/EncryptionConfig/File>
scored: false
- id: 1.2.28
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: |
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep $apiserverbin | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
tests:
test_items:
- flag: "provider"
compare:
op: valid_elements
value: "aescbc,kms,secretbox"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
scored: false
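# Illustrative only, not part of the benchmark: a minimal EncryptionConfiguration
# sketch for checks 1.2.27/1.2.28, assuming aescbc as the chosen provider. The key
# name and the base64 placeholder are assumptions.
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: EncryptionConfiguration
#   resources:
#     - resources:
#         - secrets
#       providers:
#         - aescbc:
#             keys:
#               - name: key1
#                 secret: <base64-encoded 32-byte key>
#         - identity: {}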
- id: 1.2.29
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,
TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
scored: false
- id: 1.2.30
text: "Ensure that the --service-account-extend-token-expiration parameter is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-extend-token-expiration"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the --service-account-extend-token-expiration parameter to false.
`--service-account-extend-token-expiration=false`
By default, this parameter is set to true.
scored: true
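# Illustrative only, not part of the benchmark: how the flag from check 1.2.30
# typically appears in the kube-apiserver static Pod manifest (excerpt; all other
# flags omitted). The manifest path assumes a kubeadm-style layout.
#   # /etc/kubernetes/manifests/kube-apiserver.yaml
#   spec:
#     containers:
#       - command:
#           - kube-apiserver
#           - --service-account-extend-token-expiration=false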
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf file
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true

cfg/cis-1.12/node.yaml Normal file

@@ -0,0 +1,492 @@
---
controls:
version: "cis-1.12"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: false
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command to modify the file permissions of the
--client-ca-file chmod 644 <filename>
scored: false
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: false
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 600 $kubeletconf
scored: true
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
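# Illustrative only, not part of the benchmark: a KubeletConfiguration excerpt
# covering checks 4.2.1-4.2.3 when a config file rather than command line flags is
# used. The clientCAFile path is an assumption; use the CA file for your cluster.
#   apiVersion: kubelet.config.k8s.io/v1beta1
#   kind: KubeletConfiguration
#   authentication:
#     anonymous:
#       enabled: false
#     x509:
#       clientCAFile: /etc/kubernetes/pki/ca.crt
#   authorization:
#     mode: Webhook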
- id: 4.2.4
text: "Verify that if defined, the --read-only-port argument is set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.7
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
audit: "/bin/ps -fC $kubeletbin"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.8
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --event-qps parameter in the KUBELET_SYSTEM_PODS_ARGS variable to an appropriate level.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.11
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
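# Illustrative only, not part of the benchmark: a KubeletConfiguration excerpt for
# checks 4.2.10 and 4.2.11 (client certificate rotation plus the server
# certificate rotation feature gate).
#   rotateCertificates: true
#   featureGates:
#     RotateKubeletServerCertificate: true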
- id: 4.2.12
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.13
text: "Ensure that a limit is set on pod PIDs (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --pod-max-pids
path: '{.podPidsLimit}'
remediation: |
Decide on an appropriate level for this parameter and set it,
either via the --pod-max-pids command line parameter or the podPidsLimit configuration file setting.
scored: false
- id: 4.2.14
text: "Ensure that the --seccomp-default parameter is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --seccomp-default
path: '{.seccompDefault}'
remediation: |
Set the parameter, either via the --seccomp-default command line parameter or the
seccompDefault configuration file setting.
By default the seccomp profile is not enabled.
scored: false
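# Illustrative only, not part of the benchmark: a KubeletConfiguration excerpt for
# checks 4.2.13 and 4.2.14. The PID limit value is an assumption; size it to your
# workloads.
#   podPidsLimit: 4096
#   seccompDefault: true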
- id: 4.3
text: "kube-proxy"
checks:
- id: 4.3.1
text: "Ensure that the kube-proxy metrics service is bound to localhost (Automated)"
audit: "/bin/ps -fC $proxybin"
audit_config: "/bin/sh -c 'if test -e $proxykubeconfig; then cat $proxykubeconfig; fi'"
tests:
bin_op: or
test_items:
- flag: "--metrics-bind-address"
path: '{.metricsBindAddress}'
compare:
op: has
value: "127.0.0.1"
- flag: "--metrics-bind-address"
path: '{.metricsBindAddress}'
set: false
remediation: |
Modify or remove any values which bind the metrics service to a non-localhost address.
The default value is 127.0.0.1:10249.
scored: true
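# Illustrative only, not part of the benchmark: a KubeProxyConfiguration excerpt for
# check 4.3.1, keeping the metrics endpoint bound to localhost (the default).
#   apiVersion: kubeproxy.config.k8s.io/v1alpha1
#   kind: KubeProxyConfiguration
#   metricsBindAddress: 127.0.0.1:10249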

cfg/cis-1.12/policies.yaml Normal file

@@ -0,0 +1,515 @@
---
controls:
version: "cis-1.12"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
audit: |
kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name --no-headers | while read -r role_name role_binding subject
do
if [[ "${role_name}" != "cluster-admin" && "${role_binding}" == "cluster-admin" ]]; then
is_compliant="false"
else
is_compliant="true"
fi;
echo "**role_name: ${role_name} role_binding: ${role_binding} subject: ${subject} is_compliant: ${is_compliant}"
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role : kubectl delete clusterrolebinding [name]
Condition: is_compliant is false if rolename is not cluster-admin and rolebinding is cluster-admin.
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
audit: "echo \"canGetListWatchSecretsAsSystemAuthenticated: $(kubectl auth can-i get,list,watch secrets --all-namespaces --as=system:authenticated)\""
tests:
test_items:
- flag: "canGetListWatchSecretsAsSystemAuthenticated"
compare:
op: eq
value: no
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
audit: |
# Check Roles
kubectl get roles --all-namespaces -o custom-columns=ROLE_NAMESPACE:.metadata.namespace,ROLE_NAME:.metadata.name --no-headers | while read -r role_namespace role_name
do
role_rules=$(kubectl get role -n "${role_namespace}" "${role_name}" -o=json | jq -c '.rules')
if echo "${role_rules}" | grep -q "\[\"\*\"\]"; then
role_is_compliant="false"
else
role_is_compliant="true"
fi;
echo "**role_name: ${role_name} role_namespace: ${role_namespace} role_rules: ${role_rules} role_is_compliant: ${role_is_compliant}"
done
# Check ClusterRoles
kubectl get clusterroles -o custom-columns=CLUSTERROLE_NAME:.metadata.name --no-headers | while read -r clusterrole_name
do
clusterrole_rules=$(kubectl get clusterrole "${clusterrole_name}" -o=json | jq -c '.rules')
if echo "${clusterrole_rules}" | grep -q "\[\"\*\"\]"; then
clusterrole_is_compliant="false"
else
clusterrole_is_compliant="true"
fi;
echo "**clusterrole_name: ${clusterrole_name} clusterrole_rules: ${clusterrole_rules} clusterrole_is_compliant: ${clusterrole_is_compliant}"
done
use_multiple_values: true
tests:
bin_op: or
test_items:
- flag: "role_is_compliant"
compare:
op: eq
value: true
set: true
- flag: "clusterrole_is_compliant"
compare:
op: eq
value: true
set: true
remediation: |
Where possible replace any use of wildcards ["*"] in roles and clusterroles with specific
objects or actions.
Condition: role_is_compliant is false if ["*"] is found in rules.
Condition: clusterrole_is_compliant is false if ["*"] is found in rules.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
audit: |
echo "canCreatePodsAsSystemAuthenticated: $(kubectl auth can-i create pods --all-namespaces --as=system:authenticated)"
tests:
test_items:
- flag: "canCreatePodsAsSystemAuthenticated"
compare:
op: eq
value: no
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used (Manual)"
audit: |
kubectl get serviceaccount --all-namespaces --field-selector metadata.name=default -o=json | jq -r '.items[] | " namespace: \(.metadata.namespace), kind: \(.kind), name: \(.metadata.name), automountServiceAccountToken: \(.automountServiceAccountToken | if . == null then "notset" else . end )"' | xargs -L 1
use_multiple_values: true
tests:
test_items:
- flag: "automountServiceAccountToken"
compare:
op: eq
value: false
set: true
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
`automountServiceAccountToken: false`.
scored: false
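# Illustrative only, not part of the benchmark: one way to apply the remediation
# for check 5.1.5 to a single namespace; repeat per namespace as required.
#   kubectl patch serviceaccount default -n <namespace> \
#     -p '{"automountServiceAccountToken": false}'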
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAMESPACE:.metadata.namespace,POD_NAME:.metadata.name,POD_SERVICE_ACCOUNT:.spec.serviceAccount,POD_IS_AUTOMOUNTSERVICEACCOUNTTOKEN:.spec.automountServiceAccountToken --no-headers | while read -r pod_namespace pod_name pod_service_account pod_is_automountserviceaccounttoken
do
# Retrieve automountServiceAccountToken's value for ServiceAccount and Pod, set to notset if null or <none>.
svacc_is_automountserviceaccounttoken=$(kubectl get serviceaccount -n "${pod_namespace}" "${pod_service_account}" -o json | jq -r '.automountServiceAccountToken' | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
pod_is_automountserviceaccounttoken=$(echo "${pod_is_automountserviceaccounttoken}" | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
if [ "${svacc_is_automountserviceaccounttoken}" = "false" ] && ( [ "${pod_is_automountserviceaccounttoken}" = "false" ] || [ "${pod_is_automountserviceaccounttoken}" = "notset" ] ); then
is_compliant="true"
elif [ "${svacc_is_automountserviceaccounttoken}" = "true" ] && [ "${pod_is_automountserviceaccounttoken}" = "false" ]; then
is_compliant="true"
else
is_compliant="false"
fi
echo "**namespace: ${pod_namespace} pod_name: ${pod_name} service_account: ${pod_service_account} pod_is_automountserviceaccounttoken: ${pod_is_automountserviceaccounttoken} svacc_is_automountServiceAccountToken: ${svacc_is_automountserviceaccounttoken} is_compliant: ${is_compliant}"
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Modify the definition of ServiceAccounts and Pods which do not need to mount service
account tokens to disable it, with `automountServiceAccountToken: false`.
If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.
Condition: Pod is_compliant to true when
- ServiceAccount is automountServiceAccountToken: false and Pod is automountServiceAccountToken: false or notset
- ServiceAccount is automountServiceAccountToken: true or notset and Pod is automountServiceAccountToken: false
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.1.9
text: "Minimize access to create persistent volumes (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to PersistentVolume objects in the cluster.
scored: false
- id: 5.1.10
text: "Minimize access to the proxy sub-resource of nodes (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the proxy sub-resource of node objects.
scored: false
- id: 5.1.11
text: "Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the approval sub-resource of certificatesigningrequests objects.
scored: false
- id: 5.1.12
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects
scored: false
- id: 5.1.13
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the token sub-resource of serviceaccount objects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
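# Illustrative only, not part of the benchmark: enabling Pod Security Admission on
# a namespace as one possible policy control mechanism for check 5.2.1. The
# baseline level is an assumption; pick the level appropriate for the workloads.
#   kubectl label --overwrite namespace <namespace> \
#     pod-security.kubernetes.io/enforce=baseline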
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve container(s) for each Pod.
kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
do
# Retrieve container's name.
container_name=$(echo ${container} | jq -r '.name')
# Retrieve container's .securityContext.privileged value.
container_privileged=$(echo ${container} | jq -r '.securityContext.privileged' | sed -e 's/null/notset/g')
if [ "${container_privileged}" = "false" ] || [ "${container_privileged}" = "notset" ] ; then
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_privileged: ${container_privileged} is_compliant: true"
else
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_privileged: ${container_privileged} is_compliant: false"
fi
done
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
Audit: the audit lists all Pods' containers to retrieve their .securityContext.privileged value.
Condition: is_compliant is false if container's `.securityContext.privileged` is set to `true`.
Default: by default, there are no restrictions on the creation of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve spec.hostPID for each pod.
pod_hostpid=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostPID}' 2>/dev/null)
if [ -z "${pod_hostpid}" ]; then
pod_hostpid="false"
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostpid: ${pod_hostpid} is_compliant: true"
else
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostpid: ${pod_hostpid} is_compliant: false"
fi
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
Audit: the audit retrieves each Pod's spec.hostPID.
Condition: is_compliant is false if Pod's spec.hostPID is set to `true`.
Default: by default, there are no restrictions on the creation of hostPID containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve spec.hostIPC for each pod.
pod_hostipc=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostIPC}' 2>/dev/null)
if [ -z "${pod_hostipc}" ]; then
pod_hostipc="false"
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostipc: ${pod_hostipc} is_compliant: true"
else
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostipc: ${pod_hostipc} is_compliant: false"
fi
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
Audit: the audit retrieves each Pod's spec.hostIPC.
Condition: is_compliant is false if Pod's spec.hostIPC is set to `true`.
Default: by default, there are no restrictions on the creation of hostIPC containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve spec.hostNetwork for each pod.
pod_hostnetwork=$(kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o jsonpath='{.spec.hostNetwork}' 2>/dev/null)
if [ -z "${pod_hostnetwork}" ]; then
pod_hostnetwork="false"
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostnetwork: ${pod_hostnetwork} is_compliant: true"
else
echo "***pod_name: ${pod_name} pod_namespace: ${pod_namespace} is_pod_hostnetwork: ${pod_hostnetwork} is_compliant: false"
fi
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
Audit: the audit retrieves each Pod's spec.hostNetwork.
Condition: is_compliant is false if Pod's spec.hostNetwork is set to `true`.
Default: by default, there are no restrictions on the creation of hostNetwork containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Manual)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAME:.metadata.name,POD_NAMESPACE:.metadata.namespace --no-headers | while read -r pod_name pod_namespace
do
# Retrieve container(s) for each Pod.
kubectl get pod "${pod_name}" --namespace "${pod_namespace}" -o json | jq -c '.spec.containers[]' | while read -r container
do
# Retrieve container's name
container_name=$(echo ${container} | jq -r '.name')
# Retrieve container's .securityContext.allowPrivilegeEscalation
container_allowprivesc=$(echo ${container} | jq -r '.securityContext.allowPrivilegeEscalation' | sed -e 's/null/notset/g')
if [ "${container_allowprivesc}" = "false" ] || [ "${container_allowprivesc}" = "notset" ]; then
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_allowprivesc: ${container_allowprivesc} is_compliant: true"
else
echo "***pod_name: ${pod_name} container_name: ${container_name} pod_namespace: ${pod_namespace} is_container_allowprivesc: ${container_allowprivesc} is_compliant: false"
fi
done
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.securityContext.allowPrivilegeEscalation` set to `true`.
Audit: the audit retrieves each Pod's container(s) `.securityContext.allowPrivilegeEscalation`.
Condition: is_compliant is false if container's `.securityContext.allowPrivilegeEscalation` is set to `true`.
Default: If notset, privilege escalation is allowed (defaults to true). However, if PSP/PSA is used with a `restricted` profile,
privilege escalation is explicitly disallowed unless configured otherwise.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Manual)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a
namespace contains applications which do not require any Linux capabilities to operate
consider adding a policy which forbids the admission of containers which do not drop all
capabilities.
scored: false
- id: 5.2.10
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.11
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.12
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and set up image provenance.
scored: false
- id: 5.6
text: "General Policies"
checks:
- id: 5.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.6.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.6.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.6.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false


@@ -448,7 +448,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to
If using a Kubelet config file, edit the file to set tlsCipherSuites: to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file


@@ -146,7 +146,7 @@ groups:
type: "manual"
remediation: |
Review the use of capabilites in applications running on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
contains applications which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false


@@ -447,7 +447,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file


@@ -153,7 +153,7 @@ groups:
type: "manual"
remediation: |
Review the use of capabilites in applications running on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
contains applications which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false

View File

@@ -445,7 +445,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

View File

@@ -900,14 +900,11 @@ groups:
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value for the --bind-address parameter
@@ -935,14 +932,11 @@ groups:
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value for the --bind-address parameter

View File

@@ -451,7 +451,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

View File

@@ -153,7 +153,7 @@ groups:
type: "manual"
remediation: |
Review the use of capabilites in applications running on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
contains applications which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false

View File

@@ -472,7 +472,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to
If using a Kubelet config file, edit the file to set tlsCipherSuites: to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

View File

@@ -132,7 +132,7 @@ groups:
type: "manual"
remediation: |
Review the use of capabilites in applications running on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
contains applications which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false

View File

@@ -452,7 +452,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to
If using a Kubelet config file, edit the file to set tlsCipherSuites: to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

View File

@@ -132,7 +132,7 @@ groups:
type: "manual"
remediation: |
Review the use of capabilites in applications running on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
contains applications which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false

View File

@@ -345,16 +345,15 @@ groups:
text: "Ensure that the --DenyServiceExternalIPs is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: have
op: has
value: "DenyServiceExternalIPs"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and remove the `DenyServiceExternalIPs`
from enabled admission plugins.
on the control plane node and add the `DenyServiceExternalIPs` plugin
to the enabled admission plugins, as such --enable-admission-plugin=DenyServiceExternalIPs.
scored: false
- id: 1.2.4

View File

@@ -429,7 +429,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

View File

@@ -188,7 +188,7 @@ groups:
type: "manual"
remediation: |
Review the use of capabilites in applications running on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
contains applications which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false

View File

@@ -345,16 +345,15 @@ groups:
text: "Ensure that the --DenyServiceExternalIPs is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: have
op: has
value: "DenyServiceExternalIPs"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and remove the `DenyServiceExternalIPs`
from enabled admission plugins.
on the control plane node and add the `DenyServiceExternalIPs` plugin
to the enabled admission plugins, as such --enable-admission-plugin=DenyServiceExternalIPs.
scored: false
- id: 1.2.4

View File

@@ -429,7 +429,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

View File

@@ -188,7 +188,7 @@ groups:
type: "manual"
remediation: |
Review the use of capabilites in applications running on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
contains applications which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false

cfg/cis-1.9/config.yaml Normal file (2 lines added)
View File

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml

View File

@@ -0,0 +1,62 @@
---
controls:
version: "cis-1.9"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.1.2
text: "Service account token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.
scored: false
- id: 3.1.3
text: "Bootstrap token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.
scored: false
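As a sketch of the OIDC alternative that checks 3.1.1 through 3.1.3 recommend, the API server can be pointed at an external identity provider with flags along these lines; the issuer URL and client ID are placeholders, not values from the benchmark.

# Fragment of the API server pod spec ($apiserverconf); values are placeholders
spec:
  containers:
    - command:
        - kube-apiserver
        - --oidc-issuer-url=https://idp.example.com   # placeholder issuer
        - --oidc-client-id=kubernetes                 # placeholder client ID
        - --oidc-username-claim=email
        - --oidc-groups-claim=groups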
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false
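A minimal audit policy covering the areas listed in 3.2.2 might look like the following sketch, assuming the audit.k8s.io/v1 API; rule ordering matters, since the first matching rule wins.

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log Secret/ConfigMap/TokenReview access at Metadata only, to avoid recording sensitive payloads
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  # Record changes to workload objects
  - level: RequestResponse
    verbs: ["create", "update", "patch", "delete"]
    resources:
      - group: ""
        resources: ["pods"]
      - group: "apps"
        resources: ["deployments"]
  # Exec / port-forward / proxy style sub-resources
  - level: Metadata
    resources:
      - group: ""
        resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
  # Everything else at Metadata, the most basic level
  - level: Metadata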

cfg/cis-1.9/etcd.yaml Normal file (135 lines added)
View File

@@ -0,0 +1,135 @@
---
controls:
version: "cis-1.9"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
- flag: "--key-file"
env: "ETCD_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/ca-file>
--key-file=</path/to/key-file>
scored: true
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
env: "ETCD_PEER_CERT_FILE"
- flag: "--peer-key-file"
env: "ETCD_PEER_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--peer-client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: false
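Taken together, the etcd remediations in this file amount to a static-pod command of roughly the following shape. This is a sketch, not the benchmark's own manifest; the file paths are placeholders in the usual kubeadm layout.

# Fragment of the etcd pod spec ($etcdconf); all paths are placeholders
spec:
  containers:
    - command:
        - etcd
        - --cert-file=/etc/kubernetes/pki/etcd/server.crt
        - --key-file=/etc/kubernetes/pki/etcd/server.key
        - --client-cert-auth=true
        - --auto-tls=false
        - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
        - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
        - --peer-client-cert-auth=true
        - --peer-auto-tls=false
        - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt   # dedicated etcd CA, per 2.7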

cfg/cis-1.9/master.yaml Normal file (919 lines added)
View File

@@ -0,0 +1,919 @@
---
controls:
version: "cis-1.9"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 600 $apiserverconf
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $apiserverconf
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $controllermanagerconf
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $controllermanagerconf
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $schedulerconf
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $schedulerconf
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c permissions=%a; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $etcdconf
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c %U:%G; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $etcdconf
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root <path/to/cni/files>
scored: false
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c permissions=%a "$DATA_DIR"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "700"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/etcd
scored: true
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c %U:%G "$DATA_DIR"
tests:
test_items:
- flag: "etcd:etcd"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
scored: true
- id: 1.1.13
text: "Ensure that the default administrative credential file permissions are set to 600 (Automated)"
audit: |
for adminconf in /etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf; do if test -e $adminconf; then stat -c "permissions=%a %n" $adminconf; fi; done
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chmod 600 /etc/kubernetes/super-admin.conf
scored: true
- id: 1.1.14
text: "Ensure that the default administrative credential file ownership is set to root:root (Automated)"
audit: |
for adminconf in /etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf; do if test -e $adminconf; then stat -c "ownership=%U:%G %n" $adminconf; fi; done
use_multiple_values: true
tests:
test_items:
- flag: "ownership"
compare:
op: eq
value: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chown root:root /etc/kubernetes/super-admin.conf
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $schedulerkubeconfig
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $controllermanagerkubeconfig
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c %U:%G $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "find /etc/kubernetes/pki/ | xargs stat -c %U:%G"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /etc/kubernetes/pki/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.crt' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.key' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.key
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--anonymous-auth=false
scored: false
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the control plane node and remove the --token-auth-file=<filename> parameter.
scored: true
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "DenyServiceExternalIPs"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and add the `DenyServiceExternalIPs` plugin
to the enabled admission plugins, for example --enable-admission-plugins=DenyServiceExternalIPs.
scored: false
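For 1.2.3, the flag change lands in the API server manifest roughly as below. A sketch only; NodeRestriction is shown purely as an illustrative companion plugin.

# Fragment of $apiserverconf (sketch)
spec:
  containers:
    - command:
        - kube-apiserver
        - --enable-admission-plugins=NodeRestriction,DenyServiceExternalIPs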
- id: 1.2.4
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the control plane node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the control plane node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
scored: true
- id: 1.2.6
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.
scored: true
- id: 1.2.9
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
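A sketch of the two files 1.2.9 refers to: an admission control config file that enables EventRateLimit, and the limit configuration it points at. The file path and the limit values are placeholders.

# Admission control config file passed via --admission-control-config-file
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: EventRateLimit
    path: /etc/kubernetes/eventratelimit.yaml   # placeholder path
---
# Contents of the file referenced above (illustrative limits)
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
  - type: Namespace
    qps: 50
    burst: 100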
- id: 1.2.10
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.12
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.13
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.14
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.15
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.16
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log
scored: true
- id: 1.2.17
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30
scored: true
- id: 1.2.18
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10
scored: true
- id: 1.2.19
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100
scored: true
- id: 1.2.20
text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
type: manual
remediation: |
Edit the API server pod specification file $apiserverconf
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s
scored: false
- id: 1.2.21
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.22
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=<filename>
scored: true
- id: 1.2.23
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
- flag: "--etcd-keyfile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: true
- id: 1.2.24
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
- flag: "--tls-private-key-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.25
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.26
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: true
- id: 1.2.27
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=</path/to/EncryptionConfig/File>
scored: false
- id: 1.2.28
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: |
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep $apiserverbin | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
tests:
test_items:
- flag: "provider"
compare:
op: valid_elements
value: "aescbc,kms,secretbox"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
scored: false
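A sketch of the EncryptionConfig file referenced by 1.2.27 and 1.2.28, using the aescbc provider; the key material shown is a placeholder, not a real secret.

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # placeholder
      - identity: {}          # fallback so existing plaintext data can still be read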
- id: 1.2.29
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
scored: false
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true
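For 1.3.7 and 1.4.2, the manifests end up with the flag pinned to the loopback address, roughly as below; a sketch of the relevant command lines only.

# Fragment of $controllermanagerconf (sketch)
- command:
    - kube-controller-manager
    - --bind-address=127.0.0.1
# Fragment of $schedulerconf (sketch)
- command:
    - kube-scheduler
    - --bind-address=127.0.0.1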

cfg/cis-1.9/node.yaml Normal file (478 lines added)
View File

@@ -0,0 +1,478 @@
---
controls:
version: "cis-1.9"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: false
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command to modify the file permissions of the --client-ca-file.
chmod 600 <filename>
scored: false
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: false
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 600 $kubeletconf
scored: true
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
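# Checks 4.2.4-4.2.6 follow the same pattern. A sketch of compliant config-file settings
# (the values shown are examples of acceptable settings, not the only ones):
readOnlyPort: 0                        # 4.2.4: disables the read-only port
streamingConnectionIdleTimeout: 5m0s   # 4.2.5: any non-zero duration
makeIPTablesUtilChains: true           # 4.2.6: matches the kubelet default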
- id: 4.2.7
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
audit: "/bin/ps -fC $kubeletbin"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.8
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --event-qps parameter to an appropriate level in the KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
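# Config-file sketch for the serving-certificate check above; both paths are placeholders
# to be replaced with the certificate and key used to identify this kubelet.
tlsCertFile: /etc/kubernetes/pki/kubelet.crt        # example path
tlsPrivateKeyFile: /etc/kubernetes/pki/kubelet.key  # example path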
- id: 4.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.11
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
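# A compliant config-file sketch for the two rotation checks above. serverTLSBootstrap is
# included because it is the config-file counterpart of serving-certificate rotation; treat
# this as an illustrative sketch rather than the benchmark's prescribed remediation.
rotateCertificates: true                 # 4.2.10: client certificate rotation
serverTLSBootstrap: true                 # requests serving certificates via the CSR API
featureGates:
  RotateKubeletServerCertificate: true   # 4.2.11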
- id: 4.2.12
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
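# In the config file the same cipher list is a YAML sequence. A sketch limited to the ECDHE
# suites; any subset of the values listed above is acceptable.
tlsCipherSuites:
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384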
- id: 4.2.13
text: "Ensure that a limit is set on pod PIDs (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --pod-max-pids
path: '{.podPidsLimit}'
remediation: |
Decide on an appropriate level for this parameter and set it,
either via the --pod-max-pids command line parameter or the PodPidsLimit configuration file setting.
scored: false
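# One-line config-file sketch for the PID-limit check; 4096 is only an example value, the
# benchmark merely requires that some limit is set.
podPidsLimit: 4096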
- id: 4.3
text: "kube-proxy"
checks:
- id: 4.3.1
text: "Ensure that the kube-proxy metrics service is bound to localhost (Automated)"
audit: "/bin/ps -fC $proxybin"
audit_config: "/bin/sh -c 'if test -e $proxykubeconfig; then cat $proxykubeconfig; fi'"
tests:
bin_op: or
test_items:
- flag: "--metrics-bind-address"
path: '{.metricsBindAddress}'
compare:
op: has
value: "127.0.0.1"
- flag: "--metrics-bind-address"
path: '{.metricsBindAddress}'
set: false
remediation: |
Modify or remove any values which bind the metrics service to a non-localhost address.
The default value is 127.0.0.1:10249.
scored: true
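# For reference, a KubeProxyConfiguration sketch that keeps the metrics endpoint on
# localhost, mirroring the default value mentioned above.
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
metricsBindAddress: "127.0.0.1:10249"   # localhost-only metrics endpoint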

cfg/cis-1.9/policies.yaml (new file, 405 lines)

@@ -0,0 +1,405 @@
---
controls:
version: "cis-1.9"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
audit: |
kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name --no-headers | while read -r role_name role_binding subject
do
if [[ "${role_name}" != "cluster-admin" && "${role_binding}" == "cluster-admin" ]]; then
is_compliant="false"
else
is_compliant="true"
fi;
echo "**role_name: ${role_name} role_binding: ${role_binding} subject: ${subject} is_compliant: ${is_compliant}"
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role : kubectl delete clusterrolebinding [name]
Condition: is_compliant is false if role_name is not cluster-admin and role_binding is cluster-admin.
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Automated)"
audit: "echo \"canGetListWatchSecretsAsSystemAuthenticated: $(kubectl auth can-i get,list,watch secrets --all-namespaces --as=system:authenticated)\""
tests:
test_items:
- flag: "canGetListWatchSecretsAsSystemAuthenticated"
compare:
op: eq
value: no
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
audit: |
# Check Roles
kubectl get roles --all-namespaces -o custom-columns=ROLE_NAMESPACE:.metadata.namespace,ROLE_NAME:.metadata.name --no-headers | while read -r role_namespace role_name
do
role_rules=$(kubectl get role -n "${role_namespace}" "${role_name}" -o=json | jq -c '.rules')
if echo "${role_rules}" | grep -q "\[\"\*\"\]"; then
role_is_compliant="false"
else
role_is_compliant="true"
fi;
echo "**role_name: ${role_name} role_namespace: ${role_namespace} role_rules: ${role_rules} role_is_compliant: ${role_is_compliant}"
done
# Check ClusterRoles
kubectl get clusterroles -o custom-columns=CLUSTERROLE_NAME:.metadata.name --no-headers | while read -r clusterrole_name
do
clusterrole_rules=$(kubectl get clusterrole "${clusterrole_name}" -o=json | jq -c '.rules')
if echo "${clusterrole_rules}" | grep -q "\[\"\*\"\]"; then
clusterrole_is_compliant="false"
else
clusterrole_is_compliant="true"
fi;
echo "**clusterrole_name: ${clusterrole_name} clusterrole_rules: ${clusterrole_rules} clusterrole_is_compliant: ${clusterrole_is_compliant}"
done
use_multiple_values: true
tests:
bin_op: or
test_items:
- flag: "role_is_compliant"
compare:
op: eq
value: true
set: true
- flag: "clusterrole_is_compliant"
compare:
op: eq
value: true
set: true
remediation: |
Where possible replace any use of wildcards ["*"] in roles and clusterroles with specific
objects or actions.
Condition: role_is_compliant is false if ["*"] is found in rules.
Condition: clusterrole_is_compliant is false if ["*"] is found in rules.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Automated)"
audit: |
echo "canCreatePodsAsSystemAuthenticated: $(kubectl auth can-i create pods --all-namespaces --as=system:authenticated)"
tests:
test_items:
- flag: "canCreatePodsAsSystemAuthenticated"
compare:
op: eq
value: no
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used (Automated)"
audit: |
kubectl get serviceaccount --all-namespaces --field-selector metadata.name=default -o=json | jq -r '.items[] | " namespace: \(.metadata.namespace), kind: \(.kind), name: \(.metadata.name), automountServiceAccountToken: \(.automountServiceAccountToken | if . == null then "notset" else . end )"' | xargs -L 1
use_multiple_values: true
tests:
test_items:
- flag: "automountServiceAccountToken"
compare:
op: eq
value: false
set: true
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
`automountServiceAccountToken: false`.
scored: false
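# A sketch of a default ServiceAccount patched as described above; the namespace name is a
# placeholder.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: example-namespace        # placeholder
automountServiceAccountToken: false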
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAMESPACE:.metadata.namespace,POD_NAME:.metadata.name,POD_SERVICE_ACCOUNT:.spec.serviceAccount,POD_IS_AUTOMOUNTSERVICEACCOUNTTOKEN:.spec.automountServiceAccountToken --no-headers | while read -r pod_namespace pod_name pod_service_account pod_is_automountserviceaccounttoken
do
# Retrieve automountServiceAccountToken's value for ServiceAccount and Pod, set to notset if null or <none>.
svacc_is_automountserviceaccounttoken=$(kubectl get serviceaccount -n "${pod_namespace}" "${pod_service_account}" -o json | jq -r '.automountServiceAccountToken' | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
pod_is_automountserviceaccounttoken=$(echo "${pod_is_automountserviceaccounttoken}" | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
if [ "${svacc_is_automountserviceaccounttoken}" = "false" ] && ( [ "${pod_is_automountserviceaccounttoken}" = "false" ] || [ "${pod_is_automountserviceaccounttoken}" = "notset" ] ); then
is_compliant="true"
elif [ "${svacc_is_automountserviceaccounttoken}" = "true" ] && [ "${pod_is_automountserviceaccounttoken}" = "false" ]; then
is_compliant="true"
else
is_compliant="false"
fi
echo "**namespace: ${pod_namespace} pod_name: ${pod_name} service_account: ${pod_service_account} pod_is_automountserviceaccounttoken: ${pod_is_automountserviceaccounttoken} svacc_is_automountServiceAccountToken: ${svacc_is_automountserviceaccounttoken} is_compliant: ${is_compliant}"
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Modify the definition of ServiceAccounts and Pods which do not need to mount service
account tokens to disable it, with `automountServiceAccountToken: false`.
If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.
Condition: Pod is_compliant to true when
- ServiceAccount is automountServiceAccountToken: false and Pod is automountServiceAccountToken: false or notset
- ServiceAccount is automountServiceAccountToken: true or notset and Pod is automountServiceAccountToken: false
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.1.9
text: "Minimize access to create persistent volumes (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to PersistentVolume objects in the cluster.
scored: false
- id: 5.1.10
text: "Minimize access to the proxy sub-resource of nodes (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the proxy sub-resource of node objects.
scored: false
- id: 5.1.11
text: "Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
scored: false
- id: 5.1.12
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
scored: false
- id: 5.1.13
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the token sub-resource of serviceaccount objects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
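# One way to apply such a policy is a Pod Security Admission label on the namespace; a
# sketch follows (namespace name and enforcement level are examples, and other policy
# engines are equally valid).
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace                            # placeholder
  labels:
    pod-security.kubernetes.io/enforce: restricted   # rejects privileged pods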
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Manual)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Manual)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
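# A minimal default-deny NetworkPolicy sketch that can serve as a per-namespace starting
# point; the namespace name is a placeholder.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: example-namespace   # placeholder
spec:
  podSelector: {}                # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress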
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false


@@ -60,6 +60,7 @@ master:
- /etc/kubernetes/scheduler.conf
- /var/lib/kube-scheduler/kubeconfig
- /var/lib/kube-scheduler/config.yaml
- /var/lib/rancher/rke2/server/cred/scheduler.kubeconfig
- /system/secrets/kubernetes/kube-scheduler/kubeconfig
defaultkubeconfig: /etc/kubernetes/scheduler.conf
@@ -84,6 +85,7 @@ master:
kubeconfig:
- /etc/kubernetes/controller-manager.conf
- /var/lib/kube-controller-manager/kubeconfig
- /var/lib/rancher/rke2/server/cred/controller.kubeconfig
- /system/secrets/kubernetes/kube-controller-manager/kubeconfig
defaultkubeconfig: /etc/kubernetes/controller-manager.conf
@@ -95,6 +97,7 @@ master:
datadirs:
- /var/lib/etcd/default.etcd
- /var/lib/etcd/data.etcd
- /var/lib/rancher/k3s/server/db/etcd
confs:
- /etc/kubernetes/manifests/etcd.yaml
- /etc/kubernetes/manifests/etcd.yml
@@ -105,6 +108,7 @@ master:
- /var/snap/microk8s/current/args/etcd
- /usr/lib/systemd/system/etcd.service
- /var/lib/rancher/rke2/server/db/etcd/config
- /var/lib/rancher/k3s/server/db/etcd/config
defaultconf: /etc/kubernetes/manifests/etcd.yaml
defaultdatadir: /var/lib/etcd/default.etcd
@@ -162,13 +166,13 @@ node:
- "/var/snap/microk8s/current/credentials/kubelet.config"
- "/etc/kubernetes/kubeconfig-kubelet"
- "/var/lib/rancher/rke2/agent/kubelet.kubeconfig"
- "/var/lib/rancher/k3s/server/cred/admin.kubeconfig"
- "/var/lib/rancher/k3s/agent/kubelet.kubeconfig"
confs:
- "/etc/kubernetes/kubelet-config.yaml"
- "/var/lib/kubelet/config.yaml"
- "/var/lib/kubelet/config.yml"
- "/etc/kubernetes/kubelet/kubelet-config.json"
- "/etc/kubernetes/kubelet/config.json"
- "/etc/kubernetes/kubelet/config"
- "/home/kubernetes/kubelet-config.yaml"
- "/home/kubernetes/kubelet-config.yml"
@@ -188,7 +192,6 @@ node:
- "/etc/systemd/system/snap.kubelet.daemon.service"
- "/etc/systemd/system/snap.microk8s.daemon-kubelet.service"
- "/etc/kubernetes/kubelet.yaml"
- "/var/lib/rancher/rke2/agent/kubelet.kubeconfig"
defaultconf: "/var/lib/kubelet/config.yaml"
defaultsvc: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
@@ -234,6 +237,7 @@ etcd:
datadirs:
- /var/lib/etcd/default.etcd
- /var/lib/etcd/data.etcd
- /var/lib/rancher/k3s/server/db/etcd
confs:
- /etc/kubernetes/manifests/etcd.yaml
- /etc/kubernetes/manifests/etcd.yml
@@ -278,15 +282,31 @@ version_mapping:
"1.24": "cis-1.24"
"1.25": "cis-1.7"
"1.26": "cis-1.8"
"1.27": "cis-1.9"
"1.28": "cis-1.10"
"1.29": "cis-1.11"
"1.30": "cis-1.11"
"1.31": "cis-1.11"
"1.32": "cis-1.12"
"1.33": "cis-1.12"
"1.34": "cis-1.12"
"eks-1.0.1": "eks-1.0.1"
"eks-1.1.0": "eks-1.1.0"
"eks-1.2.0": "eks-1.2.0"
"eks-1.5.0": "eks-1.5.0"
"eks-1.7.0": "eks-1.7.0"
"eks-1.8.0": "eks-1.8.0"
"gke-1.0": "gke-1.0"
"gke-1.2.0": "gke-1.2.0"
"gke-1.6.0": "gke-1.6.0"
"gke-1.8.0": "gke-1.8.0"
"ocp-3.10": "rh-0.7"
"ocp-3.11": "rh-0.7"
"ocp-4.0": "rh-1.0"
"ocp-4.11": "rh-1.4"
"ocp-4.13": "rh-1.8"
"aks-1.0": "aks-1.0"
"aks-1.7": "aks-1.7"
"ack-1.0": "ack-1.0"
"cis-1.6-k3s": "cis-1.6-k3s"
"cis-1.24-microk8s": "cis-1.24-microk8s"
@@ -298,6 +318,7 @@ version_mapping:
"rke-cis-1.23": "rke-cis-1.23"
"rke-cis-1.24": "rke-cis-1.24"
"rke2-cis-1.7": "rke2-cis-1.7"
"rke2-cis-1.8": "rke2-cis-1.8"
"rke2-cis-1.23": "rke2-cis-1.23"
"rke2-cis-1.24": "rke2-cis-1.24"
@@ -356,6 +377,30 @@ target_mapping:
- "controlplane"
- "etcd"
- "policies"
"cis-1.9":
- "master"
- "node"
- "controlplane"
- "etcd"
- "policies"
"cis-1.10":
- "master"
- "node"
- "controlplane"
- "etcd"
- "policies"
"cis-1.11":
- "master"
- "node"
- "controlplane"
- "etcd"
- "policies"
"cis-1.12":
- "master"
- "node"
- "controlplane"
- "etcd"
- "policies"
"gke-1.0":
- "master"
- "node"
@@ -369,6 +414,18 @@ target_mapping:
- "controlplane"
- "policies"
- "managedservices"
"gke-1.6.0":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"gke-1.8.0":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"eks-1.0.1":
- "master"
- "node"
@@ -387,6 +444,24 @@ target_mapping:
- "controlplane"
- "policies"
- "managedservices"
"eks-1.5.0":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"eks-1.7.0":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"eks-1.8.0":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"rh-0.7":
- "master"
- "node"
@@ -396,6 +471,12 @@ target_mapping:
- "controlplane"
- "policies"
- "managedservices"
"aks-1.7":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"ack-1.0":
- "master"
- "node"
@@ -409,6 +490,18 @@ target_mapping:
- "controlplane"
- "policies"
- "etcd"
"rh-1.4":
- "master"
- "node"
- "controlplane"
- "policies"
- "etcd"
"rh-1.8":
- "master"
- "node"
- "controlplane"
- "policies"
- "etcd"
"eks-stig-kubernetes-v1r6":
- "node"
- "controlplane"
@@ -426,6 +519,12 @@ target_mapping:
- "controlplane"
- "node"
- "policies"
"k3s-cis-1.8":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"k3s-cis-1.23":
- "master"
- "etcd"
@@ -462,6 +561,12 @@ target_mapping:
- "controlplane"
- "node"
- "policies"
"rke2-cis-1.8":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"rke2-cis-1.23":
- "master"
- "etcd"


@@ -0,0 +1,9 @@
---
## Version-specific settings that override the values in cfg/config.yaml
## These settings are required if you are using the --asff option to report findings to AWS Security Hub
## AWS account number is required.
AWS_ACCOUNT: "<AWS_ACCT_NUMBER>"
## AWS region is required.
AWS_REGION: "<AWS_REGION>"
## EKS Cluster ARN is required.
CLUSTER_ARN: "<AWS_CLUSTER_ARN>"


@@ -0,0 +1,32 @@
---
controls:
version: "eks-1.5.0"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 2.1
text: "Logging"
checks:
- id: 2.1.1
text: "Enable audit Logs (Automated)"
remediation: |
From Console:
1. For each EKS Cluster in each region;
2. Go to 'Amazon EKS' > 'Clusters' > '<cluster-name>' > 'Configuration' > 'Logging'.
3. Click 'Manage logging'.
4. Ensure that all options are toggled to 'Enabled'.
API server: Enabled
Audit: Enabled
Authenticator: Enabled
Controller manager: Enabled
Scheduler: Enabled
5. Click 'Save Changes'.
From CLI:
# For each EKS Cluster in each region;
aws eks update-cluster-config \
--region '${REGION_CODE}' \
--name '${CLUSTER_NAME}' \
--logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
scored: false


@@ -0,0 +1,227 @@
---
controls:
version: "eks-1.5.0"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning using Amazon ECR image scanning or a third party provider (Automated)"
type: "manual"
remediation: |
To utilize AWS ECR for Image scanning please follow the steps below:
To create a repository configured for scan on push (AWS CLI):
aws ecr create-repository --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE
To edit the settings of an existing repository (AWS CLI):
aws ecr put-image-scanning-configuration --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE
Use the following steps to start a manual image scan using the AWS Management Console.
1. Open the Amazon ECR console at https://console.aws.amazon.com/ecr/repositories.
2. From the navigation bar, choose the Region to create your repository in.
3. In the navigation pane, choose Repositories.
4. On the Repositories page, choose the repository that contains the image to scan.
5. On the Images page, select the image to scan and then choose Scan.
scored: false
- id: 5.1.2
text: "Minimize user access to Amazon ECR (Manual)"
type: "manual"
remediation: |
Before you use IAM to manage access to Amazon ECR, you should understand what IAM features
are available to use with Amazon ECR. To get a high-level view of how Amazon ECR and other
AWS services work with IAM, see AWS Services That Work with IAM in the IAM User Guide.
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Amazon ECR (Manual)"
type: "manual"
remediation: |
You can use your Amazon ECR images with Amazon EKS, but you need to satisfy the following prerequisites.
The Amazon EKS worker node IAM role (NodeInstanceRole) that you use with your worker nodes must possess
the following IAM policy permissions for Amazon ECR.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer",
"ecr:GetAuthorizationToken"
],
"Resource": "*"
}
]
}
scored: false
- id: 5.1.4
text: "Minimize Container Registries to only those approved (Manual)"
type: "manual"
remediation: |
To minimize AWS ECR container registries to only those approved, you can follow these steps:
1. Define your approval criteria: Determine the criteria that containers must meet to
be considered approved. This can include factors such as security, compliance,
compatibility, and other requirements.
2. Identify all existing ECR registries: Identify all ECR registries that are currently
being used in your organization.
3. Evaluate ECR registries against approval criteria: Evaluate each ECR registry
against your approval criteria to determine whether it should be approved or not.
This can be done by reviewing the registry settings and configuration, as well as
conducting security assessments and vulnerability scans.
4. Establish policies and procedures: Establish policies and procedures that outline
how ECR registries will be approved, maintained, and monitored. This should
include guidelines for developers to follow when selecting a registry for their
container images.
5. Implement access controls: Implement access controls to ensure that only
approved ECR registries are used to store and distribute container images. This
can be done by setting up IAM policies and roles that restrict access to
unapproved registries or create a whitelist of approved registries.
6. Monitor and review: Continuously monitor and review the use of ECR registries
to ensure that they continue to meet your approval criteria. This can include
scored: false
- id: 5.2
text: "Identity and Access Management (IAM)"
checks:
- id: 5.2.1
text: "Prefer using dedicated Amazon EKS Service Accounts (Automated)"
type: "manual"
remediation: |
With IAM roles for service accounts on Amazon EKS clusters, you can associate an
IAM role with a Kubernetes service account. This service account can then provide
AWS permissions to the containers in any pod that uses that service account. With this
feature, you no longer need to provide extended permissions to the worker node IAM
role so that pods on that node can call AWS APIs.
Applications must sign their AWS API requests with AWS credentials. This feature
provides a strategy for managing credentials for your applications, similar to the way
that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances.
Instead of creating and distributing your AWS credentials to the containers or using the
Amazon EC2 instances role, you can associate an IAM role with a Kubernetes service
account. The applications in the pods containers can then use an AWS SDK or the
AWS CLI to make API requests to authorized AWS services.
The IAM roles for service accounts feature provides the following benefits:
- Least privilege - By using the IAM roles for service accounts feature, you no
longer need to provide extended permissions to the worker node IAM role so that
pods on that node can call AWS APIs. You can scope IAM permissions to a
service account, and only pods that use that service account have access to
those permissions. This feature also eliminates the need for third-party solutions
such as kiam or kube2iam.
- Credential isolation - A container can only retrieve credentials for the IAM role
that is associated with the service account to which it belongs. A container never
has access to credentials that are intended for another container that belongs to
another pod.
- Audit-ability - Access and event logging is available through CloudTrail to help
ensure retrospective auditing.
scored: false
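# A sketch of a dedicated service account once an IAM role is associated with it. The names
# and the role ARN are placeholders; the annotation key is the one used by IAM roles for
# service accounts.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-app              # placeholder
  namespace: example-namespace   # placeholder
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/example-role   # placeholder ARN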
- id: 5.3
text: "AWS EKS Key Management Service"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted using Customer Master Keys (CMKs) managed in AWS KMS (Manual)"
type: "manual"
remediation: |
This process can only be performed during Cluster Creation.
Enable 'Secrets Encryption' during Amazon EKS cluster creation as described
in the links within the 'References' section.
scored: false
- id: 5.4
text: "Cluster Networking"
checks:
- id: 5.4.1
text: "Restrict Access to the Control Plane Endpoint (Automated)"
type: "manual"
remediation: |
By enabling private endpoint access to the Kubernetes API server, all communication
between your nodes and the API server stays within your VPC. You can also limit the IP
addresses that can access your API server from the internet, or completely disable
internet access to the API server.
With this in mind, you can update your cluster accordingly using the AWS CLI to ensure
that Private Endpoint Access is enabled.
If you choose to also enable Public Endpoint Access then you should also configure a
list of allowable CIDR blocks, resulting in restricted access from the internet. If you
specify no CIDR blocks, then the public API server endpoint is able to receive and
process requests from all IP addresses by defaulting to ['0.0.0.0/0'].
For example, the following command would enable private access to the Kubernetes
API as well as limited public access over the internet from a single IP address (noting
the /32 CIDR suffix):
aws eks update-cluster-config --region $AWS_REGION --name $CLUSTER_NAME --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32"
Note: The CIDR blocks specified cannot include reserved addresses.
There is a maximum number of CIDR blocks that you can specify. For more information,
see the EKS Service Quotas link in the references section.
For more detailed information, see the EKS Cluster Endpoint documentation link in the
references section.
scored: false
- id: 5.4.2
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Automated)"
type: "manual"
remediation: |
By enabling private endpoint access to the Kubernetes API server, all communication
between your nodes and the API server stays within your VPC.
With this in mind, you can update your cluster accordingly using the AWS CLI to ensure
that Private Endpoint Access is enabled.
For example, the following command would enable private access to the Kubernetes
API and ensure that no public access is permitted:
aws eks update-cluster-config --region $AWS_REGION --name $CLUSTER_NAME --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false
Note: For more detailed information, see the EKS Cluster Endpoint documentation link
in the references section.
scored: false
- id: 5.4.3
text: "Ensure clusters are created with Private Nodes (Automated)"
type: "manual"
remediation: |
aws eks update-cluster-config \
--region region-code \
--name my-cluster \
--resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32",endpointPrivateAccess=true
scored: false
- id: 5.4.4
text: "Ensure Network Policy is Enabled and set as appropriate (Automated)"
type: "manual"
remediation: |
Utilize Calico or other network policy engine to segment and isolate your traffic.
scored: false
- id: 5.4.5
text: "Encrypt traffic to HTTPS load balancers with TLS certificates (Manual)"
type: "manual"
remediation: |
Your load balancer vendor can provide details on configuring HTTPS with TLS.
scored: false
- id: 5.5
text: "Authentication and Authorization"
checks:
- id: 5.5.1
text: "Manage Kubernetes RBAC users with AWS IAM Authenticator for Kubernetes or Upgrade to AWS CLI v1.16.156 or greater (Manual)"
type: "manual"
remediation: |
Refer to the 'Managing users or IAM roles for your cluster' in Amazon EKS documentation.
Note: If using AWS CLI version 1.16.156 or later there is no need to install the AWS
IAM Authenticator anymore.
The relevant AWS CLI commands, depending on the use case, are:
aws eks update-kubeconfig
aws eks get-token
scored: false


@@ -0,0 +1,6 @@
---
controls:
version: "eks-1.5.0"
id: 1
text: "Control Plane Components"
type: "master"

cfg/eks-1.5.0/node.yaml (new file, 453 lines)

@@ -0,0 +1,453 @@
---
controls:
version: "eks-1.5.0"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletkubeconfig
scored: true
- id: 3.1.2
text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 3.1.3
text: "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 $kubeletconf
scored: true
- id: 3.1.4
text: "Ensure that the kubelet configuration file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the Anonymous Auth is Not Enabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
set: true
compare:
op: eq
value: false
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Disable Anonymous Authentication by setting the following parameter:
"authentication": { "anonymous": { "enabled": false } }
Remediation Method 2.
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--anonymous-auth=false
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
set: true
compare:
op: nothave
value: AlwaysAllow
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Enable Webhook Authentication by setting the following parameter:
"authentication": { "webhook": { "enabled": true } }
Next, set the Authorization Mode to Webhook by setting the following parameter:
"authorization": { "mode": "Webhook }
Finer detail of the authentication and authorization fields can be found in the
Kubelet Configuration documentation.
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--authentication-token-webhook
--authorization-mode=Webhook
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.3
text: "Ensure that a Client CA File is Configured (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Configure the client certificate authority file by setting the following parameter
appropriately:
"authentication": { "x509": {"clientCAFile": <path/to/client-ca-file> } }"
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--client-ca-file=<path/to/client-ca-file>
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.4
text: "Ensure that the --read-only-port is disabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: true
compare:
op: eq
value: 0
remediation: |
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to 0
"readOnlyPort": 0
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--read-only-port=0
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: true
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to a
non-zero value in the format of #h#m#s
"streamingConnectionIdleTimeout": "4h0m0s"
You should ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not
specify a --streaming-connection-idle-timeout argument because it would
override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--streaming-connection-idle-timeout=4h0m0s
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"streamingConnectionIdleTimeout": by extracting the live configuration from the
nodes running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=ip-192.168.31.226.ec2.internal (example node name from "kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: true
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"makeIPTablesUtilChains": true
Ensure that /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf
does not set the --make-iptables-util-chains argument because that would
override your Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--make-iptables-util-chains=true
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"makeIPTablesUtilChains.: true by extracting the live configuration from the nodes
running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=ip-192.168.31.226.ec2.internal (example node name from "kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.7
text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: gte
value: 0
remediation: |
If using a Kubelet config file, edit the file to set eventRecordQPS to an appropriate
level.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node
and set the --event-qps parameter to an appropriate level in the KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.8
text: "Ensure that the --rotate-certificates argument is not present or is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: true
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"RotateCertificate":true
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set the --rotate-certificates
executable argument to false because this would override the Kubelet
config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-certificates=true
scored: true
- id: 3.2.9
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: true
compare:
op: eq
value: true
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"featureGates": {
"RotateKubeletServerCertificate":true
},
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set
the --rotate-kubelet-server-certificate executable argument to false because
this would override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-kubelet-server-certificate=true
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"RotateKubeletServerCertificate": by extracting the live configuration from the
nodes running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=ip-192.168.31.226.ec2.internal (example node name from "kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediation methods:
Restart the kubelet service and check status. The example below is for when using
systemctl to manage services:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true

cfg/eks-1.5.0/policies.yaml (new file, 250 lines)

@@ -0,0 +1,250 @@
---
controls:
version: "eks-1.5.0"
id: 4
text: "Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and if
they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]
scored: false
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
scored: false
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 4.1.4
text: "Minimize access to create pods (Automated)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 4.1.5
text: "Ensure that default service accounts are not actively used. ((Automated)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific
access to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
Automatic remediation for the default account:
kubectl patch serviceaccount default -p
$'automountServiceAccountToken: false'
scored: false
- id: 4.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 4.1.7
text: "Avoid use of system:masters group (Automated)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 4.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 4.2
text: "Pod Security Standards"
checks:
- id: 4.2.1
text: "Minimize the admission of privileged containers (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
To enable PSA for a namespace in your cluster, set the pod-security.kubernetes.io/enforce
label with the policy value you want to enforce.
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
The above command enforces the restricted policy for the NAMESPACE namespace.
You can also enable Pod Security Admission for all your namespaces. For example:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
scored: false
- id: 4.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of hostPID containers.
scored: false
- id: 4.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of hostIPC containers.
scored: false
- id: 4.2.4
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of hostNetwork containers.
scored: false
- id: 4.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with .spec.allowPrivilegeEscalation set to true.
scored: false
- id: 4.3
text: "CNI Plugin"
checks:
- id: 4.3.1
text: "Ensure CNI plugin supports network policies (Manual)"
type: "manual"
remediation: |
As with RBAC policies, network policies should adhere to the policy of least privileged
access. Start by creating a deny all policy that restricts all inbound and outbound traffic
from a namespace or create a global policy using Calico.
scored: false
- id: 4.3.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
- id: 4.4
text: "Secrets Management"
checks:
- id: 4.4.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
type: "manual"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 4.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.5
text: "General Policies"
checks:
- id: 4.5.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.5.2
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
As a best practice, we recommend that you scope the binding for privileged pods to
service accounts within a particular namespace, e.g. kube-system, and limit access
to that namespace. For all other serviceaccounts/namespaces, we recommend
implementing a more restrictive policy such as this:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: restricted
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default'
apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default'
seccomp.security.alpha.kubernetes.io/defaultProfileName: 'runtime/default'
apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
spec:
privileged: false
# Required to prevent escalations to root.
allowPrivilegeEscalation: false
# This is redundant with non-root + disallow privilege escalation,
# but we can provide it for defense in depth.
requiredDropCapabilities:
- ALL
# Allow core volume types.
volumes:
- 'configMap'
- 'emptyDir'
- 'projected'
- 'secret'
- 'downwardAPI'
# Assume that persistentVolumes set up by the cluster admin are safe to use.
- 'persistentVolumeClaim'
hostNetwork: false
hostIPC: false
hostPID: false
runAsUser:
# Require the container to run without root privileges.
rule: 'MustRunAsNonRoot'
seLinux:
# This policy assumes the nodes are using AppArmor rather than SELinux.
rule: 'RunAsAny'
supplementalGroups:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
fsGroup:
rule: 'MustRunAs'
ranges:
# Forbid adding the root group.
- min: 1
max: 65535
readOnlyRootFilesystem: false
This policy prevents pods from running as privileged or escalating privileges. It also
restricts the types of volumes that can be mounted and the root supplemental groups
that can be added.
Another, albeit similar, approach is to start with a policy that locks everything down and
incrementally add exceptions for applications that need looser restrictions such as
logging agents which need the ability to mount a host path.
scored: false
- id: 4.5.3
text: "The default namespace should not be used (Automated)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false
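Check 4.3.1 above recommends starting from a deny-all policy but does not include one. A minimal sketch of such a default-deny NetworkPolicy, assuming a namespace named my-app (substitute your own namespace and add allow rules per workload afterwards):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-app
spec:
  # An empty podSelector matches every pod in the namespace.
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF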

View File

@@ -0,0 +1,9 @@
---
## Version-specific settings that override the values in cfg/config.yaml
## These settings are required if you are using the --asff option to report findings to AWS Security Hub
## AWS account number is required.
AWS_ACCOUNT: "<AWS_ACCT_NUMBER>"
## AWS region is required.
AWS_REGION: "<AWS_REGION>"
## EKS Cluster ARN is required.
CLUSTER_ARN: "<AWS_CLUSTER_ARN>"

View File

@@ -0,0 +1,69 @@
---
controls:
version: "eks-1.7.0"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 2.1
text: "Logging"
checks:
- id: 2.1.1
text: "Enable audit Logs (Manual)"
type: manual
remediation: |
From Console:
1. For each EKS Cluster in each region;
2. Go to 'Amazon EKS' > 'Clusters' > '' > 'Configuration' > 'Logging'.
3. Click 'Manage logging'.
4. Ensure that all options are toggled to 'Enabled'.
API server: Enabled
Audit: Enabled
Authenticator: Enabled
Controller manager: Enabled
Scheduler: Enabled
5. Click 'Save Changes'.
From CLI:
# For each EKS Cluster in each region;
aws eks update-cluster-config \
--region '${REGION_CODE}' \
--name '${CLUSTER_NAME}' \
--logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
scored: false
- id: 2.1.2
text: "Ensure audit logs are collected and managed (Manual)"
type: manual
remediation: |
Create or update the audit-policy.yaml to specify the audit logging configuration:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
resources:
- group: ""
resources: ["pods"]
Apply the audit policy configuration to the cluster:
kubectl apply -f <path-to-audit-policy>.yaml
Ensure audit logs are forwarded to a centralized logging system like CloudWatch, Elasticsearch, or another log management solution:
kubectl create configmap cluster-audit-policy --from-file=audit-policy.yaml -n kube-system
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: audit-logging
namespace: kube-system
spec:
containers:
- name: audit-log-forwarder
image: my-log-forwarder-image
volumeMounts:
- mountPath: /etc/kubernetes/audit
name: audit-config
volumes:
- name: audit-config
configMap:
name: cluster-audit-policy
EOF
scored: false
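As a quick spot check for 2.1.1, the currently enabled control-plane log types can be read back with the AWS CLI; REGION_CODE and CLUSTER_NAME are the same placeholders used in the remediation above:
aws eks describe-cluster \
  --region "${REGION_CODE}" \
  --name "${CLUSTER_NAME}" \
  --query 'cluster.logging.clusterLogging'
All five log types (api, audit, authenticator, controllerManager, scheduler) should be listed under an entry with "enabled": true.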

View File

@@ -0,0 +1,227 @@
---
controls:
version: "eks-1.7.0"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning using Amazon ECR image scanning or a third party provider (Manual)"
type: "manual"
remediation: |
To utilize AWS ECR for Image scanning please follow the steps below:
To create a repository configured for scan on push (AWS CLI):
aws ecr create-repository --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE
To edit the settings of an existing repository (AWS CLI):
aws ecr put-image-scanning-configuration --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE
Use the following steps to start a manual image scan using the AWS Management Console.
1. Open the Amazon ECR console at https://console.aws.amazon.com/ecr/repositories.
2. From the navigation bar, choose the Region to create your repository in.
3. In the navigation pane, choose Repositories.
4. On the Repositories page, choose the repository that contains the image to scan.
5. On the Images page, select the image to scan and then choose Scan.
scored: false
- id: 5.1.2
text: "Minimize user access to Amazon ECR (Manual)"
type: "manual"
remediation: |
Before you use IAM to manage access to Amazon ECR, you should understand what IAM features
are available to use with Amazon ECR. To get a high-level view of how Amazon ECR and other
AWS services work with IAM, see AWS Services That Work with IAM in the IAM User Guide.
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Amazon ECR (Manual)"
type: "manual"
remediation: |
You can use your Amazon ECR images with Amazon EKS, but you need to satisfy the following prerequisites.
The Amazon EKS worker node IAM role (NodeInstanceRole) that you use with your worker nodes must possess
the following IAM policy permissions for Amazon ECR.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer",
"ecr:GetAuthorizationToken"
],
"Resource": "*"
}
]
}
scored: false
- id: 5.1.4
text: "Minimize Container Registries to only those approved (Manual)"
type: "manual"
remediation: |
To minimize AWS ECR container registries to only those approved, you can follow these steps:
1. Define your approval criteria: Determine the criteria that containers must meet to
be considered approved. This can include factors such as security, compliance,
compatibility, and other requirements.
2. Identify all existing ECR registries: Identify all ECR registries that are currently
being used in your organization.
3. Evaluate ECR registries against approval criteria: Evaluate each ECR registry
against your approval criteria to determine whether it should be approved or not.
This can be done by reviewing the registry settings and configuration, as well as
conducting security assessments and vulnerability scans.
4. Establish policies and procedures: Establish policies and procedures that outline
how ECR registries will be approved, maintained, and monitored. This should
include guidelines for developers to follow when selecting a registry for their
container images.
5. Implement access controls: Implement access controls to ensure that only
approved ECR registries are used to store and distribute container images. This
can be done by setting up IAM policies and roles that restrict access to
unapproved registries or create a whitelist of approved registries.
6. Monitor and review: Continuously monitor and review the use of ECR registries
to ensure that they continue to meet your approval criteria. This can include
scored: false
- id: 5.2
text: "Identity and Access Management (IAM)"
checks:
- id: 5.2.1
text: "Prefer using dedicated Amazon EKS Service Accounts (Manual)"
type: "manual"
remediation: |
With IAM roles for service accounts on Amazon EKS clusters, you can associate an
IAM role with a Kubernetes service account. This service account can then provide
AWS permissions to the containers in any pod that uses that service account. With this
feature, you no longer need to provide extended permissions to the worker node IAM
role so that pods on that node can call AWS APIs.
Applications must sign their AWS API requests with AWS credentials. This feature
provides a strategy for managing credentials for your applications, similar to the way
that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances.
Instead of creating and distributing your AWS credentials to the containers or using the
Amazon EC2 instances role, you can associate an IAM role with a Kubernetes service
account. The applications in the pod's containers can then use an AWS SDK or the
AWS CLI to make API requests to authorized AWS services.
The IAM roles for service accounts feature provides the following benefits:
- Least privilege - By using the IAM roles for service accounts feature, you no
longer need to provide extended permissions to the worker node IAM role so that
pods on that node can call AWS APIs. You can scope IAM permissions to a
service account, and only pods that use that service account have access to
those permissions. This feature also eliminates the need for third-party solutions
such as kiam or kube2iam.
- Credential isolation - A container can only retrieve credentials for the IAM role
that is associated with the service account to which it belongs. A container never
has access to credentials that are intended for another container that belongs to
another pod.
- Audit-ability - Access and event logging is available through CloudTrail to help
ensure retrospective auditing.
scored: false
- id: 5.3
text: "AWS EKS Key Management Service"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted using Customer Master Keys (CMKs) managed in AWS KMS (Manual)"
type: "manual"
remediation: |
This process can only be performed during Cluster Creation.
Enable 'Secrets Encryption' during Amazon EKS cluster creation as described
in the links within the 'References' section.
scored: false
- id: 5.4
text: "Cluster Networking"
checks:
- id: 5.4.1
text: "Restrict Access to the Control Plane Endpoint (Manual)"
type: "manual"
remediation: |
By enabling private endpoint access to the Kubernetes API server, all communication
between your nodes and the API server stays within your VPC. You can also limit the IP
addresses that can access your API server from the internet, or completely disable
internet access to the API server.
With this in mind, you can update your cluster accordingly using the AWS CLI to ensure
that Private Endpoint Access is enabled.
If you choose to also enable Public Endpoint Access then you should also configure a
list of allowable CIDR blocks, resulting in restricted access from the internet. If you
specify no CIDR blocks, then the public API server endpoint is able to receive and
process requests from all IP addresses by defaulting to ['0.0.0.0/0'].
For example, the following command would enable private access to the Kubernetes
API as well as limited public access over the internet from a single IP address (noting
the /32 CIDR suffix):
aws eks update-cluster-config --region $AWS_REGION --name $CLUSTER_NAME --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32"
Note: The CIDR blocks specified cannot include reserved addresses.
There is a maximum number of CIDR blocks that you can specify. For more information,
see the EKS Service Quotas link in the references section.
For more detailed information, see the EKS Cluster Endpoint documentation link in the
references section.
scored: false
- id: 5.4.2
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
type: "manual"
remediation: |
By enabling private endpoint access to the Kubernetes API server, all communication
between your nodes and the API server stays within your VPC.
With this in mind, you can update your cluster accordingly using the AWS CLI to ensure
that Private Endpoint Access is enabled.
For example, the following command would enable private access to the Kubernetes
API and ensure that no public access is permitted:
aws eks update-cluster-config --region $AWS_REGION --name $CLUSTER_NAME --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false
Note: For more detailed information, see the EKS Cluster Endpoint documentation link
in the references section.
scored: false
- id: 5.4.3
text: "Ensure clusters are created with Private Nodes (Manual)"
type: "manual"
remediation: |
aws eks update-cluster-config \
--region region-code \
--name my-cluster \
--resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32",endpointPrivateAccess=true
scored: false
- id: 5.4.4
text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
type: "manual"
remediation: |
Utilize Calico or other network policy engine to segment and isolate your traffic.
scored: false
- id: 5.4.5
text: "Encrypt traffic to HTTPS load balancers with TLS certificates (Manual)"
type: "manual"
remediation: |
Your load balancer vendor can provide details on configuring HTTPS with TLS.
scored: false
- id: 5.5
text: "Authentication and Authorization"
checks:
- id: 5.5.1
text: "Manage Kubernetes RBAC users with AWS IAM Authenticator for Kubernetes or Upgrade to AWS CLI v1.16.156 or greater (Manual)"
type: "manual"
remediation: |
Refer to the 'Managing users or IAM roles for your cluster' in Amazon EKS documentation.
Note: If using AWS CLI version 1.16.156 or later there is no need to install the AWS
IAM Authenticator anymore.
The relevant AWS CLI commands, depending on the use case, are:
aws eks update-kubeconfig
aws eks get-token
scored: false
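To confirm the endpoint settings that 5.4.1 and 5.4.2 describe, the current values can be read back; AWS_REGION and CLUSTER_NAME are the placeholders already used above:
aws eks describe-cluster \
  --region "$AWS_REGION" \
  --name "$CLUSTER_NAME" \
  --query 'cluster.resourcesVpcConfig'
Inspect endpointPrivateAccess, endpointPublicAccess and publicAccessCidrs in the output; a cluster compliant with 5.4.2 shows private access true and public access false.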

View File

@@ -0,0 +1,6 @@
---
controls:
version: "eks-1.7.0"
id: 1
text: "Control Plane Components"
type: "master"

456
cfg/eks-1.7.0/node.yaml Normal file
View File

@@ -0,0 +1,456 @@
---
controls:
version: "eks-1.7.0"
id: 3
text: "Worker Nodes"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletkubeconfig
scored: true
- id: 3.1.2
text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 3.1.3
text: "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 $kubeletconf
scored: true
- id: 3.1.4
text: "Ensure that the kubelet configuration file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the Anonymous Auth is Not Enabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
set: true
compare:
op: eq
value: false
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Disable Anonymous Authentication by setting the following parameter:
"authentication": { "anonymous": { "enabled": false } }
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--anonymous-auth=false
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
set: true
compare:
op: nothave
value: AlwaysAllow
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Enable Webhook Authentication by setting the following parameter:
"authentication": { "webhook": { "enabled": true } }
Next, set the Authorization Mode to Webhook by setting the following parameter:
"authorization": { "mode": "Webhook }
Finer detail of the authentication and authorization fields can be found in the
Kubelet Configuration documentation.
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--authentication-token-webhook
--authorization-mode=Webhook
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.3
text: "Ensure that a Client CA File is Configured (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Configure the client certificate authority file by setting the following parameter
appropriately:
"authentication": { "x509": {"clientCAFile": <path/to/client-ca-file> } }"
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--client-ca-file=<path/to/client-ca-file>
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.4
text: "Ensure that the --read-only-port is disabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: true
compare:
op: eq
value: 0
remediation: |
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to 0
"readOnlyPort": 0
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--read-only-port=0
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: true
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to a
non-zero value in the format of #h#m#s
"streamingConnectionIdleTimeout": "4h0m0s"
You should ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not
specify a --streaming-connection-idle-timeout argument because it would
override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--streaming-connection-idle-timeout=4h0m0s
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"streamingConnectionIdleTimeout": by extracting the live configuration from the
nodes running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=ip-192.168.31.226.ec2.internal (example node name from "kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: true
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"makeIPTablesUtilChains": true
Ensure that /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf
does not set the --make-iptables-util-chains argument because that would
override your Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--make-iptables-util-chains=true
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"makeIPTablesUtilChains.: true by extracting the live configuration from the nodes
running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=ip-192.168.31.226.ec2.internal (example node name from "kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.7
text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate
level.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node
and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.8
text: "Ensure that the --rotate-certificates argument is not present or is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"RotateCertificate":true
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set the --RotateCertificate
executable argument to false because this would override the Kubelet
config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-certificates=true
scored: true
- id: 3.2.9
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: true
compare:
op: eq
value: true
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"featureGates": {
"RotateKubeletServerCertificate":true
},
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set
the --rotate-kubelet-server-certificate executable argument to false because
this would override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-kubelet-server-certificate=true
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"RotateKubeletServerCertificate": by extracting the live configuration from the
nodes running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=ip-192.168.31.226.ec2.internal (example node name from "kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediation methods:
Restart the kubelet service and check status. The example below is for when using
systemctl to manage services:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
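Taken together, the kubelet-config.json settings that the 3.2 checks above look for correspond roughly to the fragment below. This is an illustrative sketch only: the clientCAFile path is a placeholder, and the values should be merged into the existing /etc/kubernetes/kubelet/kubelet-config.json rather than replacing it.
{
  "authentication": {
    "anonymous": { "enabled": false },
    "webhook": { "enabled": true },
    "x509": { "clientCAFile": "<path/to/client-ca-file>" }
  },
  "authorization": { "mode": "Webhook" },
  "readOnlyPort": 0,
  "streamingConnectionIdleTimeout": "4h0m0s",
  "makeIPTablesUtilChains": true,
  "eventRecordQPS": 0,
  "rotateCertificates": true,
  "featureGates": { "RotateKubeletServerCertificate": true }
}
After editing, reload and restart the kubelet as shown in the individual remediations (systemctl daemon-reload, systemctl restart kubelet.service, systemctl status kubelet -l).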

358
cfg/eks-1.7.0/policies.yaml Normal file
View File

@@ -0,0 +1,358 @@
---
controls:
version: "eks-1.7.0"
id: 4
text: "Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
audit: |
output=$(kubectl get clusterrolebindings -o json | jq -r '
.items[]
| select(.roleRef.name == "cluster-admin")
| .subjects[]?
| select(.kind != "Group" or (.name != "system:masters" and .name != "system:nodes"))
| "FOUND_CLUSTER_ADMIN_BINDING"
')
if [ -z "$output" ]; then
echo "NO_CLUSTER_ADMIN_BINDINGS"
else
echo "$output"
fi
tests:
test_items:
- flag: "NO_CLUSTER_ADMIN_BINDINGS"
set: true
compare:
op: eq
value: "NO_CLUSTER_ADMIN_BINDINGS"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and if
they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]
scored: true
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
audit: |
count=$(kubectl get roles --all-namespaces -o json | jq '
.items[]
| select(.rules[]?
| (.resources[]? == "secrets")
and ((.verbs[]? == "get") or (.verbs[]? == "list") or (.verbs[]? == "watch"))
)' | wc -l)
if [ "$count" -gt 0 ]; then
echo "SECRETS_ACCESS_FOUND"
fi
tests:
test_items:
- flag: "SECRETS_ACCESS_FOUND"
set: false
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
scored: true
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
audit: |
wildcards=$(kubectl get roles --all-namespaces -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
wildcards_clusterroles=$(kubectl get clusterroles -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
total=$((wildcards + wildcards_clusterroles))
if [ "$total" -gt 0 ]; then
echo "wildcards_present"
fi
tests:
test_items:
- flag: wildcards_present
set: false
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: true
- id: 4.1.4
text: "Minimize access to create pods (Automated)"
audit: |
access=$(kubectl get roles,clusterroles -A -o json | jq '
[.items[] |
select(
.rules[]? |
(.resources[]? == "pods" and .verbs[]? == "create")
)
] | length')
if [ "$access" -gt 0 ]; then
echo "pods_create_access"
fi
tests:
test_items:
- flag: pods_create_access
set: false
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: true
- id: 4.1.5
text: "Ensure that default service accounts are not actively used. (Automated)"
audit: |
default_sa_count=$(kubectl get serviceaccounts --all-namespaces -o json | jq '
[.items[] | select(.metadata.name == "default" and (.automountServiceAccountToken != false))] | length')
if [ "$default_sa_count" -gt 0 ]; then
echo "default_sa_not_auto_mounted"
fi
pods_using_default_sa=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.serviceAccountName == "default")] | length')
if [ "$pods_using_default_sa" -gt 0 ]; then
echo "default_sa_used_in_pods"
fi
tests:
test_items:
- flag: default_sa_not_auto_mounted
set: false
- flag: default_sa_used_in_pods
set: false
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific
access to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
Automatic remediation for the default account:
kubectl patch serviceaccount default -p
$'automountServiceAccountToken: false'
scored: true
- id: 4.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
audit: |
pods_with_token_mount=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.automountServiceAccountToken != false)] | length')
if [ "$pods_with_token_mount" -gt 0 ]; then
echo "automountServiceAccountToken"
fi
tests:
test_items:
- flag: automountServiceAccountToken
set: false
remediation: |
Regularly review pod and service account objects in the cluster to ensure that the automountServiceAccountToken setting is false for pods and accounts that do not explicitly require API server access.
scored: true
- id: 4.1.7
text: "Cluster Access Manager API to streamline and enhance the management of access controls within EKS clusters (Manual)"
type: "manual"
remediation: |
Log in to the AWS Management Console.
Navigate to Amazon EKS and select your EKS cluster.
Go to the Access tab and click "Manage Access" in the "Access Configuration" section.
Under "Cluster Access settings", review the "Cluster Authentication Mode".
Click "EKS API" so that the cluster sources authenticated IAM principals only from the EKS access entry APIs.
Click "ConfigMap" so that the cluster sources authenticated IAM principals only from the aws-auth ConfigMap.
Note: EKS API and ConfigMap must be selected during Cluster creation and cannot be changed once the Cluster is provisioned.
scored: false
- id: 4.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 4.2
text: "Pod Security Standards"
checks:
- id: 4.2.1
text: "Minimize the admission of privileged containers (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?.spec.containers[]?; .securityContext?.privileged == true) then "PRIVILEGED_FOUND" else "NO_PRIVILEGED" end'
tests:
test_items:
- flag: "NO_PRIVILEGED"
set: true
compare:
op: eq
value: "NO_PRIVILEGED"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of privileged containers.
To enable PSA for a namespace in your cluster, set the pod-security.kubernetes.io/enforce label with the policy value you want to enforce.
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
The above command enforces the restricted policy for the NAMESPACE namespace.
You can also enable Pod Security Admission for all your namespaces. For example:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
scored: true
- id: 4.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?; .spec.hostPID == true) then "HOSTPID_FOUND" else "NO_HOSTPID" end'
tests:
test_items:
- flag: "NO_HOSTPID"
set: true
compare:
op: eq
value: "NO_HOSTPID"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of hostPID containers.
scored: true
- id: 4.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r 'if any(.items[]?; .spec.hostIPC == true) then "HOSTIPC_FOUND" else "NO_HOSTIPC" end'
tests:
test_items:
- flag: "NO_HOSTIPC"
set: true
compare:
op: eq
value: "NO_HOSTIPC"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of hostIPC containers.
scored: true
- id: 4.2.4
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r 'if any(.items[]?; .spec.hostNetwork == true) then "HOSTNETWORK_FOUND" else "NO_HOSTNETWORK" end'
tests:
test_items:
- flag: "NO_HOSTNETWORK"
set: true
compare:
op: eq
value: "NO_HOSTNETWORK"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of hostNetwork containers.
scored: true
- id: 4.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?.spec.containers[]?; .securityContext?.allowPrivilegeEscalation == true) then "ALLOWPRIVILEGEESCALATION_FOUND" else "NO_ALLOWPRIVILEGEESCALATION" end'
tests:
test_items:
- flag: "NO_ALLOWPRIVILEGEESCALATION"
set: true
compare:
op: eq
value: "NO_ALLOWPRIVILEGEESCALATION"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with .spec.allowPrivilegeEscalation set to true.
scored: true
- id: 4.3
text: "CNI Plugin"
checks:
- id: 4.3.1
text: "Ensure CNI plugin supports network policies (Manual)"
type: "manual"
remediation: |
As with RBAC policies, network policies should adhere to the policy of least privileged
access. Start by creating a deny all policy that restricts all inbound and outbound traffic
from a namespace or create a global policy using Calico.
scored: false
- id: 4.3.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
audit: |
ns_without_np=$(kubectl get namespaces -o json | jq -r '.items[].metadata.name' | while read ns; do
count=$(kubectl get networkpolicy -n $ns --no-headers 2>/dev/null | wc -l)
if [ "$count" -eq 0 ]; then echo $ns; fi
done)
if [ -z "$ns_without_np" ]; then
echo "ALL_NAMESPACES_HAVE_NETWORK_POLICIES"
else
echo "NAMESPACES_WITHOUT_NETWORK_POLICIES: $ns_without_np"
fi
tests:
test_items:
- flag: "ALL_NAMESPACES_HAVE_NETWORK_POLICIES"
set: true
compare:
op: eq
value: "ALL_NAMESPACES_HAVE_NETWORK_POLICIES"
remediation: |
Create at least one NetworkPolicy in each namespace to control and restrict traffic between pods as needed.
scored: true
- id: 4.4
text: "Secrets Management"
checks:
- id: 4.4.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
audit: |
result=$(kubectl get all --all-namespaces -o jsonpath='{range .items[?(@..secretKeyRef)]}{.metadata.namespace} {.kind} {.metadata.name}{"\n"}{end}')
if [ -z "$result" ]; then
echo "NO_SECRETS_AS_ENV_VARS"
else
echo "SECRETS_AS_ENV_VARS_FOUND: $result"
fi
tests:
test_items:
- flag: "NO_SECRETS_AS_ENV_VARS"
set: true
compare:
op: eq
value: "NO_SECRETS_AS_ENV_VARS"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
scored: true
- id: 4.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.5
text: "General Policies"
checks:
- id: 4.5.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.5.2
text: "The default namespace should not be used (Automated)"
audit: |
output=$(kubectl get $(kubectl api-resources --verbs=list --namespaced=true -o name | paste -sd, -) --ignore-not-found -n default 2>/dev/null | grep -v "^kubernetes ")
if [ -z "$output" ]; then
echo "NO_USER_RESOURCES_IN_DEFAULT"
else
echo "USER_RESOURCES_IN_DEFAULT_FOUND: $output"
fi
tests:
test_items:
- flag: "NO_USER_RESOURCES_IN_DEFAULT"
set: true
remediation: |
Create and use dedicated namespaces for resources instead of the default namespace. Move any user-defined objects out of the default namespace to improve resource segregation and RBAC control.
scored: true
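With these files in place under cfg/eks-1.7.0, the benchmark can be exercised from a worker node using the existing kube-bench CLI flags; the version string below is simply the one these files define:
./kube-bench run --targets node,policies --benchmark eks-1.7.0
Note that the automated policy checks rely on kubectl and jq being available and authorized against the cluster.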

View File

@@ -0,0 +1,9 @@
---
## Version-specific settings that override the values in cfg/config.yaml
## These settings are required if you are using the --asff option to report findings to AWS Security Hub
## AWS account number is required.
AWS_ACCOUNT: "<AWS_ACCT_NUMBER>"
## AWS region is required.
AWS_REGION: "<AWS_REGION>"
## EKS Cluster ARN is required.
CLUSTER_ARN: "<AWS_CLUSTER_ARN>"

View File

@@ -0,0 +1,69 @@
---
controls:
version: "eks-1.8.0"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 2.1
text: "Logging"
checks:
- id: 2.1.1
text: "Enable audit Logs (Manual)"
type: manual
remediation: |
From Console:
1. For each EKS Cluster in each region;
2. Go to 'Amazon EKS' > 'Clusters' > '' > 'Configuration' > 'Logging'.
3. Click 'Manage logging'.
4. Ensure that all options are toggled to 'Enabled'.
API server: Enabled
Audit: Enabled
Authenticator: Enabled
Controller manager: Enabled
Scheduler: Enabled
5. Click 'Save Changes'.
From CLI:
# For each EKS Cluster in each region;
aws eks update-cluster-config \
--region '${REGION_CODE}' \
--name '${CLUSTER_NAME}' \
--logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
scored: false
- id: 2.1.2
text: "Ensure audit logs are collected and managed (Manual)"
type: manual
remediation: |
Create or update the audit-policy.yaml to specify the audit logging configuration:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
resources:
- group: ""
resources: ["pods"]
Apply the audit policy configuration to the cluster:
kubectl apply -f <path-to-audit-policy>.yaml
Ensure audit logs are forwarded to a centralized logging system like CloudWatch, Elasticsearch, or another log management solution:
kubectl create configmap cluster-audit-policy --from-file=audit-policy.yaml -n kube-system
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: audit-logging
namespace: kube-system
spec:
containers:
- name: audit-log-forwarder
image: my-log-forwarder-image
volumeMounts:
- mountPath: /etc/kubernetes/audit
name: audit-config
volumes:
- name: audit-config
configMap:
name: cluster-audit-policy
EOF
scored: false
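For 2.1.2, once control-plane logging is enabled the logs are delivered to CloudWatch Logs. One way to confirm delivery, assuming the standard EKS log group naming and the CLUSTER_NAME placeholder used above:
aws logs describe-log-groups \
  --log-group-name-prefix "/aws/eks/${CLUSTER_NAME}/cluster"
An empty result suggests logging is not enabled or no events have been delivered yet.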

View File

@@ -0,0 +1,227 @@
---
controls:
version: "eks-1.8.0"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning using Amazon ECR image scanning or a third party provider (Manual)"
type: "manual"
remediation: |
To utilize AWS ECR for Image scanning please follow the steps below:
To create a repository configured for scan on push (AWS CLI):
aws ecr create-repository --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE
To edit the settings of an existing repository (AWS CLI):
aws ecr put-image-scanning-configuration --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE
Use the following steps to start a manual image scan using the AWS Management Console.
1. Open the Amazon ECR console at https://console.aws.amazon.com/ecr/repositories.
2. From the navigation bar, choose the Region to create your repository in.
3. In the navigation pane, choose Repositories.
4. On the Repositories page, choose the repository that contains the image to scan.
5. On the Images page, select the image to scan and then choose Scan.
scored: false
- id: 5.1.2
text: "Minimize user access to Amazon ECR (Manual)"
type: "manual"
remediation: |
Before you use IAM to manage access to Amazon ECR, you should understand what IAM features
are available to use with Amazon ECR. To get a high-level view of how Amazon ECR and other
AWS services work with IAM, see AWS Services That Work with IAM in the IAM User Guide.
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Amazon ECR (Manual)"
type: "manual"
remediation: |
You can use your Amazon ECR images with Amazon EKS, but you need to satisfy the following prerequisites.
The Amazon EKS worker node IAM role (NodeInstanceRole) that you use with your worker nodes must possess
the following IAM policy permissions for Amazon ECR.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer",
"ecr:GetAuthorizationToken"
],
"Resource": "*"
}
]
}
scored: false
- id: 5.1.4
text: "Minimize Container Registries to only those approved (Manual)"
type: "manual"
remediation: |
To minimize AWS ECR container registries to only those approved, you can follow these steps:
1. Define your approval criteria: Determine the criteria that containers must meet to
be considered approved. This can include factors such as security, compliance,
compatibility, and other requirements.
2. Identify all existing ECR registries: Identify all ECR registries that are currently
being used in your organization.
3. Evaluate ECR registries against approval criteria: Evaluate each ECR registry
against your approval criteria to determine whether it should be approved or not.
This can be done by reviewing the registry settings and configuration, as well as
conducting security assessments and vulnerability scans.
4. Establish policies and procedures: Establish policies and procedures that outline
how ECR registries will be approved, maintained, and monitored. This should
include guidelines for developers to follow when selecting a registry for their
container images.
5. Implement access controls: Implement access controls to ensure that only
approved ECR registries are used to store and distribute container images. This
can be done by setting up IAM policies and roles that restrict access to
unapproved registries or create a whitelist of approved registries.
6. Monitor and review: Continuously monitor and review the use of ECR registries
to ensure that they continue to meet your approval criteria. This can include
scored: false
- id: 5.2
text: "Identity and Access Management (IAM)"
checks:
- id: 5.2.1
text: "Prefer using dedicated Amazon EKS Service Accounts (Manual)"
type: "manual"
remediation: |
With IAM roles for service accounts on Amazon EKS clusters, you can associate an
IAM role with a Kubernetes service account. This service account can then provide
AWS permissions to the containers in any pod that uses that service account. With this
feature, you no longer need to provide extended permissions to the worker node IAM
role so that pods on that node can call AWS APIs.
Applications must sign their AWS API requests with AWS credentials. This feature
provides a strategy for managing credentials for your applications, similar to the way
that Amazon EC2 instance profiles provide credentials to Amazon EC2 instances.
Instead of creating and distributing your AWS credentials to the containers or using the
Amazon EC2 instances role, you can associate an IAM role with a Kubernetes service
account. The applications in the pod's containers can then use an AWS SDK or the
AWS CLI to make API requests to authorized AWS services.
The IAM roles for service accounts feature provides the following benefits:
- Least privilege - By using the IAM roles for service accounts feature, you no
longer need to provide extended permissions to the worker node IAM role so that
pods on that node can call AWS APIs. You can scope IAM permissions to a
service account, and only pods that use that service account have access to
those permissions. This feature also eliminates the need for third-party solutions
such as kiam or kube2iam.
- Credential isolation - A container can only retrieve credentials for the IAM role
that is associated with the service account to which it belongs. A container never
has access to credentials that are intended for another container that belongs to
another pod.
- Audit-ability - Access and event logging is available through CloudTrail to help
ensure retrospective auditing.
scored: false
- id: 5.3
text: "AWS EKS Key Management Service"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted using Customer Master Keys (CMKs) managed in AWS KMS (Manual)"
type: "manual"
remediation: |
This process can only be performed during Cluster Creation.
Enable 'Secrets Encryption' during Amazon EKS cluster creation as described
in the links within the 'References' section.
scored: false
- id: 5.4
text: "Cluster Networking"
checks:
- id: 5.4.1
text: "Restrict Access to the Control Plane Endpoint (Manual)"
type: "manual"
remediation: |
By enabling private endpoint access to the Kubernetes API server, all communication
between your nodes and the API server stays within your VPC. You can also limit the IP
addresses that can access your API server from the internet, or completely disable
internet access to the API server.
With this in mind, you can update your cluster accordingly using the AWS CLI to ensure
that Private Endpoint Access is enabled.
If you choose to also enable Public Endpoint Access then you should also configure a
list of allowable CIDR blocks, resulting in restricted access from the internet. If you
specify no CIDR blocks, then the public API server endpoint is able to receive and
process requests from all IP addresses by defaulting to ['0.0.0.0/0'].
For example, the following command would enable private access to the Kubernetes
API as well as limited public access over the internet from a single IP address (noting
the /32 CIDR suffix):
aws eks update-cluster-config --region $AWS_REGION --name $CLUSTER_NAME --resources-vpc-config endpointPrivateAccess=true,publicAccessCidrs="203.0.113.5/32"
Note: The CIDR blocks specified cannot include reserved addresses.
There is a maximum number of CIDR blocks that you can specify. For more information,
see the EKS Service Quotas link in the references section.
For more detailed information, see the EKS Cluster Endpoint documentation link in the
references section.
scored: false
- id: 5.4.2
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
type: "manual"
remediation: |
By enabling private endpoint access to the Kubernetes API server, all communication
between your nodes and the API server stays within your VPC.
With this in mind, you can update your cluster accordingly using the AWS CLI to ensure
that Private Endpoint Access is enabled.
For example, the following command would enable private access to the Kubernetes
API and ensure that no public access is permitted:
aws eks update-cluster-config --region $AWS_REGION --name $CLUSTER_NAME --resources-vpc-config endpointPrivateAccess=true,endpointPublicAccess=false
Note: For more detailed information, see the EKS Cluster Endpoint documentation link
in the references section.
scored: false
- id: 5.4.3
text: "Ensure clusters are created with Private Nodes (Manual)"
type: "manual"
remediation: |
aws eks update-cluster-config \
--region region-code \
--name my-cluster \
--resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32",endpointPrivateAccess=true
scored: false
- id: 5.4.4
text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
type: "manual"
remediation: |
Utilize Calico or other network policy engine to segment and isolate your traffic.
scored: false
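As one hedged illustration of the segmentation suggested above (the namespace name is a placeholder), a namespace-wide default-deny policy can be expressed with a standard NetworkPolicy object, which Calico or any other conforming network policy engine will enforce:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace          # placeholder namespace
spec:
  podSelector: {}                  # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress                       # no rules are listed, so all traffic is denied
Allow rules for required traffic can then be layered on top of this baseline.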
- id: 5.4.5
text: "Encrypt traffic to HTTPS load balancers with TLS certificates (Manual)"
type: "manual"
remediation: |
Your load balancer vendor can provide details on configuring HTTPS with TLS.
scored: false
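One possible sketch, assuming the legacy AWS cloud provider service annotations and a placeholder ACM certificate ARN, terminates TLS at a load balancer created from a Service:
apiVersion: v1
kind: Service
metadata:
  name: my-web                     # placeholder name
  annotations:
    # Placeholder certificate ARN; the exact annotations depend on the load balancer controller in use
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:111122223333:certificate/example
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: my-web
  ports:
    - port: 443                    # TLS terminated at the load balancer
      targetPort: 8080             # placeholder backend port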
- id: 5.5
text: "Authentication and Authorization"
checks:
- id: 5.5.1
text: "Manage Kubernetes RBAC users with AWS IAM Authenticator for Kubernetes or Upgrade to AWS CLI v1.16.156 or greater (Manual)"
type: "manual"
remediation: |
Refer to the 'Managing users or IAM roles for your cluster' in Amazon EKS documentation.
Note: If you are using AWS CLI version 1.16.156 or later, you no longer need to install
the AWS IAM Authenticator.
The relevant AWS CLI commands, depending on the use case, are:
aws eks update-kubeconfig
aws eks get-token
scored: false
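For context, aws eks update-kubeconfig wires IAM authentication into kubectl by generating a user entry along the lines of the following sketch (cluster name and region are placeholders):
users:
- name: my-cluster-user            # placeholder user entry
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws                 # the AWS CLI issues the token
      args:
        - eks
        - get-token
        - --cluster-name
        - my-cluster               # placeholder cluster name
        - --region
        - us-east-1                # placeholder region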


@@ -0,0 +1,6 @@
---
controls:
version: "eks-1.8.0"
id: 1
text: "Control Plane Components"
type: "master"

cfg/eks-1.8.0/node.yaml

@@ -0,0 +1,456 @@
---
controls:
version: "eks-1.8.0"
id: 3
text: "Worker Nodes"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletkubeconfig
scored: true
- id: 3.1.2
text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 3.1.3
text: "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 $kubeletconf
scored: true
- id: 3.1.4
text: "Ensure that the kubelet configuration file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the Anonymous Auth is Not Enabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
set: true
compare:
op: eq
value: false
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Disable Anonymous Authentication by setting the following parameter:
"authentication": { "anonymous": { "enabled": false } }
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--anonymous-auth=false
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
set: true
compare:
op: nothave
value: AlwaysAllow
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Enable Webhook Authentication by setting the following parameter:
"authentication": { "webhook": { "enabled": true } }
Next, set the Authorization Mode to Webhook by setting the following parameter:
"authorization": { "mode": "Webhook }
Finer detail of the authentication and authorization fields can be found in the
Kubelet Configuration documentation.
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--authentication-token-webhook
--authorization-mode=Webhook
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.3
text: "Ensure that a Client CA File is Configured (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Configure the client certificate authority file by setting the following parameter
appropriately:
"authentication": { "x509": {"clientCAFile": <path/to/client-ca-file> } }"
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--client-ca-file=<path/to/client-ca-file>
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.4
text: "Ensure that the --read-only-port is disabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: true
compare:
op: eq
value: 0
remediation: |
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to 0
"readOnlyPort": 0
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--read-only-port=0
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: true
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to a
non-zero value in the format of #h#m#s
"streamingConnectionIdleTimeout": "4h0m0s"
You should ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not
specify a --streaming-connection-idle-timeout argument because it would
override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--streaming-connection-idle-timeout=4h0m0s
Remediation Method 3:
If using the API configz endpoint, consider checking the status of
"streamingConnectionIdleTimeout" by extracting the live configuration from the
nodes running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for kubelet
configuration changes:
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=ip-192-168-31-226.ec2.internal (example node name from "kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: true
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"makeIPTablesUtilChains": true
Ensure that /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf
does not set the --make-iptables-util-chains argument because that would
override your Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--make-iptables-util-chains=true
Remediation Method 3:
If using the API configz endpoint, consider checking the status of
"makeIPTablesUtilChains": true by extracting the live configuration from the nodes
running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for kubelet
configuration changes:
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=ip-192-168-31-226.ec2.internal (example node name from "kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.7
text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set eventRecordQPS to an appropriate
level.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node
and set the --event-qps parameter in the KUBELET_SYSTEM_PODS_ARGS variable to an
appropriate level.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.8
text: "Ensure that the --rotate-certificates argument is not present or is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"rotateCertificates": true
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set the
--rotate-certificates executable argument to false because this would override the
Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-certificates=true
scored: true
- id: 3.2.9
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: true
compare:
op: eq
value: true
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"featureGates": {
"RotateKubeletServerCertificate":true
},
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set
the --rotate-kubelet-server-certificate executable argument to false because
this would override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-kubelet-server-certificate=true
Remediation Method 3:
If using the API configz endpoint, consider checking the status of
"RotateKubeletServerCertificate" by extracting the live configuration from the
nodes running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for kubelet
configuration changes:
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=ip-192-168-31-226.ec2.internal (example node name from "kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediation methods:
Restart the kubelet service and check status. The example below is for when using
systemctl to manage services:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
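Taken together, the kubelet settings required by the 3.2.x checks above can be collected into a single kubelet-config.json fragment. This is an illustrative sketch only; the client CA path and the eventRecordQPS value are example placeholders and must match your environment:
{
  "authentication": {
    "anonymous": { "enabled": false },
    "webhook": { "enabled": true },
    "x509": { "clientCAFile": "/etc/kubernetes/pki/ca.crt" }
  },
  "authorization": { "mode": "Webhook" },
  "readOnlyPort": 0,
  "streamingConnectionIdleTimeout": "4h0m0s",
  "makeIPTablesUtilChains": true,
  "eventRecordQPS": 5,
  "rotateCertificates": true,
  "featureGates": { "RotateKubeletServerCertificate": true }
}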

cfg/eks-1.8.0/policies.yaml

@@ -0,0 +1,396 @@
---
controls:
version: "eks-1.8.0"
id: 4
text: "Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
audit: |
kubectl get clusterrolebindings -o json | jq -r '
.items[]
| select(.roleRef.name == "cluster-admin")
| .subjects[]?
| select(.kind != "Group" or (.name != "system:masters" and .name != "system:nodes"))
| "FOUND_CLUSTER_ADMIN_BINDING"
' || echo "NO_CLUSTER_ADMIN_BINDINGS"
tests:
test_items:
- flag: "NO_CLUSTER_ADMIN_BINDINGS"
set: true
compare:
op: eq
value: "NO_CLUSTER_ADMIN_BINDINGS"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and if
they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]
scored: true
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
audit: |
count=$(kubectl get roles --all-namespaces -o json | jq '
.items[]
| select(.rules[]?
| (.resources[]? == "secrets")
and ((.verbs[]? == "get") or (.verbs[]? == "list") or (.verbs[]? == "watch"))
)' | wc -l)
if [ "$count" -gt 0 ]; then
echo "SECRETS_ACCESS_FOUND"
fi
tests:
test_items:
- flag: "SECRETS_ACCESS_FOUND"
set: false
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
scored: true
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
audit: |
wildcards=$(kubectl get roles --all-namespaces -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
wildcards_clusterroles=$(kubectl get clusterroles -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
total=$((wildcards + wildcards_clusterroles))
if [ "$total" -gt 0 ]; then
echo "wildcards_present"
fi
tests:
test_items:
- flag: wildcards_present
set: false
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: true
- id: 4.1.4
text: "Minimize access to create pods (Automated)"
audit: |
access=$(kubectl get roles,clusterroles -A -o json | jq '
[.items[] |
select(
.rules[]? |
(.resources[]? == "pods" and .verbs[]? == "create")
)
] | length')
if [ "$access" -gt 0 ]; then
echo "pods_create_access"
fi
tests:
test_items:
- flag: pods_create_access
set: false
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: true
- id: 4.1.5
text: "Ensure that default service accounts are not actively used. (Automated)"
audit: |
default_sa_count=$(kubectl get serviceaccounts --all-namespaces -o json | jq '
[.items[] | select(.metadata.name == "default" and (.automountServiceAccountToken != false))] | length')
if [ "$default_sa_count" -gt 0 ]; then
echo "default_sa_not_auto_mounted"
fi
pods_using_default_sa=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.serviceAccountName == "default")] | length')
if [ "$pods_using_default_sa" -gt 0 ]; then
echo "default_sa_used_in_pods"
fi
tests:
test_items:
- flag: default_sa_not_auto_mounted
set: false
- flag: default_sa_used_in_pods
set: false
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific
access to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
Automatic remediation for the default account:
kubectl patch serviceaccount default -p
$'automountServiceAccountToken: false'
scored: true
- id: 4.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
audit: |
pods_with_token_mount=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.automountServiceAccountToken != false)] | length')
if [ "$pods_with_token_mount" -gt 0 ]; then
echo "automountServiceAccountToken"
fi
tests:
test_items:
- flag: automountServiceAccountToken
set: false
remediation: |
Regularly review pod and service account objects in the cluster to ensure that the automountServiceAccountToken setting is false for pods and accounts that do not explicitly require API server access.
scored: true
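An illustrative way to satisfy this for a workload that does not need API server access (all names and the image are placeholders) is to disable token mounting directly in the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod                     # placeholder pod name
spec:
  serviceAccountName: my-app             # placeholder service account
  automountServiceAccountToken: false    # no service account token volume is mounted
  containers:
    - name: app
      image: nginx                       # placeholder image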
- id: 4.1.7
text: "Cluster Access Manager API to streamline and enhance the management of access controls within EKS clusters (Manual)"
type: "manual"
remediation: |
Log in to the AWS Management Console.
Navigate to Amazon EKS and select your EKS cluster.
Go to the Access tab and click "Manage Access" in the "Access Configuration" section.
Under Cluster Access settings, choose the Cluster Authentication Mode:
Select EKS API to have the cluster source authenticated IAM principals only from EKS access entry APIs.
Select ConfigMap to have the cluster source authenticated IAM principals only from the aws-auth ConfigMap.
Note: EKS API and ConfigMap must be selected during cluster creation and cannot be changed once the cluster is provisioned.
scored: false
- id: 4.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 4.1.9
text: "Minimize access to create PersistentVolume objects (Manual)"
type: "manual"
remediation: |
Review the RBAC rules in the cluster and identify users, groups, or service accounts
with create permissions on PersistentVolume resources.
Where possible, remove or restrict create access to PersistentVolume objects to
trusted administrators only.
scored: false
- id: 4.1.10
text: "Minimize access to the proxy sub-resource of Node objects (Manual)"
type: "manual"
remediation: |
Review RBAC roles and bindings in the cluster to identify users, groups,
or service accounts with access to the proxy sub-resource of Node objects.
Where possible, remove or restrict access to the node proxy sub-resource
to trusted administrators only.
scored: false
- id: 4.1.11
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Review RBAC roles and bindings in the cluster to identify users, groups,
or service accounts with access to validatingwebhookconfigurations or
mutatingwebhookconfigurations objects. Where possible, remove or restrict
access to these webhook configuration objects to trusted administrators only.
scored: false
- id: 4.1.12
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Review RBAC roles and bindings in the cluster to identify users, groups,
or service accounts with access to create the token sub-resource of
serviceaccount objects. Where possible, remove or restrict access to
token creation to trusted administrators only.
scored: false
- id: 4.2
text: "Pod Security Standards"
checks:
- id: 4.2.1
text: "Minimize the admission of privileged containers (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?.spec.containers[]?; .securityContext?.privileged == true) then "PRIVILEGED_FOUND" else "NO_PRIVILEGED" end'
tests:
test_items:
- flag: "NO_PRIVILEGED"
set: true
compare:
op: eq
value: "NO_PRIVILEGED"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the admission of privileged containers.
To enable PSA for a namespace in your cluster, set the pod-security.kubernetes.io/enforce label with the policy value you want to enforce.
kubectl label --overwrite ns NAMESPACE pod-security.kubernetes.io/enforce=restricted
The above command enforces the restricted policy for the NAMESPACE namespace.
You can also enable Pod Security Admission for all your namespaces. For example:
kubectl label --overwrite ns --all pod-security.kubernetes.io/warn=baseline
scored: true
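The labels applied by the kubectl commands above can also be declared on the Namespace object itself; a minimal sketch with a placeholder namespace name:
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace                                   # placeholder namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted     # reject pods that violate the restricted policy
    pod-security.kubernetes.io/warn: restricted        # also surface warnings on violations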
- id: 4.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?; .spec.hostPID == true) then "HOSTPID_FOUND" else "NO_HOSTPID" end'
tests:
test_items:
- flag: "NO_HOSTPID"
set: true
compare:
op: eq
value: "NO_HOSTPID"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of hostPID containers.
scored: true
- id: 4.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r 'if any(.items[]?; .spec.hostIPC == true) then "HOSTIPC_FOUND" else "NO_HOSTIPC" end'
tests:
test_items:
- flag: "NO_HOSTIPC"
set: true
compare:
op: eq
value: "NO_HOSTIPC"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of hostIPC containers.
scored: true
- id: 4.2.4
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | jq -r 'if any(.items[]?; .spec.hostNetwork == true) then "HOSTNETWORK_FOUND" else "NO_HOSTNETWORK" end'
tests:
test_items:
- flag: "NO_HOSTNETWORK"
set: true
compare:
op: eq
value: "NO_HOSTNETWORK"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of hostNetwork containers.
scored: true
- id: 4.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
audit: |
kubectl get pods --all-namespaces -o json | \
jq -r 'if any(.items[]?.spec.containers[]?; .securityContext?.allowPrivilegeEscalation == true) then "ALLOWPRIVILEGEESCALATION_FOUND" else "NO_ALLOWPRIVILEGEESCALATION" end'
tests:
test_items:
- flag: "NO_ALLOWPRIVILEGEESCALATION"
set: true
compare:
op: eq
value: "NO_ALLOWPRIVILEGEESCALATION"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with .spec.allowPrivilegeEscalation set to true.
scored: true
- id: 4.3
text: "CNI Plugin"
checks:
- id: 4.3.1
text: "Ensure CNI plugin supports network policies (Manual)"
type: "manual"
remediation: |
As with RBAC policies, network policies should adhere to the policy of least privileged
access. Start by creating a deny all policy that restricts all inbound and outbound traffic
from a namespace or create a global policy using Calico.
scored: false
- id: 4.3.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
audit: |
ns_without_np=$(kubectl get namespaces -o json | jq -r '.items[].metadata.name' | while read ns; do
count=$(kubectl get networkpolicy -n $ns --no-headers 2>/dev/null | wc -l)
if [ "$count" -eq 0 ]; then echo $ns; fi
done)
if [ -z "$ns_without_np" ]; then
echo "ALL_NAMESPACES_HAVE_NETWORK_POLICIES"
else
echo "NAMESPACES_WITHOUT_NETWORK_POLICIES: $ns_without_np"
fi
tests:
test_items:
- flag: "ALL_NAMESPACES_HAVE_NETWORK_POLICIES"
set: true
compare:
op: eq
value: "ALL_NAMESPACES_HAVE_NETWORK_POLICIES"
remediation: |
Create at least one NetworkPolicy in each namespace to control and restrict traffic between pods as needed.
scored: true
- id: 4.4
text: "Secrets Management"
checks:
- id: 4.4.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
audit: |
result=$(kubectl get all --all-namespaces -o jsonpath='{range .items[?(@..secretKeyRef)]}{.metadata.namespace} {.kind} {.metadata.name}{"\n"}{end}')
if [ -z "$result" ]; then
echo "NO_SECRETS_AS_ENV_VARS"
else
echo "SECRETS_AS_ENV_VARS_FOUND: $result"
fi
tests:
test_items:
- flag: "NO_SECRETS_AS_ENV_VARS"
set: true
compare:
op: eq
value: "NO_SECRETS_AS_ENV_VARS"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
scored: true
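To illustrate the preference above, the same Secret can be consumed as mounted files rather than environment variables; the names and image below are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: secret-as-file               # placeholder pod name
spec:
  containers:
    - name: app
      image: nginx                   # placeholder image
      volumeMounts:
        - name: app-secret
          mountPath: /etc/app-secret # secret exposed to the application as files
          readOnly: true
  volumes:
    - name: app-secret
      secret:
        secretName: my-secret        # placeholder Secret name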
- id: 4.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.5
text: "General Policies"
checks:
- id: 4.5.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.5.2
text: "The default namespace should not be used (Automated)"
audit: |
output=$(kubectl get $(kubectl api-resources --verbs=list --namespaced=true -o name | paste -sd, -) --ignore-not-found -n default 2>/dev/null | grep -v "^kubernetes ")
if [ -z "$output" ]; then
echo "NO_USER_RESOURCES_IN_DEFAULT"
else
echo "USER_RESOURCES_IN_DEFAULT_FOUND: $output"
fi
tests:
test_items:
- flag: "NO_USER_RESOURCES_IN_DEFAULT"
set: true
remediation: |
Create and use dedicated namespaces for resources instead of the default namespace. Move any user-defined objects out of the default namespace to improve resource segregation and RBAC control.
scored: true


@@ -0,0 +1,9 @@
---
## Version-specific settings that override the values in cfg/config.yaml
node:
proxy:
defaultkubeconfig: "/var/lib/kubelet/kubeconfig"
kubelet:
defaultconf: "/etc/kubernetes/kubelet/kubelet-config.yaml"


@@ -0,0 +1,20 @@
---
controls:
version: "gke-1.6.0"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 2.1
text: "Authentication and Authorization"
checks:
- id: 2.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
You can remediate the availability of client certificates in your GKE cluster. See
Recommendation 5.8.1.
scored: false


@@ -0,0 +1,617 @@
---
controls:
version: "gke-1.6.0"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning is enabled (Automated)"
type: "manual"
remediation: |
For Images Hosted in GCR:
Using Command Line:
gcloud services enable containeranalysis.googleapis.com
For Images Hosted in AR:
Using Command Line:
gcloud services enable containerscanning.googleapis.com
scored: false
- id: 5.1.2
text: "Minimize user access to Container Image repositories (Manual)"
type: "manual"
remediation: |
For Images Hosted in AR:
Using Command Line:
gcloud artifacts repositories set-iam-policy <repository-name> <path-to-policy-file> \
--location <repository-location>
To learn how to configure policy files see: https://cloud.google.com/artifact-registry/docs/access-control#grant
For Images Hosted in GCR:
Using Command Line:
To change roles at the GCR bucket level:
Firstly, run the following if read permissions are required:
gsutil iam ch <type>:<email_address>:objectViewer gs://artifacts.<project_id>.appspot.com
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role> gs://artifacts.<project_id>.appspot.com
where:
<type> can be one of the following:
user, if the <email_address> is a Google account.
serviceAccount, if <email_address> specifies a Service account.
<email_address> can be one of the following:
a Google account (for example, someone@example.com).
a Cloud IAM service account.
To modify roles defined at the project level and subsequently inherited within the GCR
bucket, or the Service Account User role, extract the IAM policy file, modify it
accordingly and apply it using:
gcloud projects set-iam-policy <project_id> <policy_file>
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Container Image repositories (Manual)"
type: "manual"
remediation: |
For Images Hosted in AR:
Using Command Line:
Add artifactregistry.reader role
gcloud artifacts repositories add-iam-policy-binding <repository> \
--location=<repository-location> \
--member='serviceAccount:<email-address>' \
--role='roles/artifactregistry.reader'
Remove any roles other than artifactregistry.reader
gcloud artifacts repositories remove-iam-policy-binding <repository> \
--location <repository-location> \
--member='serviceAccount:<email-address>' \
--role='<role-name>'
For Images Hosted in GCR:
For an account explicitly granted to the bucket:
Firstly add read access to the Kubernetes Service Account:
gsutil iam ch <type>:<email_address>:objectViewer gs://artifacts.<project_id>.appspot.com
where:
<type> can be one of the following:
user, if the <email_address> is a Google account.
serviceAccount, if <email_address> specifies a Service account.
<email_address> can be one of the following:
a Google account (for example, someone@example.com).
a Cloud IAM service account.
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role> gs://artifacts.<project_id>.appspot.com
For an account that inherits access to the GCR Bucket through Project level
permissions, modify the Projects IAM policy file accordingly, then upload it using:
gcloud projects set-iam-policy <project_id> <policy_file>
scored: false
- id: 5.1.4
text: "Ensure only trusted container images are used (Manual)"
type: "manual"
remediation: |
Using Command Line:
Update the cluster to enable Binary Authorization:
gcloud container clusters update <cluster_name> --enable-binauthz
Create a Binary Authorization Policy using the Binary Authorization Policy Reference:
https://cloud.google.com/binary-authorization/docs/policy-yaml-reference for guidance.
Import the policy file into Binary Authorization:
gcloud container binauthz policy import <yaml_policy>
scored: false
- id: 5.2
text: "Identity and Access Management (IAM)"
checks:
- id: 5.2.1
text: "Ensure GKE clusters are not running using the Compute Engine default service account (Automated))"
type: "manual"
remediation: |
Using Command Line:
To create a minimally privileged service account:
gcloud iam service-accounts create <node_sa_name> \
--display-name "GKE Node Service Account"
export NODE_SA_EMAIL=$(gcloud iam service-accounts list \
--format='value(email)' --filter='displayName:GKE Node Service Account')
Grant the following roles to the service account:
export PROJECT_ID=$(gcloud config get-value project)
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/monitoring.viewer
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/logging.logWriter
To create a new Node pool using the Service account, run the following command:
gcloud container node-pools create <node_pool> \
--service-account=<sa_name>@<project_id>.iam.gserviceaccount.com \
--cluster=<cluster_name> --zone <compute_zone>
Note: The workloads will need to be migrated to the new Node pool, and the old node
pools that use the default service account should be deleted to complete the
remediation.
scored: false
- id: 5.2.2
text: "Prefer using dedicated GCP Service Accounts and Workload Identity (Manual)"
type: "manual"
remediation: |
Using Command Line:
gcloud container clusters update <cluster_name> --zone <cluster_zone> \
--workload-pool <project_id>.svc.id.goog
Note that existing Node pools are unaffected. New Node pools default to
--workload-metadata-from-node=GKE_METADATA_SERVER.
Then, modify existing Node pools to enable GKE_METADATA_SERVER:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <cluster_zone> --workload-metadata=GKE_METADATA
Workloads may need to be modified in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
Also consider the effects on the availability of hosted workloads as Node pools
are updated. It may be more appropriate to create new Node Pools.
scored: false
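As a sketch of the workload-side change mentioned above (service account, namespace and Google service account e-mail are placeholders), the Kubernetes service account is annotated with the GCP service account it should impersonate:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-ksa                       # placeholder Kubernetes service account
  namespace: my-namespace            # placeholder namespace
  annotations:
    # Workload Identity: bind this KSA to a GCP service account
    iam.gke.io/gcp-service-account: my-gsa@my-project.iam.gserviceaccount.com
An IAM binding granting roles/iam.workloadIdentityUser on the GCP service account to this Kubernetes service account is also required, as described in the linked documentation.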
- id: 5.3
text: "Cloud Key Management Service (Cloud KMS)"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS (Automated)"
type: "manual"
remediation: |
To create a key:
Create a key ring:
gcloud kms keyrings create <ring_name> --location <location> --project \
<key_project_id>
Create a key:
gcloud kms keys create <key_name> --location <location> --keyring <ring_name> \
--purpose encryption --project <key_project_id>
Grant the Kubernetes Engine Service Agent service account the Cloud KMS
CryptoKey Encrypter/Decrypter role:
gcloud kms keys add-iam-policy-binding <key_name> --location <location> \
--keyring <ring_name> --member serviceAccount:<service_account_name> \
--role roles/cloudkms.cryptoKeyEncrypterDecrypter --project <key_project_id>
To create a new cluster with Application-layer Secrets Encryption:
gcloud container clusters create <cluster_name> --cluster-version=latest \
--zone <zone> \
--database-encryption-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name> \
--project <cluster_project_id>
To enable on an existing cluster:
gcloud container clusters update <cluster_name> --zone <zone> \
--database-encryption-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name> \
--project <cluster_project_id>
scored: false
- id: 5.4
text: "Node Metadata"
checks:
- id: 5.4.1
text: "Ensure the GKE Metadata Server is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
gcloud container clusters update <cluster_name> --identity-namespace=<project_id>.svc.id.goog
Note that existing Node pools are unaffected. New Node pools default to
--workload-metadata-from-node=GKE_METADATA_SERVER.
To modify an existing Node pool to enable GKE Metadata Server:
gcloud container node-pools update <node_pool_name> --cluster=<cluster_name> \
--workload-metadata-from-node=GKE_METADATA_SERVER
Workloads may need modification in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
scored: false
- id: 5.5
text: "Node Configuration and Maintenance"
checks:
- id: 5.5.1
text: "Ensure Container-Optimized OS (cos_containerd) is used for GKE node images (Automated)"
type: "manual"
remediation: |
Using Command Line:
To set the node image to cos for an existing cluster's Node pool:
gcloud container clusters upgrade <cluster_name> --image-type cos_containerd \
--zone <compute_zone> --node-pool <node_pool_name>
scored: false
- id: 5.5.2
text: "Ensure Node Auto-Repair is enabled for GKE nodes (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable node auto-repair for an existing cluster's Node pool:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --enable-autorepair
scored: false
- id: 5.5.3
text: "Ensure Node Auto-Upgrade is enabled for GKE nodes (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable node auto-upgrade for an existing cluster's Node pool, run the following
command:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <cluster_zone> --enable-autoupgrade
scored: false
- id: 5.5.4
text: "When creating New Clusters - Automate GKE version management using Release Channels (Automated)"
type: "manual"
remediation: |
Using Command Line:
Create a new cluster by running the following command:
gcloud container clusters create <cluster_name> --zone <cluster_zone> \
--release-channel <release_channel>
where <release_channel> is stable or regular, according to requirements.
scored: false
- id: 5.5.5
text: "Ensure Shielded GKE Nodes are Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To migrate an existing cluster, the flag --enable-shielded-nodes needs to be
specified in the cluster update command:
gcloud container clusters update <cluster_name> --zone <cluster_zone> \
--enable-shielded-nodes
scored: false
- id: 5.5.6
text: "Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To create a Node pool within the cluster with Integrity Monitoring enabled, run the
following command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --shielded-integrity-monitoring
Workloads from existing non-conforming Node pools will need to be migrated to the
newly created Node pool, then delete non-conforming Node pools to complete the
remediation
scored: false
- id: 5.5.7
text: "Ensure Secure Boot for Shielded GKE Nodes is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To create a Node pool within the cluster with Secure Boot enabled, run the following
command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --shielded-secure-boot
Workloads will need to be migrated from existing non-conforming Node pools to the
newly created Node pool, then delete the non-conforming pools.
scored: false
- id: 5.6
text: "Cluster Networking"
checks:
- id: 5.6.1
text: "Enable VPC Flow Logs and Intranode Visibility (Automated)"
type: "manual"
remediation: |
Using Command Line:
1. Find the subnetwork name associated with the cluster.
gcloud container clusters describe <cluster_name> \
--region <cluster_region> --format json | jq '.subnetwork'
2. Update the subnetwork to enable VPC Flow Logs.
gcloud compute networks subnets update <subnet_name> --enable-flow-logs
scored: false
- id: 5.6.2
text: "Ensure use of VPC-native clusters (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable Alias IP on a new cluster, run the following command:
gcloud container clusters create <cluster_name> --zone <compute_zone> \
--enable-ip-alias
If using Autopilot configuration mode:
gcloud container clusters create-auto <cluster_name> \
--zone <compute_zone>
scored: false
- id: 5.6.3
text: "Ensure Control Plane Authorized Networks is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable Control Plane Authorized Networks for an existing cluster, run the following
command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--enable-master-authorized-networks
Along with this, you can list authorized networks using the --master-authorized-networks
flag which contains a list of up to 20 external networks that are allowed to
connect to your cluster's control plane through HTTPS. You provide these networks as
a comma-separated list of addresses in CIDR notation (such as 90.90.100.0/24).
scored: false
- id: 5.6.4
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
type: "manual"
remediation: |
Using Command Line:
Create a cluster with a Private Endpoint enabled and Public Access disabled by including
the --enable-private-endpoint flag within the cluster create command:
gcloud container clusters create <cluster_name> --enable-private-endpoint
Setting this flag also requires the setting of --enable-private-nodes, --enable-ip-alias
and --master-ipv4-cidr=<master_cidr_range>.
scored: false
- id: 5.6.5
text: "Ensure clusters are created with Private Nodes (Manual)"
type: "manual"
remediation: |
Using Command Line:
To create a cluster with Private Nodes enabled, include the --enable-private-nodes
flag within the cluster create command:
gcloud container clusters create <cluster_name> --enable-private-nodes
Setting this flag also requires the setting of --enable-ip-alias and
--master-ipv4-cidr=<master_cidr_range>.
scored: false
- id: 5.6.6
text: "Consider firewalling GKE worker nodes (Manual)"
type: "manual"
remediation: |
Using Command Line:
Use the following command to generate firewall rules, setting the variables as
appropriate:
gcloud compute firewall-rules create <firewall_rule_name> \
--network <network> --priority <priority> --direction <direction> \
--action <action> --target-tags <tag> \
--target-service-accounts <service_account> \
--source-ranges <source_cidr_range> --source-tags <source_tags> \
--source-service-accounts <source_service_account> \
--destination-ranges <destination_cidr_range> --rules <rules>
scored: false
- id: 5.6.7
text: "Ensure use of Google-managed SSL Certificates (Automated)"
type: "manual"
remediation: |
If services of type:LoadBalancer are discovered, consider replacing the Service with
an Ingress.
To configure the Ingress and use Google-managed SSL certificates, follow the
instructions as listed at: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs.
scored: false
- id: 5.7
text: "Logging"
checks:
- id: 5.7.1
text: "Ensure Logging and Cloud Monitoring is Enabled (Automated)"
type: "manual"
remediation: |
To enable Logging for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--logging=<components_to_be_logged>
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--logging
for a list of available components for logging.
To enable Cloud Monitoring for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--monitoring=<components_to_be_logged>
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--monitoring
for a list of available components for Cloud Monitoring.
scored: false
- id: 5.7.2
text: "Enable Linux auditd logging (Manual)"
type: "manual"
remediation: |
Using Command Line:
Download the example manifests:
curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml > cos-auditd-logging.yaml
Edit the example manifests if needed. Then, deploy them:
kubectl apply -f cos-auditd-logging.yaml
Verify that the logging Pods have started. If a different Namespace was defined in the
manifests, replace cos-auditd with the name of the namespace being used:
kubectl get pods --namespace=cos-auditd
scored: false
- id: 5.8
text: "Authentication and Authorization"
checks:
- id: 5.8.1
text: "Ensure authentication using Client Certificates is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
Create a new cluster without a Client Certificate:
gcloud container clusters create [CLUSTER_NAME] \
--no-issue-client-certificate
scored: false
- id: 5.8.2
text: "Manage Kubernetes RBAC users with Google Groups for GKE (Manual)"
type: "manual"
remediation: |
Using Command Line:
Follow the G Suite Groups instructions at:
https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#google-groups-for-gke.
Then, create a cluster with:
gcloud container clusters create <cluster_name> --security-group <security_group_name>
Finally create Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings that
reference the G Suite Groups.
scored: false
- id: 5.8.3
text: "Ensure Legacy Authorization (ABAC) is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To disable Legacy Authorization for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--no-enable-legacy-authorization
scored: false
- id: 5.9
text: "Storage"
checks:
- id: 5.9.1
text: "Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD) (Manual)"
type: "manual"
remediation: |
Using Command Line:
Follow the instructions detailed at: https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek.
scored: false
- id: 5.9.2
text: "Enable Customer-Managed Encryption Keys (CMEK) for Boot Disks (Automated)"
type: "manual"
remediation: |
Using Command Line:
Create a new node pool using customer-managed encryption keys for the node boot
disk, of <disk_type> either pd-standard or pd-ssd:
gcloud container node-pools create <cluster_name> --disk-type <disk_type> \
--boot-disk-kms-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>
Create a cluster using customer-managed encryption keys for the node boot disk, of
<disk_type> either pd-standard or pd-ssd:
gcloud container clusters create <cluster_name> --disk-type <disk_type> \
--boot-disk-kms-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>
scored: false
- id: 5.10
text: "Other Cluster Configurations"
checks:
- id: 5.10.1
text: "Ensure Kubernetes Web UI is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To disable the Kubernetes Dashboard on an existing cluster, run the following
command:
gcloud container clusters update <cluster_name> --zone <zone> \
--update-addons=KubernetesDashboard=DISABLED
scored: false
- id: 5.10.2
text: "Ensure that Alpha clusters are not used for production workloads (Automated)"
type: "manual"
remediation: |
Using Command Line:
Upon creating a new cluster
gcloud container clusters create [CLUSTER_NAME] \
--zone [COMPUTE_ZONE]
Do not use the --enable-kubernetes-alpha argument.
scored: false
- id: 5.10.3
text: "Consider GKE Sandbox for running untrusted workloads (Manual)"
type: "manual"
remediation: |
Using Command Line:
To enable GKE Sandbox on an existing cluster, a new Node pool must be created,
which can be done using:
gcloud container node-pools create <node_pool_name> --zone <compute-zone> \
--cluster <cluster_name> --image-type=cos_containerd --sandbox="type=gvisor"
scored: false
- id: 5.10.4
text: "Ensure use of Binary Authorization (Automated)"
type: "manual"
remediation: |
Using Command Line:
Update the cluster to enable Binary Authorization:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--binauthz-evaluation-mode=<evaluation_mode>
Example:
gcloud container clusters update $CLUSTER_NAME --zone $COMPUTE_ZONE \
--binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
See: https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--binauthz-evaluation-mode
for more details around the evaluation modes available.
Create a Binary Authorization Policy using the Binary Authorization Policy Reference:
https://cloud.google.com/binary-authorization/docs/policy-yaml-reference for guidance.
Import the policy file into Binary Authorization:
gcloud container binauthz policy import <yaml_policy>
scored: false
- id: 5.10.5
text: "Enable Security Posture (Manual)"
type: "manual"
remediation: |
Enable security posture via the UI, gcloud, or the API.
https://cloud.google.com/kubernetes-engine/docs/how-to/protect-workload-configuration
scored: false


@@ -0,0 +1,6 @@
---
controls:
version: "gke-1.6.0"
id: 1
text: "Control Plane Components"
type: "master"

cfg/gke-1.6.0/node.yaml

@@ -0,0 +1,506 @@
---
controls:
version: "gke-1.6.0"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $proxykubeconfig
scored: true
- id: 3.1.2
text: "Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example:
chown root:root $proxykubeconfig
scored: true
- id: 3.1.3
text: "Ensure that the kubelet configuration file has permissions set to 600 (Manual)"
audit: '/bin/sh -c ''if test -e /home/kubernetes/kubelet-config.yaml; then stat -c permissions=%a /home/kubernetes/kubelet-config.yaml; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the kubelet config file location)
chmod 600 /home/kubernetes/kubelet-config.yaml
scored: true
- id: 3.1.4
text: "Ensure that the kubelet configuration file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e /home/kubernetes/kubelet-config.yaml; then stat -c %U:%G /home/kubernetes/kubelet-config.yaml; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root /home/kubernetes/kubelet-config.yaml
scored: true
- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the Anonymous Auth is Not Enabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /home/kubernetes/kubelet-config.yaml
Disable Anonymous Authentication by setting the following parameter:
"authentication": { "anonymous": { "enabled": false } }
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--anonymous-auth=false
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
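# Illustrative sketch only (not part of the benchmark): a minimal KubeletConfiguration
# fragment matching the remediation above. The surrounding fields and the file path are
# assumptions; edit the file located via the kubelet's --config argument.
#
#   authentication:
#     anonymous:
#       enabled: false
#     webhook:
#       enabled: true
#   authorization:
#     mode: Webhook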
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Enable Webhook Authentication by setting the following parameter:
"authentication": { "webhook": { "enabled": true } }
Next, set the Authorization Mode to Webhook by setting the following parameter:
"authorization": { "mode": "Webhook }
Finer detail of the authentication and authorization fields can be found in the
Kubelet Configuration documentation (https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/).
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--authentication-token-webhook
--authorization-mode=Webhook
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.3
text: "Ensure that a Client CA File is Configured (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Configure the client certificate authority file by setting the following parameter
appropriately:
"authentication": { "x509": {"clientCAFile": <path/to/client-ca-file> } }"
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, then this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--client-ca-file=<path/to/client-ca-file>
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.4
text: "Ensure that the --read-only-port argument is disabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
bin_op: or
remediation: |
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to 0
"readOnlyPort": 0
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--read-only-port=0
For each remediation:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.yaml file
/etc/kubernetes/kubelet-config.yaml and set the below parameter to a non-zero
value in the format of #h#m#s
"streamingConnectionIdleTimeout": "4h0m0s"
You should ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not
specify a --streaming-connection-idle-timeout argument because it would
override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--streaming-connection-idle-timeout=4h0m0s
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"streamingConnectionIdleTimeout": by extracting the live configuration from the
nodes running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),
and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from
"kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"makeIPTablesUtilChains": true
Ensure that /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf
does not set the --make-iptables-util-chains argument because that would
override your Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--make-iptables-util-chains=true
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"makeIPTablesUtilChains.: true by extracting the live configuration from the nodes
running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),
and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from
"kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.7
text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
on each worker node and set the below parameter in the KUBELET_SYSTEM_PODS_ARGS variable:
--event-qps=<appropriate_level>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.8
text: "Ensure that the --rotate-certificates argument is not present or is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.yaml file
/etc/kubernetes/kubelet/kubelet-config.yaml and set the below parameter to
true
"RotateCertificate":true
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set the --rotate-certificates
executable argument to false because this would override the Kubelet
config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-certificates=true
scored: true
- id: 3.2.9
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: eq
value: true
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.yaml file
/etc/kubernetes/kubelet-config.yaml and set the below parameter to true
"featureGates": {
"RotateKubeletServerCertificate":true
},
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set
the --rotate-kubelet-server-certificate executable argument to false because
this would override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-kubelet-server-certificate=true
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"RotateKubeletServerCertificate": by extracting the live configuration from the
nodes running kubelet.
**See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),
and then rerun the curl statement from audit process to check for kubelet
configuration changes
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from
"kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediation methods:
Restart the kubelet service and check status. The example below is for when using
systemctl to manage services:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true

238
cfg/gke-1.6.0/policies.yaml Normal file

@@ -0,0 +1,238 @@
---
controls:
version: "gke-1.6.0"
id: 4
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]
scored: false
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
scored: false
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 4.1.4
text: "Ensure that default service accounts are not actively used (Automated)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific
access to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
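# Illustrative sketch only (not part of the benchmark): one way to apply the
# remediation above per namespace; <namespace_name> is a placeholder.
#
#   kubectl patch serviceaccount default -n <namespace_name> \
#     -p '{"automountServiceAccountToken": false}'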
- id: 4.1.5
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
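# Illustrative sketch only (not part of the benchmark): disabling token mounting in a
# pod spec; the pod name, container name, and image are placeholders.
#
#   apiVersion: v1
#   kind: Pod
#   metadata:
#     name: <pod_name>
#   spec:
#     automountServiceAccountToken: false
#     containers:
#       - name: app
#         image: <image>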
- id: 4.1.6
text: "Avoid use of system:masters group (Automated)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 4.1.7
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 4.1.8
text: "Avoid bindings to system:anonymous (Automated)"
type: "manual"
remediation: |
Identify all clusterrolebindings and rolebindings to the user system:anonymous.
Check if they are used and review the permissions associated with the binding using the
commands in the Audit section above or refer to GKE documentation
(https://cloud.google.com/kubernetes-engine/docs/best-practices/rbac#detect-prevent-default).
Strongly consider replacing unsafe bindings with an authenticated, user-defined group.
Where possible, bind to non-default, user-defined groups with least-privilege roles.
If there are any unsafe bindings to the user system:anonymous, proceed to delete them
after consideration for cluster operations with only necessary, safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: false
- id: 4.1.9
text: "Avoid non-default bindings to system:unauthenticated (Automated)"
type: "manual"
remediation: |
Identify all non-default clusterrolebindings and rolebindings to the group
system:unauthenticated. Check if they are used and review the permissions
associated with the binding using the commands in the Audit section above or refer to
GKE documentation (https://cloud.google.com/kubernetes-engine/docs/best-practices/rbac#detect-prevent-default).
Strongly consider replacing non-default, unsafe bindings with an authenticated, user-
defined group. Where possible, bind to non-default, user-defined groups with least-
privilege roles.
If there are any non-default, unsafe bindings to the group system:unauthenticated,
proceed to delete them after consideration for cluster operations with only necessary,
safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: false
- id: 4.1.10
text: "Avoid non-default bindings to system:authenticated (Automated)"
type: "manual"
remediation: |
Identify all non-default clusterrolebindings and rolebindings to the group
system:authenticated. Check if they are used and review the permissions associated
with the binding using the commands in the Audit section above or refer to GKE
documentation.
Strongly consider replacing non-default, unsafe bindings with an authenticated, user-
defined group. Where possible, bind to non-default, user-defined groups with least-
privilege roles.
If there are any non-default, unsafe bindings to the group system:authenticated,
proceed to delete them after consideration for cluster operations with only necessary,
safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: false
- id: 4.2
text: "Pod Security Standards"
checks:
- id: 4.2.1
text: "Ensure that the cluster enforces Pod Security Standard Baseline profile or stricter for all namespaces. (Manual)"
type: "manual"
remediation: |
Ensure that Pod Security Admission is in place for every namespace which contains
user workloads.
Run the following command to enforce the Baseline profile in a namespace:
kubectl label namespace <namespace_name> pod-security.kubernetes.io/enforce=baseline
scored: false
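# Illustrative sketch only (not part of the benchmark): enforcing Baseline while
# warning on the stricter Restricted profile; <namespace_name> is a placeholder.
#
#   kubectl label --overwrite namespace <namespace_name> \
#     pod-security.kubernetes.io/enforce=baseline \
#     pod-security.kubernetes.io/warn=restricted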
- id: 4.3
text: "Network Policies and CNI"
checks:
- id: 4.3.1
text: "Ensure that the CNI in use supports Network Policies (Manual)"
type: "manual"
remediation: |
To use a CNI plugin with Network Policy, enable Network Policy in GKE, and the CNI plugin
will be updated. See Recommendation 5.6.7.
scored: false
- id: 4.3.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as needed.
See: https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy#creating_a_network_policy
for more information.
scored: false
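# Illustrative sketch only (not part of the benchmark): a default-deny ingress
# NetworkPolicy that selects every pod in a namespace; <namespace_name> is a placeholder.
#
#   apiVersion: networking.k8s.io/v1
#   kind: NetworkPolicy
#   metadata:
#     name: default-deny-ingress
#     namespace: <namespace_name>
#   spec:
#     podSelector: {}
#     policyTypes:
#       - Ingress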
- id: 4.4
text: "Secrets Management"
checks:
- id: 4.4.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
type: "manual"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
scored: false
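# Illustrative sketch only (not part of the benchmark): mounting a Secret as a
# read-only file instead of exposing it through environment variables; names are
# placeholders.
#
#   spec:
#     containers:
#       - name: app
#         image: <image>
#         volumeMounts:
#           - name: app-secret
#             mountPath: /etc/app/secret
#             readOnly: true
#     volumes:
#       - name: app-secret
#         secret:
#           secretName: <secret_name>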
- id: 4.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.5
text: "Extensible Admission Control"
checks:
- id: 4.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
Also see recommendation 5.10.4.
scored: false
- id: 4.6
text: "General Policies"
checks:
- id: 4.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.6.2
text: "Ensure that the seccomp profile is set to RuntimeDefault in your pod definitions (Automated)"
type: "manual"
remediation: |
Use security context to enable the RuntimeDefault seccomp profile in your pod
definitions. An example is as below:
{
"namespace": "kube-system",
"name": "metrics-server-v0.7.0-dbcc8ddf6-gz7d4",
"seccompProfile": "RuntimeDefault"
}
scored: false
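# Illustrative sketch only (not part of the benchmark): the pod-level securityContext
# form of the setting referenced above.
#
#   spec:
#     securityContext:
#       seccompProfile:
#         type: RuntimeDefault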
- id: 4.6.3
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Google Container-
Optimized OS Benchmark.
scored: false
- id: 4.6.4
text: "The default namespace should not be used (Automated)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false


@@ -0,0 +1,9 @@
---
## Version-specific settings that override the values in cfg/config.yaml
node:
proxy:
defaultkubeconfig: "/var/lib/kubelet/kubeconfig"
kubelet:
defaultconf: "/etc/kubernetes/kubelet-config.yaml"


@@ -0,0 +1,6 @@
---
controls:
version: "gke-1.8.0"
id: 2
text: "Control Plane Configuration"
type: "controlplane"


@@ -0,0 +1,719 @@
---
controls:
version: "gke-1.8.0"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning is enabled (Automated)"
audit: "gcloud services list --enabled"
type: "manual"
remediation: |
For Images Hosted in GCR:
Using Google Cloud Console
Go to GCR by visiting: https://console.cloud.google.com/gcr
Select Settings and, under the Vulnerability Scanning heading, click the TURN ON button.
Using Command Line
gcloud services enable containeranalysis.googleapis.com
For Images Hosted in AR:
Using Google Cloud Console
Go to AR by visiting: https://console.cloud.google.com/artifacts
Select Settings and, under the Vulnerability Scanning heading, click the ENABLE button.
Using Command Line
gcloud services enable containerscanning.googleapis.com
scored: false
- id: 5.1.2
text: "Minimize user access to Container Image repositories (Manual)"
audit: |
gcloud projects get-iam-policy <project_id> \
--flatten="bindings[].members" \
--format='table(bindings.members,bindings.role)' \
--filter="bindings.role:roles/storage.admin OR bindings.role:roles/storage.objectAdmin OR bindings.role:roles/storage.objectCreator OR bindings.role:roles/storage.legacyBucketOwner OR bindings.role:roles/storage.legacyBucketWriter OR bindings.role:roles/storage.legacyObjectOwner"
type: "manual"
remediation: |
For Images Hosted in AR:
Using Command Line:
gcloud artifacts repositories set-iam-policy <repository-name> <path-to-policy-file> \
--location <repository-location>
To learn how to configure policy files see: https://cloud.google.com/artifact-registry/docs/access-control#grant
For Images Hosted in GCR:
Using Command Line:
To change roles at the GCR bucket level:
Firstly, run the following if read permissions are required:
gsutil iam ch <type>:<email_address>:objectViewer gs://artifacts.<project_id>.appspot.com
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role> gs://artifacts.<project_id>.appspot.com
where:
<type> can be one of the following:
user, if the <email_address> is a Google account.
serviceAccount, if <email_address> specifies a Service account.
<email_address> can be one of the following:
a Google account (for example, someone@example.com).
a Cloud IAM service account.
To modify roles defined at the project level and subsequently inherited within the GCR
bucket, or the Service Account User role, extract the IAM policy file, modify it
accordingly and apply it using:
gcloud projects set-iam-policy <project_id> <policy_file>
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Container Image repositories (Manual)"
audit: |
gcloud projects get-iam-policy <project_id> \
--flatten="bindings[].members" \
--format='table(bindings.members,bindings.role)' \
--filter="bindings.role:roles/storage.admin OR bindings.role:roles/storage.objectAdmin OR bindings.role:roles/storage.objectCreator OR bindings.role:roles/storage.legacyBucketOwner OR bindings.role:roles/storage.legacyBucketWriter OR bindings.role:roles/storage.legacyObjectOwner"
type: "manual"
remediation: |
For Images Hosted in AR:
Using Command Line:
Add artifactregistry.reader role
gcloud artifacts repositories add-iam-policy-binding <repository> \
--location=<repository-location> \
--member='serviceAccount:<email-address>' \
--role='roles/artifactregistry.reader'
Remove any roles other than artifactregistry.reader
gcloud artifacts repositories remove-iam-policy-binding <repository> \
--location <repository-location> \
--member='serviceAccount:<email-address>' \
--role='<role-name>'
For Images Hosted in GCR:
For an account explicitly granted to the bucket:
Firstly add read access to the Kubernetes Service Account:
gsutil iam ch <type>:<email_address>:objectViewer gs://artifacts.<project_id>.appspot.com
where:
<type> can be one of the following:
user, if the <email_address> is a Google account.
serviceAccount, if <email_address> specifies a Service account.
<email_address> can be one of the following:
a Google account (for example, someone@example.com).
a Cloud IAM service account.
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role> gs://artifacts.<project_id>.appspot.com
For an account that inherits access to the GCR Bucket through Project level
permissions, modify the Projects IAM policy file accordingly, then upload it using:
gcloud projects set-iam-policy <project_id> <policy_file>
scored: false
- id: 5.1.4
text: "Ensure only trusted container images are used (Manual)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq .binaryAuthorization
type: "manual"
remediation: |
Using Command Line:
Update the cluster to enable Binary Authorization:
gcloud container clusters update <cluster_name> --enable-binauthz
Create a Binary Authorization Policy using the Binary Authorization Policy Reference:
https://cloud.google.com/binary-authorization/docs/policy-yaml-reference for guidance.
Import the policy file into Binary Authorization:
gcloud container binauthz policy import <yaml_policy>
scored: false
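# Illustrative sketch only (not part of the benchmark): a hypothetical Binary
# Authorization policy file for the import command above. Field names follow the
# policy YAML reference linked in the remediation; project and attestor values are
# placeholders.
#
#   globalPolicyEvaluationMode: ENABLE
#   defaultAdmissionRule:
#     evaluationMode: REQUIRE_ATTESTATION
#     enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
#     requireAttestationsBy:
#       - projects/<project_id>/attestors/<attestor_name>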
- id: 5.2
text: "Identity and Access Management (IAM)"
checks:
- id: 5.2.1
text: "Ensure GKE clusters are not running using the Compute Engine default service account (Automated))"
audit: |
gcloud container node-pools describe $NODE_POOL --cluster $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.config.serviceAccount'
type: "manual"
remediation: |
Using Command Line:
To create a minimally privileged service account:
gcloud iam service-accounts create <node_sa_name> \
--display-name "GKE Node Service Account"
export NODE_SA_EMAIL=$(gcloud iam service-accounts list \
--format='value(email)' --filter='displayName:GKE Node Service Account')
Grant the following roles to the service account:
export PROJECT_ID=$(gcloud config get-value project)
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/monitoring.viewer
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/logging.logWriter
To create a new Node pool using the Service account, run the following command:
gcloud container node-pools create <node_pool> \
--service-account=<sa_name>@<project_id>.iam.gserviceaccount.com \
--cluster=<cluster_name> --zone <compute_zone>
Note: The workloads will need to be migrated to the new Node pool, and the old node
pools that use the default service account should be deleted to complete the
remediation.
scored: false
- id: 5.2.2
text: "Prefer using dedicated GCP Service Accounts and Workload Identity (Manual)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq .workloadIdentityConfig
type: "manual"
remediation: |
Using Command Line:
gcloud container clusters update <cluster_name> --zone <cluster_zone> \
--workload-pool <project_id>.svc.id.goog
Note that existing Node pools are unaffected. New Node pools default to
--workload-metadata-from-node=GKE_METADATA_SERVER.
Then, modify existing Node pools to enable GKE_METADATA_SERVER:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <cluster_zone> --workload-metadata=GKE_METADATA
Workloads may need to be modified in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
Also consider the effects on the availability of hosted workloads as Node pools
are updated. It may be more appropriate to create new Node Pools.
scored: false
- id: 5.3
text: "Cloud Key Management Service (Cloud KMS)"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.databaseEncryption'
type: "manual"
remediation: |
To create a key:
Create a key ring:
gcloud kms keyrings create <ring_name> --location <location> --project \
<key_project_id>
Create a key:
gcloud kms keys create <key_name> --location <location> --keyring <ring_name> \
--purpose encryption --project <key_project_id>
Grant the Kubernetes Engine Service Agent service account the Cloud KMS
CryptoKey Encrypter/Decrypter role:
gcloud kms keys add-iam-policy-binding <key_name> --location <location> \
--keyring <ring_name> --member serviceAccount:<service_account_name> \
--role roles/cloudkms.cryptoKeyEncrypterDecrypter --project <key_project_id>
To create a new cluster with Application-layer Secrets Encryption:
gcloud container clusters create <cluster_name> --cluster-version=latest \
--zone <zone> \
--database-encryption-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name> \
--project <cluster_project_id>
To enable on an existing cluster:
gcloud container clusters update <cluster_name> --zone <zone> \
--database-encryption-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name> \
--project <cluster_project_id>
scored: false
- id: 5.4
text: "Node Metadata"
checks:
- id: 5.4.1
text: "Ensure the GKE Metadata Server is Enabled (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.nodePools[].config.workloadMetadataConfig'
type: "manual"
remediation: |
Using Command Line:
gcloud container clusters update <cluster_name> --identity-namespace=<project_id>.svc.id.goog
Note that existing Node pools are unaffected. New Node pools default to
--workload-metadata-from-node=GKE_METADATA_SERVER.
To modify an existing Node pool to enable GKE Metadata Server:
gcloud container node-pools update <node_pool_name> --cluster=<cluster_name> \
--workload-metadata-from-node=GKE_METADATA_SERVER
Workloads may need modification in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
scored: false
- id: 5.5
text: "Node Configuration and Maintenance"
checks:
- id: 5.5.1
text: "Ensure Container-Optimized OS (cos_containerd) is used for GKE node images (Automated)"
audit: |
gcloud container node-pools describe $NODE_POOL --cluster $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.config.imageType'
type: "manual"
remediation: |
Using Command Line:
To set the node image to cos for an existing cluster's Node pool:
gcloud container clusters upgrade <cluster_name> --image-type cos_containerd \
--zone <compute_zone> --node-pool <node_pool_name>
scored: false
- id: 5.5.2
text: "Ensure Node Auto-Repair is enabled for GKE nodes (Automated)"
audit: |
gcloud container node-pools describe $POOL_NAME --cluster $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.management'
type: "manual"
remediation: |
Using Command Line:
To enable node auto-repair for an existing cluster's Node pool:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --enable-autorepair
scored: false
- id: 5.5.3
text: "Ensure Node Auto-Upgrade is enabled for GKE nodes (Automated)"
audit: |
gcloud container node-pools describe $POOL_NAME --cluster $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.management'
type: "manual"
remediation: |
Using Command Line:
To enable node auto-upgrade for an existing cluster's Node pool, run the following
command:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <cluster_zone> --enable-autoupgrade
scored: false
- id: 5.5.4
text: "When creating New Clusters - Automate GKE version management using Release Channels (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq .releaseChannel.channel
type: "manual"
remediation: |
Using Command Line:
Create a new cluster by running the following command:
gcloud container clusters create <cluster_name> --zone <cluster_zone> \
--release-channel <release_channel>
where <release_channel> is stable or regular, according to requirements.
scored: false
- id: 5.5.5
text: "Ensure Shielded GKE Nodes are Enabled (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.shieldedNodes'
type: "manual"
remediation: |
Using Command Line:
To migrate an existing cluster, the flag --enable-shielded-nodes needs to be
specified in the cluster update command:
gcloud container clusters update <cluster_name> --zone <cluster_zone> \
--enable-shielded-nodes
scored: false
- id: 5.5.6
text: "Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled (Automated)"
audit: |
gcloud container node-pools describe $POOL_NAME --cluster $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq .config.shieldedInstanceConfig
type: "manual"
remediation: |
Using Command Line:
To create a Node pool within the cluster with Integrity Monitoring enabled, run the
following command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --shielded-integrity-monitoring
Workloads from existing non-conforming Node pools will need to be migrated to the
newly created Node pool, then delete non-conforming Node pools to complete the
remediation
scored: false
- id: 5.5.7
text: "Ensure Secure Boot for Shielded GKE Nodes is Enabled (Automated)"
audit: |
gcloud container node-pools describe $POOL_NAME --cluster $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq .config.shieldedInstanceConfig
type: "manual"
remediation: |
Using Command Line:
To create a Node pool within the cluster with Secure Boot enabled, run the following
command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --shielded-secure-boot
Workloads will need to be migrated from existing non-conforming Node pools to the
newly created Node pool, then delete the non-conforming pools.
scored: false
- id: 5.6
text: "Cluster Networking"
checks:
- id: 5.6.1
text: "Enable VPC Flow Logs and Intranode Visibility (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.networkConfig.enableIntraNodeVisibility'
type: "manual"
remediation: |
Using Command Line:
1. Find the subnetwork name associated with the cluster.
gcloud container clusters describe <cluster_name> \
--region <cluster_region> --format json | jq '.subnetwork'
2. Update the subnetwork to enable VPC Flow Logs.
gcloud compute networks subnets update <subnet_name> --enable-flow-logs
scored: false
- id: 5.6.2
text: "Ensure use of VPC-native clusters (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.ipAllocationPolicy.useIpAliases'
type: "manual"
remediation: |
Using Command Line:
To enable Alias IP on a new cluster, run the following command:
gcloud container clusters create <cluster_name> --zone <compute_zone> \
--enable-ip-alias
scored: false
- id: 5.6.3
text: "Ensure Control Plane Authorized Networks is Enabled (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.masterAuthorizedNetworksConfig'
type: "manual"
remediation: |
Using Command Line:
To enable Control Plane Authorized Networks for an existing cluster, run the following
command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--enable-master-authorized-networks
Along with this, you can list authorized networks using the --master-authorized-networks
flag which contains a list of up to 20 external networks that are allowed to
connect to your cluster's control plane through HTTPS. You provide these networks as
a comma-separated list of addresses in CIDR notation (such as 90.90.100.0/24).
scored: false
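# Illustrative sketch only (not part of the benchmark): supplying the authorized
# CIDR ranges mentioned above in the same update command; the ranges are placeholders.
#
#   gcloud container clusters update <cluster_name> --zone <compute_zone> \
#     --enable-master-authorized-networks \
#     --master-authorized-networks 90.90.100.0/24,<additional_cidr_range>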
- id: 5.6.4
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.privateClusterConfig.enablePrivateEndpoint'
type: "manual"
remediation: |
Using Command Line:
Create a cluster with a Private Endpoint enabled and Public Access disabled by including
the --enable-private-endpoint flag within the cluster create command:
gcloud container clusters create <cluster_name> --enable-private-endpoint
Setting this flag also requires the setting of --enable-private-nodes, --enable-ip-alias
and --master-ipv4-cidr=<master_cidr_range>.
scored: false
- id: 5.6.5
text: "Ensure clusters are created with Private Nodes (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.privateClusterConfig.enablePrivateNodes'
type: "manual"
remediation: |
Using Command Line:
To create a cluster with Private Nodes enabled, include the --enable-private-nodes
flag within the cluster create command:
gcloud container clusters create <cluster_name> --enable-private-nodes
Setting this flag also requires the setting of --enable-ip-alias and
--master-ipv4-cidr=<master_cidr_range>.
scored: false
- id: 5.6.6
text: "Consider firewalling GKE worker nodes (Manual)"
audit: |
gcloud compute instances describe $INSTANCE_NAME --zone $COMPUTE_ZONE --format json | jq '{tags: .tags.items[], serviceaccount:.serviceAccounts[].email, network: .networkInterfaces[].network}'
type: "manual"
remediation: |
Using Command Line:
Use the following command to generate firewall rules, setting the variables as
appropriate:
gcloud compute firewall-rules create <firewall_rule_name> \
--network <network> --priority <priority> --direction <direction> \
--action <action> --target-tags <tag> \
--target-service-accounts <service_account> \
--source-ranges <source_cidr_range> --source-tags <source_tags> \
--source-service-accounts <source_service_account> \
--destination-ranges <destination_cidr_range> --rules <rules>
scored: false
- id: 5.6.7
text: "Ensure use of Google-managed SSL Certificates (Automated)"
audit: |
svc_json="$(kubectl get svc -A -o json 2>/dev/null || echo '{"items":[],"__err":"SVC_FORBIDDEN"}')"
ing_json="$(kubectl get ingress -A -o json 2>/dev/null || echo '{"items":[],"__err":"INGRESS_FORBIDDEN"}')"
mc_json="$(kubectl get managedcertificates -A -o json 2>/dev/null || echo '{"items":[],"__err":"MC_FORBIDDEN"}')"
printf '%s\n%s\n%s\n' "$svc_json" "$ing_json" "$mc_json" \
| jq -rs '
(.[0] // {}) as $svcsRaw |
(.[1] // {}) as $ingsRaw |
(.[2] // {}) as $mcsRaw |
# If any list failed, surface an error and DO NOT print the success string
if ($svcsRaw.__err or $ingsRaw.__err or $mcsRaw.__err) then
"ERROR_KUBECTL_LIST:" +
([
($svcsRaw.__err // empty),
($ingsRaw.__err // empty),
($mcsRaw.__err // empty)
] | join(","))
else
($svcsRaw.items // []) as $svcs |
($ingsRaw.items // []) as $ings |
($mcsRaw.items // []) as $mcs |
def trim: gsub("^\\s+|\\s+$";"");
def hasmc($ns;$name): any($mcs[]?; .metadata.namespace==$ns and .metadata.name==$name);
([
# Public Services (not eligible for managed certs)
$svcs[]? | select(.spec.type=="LoadBalancer")
| "FOUND_PUBLIC_LB_SERVICE:\(.metadata.namespace // "default"):\(.metadata.name)"
] + [
# Ingresses missing managed-certs annotation
$ings[]? as $i
| ($i.metadata.annotations."networking.gke.io/managed-certificates" // "") as $ann
| select($ann=="")
| "FOUND_INGRESS_WITHOUT_MANAGED_CERT:\($i.metadata.namespace // "default"):\($i.metadata.name)"
] + [
# Ingresses referencing non-existent ManagedCertificate(s)
$ings[]? as $i
| ($i.metadata.annotations."networking.gke.io/managed-certificates" // "") as $ann
| select($ann!="")
| ($i.metadata.namespace // "default") as $ns
| ($ann | split(",") | map(trim) | map(select(length>0)) | .[]) as $mc
| select(hasmc($ns;$mc) | not)
| "FOUND_MISSING_MANAGED_CERT_RESOURCE:\($ns):\($i.metadata.name):cert=\($mc)"
]) as $f
| if ($f|length)>0
then $f[]
else "ALL_INGRESSES_USE_MANAGED_CERTS_AND_NO_PUBLIC_LB_SERVICES"
end
end
'
tests:
test_items:
- flag: "ALL_INGRESSES_USE_MANAGED_CERTS_AND_NO_PUBLIC_LB_SERVICES"
set: true
compare:
op: eq
value: "ALL_INGRESSES_USE_MANAGED_CERTS_AND_NO_PUBLIC_LB_SERVICES"
remediation: |
If services of type:LoadBalancer are discovered, consider replacing the Service with
an Ingress.
To configure the Ingress and use Google-managed SSL certificates, follow the
instructions as listed at: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs.
scored: true
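# Illustrative sketch only (not part of the benchmark): a ManagedCertificate resource
# and the Ingress annotation the audit above looks for; names and domain are placeholders.
#
#   apiVersion: networking.gke.io/v1
#   kind: ManagedCertificate
#   metadata:
#     name: <certificate_name>
#   spec:
#     domains:
#       - <domain_name>
#
# Reference it from the Ingress with the annotation:
#   networking.gke.io/managed-certificates: <certificate_name>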
- id: 5.7
text: "Logging"
checks:
- id: 5.7.1
text: "Ensure Logging and Cloud Monitoring is Enabled (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.loggingService'
type: "manual"
remediation: |
To enable Logging for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--logging=<components_to_be_logged>
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--logging
for a list of available components for logging.
To enable Cloud Monitoring for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--monitoring=<components_to_be_logged>
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--monitoring
for a list of available components for Cloud Monitoring.
scored: false
- id: 5.7.2
text: "Enable Linux auditd logging (Manual)"
audit: |
kubectl get daemonsets -A -o json | jq '.items[] | select (.spec.template.spec.containers[].image | contains ("gcr.io/stackdriver-agents/stackdriver-logging-agent"))'| jq '{name: .metadata.name, annotations: .metadata.annotations."kubernetes.io/description", namespace: .metadata.namespace, status: .status}'
type: "manual"
remediation: |
Using Command Line:
Download the example manifests:
curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml > cos-auditd-logging.yaml
Edit the example manifests if needed. Then, deploy them:
kubectl apply -f cos-auditd-logging.yaml
Verify that the logging Pods have started. If a different Namespace was defined in the
manifests, replace cos-auditd with the name of the namespace being used:
kubectl get pods --namespace=cos-auditd
scored: false
- id: 5.8
text: "Authentication and Authorization"
checks:
- id: 5.8.1
text: "Ensure authentication using Client Certificates is Disabled (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.masterAuth.clientKey'
type: "manual"
remediation: |
Using Command Line:
Create a new cluster without a Client Certificate:
gcloud container clusters create [CLUSTER_NAME] \
--no-issue-client-certificate
scored: false
- id: 5.8.2
text: "Manage Kubernetes RBAC users with Google Groups for GKE (Manual)"
audit: |
gcloud container clusters create <cluster_name> --security-group <security_group_name>
type: "manual"
remediation: |
Using Command Line:
Follow the G Suite Groups instructions at:
https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#google-groups-for-gke.
Then, create a cluster with:
gcloud container clusters create <cluster_name> --security-group <security_group_name>
Finally create Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings that
reference the G Suite Groups.
scored: false
- id: 5.8.3
text: "Ensure Legacy Authorization (ABAC) is Disabled (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.legacyAbac'
type: "manual"
remediation: |
Using Command Line:
To disable Legacy Authorization for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--no-enable-legacy-authorization
scored: false
- id: 5.9
text: "Storage"
checks:
- id: 5.9.1
text: "Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD) (Manual)"
audit: |
gcloud compute disks describe $PV_NAME --zone $COMPUTE_ZONE --format json | jq '.diskEncryptionKey.kmsKeyName'
type: "manual"
remediation: |
Using Command Line:
Follow the instructions detailed at: https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek.
scored: false
- id: 5.9.2
text: "Enable Customer-Managed Encryption Keys (CMEK) for Boot Disks (Automated)"
audit: |
gcloud container node-pools describe $NODE_POOL --cluster $CLUSTER_NAME --zone $COMPUTE_ZONE
type: "manual"
remediation: |
Using Command Line:
Create a new node pool using customer-managed encryption keys for the node boot
disk, of <disk_type> either pd-standard or pd-ssd:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> \
--disk-type <disk_type> \
--boot-disk-kms-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>
Create a cluster using customer-managed encryption keys for the node boot disk, of
<disk_type> either pd-standard or pd-ssd:
gcloud container clusters create <cluster_name> --disk-type <disk_type> \
--boot-disk-kms-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>
scored: false
- id: 5.10
text: "Other Cluster Configurations"
checks:
- id: 5.10.1
text: "Ensure Kubernetes Web UI is Disabled (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.addonsConfig.kubernetesDashboard'
type: "manual"
remediation: |
Using Command Line:
To disable the Kubernetes Dashboard on an existing cluster, run the following
command:
gcloud container clusters update <cluster_name> --zone <zone> \
--update-addons=KubernetesDashboard=DISABLED
scored: false
- id: 5.10.2
text: "Ensure that Alpha clusters are not used for production workloads (Automated)"
audit: |
gcloud container clusters describe $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.enableKubernetesAlpha'
type: "manual"
remediation: |
Using Command Line:
Upon creating a new cluster
gcloud container clusters create [CLUSTER_NAME] \
--zone [COMPUTE_ZONE]
Do not use the --enable-kubernetes-alpha argument.
scored: false
- id: 5.10.3
text: "Consider GKE Sandbox for running untrusted workloads (Manual)"
audit: |
gcloud container node-pools describe $NODE_POOL --cluster $CLUSTER_NAME --zone $COMPUTE_ZONE --format json | jq '.config.sandboxConfig'
type: "manual"
remediation: |
Using Command Line:
To enable GKE Sandbox on an existing cluster, a new Node pool must be created,
which can be done using:
gcloud container node-pools create <node_pool_name> --zone <compute-zone> \
--cluster <cluster_name> --image-type=cos_containerd --sandbox="type=gvisor"
scored: false
- id: 5.10.5
text: "Enable Security Posture (Manual)"
audit: "gcloud container clusters --location describe"
type: "manual"
remediation: |
Enable security posture via the UI, gCloud or API.
https://cloud.google.com/kubernetes-engine/docs/how-to/protect-workload-configuration
scored: false


@@ -0,0 +1,6 @@
---
controls:
version: "gke-1.8.0"
id: 1
text: "Control Plane Components"
type: "master"

65
cfg/gke-1.8.0/node.yaml Normal file

@@ -0,0 +1,65 @@
---
controls:
version: "gke-1.8.0"
id: 3
text: "Worker Nodes"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $proxykubeconfig
scored: true
- id: 3.1.2
text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example:
chown root:root $proxykubeconfig
scored: true
- id: 3.1.3
text: "Ensure that the kubelet configuration file has permissions set to 644 (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the kubelet config file location)
chmod 644 $kubeletconf
scored: true
- id: 3.1.4
text: "Ensure that the kubelet configuration file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true

473
cfg/gke-1.8.0/policies.yaml Normal file

@@ -0,0 +1,473 @@
---
controls:
version: "gke-1.8.0"
id: 4
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
audit: |
kubectl get clusterrolebindings -o json | jq -r '
[
.items[]
| select(.roleRef.name == "cluster-admin")
| .subjects[]?
| select(.kind != "Group" or .name != "system:masters")
]
| if length == 0
then "NO_CLUSTER_ADMIN_BINDINGS"
else "FOUND_CLUSTER_ADMIN_BINDING"
end
'
tests:
test_items:
- flag: "NO_CLUSTER_ADMIN_BINDINGS"
set: true
compare:
op: eq
value: "NO_CLUSTER_ADMIN_BINDINGS"
remediation: |
Identify all ClusterRoleBindings to the "cluster-admin" role and review their subjects:
kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name | grep cluster-admin
If non-system principals (users, groups, or service accounts) do not strictly require cluster-admin,
rebind them to a least-privileged (Cluster)Role and then remove the excessive binding:
kubectl delete clusterrolebinding <binding-name>
Notes:
- Do not modify bindings with the "system:" prefix that are required for core components.
- Prefer assigning narrowly scoped Roles/ClusterRoles that grant only the permissions needed.
scored: true
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
audit: |
count=$(kubectl get roles --all-namespaces -o json | jq '
.items[]
| select(.rules[]?
| (.resources[]? == "secrets")
and ((.verbs[]? == "get") or (.verbs[]? == "list") or (.verbs[]? == "watch"))
)' | wc -l)
if [ "$count" -gt 0 ]; then
echo "SECRETS_ACCESS_FOUND"
fi
tests:
test_items:
- flag: "SECRETS_ACCESS_FOUND"
set: false
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: true
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
audit: |
wildcards=$(kubectl get roles --all-namespaces -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
wildcards_clusterroles=$(kubectl get clusterroles -o json | jq '
.items[] | select(
.rules[]? | (.verbs[]? == "*" or .resources[]? == "*" or .apiGroups[]? == "*")
)' | wc -l)
total=$((wildcards + wildcards_clusterroles))
if [ "$total" -gt 0 ]; then
echo "wildcards_present"
fi
tests:
test_items:
- flag: wildcards_present
set: false
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: true
- id: 4.1.4
text: "Ensure that default service accounts are not actively used (Automated)"
audit: |
echo "🔹 Default Service Accounts with automountServiceAccountToken enabled:"
default_sa_count=$(kubectl get serviceaccounts --all-namespaces -o json | jq '
[.items[] | select(.metadata.name == "default" and (.automountServiceAccountToken != false))] | length')
if [ "$default_sa_count" -gt 0 ]; then
echo "default_sa_not_auto_mounted"
fi
echo "\n🔹 Pods using default ServiceAccount:"
pods_using_default_sa=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.serviceAccountName == "default")] | length')
if [ "$pods_using_default_sa" -gt 0 ]; then
echo "default_sa_used_in_pods"
fi
tests:
test_items:
- flag: default_sa_not_auto_mounted
set: false
- flag: default_sa_used_in_pods
set: false
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific
access to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: true
- id: 4.1.5
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
audit: |
echo "🔹 Pods with automountServiceAccountToken enabled:"
pods_with_token_mount=$(kubectl get pods --all-namespaces -o json | jq '
[.items[] | select(.spec.automountServiceAccountToken != false)] | length')
if [ "$pods_with_token_mount" -gt 0 ]; then
echo "automountServiceAccountToken"
fi
tests:
test_items:
- flag: automountServiceAccountToken
set: false
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: true
- id: 4.1.6
text: "Avoid use of system:masters group (Automated)"
audit: |
found=0
for csr in $(kubectl get csr -o name 2>/dev/null | sed 's|^.*/||'); do
req=$(kubectl get csr "$csr" -o jsonpath='{.spec.request}' 2>/dev/null)
[ -z "$req" ] && continue
if echo "$req" | base64 -d 2>/dev/null | openssl req -noout -text 2>/dev/null | grep -q 'O = system:masters'; then
conds=$(kubectl get csr "$csr" -o json | jq -r '[.status.conditions[]?.type] | join(",")')
echo "FOUND_SYSTEM_MASTERS_CSR:${csr}:${conds:-NONE}"
found=1
fi
done
if [ "$found" -eq 0 ]; then
echo "NO_SYSTEM_MASTERS_CREDENTIALS_FOUND"
fi
tests:
test_items:
- flag: "NO_SYSTEM_MASTERS_CREDENTIALS_FOUND"
set: true
compare:
op: eq
value: "NO_SYSTEM_MASTERS_CREDENTIALS_FOUND"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: true
- id: 4.1.7
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 4.1.8
text: "Avoid bindings to system:anonymous (Automated)"
audit: |
# Flags any ClusterRoleBinding/RoleBinding that targets the user "system:anonymous".
# Prints "NO_ANONYMOUS_BINDINGS" when none are found.
(
kubectl get clusterrolebindings -o json | jq -r '
.items[]
| select((.subjects | length) > 0)
| select(any(.subjects[]?;
.kind=="User" and .name=="system:anonymous"
))
| "FOUND_ANONYMOUS:ClusterRoleBinding:\(.metadata.name):ROLE=\(.roleRef.kind)/\(.roleRef.name)"
';
kubectl get rolebindings -A -o json | jq -r '
.items[]
| select((.subjects | length) > 0)
| select(any(.subjects[]?;
.kind=="User" and .name=="system:anonymous"
))
| "FOUND_ANONYMOUS:RoleBinding:\(.metadata.namespace):\(.metadata.name):ROLE=\(.roleRef.kind)/\(.roleRef.name)"
'
) | (grep -q '^FOUND_ANONYMOUS:' && cat || echo 'NO_ANONYMOUS_BINDINGS')
tests:
test_items:
- flag: "NO_ANONYMOUS_BINDINGS"
set: true
compare:
op: eq
value: "NO_ANONYMOUS_BINDINGS"
remediation: |
Identify all clusterrolebindings and rolebindings to the user system:anonymous.
Check if they are used and review the permissions associated with the binding using the
commands in the Audit section above or refer to GKE documentation
(https://cloud.google.com/kubernetes-engine/docs/best-practices/rbac#detect-prevent-default).
Strongly consider replacing unsafe bindings with an authenticated, user-defined group.
Where possible, bind to non-default, user-defined groups with least-privilege roles.
If there are any unsafe bindings to the user system:anonymous, proceed to delete them
after consideration for cluster operations with only necessary, safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: true
- id: 4.1.9
text: "Avoid non-default bindings to system:unauthenticated (Automated)"
audit: |
# Flags any non-default binding to the group "system:unauthenticated".
# Prints "NO_NON_DEFAULT_UNAUTH_BINDINGS" when none are found.
(
kubectl get clusterrolebindings -o json | jq -r '
.items[]
| select(.metadata.name != "system:public-info-viewer")
| select((.subjects | length) > 0)
| select(any(.subjects[]?;
.kind=="Group" and .name=="system:unauthenticated"
))
| "FOUND_UNAUTH:ClusterRoleBinding:\(.metadata.name):ROLE=\(.roleRef.kind)/\(.roleRef.name)"
';
kubectl get rolebindings -A -o json | jq -r '
.items[]
| select((.subjects | length) > 0)
| select(any(.subjects[]?;
.kind=="Group" and .name=="system:unauthenticated"
))
| "FOUND_UNAUTH:RoleBinding:\(.metadata.namespace):\(.metadata.name):ROLE=\(.roleRef.kind)/\(.roleRef.name)"
'
) | (grep -q "^FOUND_UNAUTH:" && cat || echo "NO_NON_DEFAULT_UNAUTH_BINDINGS")
tests:
test_items:
- flag: "NO_NON_DEFAULT_UNAUTH_BINDINGS"
set: true
compare:
op: eq
value: "NO_NON_DEFAULT_UNAUTH_BINDINGS"
remediation: |
Identify all non-default clusterrolebindings and rolebindings to the group
system:unauthenticated. Check if they are used and review the permissions
associated with the binding using the commands in the Audit section above or refer to
GKE documentation (https://cloud.google.com/kubernetes-engine/docs/best-practices/rbac#detect-prevent-default).
Strongly consider replacing non-default, unsafe bindings with an authenticated,
user-defined group. Where possible, bind to non-default, user-defined groups with
least-privilege roles.
If there are any non-default, unsafe bindings to the group system:unauthenticated,
proceed to delete them after consideration for cluster operations with only necessary,
safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: true
- id: 4.1.10
text: "Avoid non-default bindings to system:authenticated (Automated)"
audit: |
# Flags any non-default binding to the group "system:authenticated".
# Allowed defaults (CRB): system:basic-user, system:discovery
# Prints "NO_NON_DEFAULT_AUTH_BINDINGS" when none are found.
(
kubectl get clusterrolebindings -o json | jq -r '
.items[]
| select((.metadata.name != "system:basic-user") and (.metadata.name != "system:discovery"))
| select((.subjects | length) > 0)
| select(any(.subjects[]?;
.kind=="Group" and .name=="system:authenticated"
))
| "FOUND_AUTH:ClusterRoleBinding:\(.metadata.name):ROLE=\(.roleRef.kind)/\(.roleRef.name)"
';
kubectl get rolebindings -A -o json | jq -r '
.items[]
| select((.subjects | length) > 0)
| select(any(.subjects[]?;
.kind=="Group" and .name=="system:authenticated"
))
| "FOUND_AUTH:RoleBinding:\(.metadata.namespace):\(.metadata.name):ROLE=\(.roleRef.kind)/\(.roleRef.name)"
'
) | (grep -q "^FOUND_AUTH:" && cat || echo "NO_NON_DEFAULT_AUTH_BINDINGS")
tests:
test_items:
- flag: "NO_NON_DEFAULT_AUTH_BINDINGS"
set: true
compare:
op: eq
value: "NO_NON_DEFAULT_AUTH_BINDINGS"
remediation: |
Identify all non-default clusterrolebindings and rolebindings to the group
system:authenticated. Check if they are used and review the permissions associated
with the binding using the commands in the Audit section above or refer to GKE
documentation.
Strongly consider replacing non-default, unsafe bindings with an authenticated,
user-defined group. Where possible, bind to non-default, user-defined groups with
least-privilege roles.
If there are any non-default, unsafe bindings to the group system:authenticated,
proceed to delete them after consideration for cluster operations with only necessary,
safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: true
- id: 4.2
text: "Pod Security Standards"
checks:
- id: 4.2.1
text: "Ensure that the cluster enforces Pod Security Standard Baseline profile or stricter for all namespaces. (Manual)"
type: "manual"
remediation: |
Ensure that Pod Security Admission is in place for every namespace which contains
user workloads.
Run the following command to enforce the Baseline profile in a namespace:
kubectl label namespace <namespace-name> pod-security.kubernetes.io/enforce=baseline
scored: false
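# Illustrative only, not part of the benchmark file: a sketch of enforcing the Baseline
# Pod Security Standard through namespace labels; "my-namespace" is a placeholder.
#
#   apiVersion: v1
#   kind: Namespace
#   metadata:
#     name: my-namespace
#     labels:
#       pod-security.kubernetes.io/enforce: baseline
#       pod-security.kubernetes.io/warn: restricted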
- id: 4.3
text: "Network Policies and CNI"
checks:
- id: 4.3.1
text: "Ensure that the CNI in use supports Network Policies (Manual)"
type: "manual"
remediation: |
To use a CNI plugin with Network Policy, enable Network Policy in GKE, and the CNI plugin
will be updated. See Recommendation 5.6.7.
scored: false
- id: 4.3.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
audit: |
(kubectl get ns -o json; kubectl get networkpolicy -A -o json) \
| jq -rs '
(.[0].items | map(.metadata.name)
| map(select(.!="kube-system" and .!="kube-public" and .!="kube-node-lease"))) as $ns
|
( (.[1].items // [])
| sort_by(.metadata.namespace)
| group_by(.metadata.namespace)
| map({key: .[0].metadata.namespace, value: length})
| from_entries
) as $np
|
[ $ns[] | select( ($np[.] // 0) == 0 ) ] as $missing
|
if ($missing|length)>0
then ($missing[] | "FOUND_NAMESPACE_WITHOUT_NETWORKPOLICY:"+.)
else "ALL_NAMESPACES_HAVE_NETWORK_POLICIES"
end
'
tests:
test_items:
- flag: "ALL_NAMESPACES_HAVE_NETWORK_POLICIES"
set: true
compare:
op: eq
value: "ALL_NAMESPACES_HAVE_NETWORK_POLICIES"
remediation: |
Follow the documentation and create NetworkPolicy objects as needed.
See: https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy#creating_a_network_policy
for more information.
scored: true
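# Illustrative only, not part of the benchmark file: a minimal default-deny-ingress
# NetworkPolicy that gives a namespace at least one policy; "my-namespace" is a placeholder.
#
#   apiVersion: networking.k8s.io/v1
#   kind: NetworkPolicy
#   metadata:
#     name: default-deny-ingress
#     namespace: my-namespace
#   spec:
#     podSelector: {}
#     policyTypes:
#       - Ingress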
- id: 4.4
text: "Secrets Management"
checks:
- id: 4.4.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
audit: |
output=$(kubectl get all --all-namespaces -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}')
if [ -z "$output" ]; then echo "NO_ENV_SECRET_REFERENCES"; else echo "ENV_SECRET_REFERENCES_FOUND"; fi
tests:
test_items:
- flag: "NO_ENV_SECRET_REFERENCES"
set: true
compare:
op: eq
value: "NO_ENV_SECRET_REFERENCES"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
scored: true
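# Illustrative only, not part of the benchmark file: a sketch of consuming a Secret as a
# mounted file instead of a secretKeyRef environment variable; names and image are placeholders.
#
#   apiVersion: v1
#   kind: Pod
#   metadata:
#     name: secret-as-file
#   spec:
#     containers:
#       - name: app
#         image: nginx:1.27
#         volumeMounts:
#           - name: creds
#             mountPath: /etc/creds
#             readOnly: true
#     volumes:
#       - name: creds
#         secret:
#           secretName: app-credentials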
- id: 4.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.5
text: "Extensible Admission Control"
checks:
- id: 4.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
Also see recommendation 5.10.4.
scored: false
- id: 4.6
text: "General Policies"
checks:
- id: 4.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.6.2
text: "Ensure that the seccomp profile is set to RuntimeDefault in your pod definitions (Automated)"
type: "manual"
remediation: |
Use security context to enable the RuntimeDefault seccomp profile in your pod
definitions. An example is as below:
{
"namespace": "kube-system",
"name": "metrics-server-v0.7.0-dbcc8ddf6-gz7d4",
"seccompProfile": "RuntimeDefault"
}
scored: false
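# Illustrative only, not part of the benchmark file: a sketch of the securityContext that
# sets the RuntimeDefault seccomp profile in a pod definition; name and image are placeholders.
#
#   apiVersion: v1
#   kind: Pod
#   metadata:
#     name: seccomp-default
#   spec:
#     securityContext:
#       seccompProfile:
#         type: RuntimeDefault
#     containers:
#       - name: app
#         image: nginx:1.27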
- id: 4.6.3
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Google
Container-Optimized OS Benchmark.
scored: false
- id: 4.6.4
text: "The default namespace should not be used (Automated)"
audit: |
output=$(kubectl get all -n default --no-headers 2>/dev/null | grep -v '^service\s\+kubernetes\s' || true)
if [ -z "$output" ]; then echo "DEFAULT_NAMESPACE_UNUSED"; else echo "DEFAULT_NAMESPACE_IN_USE"; fi
tests:
test_items:
- flag: "DEFAULT_NAMESPACE_UNUSED"
set: true
compare:
op: eq
value: "DEFAULT_NAMESPACE_UNUSED"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: true
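# Illustrative only, not part of the benchmark file: a sketch of keeping workloads out of
# the default namespace; "team-a" and the manifest name are placeholders.
#
#   kubectl create namespace team-a
#   kubectl apply -n team-a -f deployment.yaml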


@@ -24,7 +24,8 @@ master:
etcd:
bins:
- containerd
datadirs:
- /var/lib/rancher/k3s/server/db/etcd
node:
components:
- kubelet


@@ -21,7 +21,7 @@ groups:
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'"
type: "manual"
tests:
test_items:


@@ -10,15 +10,13 @@ groups:
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "check_for_k3s_etcd.sh 2.1"
audit: "grep -A 4 'client-transport-security' $etcdconf | grep -E 'cert-file|key-file'"
tests:
bin_op: and
test_items:
- flag: "cert-file"
env: "ETCD_CERT_FILE"
set: true
- flag: "key-file"
env: "ETCD_KEY_FILE"
set: true
remediation: |
Follow the etcd service documentation and configure TLS encryption.
@@ -30,14 +28,13 @@ groups:
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "check_for_k3s_etcd.sh 2.2"
audit: "grep -A 4 'client-transport-security' $etcdconf | grep 'client-cert-auth'"
tests:
bin_op: or
test_items:
- flag: "--client-cert-auth"
set: true
- flag: "client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
@@ -50,15 +47,13 @@ groups:
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "check_for_k3s_etcd.sh 2.3"
audit: "if grep -q '^auto-tls' $etcdconf;then grep '^auto-tls' $etcdconf;else echo 'notset';fi"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
@@ -70,15 +65,13 @@ groups:
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)"
audit: "check_for_k3s_etcd.sh 2.4"
audit: "grep -A 4 'peer-transport-security' $etcdconf | grep -E 'cert-file|key-file'"
tests:
bin_op: and
test_items:
- flag: "cert-file"
env: "ETCD_PEER_CERT_FILE"
set: true
- flag: "key-file"
env: "ETCD_PEER_KEY_FILE"
set: true
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
@@ -91,14 +84,13 @@ groups:
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "check_for_k3s_etcd.sh 2.5"
audit: "grep -A 4 'peer-transport-security' $etcdconf | grep 'client-cert-auth'"
tests:
bin_op: or
test_items:
- flag: "--client-cert-auth"
set: true
- flag: "client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
@@ -111,15 +103,13 @@ groups:
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "check_for_k3s_etcd.sh 2.6"
audit: "if grep -q '^peer-auto-tls' $etcdconf;then grep '^peer-auto-tls' $etcdconf;else echo 'notset';fi"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
@@ -132,11 +122,10 @@ groups:
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "check_for_k3s_etcd.sh 2.7"
audit: "if grep -q 'trusted-ca-file' $etcdconf;then grep 'trusted-ca-file' $etcdconf;else echo 'notset';fi"
tests:
test_items:
- flag: "trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
set: true
remediation: |
[Manual test]


@@ -155,7 +155,7 @@ groups:
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: "check_for_k3s_etcd.sh 1.1.11"
audit: "stat -c %a $etcddatadir"
tests:
test_items:
- flag: "700"
@@ -323,7 +323,7 @@ groups:
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'"
type: manual
tests:
test_items:
@@ -371,7 +371,7 @@ groups:
- id: 1.2.4
text: "Ensure that the --kubelet-https argument is set to true (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-https'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-https'"
type: "skip"
tests:
bin_op: or
@@ -389,7 +389,7 @@ groups:
- id: 1.2.5
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'"
tests:
bin_op: and
test_items:
@@ -406,7 +406,7 @@ groups:
- id: 1.2.6
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
@@ -420,7 +420,7 @@ groups:
- id: 1.2.7
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
tests:
test_items:
- flag: "--authorization-mode"
@@ -436,7 +436,7 @@ groups:
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
tests:
test_items:
- flag: "--authorization-mode"
@@ -451,7 +451,7 @@ groups:
- id: 1.2.9
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
tests:
test_items:
- flag: "--authorization-mode"
@@ -466,7 +466,7 @@ groups:
- id: 1.2.10
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
test_items:
- flag: "--enable-admission-plugins"
@@ -483,7 +483,7 @@ groups:
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
bin_op: or
test_items:
@@ -517,7 +517,7 @@ groups:
- id: 1.2.13
text: "Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
bin_op: or
test_items:
@@ -538,7 +538,7 @@ groups:
- id: 1.2.14
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
tests:
bin_op: or
test_items:
@@ -557,7 +557,7 @@ groups:
- id: 1.2.15
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
tests:
bin_op: or
test_items:
@@ -575,7 +575,7 @@ groups:
- id: 1.2.16
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
test_items:
- flag: "--enable-admission-plugins"
@@ -592,7 +592,7 @@ groups:
- id: 1.2.17
text: "Ensure that the --secure-port argument is not set to 0 (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port'"
tests:
bin_op: or
test_items:
@@ -610,7 +610,7 @@ groups:
- id: 1.2.18
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'"
tests:
test_items:
- flag: "--profiling"
@@ -625,7 +625,7 @@ groups:
- id: 1.2.19
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
@@ -639,7 +639,7 @@ groups:
- id: 1.2.20
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
@@ -656,7 +656,7 @@ groups:
- id: 1.2.21
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
@@ -673,7 +673,7 @@ groups:
- id: 1.2.22
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
@@ -689,7 +689,7 @@ groups:
- id: 1.2.23
text: "Ensure that the --request-timeout argument is set as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
@@ -702,7 +702,7 @@ groups:
- id: 1.2.24
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
tests:
bin_op: or
test_items:
@@ -722,7 +722,7 @@ groups:
- id: 1.2.25
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
@@ -736,7 +736,7 @@ groups:
- id: 1.2.26
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "check_for_k3s_etcd.sh 1.2.29"
audit: "journalctl -m -u k3s | grep -m1 'Running kube-apiserver'"
tests:
bin_op: and
test_items:
@@ -754,7 +754,7 @@ groups:
- id: 1.2.27
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep -A1 'Running kube-apiserver' | tail -n2"
audit: "journalctl -m -u k3s | grep -A1 'Running kube-apiserver' | tail -n2"
tests:
bin_op: and
test_items:
@@ -772,7 +772,7 @@ groups:
- id: 1.2.28
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'"
tests:
test_items:
- flag: "--client-ca-file"
@@ -785,7 +785,7 @@ groups:
- id: 1.2.29
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'"
tests:
test_items:
- flag: "--etcd-cafile"
@@ -798,7 +798,7 @@ groups:
- id: 1.2.30
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'"
tests:
test_items:
- flag: "--encryption-provider-config"
@@ -820,7 +820,7 @@ groups:
- id: 1.2.32
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'"
tests:
test_items:
- flag: "--tls-cipher-suites"
@@ -845,7 +845,7 @@ groups:
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
@@ -857,7 +857,7 @@ groups:
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'"
tests:
test_items:
- flag: "--profiling"
@@ -872,7 +872,7 @@ groups:
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'"
tests:
test_items:
- flag: "--use-service-account-credentials"
@@ -887,7 +887,7 @@ groups:
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'"
tests:
test_items:
- flag: "--service-account-private-key-file"
@@ -900,7 +900,7 @@ groups:
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'"
tests:
test_items:
- flag: "--root-ca-file"
@@ -912,7 +912,7 @@ groups:
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'RotateKubeletServerCertificate'"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'RotateKubeletServerCertificate'"
type: "skip"
tests:
bin_op: or
@@ -953,7 +953,7 @@ groups:
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1"
audit: "journalctl -m -u k3s | grep 'Running kube-scheduler' | tail -n1"
tests:
test_items:
- flag: "--profiling"
@@ -969,7 +969,7 @@ groups:
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'"
audit: "journalctl -m -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'"
tests:
bin_op: or
test_items:


@@ -186,7 +186,7 @@ groups:
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'' '
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | wc -l) -gt 0; then journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'' '
tests:
test_items:
- flag: "--anonymous-auth"
@@ -209,7 +209,7 @@ groups:
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" | grep -v grep; else echo "--authorization-mode=Webhook"; fi'' '
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | wc -l) -gt 0; then journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | tail -n1 | grep "authorization-mode" | grep -v grep; else echo "--authorization-mode=Webhook"; fi'' '
tests:
test_items:
- flag: --authorization-mode
@@ -231,7 +231,7 @@ groups:
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -D /var/log/journal -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" | grep -v grep; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'' '
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | wc -l) -gt 0; then journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | tail -n1 | grep "client-ca-file" | grep -v grep; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'' '
tests:
test_items:
- flag: --client-ca-file
@@ -251,7 +251,7 @@ groups:
- id: 4.2.4
text: "Ensure that the --read-only-port argument is set to 0 (Manual)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port' "
audit: "journalctl -m -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port' "
tests:
bin_op: or
test_items:
@@ -276,7 +276,7 @@ groups:
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout'"
audit: "journalctl -m -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout'"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
@@ -302,7 +302,7 @@ groups:
- id: 4.2.6
text: "Ensure that the --protect-kernel-defaults argument is set to true (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'protect-kernel-defaults'"
audit: "journalctl -m -u k3s | grep 'Running kubelet' | tail -n1 | grep 'protect-kernel-defaults'"
type: "skip"
tests:
test_items:
@@ -325,7 +325,7 @@ groups:
- id: 4.2.7
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains'"
audit: "journalctl -m -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains'"
type: "skip"
tests:
test_items:
@@ -393,7 +393,7 @@ groups:
- id: 4.2.10
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kubelet' | tail -n1"
audit: "journalctl -m -u k3s | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --tls-cert-file
@@ -477,7 +477,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file


@@ -153,7 +153,7 @@ groups:
type: "manual"
remediation: |
Review the use of capabilites in applications running on your cluster. Where a namespace
contains applicaions which do not require any Linux capabities to operate consider adding
contains applications which do not require any Linux capabities to operate consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false


@@ -16,31 +16,43 @@ master:
scheduler:
bins:
- containerd
kubeconfig:
- /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig
controllermanager:
bins:
- containerd
kubeconfig:
- /var/lib/rancher/k3s/server/cred/controller.kubeconfig
etcd:
bins:
- containerd
node:
components:
- kubelet
- proxy
etcd:
components:
- etcd
kubelet:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubelet.kubeconfig
defaultcafile: /var/lib/rancher/k3s/agent/client-ca.crt
etcd:
confs: /var/lib/rancher/k3s/server/db/etcd/config
proxy:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
node:
components:
- kubelet
- proxy
policies:
components:
- policies
kubelet:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubelet.kubeconfig
defaultcafile: /var/lib/rancher/k3s/agent/client-ca.crt
proxy:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
policies:
components:
- policies


@@ -21,7 +21,7 @@ groups:
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Automated)"
audit: "journalctl -D /var/log/journal -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'"
tests:
test_items:
- flag: "--audit-policy-file"


@@ -10,139 +10,135 @@ groups:
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "check_for_k3s_etcd.sh 2.1"
audit_config: "cat $etcdconf"
tests:
bin_op: and
test_items:
- flag: "cert-file"
env: "ETCD_CERT_FILE"
set: true
- flag: "key-file"
env: "ETCD_KEY_FILE"
set: true
- path: "{.client-transport-security.cert-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/server-client.crt"
- path: "{.client-transport-security.key-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/server-client.key"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/ca-file>
--key-file=</path/to/key-file>
scored: true
If running on with sqlite or a external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates cert and key files for etcd.
These are located in /var/lib/rancher/k3s/server/tls/etcd/.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use custom cert and key files.
scored: false
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "check_for_k3s_etcd.sh 2.2"
audit_config: "cat $etcdconf"
tests:
bin_op: or
test_items:
- flag: "--client-cert-auth"
set: true
- flag: "client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
- path: "{.client-transport-security.client-cert-auth}"
compare:
op: eq
value: true
set: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
If running on with sqlite or a external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s sets the --client-cert-auth parameter to true.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to disable client certificate authentication.
scored: false
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "check_for_k3s_etcd.sh 2.3"
audit_config: "cat $etcdconf"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
- path: "{.client-transport-security.auto-tls}"
compare:
op: eq
value: false
- path: "{.client-transport-security.auto-tls}"
set: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
If running on with sqlite or a external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s does not set the --auto-tls parameter.
If this check fails, edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
client-transport-security:
auto-tls: false
scored: false
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)"
audit: "check_for_k3s_etcd.sh 2.4"
audit_config: "cat $etcdconf"
tests:
bin_op: and
test_items:
- flag: "cert-file"
env: "ETCD_PEER_CERT_FILE"
set: true
- flag: "key-file"
env: "ETCD_PEER_KEY_FILE"
set: true
- path: "{.peer-transport-security.cert-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt"
- path: "{.peer-transport-security.key-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-client-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
If running on with sqlite or a external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates peer cert and key files for etcd.
These are located in /var/lib/rancher/k3s/server/tls/etcd/.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use custom peer cert and key files.
scored: false
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "check_for_k3s_etcd.sh 2.5"
audit_config: "cat $etcdconf"
tests:
bin_op: or
test_items:
- flag: "--client-cert-auth"
set: true
- flag: "client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
- path: "{.peer-transport-security.client-cert-auth}"
compare:
op: eq
value: true
set: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
If running on with sqlite or a external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s sets the --peer-cert-auth parameter to true.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to disable peer client certificate authentication.
scored: false
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "check_for_k3s_etcd.sh 2.6"
audit_config: "cat $etcdconf"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
- path: "{.peer-transport-security.auto-tls}"
compare:
op: eq
value: false
set: true
- path: "{.peer-transport-security.auto-tls}"
set: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
If running on with sqlite or a external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s does not set the --peer-auto-tls parameter.
If this check fails, edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
peer-transport-security:
auto-tls: false
scored: false
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Automated)"
audit: "check_for_k3s_etcd.sh 2.7"
audit_config: "cat $etcdconf"
tests:
test_items:
- flag: "trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
set: true
- path: "{.peer-transport-security.trusted-ca-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt"
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
If running on with sqlite or a external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates a unique certificate authority for etcd.
This is located at /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use a shared certificate authority.
scored: false

File diff suppressed because it is too large.

Some files were not shown because too many files have changed in this diff.