Compare commits

...

142 Commits

Author SHA1 Message Date
afdesk
4de7b2095a release: prepare v0.9.2 (#1730) 2024-11-16 16:05:57 +06:00
Saurabh Misra
5eccb498c1 FIX| RKE-CIS-1.24- CHECK 1.1.19 (#1722)
We have added the missing script required for check 1.1.19 in rke-cis-1.24 and made it available to the kube-bench file system (https://github.com/rancher/security-scan/blob/master/package/helper_scripts/check_files_owner_in_dir.sh).
2024-11-15 18:32:24 +06:00
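For context, a check of this kind typically wires the helper script into its audit command. A minimal sketch of such a rke-cis-1.24 entry, assuming the script prints `true` when every file in the directory is owned by root:root (the exact text, directory, and output format are assumptions, not the PR's literal contents):

```yaml
- id: 1.1.19
  text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
  # assumes the helper script was copied into the kube-bench image and is on $PATH
  audit: "check_files_owner_in_dir.sh /node/etc/kubernetes/ssl"
  tests:
    test_items:
      - flag: "true"
        compare:
          op: eq
          value: "true"
  remediation: |
    Run the below command on the master node:
    chown -R root:root /node/etc/kubernetes/ssl
  scored: true
```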
dependabot[bot]
7ce327f1db build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1728)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.27.37 to 1.28.4.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.27.37...config/v1.28.4)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 18:21:14 +06:00
dependabot[bot]
8656945200 build(deps): bump github.com/golang/glog from 1.2.2 to 1.2.3 (#1726)
Bumps [github.com/golang/glog](https://github.com/golang/glog) from 1.2.2 to 1.2.3.
- [Release notes](https://github.com/golang/glog/releases)
- [Commits](https://github.com/golang/glog/compare/v1.2.2...v1.2.3)

---
updated-dependencies:
- dependency-name: github.com/golang/glog
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 17:53:09 +06:00
dependabot[bot]
702107daff build(deps): bump github.com/spf13/viper from 1.18.2 to 1.19.0 (#1720)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.18.2 to 1.19.0.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.18.2...v1.19.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 17:34:30 +06:00
dependabot[bot]
5fac7f626b build(deps): bump github.com/fatih/color from 1.16.0 to 1.18.0 (#1719)
Bumps [github.com/fatih/color](https://github.com/fatih/color) from 1.16.0 to 1.18.0.
- [Release notes](https://github.com/fatih/color/releases)
- [Commits](https://github.com/fatih/color/compare/v1.16.0...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/fatih/color
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 16:14:30 +06:00
dependabot[bot]
27a1942bcc build(deps): bump golang from 1.23.2 to 1.23.3 (#1727)
Bumps golang from 1.23.2 to 1.23.3.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 15:39:05 +06:00
dependabot[bot]
9f0f5567ae build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1724)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.54.4 to 1.54.6.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.54.4...service/lambda/v1.54.6)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-15 11:32:36 +06:00
dependabot[bot]
ea24d0e240 build(deps): bump engineerd/setup-kind from 0.5.0 to 0.6.2 (#1721)
Bumps [engineerd/setup-kind](https://github.com/engineerd/setup-kind) from 0.5.0 to 0.6.2.
- [Release notes](https://github.com/engineerd/setup-kind/releases)
- [Commits](https://github.com/engineerd/setup-kind/compare/v0.5.0...v0.6.2)

---
updated-dependencies:
- dependency-name: engineerd/setup-kind
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-06 10:15:05 +06:00
dependabot[bot]
74f5c8b800 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1716)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.53.3 to 1.54.4.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/iot/v1.53.3...service/s3/v1.54.4)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-01 11:14:40 +06:00
dependabot[bot]
e2a97f49f5 build(deps): bump github.com/spf13/cobra from 1.8.0 to 1.8.1 (#1718)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.8.0 to 1.8.1.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.8.0...v1.8.1)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-01 10:31:03 +06:00
dependabot[bot]
b4000f677b build(deps): bump gorm.io/gorm from 1.25.10 to 1.25.12 (#1714)
Bumps [gorm.io/gorm](https://github.com/go-gorm/gorm) from 1.25.10 to 1.25.12.
- [Release notes](https://github.com/go-gorm/gorm/releases)
- [Commits](https://github.com/go-gorm/gorm/compare/v1.25.10...v1.25.12)

---
updated-dependencies:
- dependency-name: gorm.io/gorm
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 15:37:35 +06:00
dependabot[bot]
86c6a27cc4 build(deps): bump golang from 1.22.7 to 1.23.2 (#1697)
Bumps golang from 1.22.7 to 1.23.2.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 14:08:26 +06:00
dependabot[bot]
8a695eb8d1 build(deps): bump k8s.io/client-go from 0.29.3 to 0.31.2 (#1712)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.29.3 to 0.31.2.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.29.3...v0.31.2)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:26:08 +06:00
dependabot[bot]
e48c3dd7b5 build(deps): bump golangci/golangci-lint-action from 5 to 6 (#1707)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 5 to 6.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-25 10:05:00 +06:00
dependabot[bot]
ddb586d441 build(deps): bump k8s.io/apimachinery from 0.29.3 to 0.31.1 (#1681)
* build(deps): bump k8s.io/apimachinery from 0.29.3 to 0.31.1

Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.29.3 to 0.31.1.
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.29.3...v0.31.1)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* skip go toolchain

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-24 12:51:14 +06:00
afdesk
5568895095 chore: add go toolchain version (#1710)
* chore: add go toolchain version

* bump up toolchain to 1.22.7
2024-10-24 12:40:41 +06:00
dependabot[bot]
d5ba5edca0 build(deps): bump actions/setup-python from 4 to 5 (#1536)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 4 to 5.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-22 11:30:08 +06:00
dependabot[bot]
0e3dbfa985 build(deps): bump docker/build-push-action from 5 to 6 (#1631)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-21 23:30:31 +06:00
dependabot[bot]
e9ea1dbb74 build(deps): bump golangci/golangci-lint-action from 4 to 5 (#1604)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 4 to 5.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-21 15:47:43 +06:00
afdesk
c5dc28ee6f release: prepare v0.9.1 (#1705) 2024-10-16 19:48:17 +06:00
Omar kamoun
fa478ce238 fix: correct TLSCipherSuites to tlsCipherSuites (#1703) 2024-10-16 11:50:10 +06:00
dependabot[bot]
1d8f80e846 build(deps): bump github.com/golang/glog from 1.2.0 to 1.2.2 (#1702)
Bumps [github.com/golang/glog](https://github.com/golang/glog) from 1.2.0 to 1.2.2.
- [Release notes](https://github.com/golang/glog/releases)
- [Commits](https://github.com/golang/glog/compare/v1.2.0...v1.2.2)

---
updated-dependencies:
- dependency-name: github.com/golang/glog
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-15 11:14:14 +06:00
Abubakr-Sadik Nii Nai Davis
a15e8acaa3 Add GKE 1.6 CIS benchmark for GCP environment (#1672)
* Add config entries for GKE 1.6 controls

* Add gke1.6 control plane recommendations

* Add gke-1.6.0 worker node recommendations

* Add gke-1.6.0 policy recommendations

* Add managed services and policy recommendation

* Add master recommendations

* Fix formatting across gke-1.6.0 files

* Add gke-1.6.0 benchmark selection based on k8s version

* Workaround: hardcode kubelet config path for gke-1.6.0

* Fix tests for makeIPTablesUtilChains

* Change scored field for all node tests to true

* Fix kubelet file permission to check for

---------

Co-authored-by: afdesk <work@afdesk.com>
2024-10-11 10:49:35 +06:00
dependabot[bot]
e47725299e build(deps): bump gorm.io/driver/postgres from 1.5.6 to 1.5.9 (#1698)
Bumps [gorm.io/driver/postgres](https://github.com/go-gorm/postgres) from 1.5.6 to 1.5.9.
- [Commits](https://github.com/go-gorm/postgres/compare/v1.5.6...v1.5.9)

---
updated-dependencies:
- dependency-name: gorm.io/driver/postgres
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-10 10:37:41 +06:00
Matthias Muth
e8562f2944 Extend default kubelet configlist to fit AWS EKS (#1637)
- the latest default Kubernetes setup of AWS has
  its kubelet config path in the added location.
  Proposing to extend the list of scanned paths in
  order to make kube-bench execution more painless
  and "quick start like" in default setups.
2024-10-04 14:08:03 +06:00
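A minimal sketch of the kind of change described, in the `node.kubelet.confs` search list of kube-bench's config (the specific EKS path shown is an assumption):

```yaml
node:
  kubelet:
    confs:
      - /var/lib/kubelet/config.yaml
      # added: default kubelet config location on recent AWS EKS nodes (path is an assumption)
      - /etc/kubernetes/kubelet/kubelet-config.json
```

kube-bench probes the listed candidate paths and uses the first one found on the node, so appending a path keeps existing setups working while covering the EKS default.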
Arano-kai
3a0ccc440c fix: rh-1.0 check 4.1.3 typo (#1652)
Co-authored-by: Arano-kai <captcha.is(dot)evil(meov)gmail.com>
2024-10-04 13:42:56 +06:00
dependabot[bot]
c683e93968 build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1696)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.53.1 to 1.53.3.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/s3/v1.53.1...service/iot/v1.53.3)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-04 12:21:07 +06:00
jdesouza
e75cd6bbc8 Updated KUBECTL_VERSION to 1.31.0 for fixing vulnerabilities (#1690)
* Bumped Go to 1.22.7 for fixing Critical/High vulnerabilities

* Bumped Go to 1.22.7 for fixing Critical/High vulnerabilities

* Bumped kubectl version for fixing vulnerabilities

* Fixed kubectl version

* Update go.mod
2024-10-03 22:43:01 +06:00
dependabot[bot]
d8f041a826 build(deps): bump alpine from 3.20.0 to 3.20.3 (#1676)
Bumps alpine from 3.20.0 to 3.20.3.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-10-03 09:20:12 +06:00
Winnerson Kharsunai
7ea1d59bb1 update audit script for cis-1.9 kubernetes policies id 5.1.6 (#1655) 2024-10-01 11:48:02 +06:00
Winnerson Kharsunai
89842dcaaa update dockerfile to add package findutils (#1657) 2024-10-01 11:32:23 +06:00
za
674d8e8bb7 Update command to build docker to run in EKS cluster (#1648)
because with the previous command it was unable to get the argument.

Issue: https://github.com/aquasecurity/kube-bench/issues/1647

Co-authored-by: za <za@noreply.users.github.com>
2024-09-30 12:13:10 +06:00
Andy Pitcher
4b4c1ce709 Modify 1.2.3 Ensure that the DenyServiceExternalIPs is set in CIS-1.7/1.8 (#1607)
* Modify 1.2.3 Ensure that the DenyServiceExternalIPs is set
 - op changed from `have` to `has` and removed bin_op: or
 - remediation description changed to only include --enable-admission-plugins

* Apply changes for CIS-1.9
2024-09-30 10:30:59 +06:00
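Per the commit description, the resulting test reduces to a single test item using the `has` comparison; a sketch (surrounding fields abbreviated and assumed):

```yaml
- id: 1.2.3
  text: "Ensure that the DenyServiceExternalIPs is set (Manual)"
  audit: "/bin/ps -ef | grep kube-apiserver | grep -v grep"
  tests:
    test_items:
      # previously two alternatives joined by `bin_op: or`; now a single item
      - flag: "--enable-admission-plugins"
        compare:
          op: has        # changed from `have`
          value: "DenyServiceExternalIPs"
  remediation: |
    Edit the API server pod specification file and ensure --enable-admission-plugins
    includes DenyServiceExternalIPs.
```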
Andy Pitcher
b85ec78a84 Fix CIS-1.9 policies 5.1.1/5.1.5 typos (#1658)
* Fix CIS-1.9 policies 5.1.1 typo

* Fix typo CIS-1.9 5.1.5

* Add new lines to CIS-1.9
2024-09-30 09:54:45 +06:00
Wolfgang Reichert
f6877e3c17 Fix issue 1595: failed to output to ASFF (#1691)
A breaking change was introduced in aws-sdk-go-v2.
See https://github.com/aws/aws-sdk-go-v2/issues/2370#issuecomment-1953308268.

Mixing aws-sdk-go-v2 packages from versions before and after the breaking change causes kube-bench to fail. This issue occurs when it attempts to access AWS Security Hub.

Addressed issue: https://github.com/aquasecurity/kube-bench/issues/1595

Supersedes bot PR: https://github.com/aquasecurity/kube-bench/pull/1689
Besides upgrading to the latest SDK version, some variable types needed to be adapted.
2024-09-28 13:36:44 +06:00
Andy Pitcher
2751f87034 Fix audit and remediation for CIS-1.9 master 1.1.13/1.1.14 (#1649)
* Fix audit and remediation for CIS-1.9 master 1.1.13/1.1.14

* Fix loop syntax for file paths

---------

Co-authored-by: afdesk <work@afdesk.com>
2024-09-26 10:45:48 +06:00
Derek Nola
a9422a6623 Overhaul of K3s scans (#1659)
* Overhaul K3s 1.X checks

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Overhaul K3s 2.X Checks

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Overhaul K3s 4.X checks

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Overhaul K3s 5.X checks

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Add K3s cis-1.8 scan

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Fix K3s 1.1.10 check

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Merge journalctl checks for K3s

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Matched Manual/Automated to correct scoring (false/true)

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Remove incorrect use of check_for_default_sa.sh script

Signed-off-by: Derek Nola <derek.nola@suse.com>

---------

Signed-off-by: Derek Nola <derek.nola@suse.com>
Co-authored-by: afdesk <work@afdesk.com>
2024-09-25 13:12:02 +06:00
mjshastha
f8b6f2fc19 chore: fixed vulns - bump Go version (#1687) 2024-09-24 12:12:40 +06:00
Saurabh Misra
c533d68bad FIXING RKE-2-CIS-1.24 Checks (#1688)
MASTER:
- Checks 1.1.10 and 1.1.20 are manual.
NODE:
- Check 4.2.12 is the node-level equivalent of the master-level check 1.3.6 and is treated the same way.
2024-09-24 11:56:58 +06:00
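For reference, master check 1.3.6 and node check 4.2.12 both verify RotateKubeletServerCertificate, on the controller-manager and kubelet sides respectively; a sketch of the node-side check treated as manual (the fields shown are assumptions):

```yaml
- id: 4.2.12
  text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
  audit: "/bin/ps -ef | grep kubelet | grep -v grep"
  type: manual            # treated the same way as master check 1.3.6
  tests:
    test_items:
      - flag: "RotateKubeletServerCertificate"
        compare:
          op: eq
          value: "true"
```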
dependabot[bot]
5a3fd1d896 build(deps): bump golang from 1.22.2 to 1.22.4 (#1629)
Bumps golang from 1.22.2 to 1.22.4.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-07-04 08:46:34 +03:00
chenk
366e79ddda release: prepare v0.8.0 (#1639)
Signed-off-by: chenk <hen.keinan@gmail.com>
2024-07-02 10:35:09 +03:00
dependabot[bot]
871027447f build(deps): bump goreleaser/goreleaser-action from 5 to 6 (#1628)
Bumps [goreleaser/goreleaser-action](https://github.com/goreleaser/goreleaser-action) from 5 to 6.
- [Release notes](https://github.com/goreleaser/goreleaser-action/releases)
- [Commits](https://github.com/goreleaser/goreleaser-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: goreleaser/goreleaser-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-29 15:53:49 +03:00
Andy Pitcher
7027b6b2ec Add CIS kubernetes CIS-1.9 for k8s v1.27 - v1.29 (#1617)
* Create cis-1.9 yamls and update info
   - policies.yaml
     - 5.1.1 to 5.1.6 were adapted from Manual to Automated
     - 5.1.3 got broken down into 5.1.3.1 and 5.1.3.2
     - 5.1.6 got broken down into 5.1.6.1 and 5.1.6.2
     - version was set to cis-1.9
   - node.yaml, master.yaml, controlplane.yaml, etcd.yaml
     - version was set to cis-1.9

* Adapt master.yaml
    - Expand 1.1.13/1.1.14 checks by adding super-admin.conf to the permission and ownership verification
    - Remove 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
    - Adjust numbering from 1.2.12 to 1.2.29

* Adjust policies.yaml
   - Check 5.2.3 to 5.2.9 Title Automated to Manual

* Append node.yaml
   - Create 4.3 kube-config group
   - Create 4.3.1 Ensure that the kube-proxy metrics service is bound to localhost (Automated)

* Adjust policies 5.1.3 and 5.1.6

   - Merge 5.1.3.1 and 5.1.3.2 into 5.1.3 (use role_is_compliant and clusterrole_is_compliant)
   - Remove 5.1.6.1 and promote 5.1.6.2 to 5.1.6 since it natively covered 5.1.6.1 artifacts

* Add kubectl dependency and update publish
   - Download kubectl (build stage) based on version and architecture
   - Add binary checksum verification
   - Use go env GOARCH for ARCH
2024-06-26 15:53:57 +03:00
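A sketch of what the new 4.3.1 check might look like (the flag and audit command are assumptions, not the PR's literal contents):

```yaml
- id: 4.3.1
  text: "Ensure that the kube-proxy metrics service is bound to localhost (Automated)"
  audit: "/bin/ps -ef | grep kube-proxy | grep -v grep"
  tests:
    test_items:
      - flag: "--metrics-bind-address"
        compare:
          op: has
          value: "127.0.0.1"
  scored: true
```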
dependabot[bot]
d8fc37649a build(deps): bump alpine from 3.19.1 to 3.20.0 (#1621)
Bumps alpine from 3.19.1 to 3.20.0.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-31 17:28:56 +03:00
Paulo Gomes
0f8dfaf115 Statically link binaries and remove debug information (#1615)
Signed-off-by: Paulo Gomes <pjbgf@linux.com>
2024-05-22 08:37:36 +03:00
Derek Nola
ed51191d7c Replace custom k3s etcd script checks with vanilla grep checks (#1601)
* Replace custom k3s etcd script checks with vanilla grep checks

Signed-off-by: Derek Nola <derek.nola@suse.com>

* Rework etcd grep, remove etcd ENV checks (no-op), add correct k3s etcddatadir

Signed-off-by: Derek Nola <derek.nola@suse.com>

* chore: update go-linter version

Signed-off-by: chenk <hen.keinan@gmail.com>

* Use etcddatadir variable

Signed-off-by: Derek Nola <derek.nola@suse.com>

---------

Signed-off-by: Derek Nola <derek.nola@suse.com>
Signed-off-by: chenk <hen.keinan@gmail.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-05-20 13:47:15 +03:00
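Illustrative shape of the replacement (the id, flags, and config layout are assumptions): rather than shelling out to a custom script, the audit greps the k3s-managed etcd config directly, parameterized by the `etcddatadir` variable the commit introduces:

```yaml
- id: 2.1
  text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
  audit: "grep -A 4 'client-transport-security' $etcddatadir/config"
  tests:
    bin_op: and
    test_items:
      - flag: "cert-file"
      - flag: "key-file"
```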
dependabot[bot]
2a8615befd build(deps): bump golang from 1.22.1 to 1.22.2 (#1596)
Bumps golang from 1.22.1 to 1.22.2.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-03 19:35:58 +03:00
chenk
ff9341a5d0 release: prepare-v0.7.3 (#1599)
Signed-off-by: chenk <hen.keinan@gmail.com>
2024-04-18 09:58:44 +03:00
dependabot[bot]
65c484e85a build(deps): bump k8s.io/client-go from 0.29.1 to 0.29.3 (#1587)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.29.1 to 0.29.3.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.29.1...v0.29.3)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-04-18 09:54:55 +03:00
mjshastha
d2d3e72271 Currently, certain commands involve retrieving all node names or pods and then executing additional commands in a loop, resulting in a time complexity linearly proportional to the number of nodes. (#1597)
This approach becomes time-consuming for larger clusters.

As kube-bench is executed as a job on every node in the cluster, to enhance performance we streamlined the commands to execute directly on the current node where kube-bench operates.
This change ensures that the time complexity remains constant, regardless of the cluster size.
By running the necessary commands only once per node, regardless of how many nodes are in the cluster, this approach significantly boosts performance and efficiency.
2024-04-18 09:01:17 +03:00
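A hedged before/after sketch of the pattern described (the audit commands are illustrative, not the PR's exact ones):

```yaml
# before: fans out across every node via the API server; cost grows with cluster size
# audit: |
#   for n in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
#     kubectl get --raw "/api/v1/nodes/$n/proxy/configz"
#   done

# after: inspects only the local node the kube-bench job pod runs on; constant cost
audit: "cat /var/lib/kubelet/config.yaml"
```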
dependabot[bot]
73e1377ce0 build(deps): bump github.com/jackc/pgx/v5 from 5.4.3 to 5.5.4 (#1586)
Bumps [github.com/jackc/pgx/v5](https://github.com/jackc/pgx) from 5.4.3 to 5.5.4.
- [Changelog](https://github.com/jackc/pgx/blob/master/CHANGELOG.md)
- [Commits](https://github.com/jackc/pgx/compare/v5.4.3...v5.5.4)

---
updated-dependencies:
- dependency-name: github.com/jackc/pgx/v5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-06 08:59:45 +03:00
dependabot[bot]
dc8f4d37f0 build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.25.2 to 1.26.0 (#1589)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.25.2 to 1.26.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.25.2...v1.26.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-03-30 12:41:07 +03:00
dependabot[bot]
dc7441620f build(deps): bump golang from 1.22.0 to 1.22.1 (#1583)
Bumps golang from 1.22.0 to 1.22.1.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-29 14:10:34 +03:00
dependabot[bot]
45afbd76c2 build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1577)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.26.6 to 1.27.4.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.26.6...config/v1.27.4)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-03-08 15:36:33 +02:00
chenk
abfa7d9613 release: prepare v0.7.2 (#1578)
Signed-off-by: chenk <hen.keinan@gmail.com>
2024-02-29 13:37:20 +02:00
dependabot[bot]
3db3f736f8 build(deps): bump golangci/golangci-lint-action from 3 to 4 (#1568)
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 3 to 4.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-02-19 13:12:30 +02:00
dependabot[bot]
57132a69fd build(deps): bump gorm.io/driver/postgres from 1.5.4 to 1.5.6 (#1567)
Bumps [gorm.io/driver/postgres](https://github.com/go-gorm/postgres) from 1.5.4 to 1.5.6.
- [Commits](https://github.com/go-gorm/postgres/compare/v1.5.4...v1.5.6)

---
updated-dependencies:
- dependency-name: gorm.io/driver/postgres
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-02-19 10:27:44 +02:00
dependabot[bot]
f297da6603 build(deps): bump golang from 1.21.6 to 1.22.0 (#1569)
Bumps golang from 1.21.6 to 1.22.0.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-02-19 09:51:35 +02:00
dependabot[bot]
66a215189e build(deps): bump codecov/codecov-action from 3 to 4 (#1561)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 3 to 4.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-02-19 09:31:45 +02:00
dependabot[bot]
72eee4b7a4 build(deps): bump alpine from 3.19.0 to 3.19.1 (#1557)
Bumps alpine from 3.19.0 to 3.19.1.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-02-19 09:15:28 +02:00
Kiran Bodipi
ee5e4aff51 update rke-cis-1.24 benchmarks: corrected errors and tests (#1570)
Corrected a few benchmarks' titles and their respective tests
Handled type and title mismatches
Added missing audit commands
2024-02-15 11:34:31 +02:00
Kiran Bodipi
2374e7b07f Rancher checks correction (#1563)
1. Modified the test criteria so that they produce the right output when the file does not exist.
2. Modified the tests wherever root:root is checked multiple times.
2024-02-12 15:29:36 +02:00
Andrey Polovov
faeceb5dfa job.yaml: Adding /var/lib/cni mounts for proper CIS 1.1.9 and 1.1.10 checking (#1547)
Signed-off-by: Andrey Polovov <andrey.polovov@flant.com>
Signed-off-by: Andrey Pavlov <andrey.pavlov@flant.com>
Co-authored-by: Andrey Pavlov <andrey.pavlov@flant.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-02-11 11:23:17 +02:00
dependabot[bot]
30217061ac build(deps): bump github.com/aws/aws-sdk-go-v2/config (#1554)
Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.18.4 to 1.26.6.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.18.4...config/v1.26.6)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/config
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-02-03 09:35:10 +02:00
chenk
445c1160cf release: prepare v0.7.1 (#1559)
Signed-off-by: chenk <hen.keinan@gmail.com>
2024-01-31 11:57:16 +02:00
Devendra Turkar
57fba224fa chore: update base image to ubi9 (#1556) 2024-01-31 09:51:08 +02:00
dependabot[bot]
a93b19f0c0 build(deps): bump k8s.io/client-go from 0.29.0 to 0.29.1 (#1552)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.29.0 to 0.29.1.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.29.0...v0.29.1)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-28 12:21:47 +02:00
dependabot[bot]
b17aa709b3 build(deps): bump k8s.io/apimachinery from 0.29.0 to 0.29.1 (#1553)
Bumps [k8s.io/apimachinery](https://github.com/kubernetes/apimachinery) from 0.29.0 to 0.29.1.
- [Commits](https://github.com/kubernetes/apimachinery/compare/v0.29.0...v0.29.1)

---
updated-dependencies:
- dependency-name: k8s.io/apimachinery
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-01-27 11:18:39 +02:00
dependabot[bot]
faa1b4be3d build(deps): bump actions/cache from 3 to 4 (#1551)
Bumps [actions/cache](https://github.com/actions/cache) from 3 to 4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-01-26 13:40:45 +02:00
dependabot[bot]
628999c9c5 build(deps): bump golang from 1.21.5 to 1.21.6 (#1549)
Bumps golang from 1.21.5 to 1.21.6.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-01-26 13:12:14 +02:00
Kiran Bodipi
13da372a87 Updating the rh-1.0 OCP checks (#1548)
1. Added audit commands wherever required.
2. Updated the scripts with type set to manual to match the title.
3. Updated the scripts with test_items wherever required.
4. Fixed a typo.
2024-01-23 08:56:40 +02:00
dependabot[bot]
38949874d1 build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.18.0 to 1.24.1 (#1550)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.18.0 to 1.24.1.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.18.0...v1.24.1)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-20 12:22:18 +02:00
dependabot[bot]
39c29fb07a build(deps): bump alpine from 3.18.3 to 3.19.0 (#1535)
Bumps alpine from 3.18.3 to 3.19.0.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-01-12 08:01:55 +02:00
dependabot[bot]
cc6c091b41 build(deps): bump gorm.io/driver/postgres from 1.4.6 to 1.5.4 (#1514)
Bumps [gorm.io/driver/postgres](https://github.com/go-gorm/postgres) from 1.4.6 to 1.5.4.
- [Commits](https://github.com/go-gorm/postgres/compare/v1.4.6...v1.5.4)

---
updated-dependencies:
- dependency-name: gorm.io/driver/postgres
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-11 11:02:00 +02:00
dependabot[bot]
7efba2b94d build(deps): bump k8s.io/client-go from 0.26.0 to 0.29.0 (#1540)
Bumps [k8s.io/client-go](https://github.com/kubernetes/client-go) from 0.26.0 to 0.29.0.
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.26.0...v0.29.0)

---
updated-dependencies:
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-11 10:34:21 +02:00
Devendra Turkar
a4b46f50de chore: update go version to 1.21 (#1546)
Updating go version to 1.21
2024-01-10 14:26:50 +02:00
dependabot[bot]
221ff4fd42 build(deps): bump actions/setup-go from 4 to 5 (#1537)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2024-01-06 11:12:57 +02:00
dependabot[bot]
8c47d59e99 build(deps): bump github.com/spf13/viper from 1.14.0 to 1.18.2 (#1541)
Bumps [github.com/spf13/viper](https://github.com/spf13/viper) from 1.14.0 to 1.18.2.
- [Release notes](https://github.com/spf13/viper/releases)
- [Commits](https://github.com/spf13/viper/compare/v1.14.0...v1.18.2)

---
updated-dependencies:
- dependency-name: github.com/spf13/viper
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-05 07:59:17 +02:00
dependabot[bot]
151efc3494 build(deps): bump golang.org/x/crypto from 0.14.0 to 0.17.0 (#1542)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.14.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-12-29 20:57:02 +02:00
chenk
58a49da713 release: prepare v0.7.0 (#1543)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-12-19 09:08:02 +02:00
dependabot[bot]
64c0492401 build(deps): bump docker/login-action from 2 to 3 (#1500)
Bumps [docker/login-action](https://github.com/docker/login-action) from 2 to 3.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-12-19 08:42:25 +02:00
mjshastha
7a55d5d57c Issue: The initial command produces "root:root" as its output only when the file is present. However, if the file is missing, the command will still run successfully, though the desired output of "root:root" won't be obtained. (#1538)
Fix: To address this, we've modified the command to achieve the following:

Verify the existence of the file.

If the file is found, show the user and group ownership in the "username:groupname" format.

If the file is not found, display the message "File not found."

To accommodate this change, we've integrated the expected output "File not found" for instances where the file is absent. This adjustment ensures the successful execution of the test.

Co-authored-by: mjshastha <manojshastha.madriki@aquasec.com>
2023-12-18 09:10:07 +02:00
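The resulting audit/test pattern resembles this sketch (`$kubeletkubeconfig` is a placeholder config variable, not necessarily the file the PR touched):

```yaml
audit: '/bin/sh -c "if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; else echo \"File not found\"; fi"'
tests:
  bin_op: or
  test_items:
    - flag: "root:root"
    - flag: "File not found"   # accepted so the check still passes when the file is absent
```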
dependabot[bot]
f353bc4cba build(deps): bump golang from 1.21.3 to 1.21.5 (#1534)
Bumps golang from 1.21.3 to 1.21.5.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-13 09:28:12 +02:00
dependabot[bot]
1393449298 build(deps): bump docker/setup-buildx-action from 2 to 3 (#1497)
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2 to 3.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-12-10 14:07:09 +02:00
dependabot[bot]
875fbc7f20 build(deps): bump github.com/spf13/cobra from 1.6.1 to 1.8.0 (#1530)
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.6.1 to 1.8.0.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.6.1...v1.8.0)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-12-08 08:07:14 +02:00
guangwu
c3e3c4c31c chore: remove refs to deprecated io/ioutil (#1504)
Signed-off-by: guoguangwu <guoguangwu@magic-shield.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-12-05 10:52:24 +02:00
dependabot[bot]
292678a907 build(deps): bump actions/checkout from 3 to 4 (#1492)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-12-04 14:18:41 +02:00
Huang Huang
0c553cd2f6 fix wrong use of flag in test_items found in 4.13 and 4.14 (#1528)
* fix wrong use of flag in test_items found in 4.13 and 4.14

Fixes #1491

* fix for more benchmarks

* update integration test

* fix test
2023-12-03 09:06:35 +02:00
Huang Huang
92a18e7dfd support CIS Kubernetes Benchmark v1.8.0 (#1527)
* support CIS Kubernetes Benchmark v1.8.0

* update version info
2023-12-02 09:59:30 +02:00
dependabot[bot]
ade7cef969 build(deps): bump gorm.io/gorm from 1.25.1 to 1.25.5 (#1516)
Bumps [gorm.io/gorm](https://github.com/go-gorm/gorm) from 1.25.1 to 1.25.5.
- [Release notes](https://github.com/go-gorm/gorm/releases)
- [Commits](https://github.com/go-gorm/gorm/compare/v1.25.1...v1.25.5)

---
updated-dependencies:
- dependency-name: gorm.io/gorm
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-01 20:11:43 +02:00
Kiran Bodipi
f8fe5ee173 Add CIS Benchmarks support to Rancher Distributions RKE/RKE2/K3s (#1523)
* add Support VMware Tanzu(TKGI) Benchmarks v1.2.53
with this change, we are adding
1. the latest Kubernetes CIS benchmarks for VMware Tanzu 1.2.53
2. logic so that kube-bench can auto-detect the VMware platform and execute the respective VMware TKGI compliance checks
3. a job-tkgi.yaml file to run the benchmark as a job in a TKGI cluster
Reference Document for checks: https://network.pivotal.io/products/p-compliance-scanner/#/releases/1248397

* release: prepare v0.6.15 (#1455)

Signed-off-by: chenk <hen.keinan@gmail.com>

* build(deps): bump golang from 1.19.4 to 1.20.4 (#1436)

Bumps golang from 1.19.4 to 1.20.4.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump actions/setup-go from 3 to 4 (#1402)

Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3 to 4.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>

* Fix test_items in cis-1.7 - node - 4.2.12 (#1469)

Related issue: https://github.com/aquasecurity/kube-bench/issues/1468

* Fix node.yaml - 4.1.7 and 4.1.8 audit by adding uniq (#1472)

* chore: add fips compliant images (#1473)

For FIPS compliance we need to generate FIPS-compliant images.
As part of this change, we will create a new kube-bench image which will be FIPS compliant. The image name follows this tag pattern: <version>-ubi-fips

* release: prepare v0.6.16-rc (#1476)

* release: prepare v0.6.16-rc

Signed-off-by: chenk <hen.keinan@gmail.com>

* release: prepare v0.6.16-rc

Signed-off-by: chenk <hen.keinan@gmail.com>

---------

Signed-off-by: chenk <hen.keinan@gmail.com>

* release: prepare v0.6.16 official (#1479)

Signed-off-by: chenk <hen.keinan@gmail.com>

* Update job.yaml (#1477)

* Update job.yaml

Fix typo in image version

* chore: sync with upstream

Signed-off-by: chenk <hen.keinan@gmail.com>

---------

Signed-off-by: chenk <hen.keinan@gmail.com>
Co-authored-by: chenk <hen.keinan@gmail.com>

* release: prepare v0.6.17 (#1480)

Signed-off-by: chenk <hen.keinan@gmail.com>

* Bump docker base images (#1465)

During a recent CVE scan we found kube-bench to use `alpine:3.18` as the final image which has a known high CVE.

```
grype aquasec/kube-bench:v0.6.15
 ✔ Vulnerability DB        [no update available]
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [73 packages]
 ✔ Scanning image...       [4 vulnerabilities]
   ├── 0 critical, 4 high, 0 medium, 0 low, 0 negligible
   └── 4 fixed
NAME        INSTALLED  FIXED-IN  TYPE  VULNERABILITY  SEVERITY
libcrypto3  3.1.0-r4   3.1.1-r0  apk   CVE-2023-2650  High
libssl3     3.1.0-r4   3.1.1-r0  apk   CVE-2023-2650  High
openssl     3.1.0-r4   3.1.1-r0  apk   CVE-2023-2650  High
```

The CVE in question was addressed in the latest [alpine release](https://www.alpinelinux.org/posts/Alpine-3.15.9-3.16.6-3.17.4-3.18.2-released.html), hence updating the dockerfiles accordingly

* build(deps): bump golang from 1.20.4 to 1.20.6 (#1475)

Bumps golang from 1.20.4 to 1.20.6.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Add CIS Benchmarks support to Rancher Distributions RKE/RKE2/K3s
Based on the information furnished in https://ranchermanager.docs.rancher.com/v2.7/pages-for-subheaders/rancher-hardening-guides
kube-bench executes the CIS-1.23 (Kubernetes v1.23), CIS-1.24 (Kubernetes v1.24), and CIS-1.7 (Kubernetes v1.25, v1.26, v1.27) CIS Benchmarks of the respective distributions.

* RKE/RKE2 CIS Benchmarks
Updated the order of checks for RKE and RKE2 Platforms.

* fixed vulnerabilities: upgraded package golang.org/x/net to version v0.17.0

* Error handling for RKE Detection Pre-requisites

* Based on the information furnished in https://ranchermanager.docs.rancher.com/v2.7/pages-for-subheaders/rancher-hardening-guides#hardening-guides-and-benchmark-versions, kube-bench executes the CIS-1.23 (Kubernetes v1.23), CIS-1.24 (Kubernetes v1.24), and CIS-1.7 (Kubernetes v1.25, v1.26, v1.27) CIS Benchmarks of the respective distributions.
Updated documentation specific to the added Rancher platforms.

* addressed review comments
1. Implemented IsRKE functionality in kube-bench
2. Removed containerd from the global-level config and accommodated it in the individual config files
3. Corrected the control id from 1.2.25 to 1.2.23 in master.yaml (k3s-cis-1.23 and k3s-cis-1.24)

* Removed unnecessary dependency - kubernetes-provider-detector

---------

Signed-off-by: chenk <hen.keinan@gmail.com>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Andy Pitcher <andy.pitcher@suse.com>
Co-authored-by: Devendra Turkar <devendra.turkar@gmail.com>
Co-authored-by: Guille Vigil <contact@guillermotti.com>
Co-authored-by: Jonas-Taha El Sesiy <jonas-taha.elsesiy@snowflake.com>
2023-11-26 12:27:38 +02:00
Benjamin Schimke
fac90f756e feat(cis-1.24-microk8s): Add support to CIS-1.24 for microk8s distro (#1510) 2023-11-20 12:59:32 +02:00
dependabot[bot]
63055a7332 build(deps): bump github.com/fatih/color from 1.14.1 to 1.16.0 (#1520)
Bumps [github.com/fatih/color](https://github.com/fatih/color) from 1.14.1 to 1.16.0.
- [Release notes](https://github.com/fatih/color/releases)
- [Commits](https://github.com/fatih/color/compare/v1.14.1...v1.16.0)

---
updated-dependencies:
- dependency-name: github.com/fatih/color
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-12 10:47:01 +02:00
dependabot[bot]
dc0580cebe build(deps): bump golang from 1.21.1 to 1.21.3 (#1507)
Bumps golang from 1.21.1 to 1.21.3.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-11-03 18:33:42 +02:00
dependabot[bot]
0918b41eca build(deps): bump github.com/golang/glog from 1.0.0 to 1.1.2 (#1489)
Bumps [github.com/golang/glog](https://github.com/golang/glog) from 1.0.0 to 1.1.2.
- [Release notes](https://github.com/golang/glog/releases)
- [Commits](https://github.com/golang/glog/compare/v1.0.0...v1.1.2)

---
updated-dependencies:
- dependency-name: github.com/golang/glog
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-10-27 21:45:30 +03:00
dependabot[bot]
2b466ab239 build(deps): bump docker/setup-qemu-action from 2 to 3 (#1503)
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 2 to 3.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-10-27 21:35:49 +03:00
chenk
55a18aed87 release: prepare-0.6.19 (#1511)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-10-23 10:03:22 +03:00
dependabot[bot]
7f5a2eb78b build(deps): bump docker/build-push-action from 4 to 5 (#1498)
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 4 to 5.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-10-20 19:31:35 +03:00
chenk
18f8456abd release: prepare v0.6.18 (#1509)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-10-17 16:28:52 +03:00
chenk
8bc4daae10 release: prepare v0.6.18-rc (#1508)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-10-17 11:34:53 +03:00
AnaisUrlichs
7ad0f2fee6 updates to the readme
Signed-off-by: AnaisUrlichs <urlichsanais@gmail.com>
2023-10-02 12:39:24 +03:00
dependabot[bot]
276d30ad75 build(deps): bump crazy-max/ghaction-docker-meta from 4 to 5 (#1499)
Bumps [crazy-max/ghaction-docker-meta](https://github.com/crazy-max/ghaction-docker-meta) from 4 to 5.
- [Release notes](https://github.com/crazy-max/ghaction-docker-meta/releases)
- [Upgrade guide](https://github.com/docker/metadata-action/blob/master/UPGRADE.md)
- [Commits](https://github.com/crazy-max/ghaction-docker-meta/compare/v4...v5)

---
updated-dependencies:
- dependency-name: crazy-max/ghaction-docker-meta
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-30 19:34:22 +03:00
dependabot[bot]
e1c6c80d02 build(deps): bump golang from 1.20.6 to 1.21.1 (#1494)
Bumps golang from 1.20.6 to 1.21.1.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-16 12:59:20 +03:00
dependabot[bot]
34ef478b41 build(deps): bump goreleaser/goreleaser-action from 4 to 5 (#1495)
Bumps [goreleaser/goreleaser-action](https://github.com/goreleaser/goreleaser-action) from 4 to 5.
- [Release notes](https://github.com/goreleaser/goreleaser-action/releases)
- [Commits](https://github.com/goreleaser/goreleaser-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: goreleaser/goreleaser-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-12 08:11:27 +03:00
dependabot[bot]
3ef3e9a861 build(deps): bump alpine from 3.18.2 to 3.18.3 (#1487)
Bumps alpine from 3.18.2 to 3.18.3.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-09 21:37:29 +03:00
dependabot[bot]
d70459b77c build(deps): bump golang from 1.20.4 to 1.20.6 (#1475)
Bumps golang from 1.20.4 to 1.20.6.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-07-28 12:12:45 +03:00
Jonas-Taha El Sesiy
20ad80577c Bump docker base images (#1465)
During a recent CVE scan we found kube-bench to use `alpine:3.18` as the final image, which has a known high CVE.

```
grype aquasec/kube-bench:v0.6.15
 ✔ Vulnerability DB        [no update available]
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages      [73 packages]
 ✔ Scanning image...       [4 vulnerabilities]
   ├── 0 critical, 4 high, 0 medium, 0 low, 0 negligible
   └── 4 fixed
NAME        INSTALLED  FIXED-IN  TYPE  VULNERABILITY  SEVERITY
libcrypto3  3.1.0-r4   3.1.1-r0  apk   CVE-2023-2650  High
libssl3     3.1.0-r4   3.1.1-r0  apk   CVE-2023-2650  High
openssl     3.1.0-r4   3.1.1-r0  apk   CVE-2023-2650  High
```

The CVE in question was addressed in the latest [alpine release](https://www.alpinelinux.org/posts/Alpine-3.15.9-3.16.6-3.17.4-3.18.2-released.html), hence the Dockerfiles were updated accordingly.
2023-07-26 18:22:19 +03:00
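A scan of this kind can be wired into the publish workflow so that fixable base-image CVEs fail the build before a release ships. A minimal sketch, assuming the `aquasecurity/trivy-action` inputs `image-ref`, `severity`, `ignore-unfixed`, and `exit-code` (the image tag is illustrative):

```yaml
# Hypothetical CI step: fail the job if the built image carries fixable HIGH/CRITICAL CVEs.
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: aquasec/kube-bench:latest  # illustrative tag, not the pinned release tag
    severity: HIGH,CRITICAL
    ignore-unfixed: true                  # only report CVEs that already have a fix
    exit-code: "1"                        # non-zero exit fails the workflow step
```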
chenk
456684462a release: prepare v0.6.17 (#1480)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-07-25 12:41:24 +03:00
Guille Vigil
c8cabc4b14 Update job.yaml (#1477)
* Update job.yaml

Fix typo in image version

* chore: sync with upstream

Signed-off-by: chenk <hen.keinan@gmail.com>

---------

Signed-off-by: chenk <hen.keinan@gmail.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-07-25 12:30:14 +03:00
chenk
8c6915c478 release: prepare v0.6.16 official (#1479)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-07-25 10:33:54 +03:00
chenk
9363cdf8ef release: prepare v0.6.16-rc (#1476)
* release: prepare v0.6.16-rc

Signed-off-by: chenk <hen.keinan@gmail.com>
2023-07-24 11:01:43 +03:00
Devendra Turkar
b29ed6b6ed chore: add fips compliant images (#1473)
For FIPS compliance we need to generate FIPS-compliant images.
As part of this change, we create a new kube-bench image that is FIPS compliant. The image name follows the tag pattern <version>-ubi-fips
2023-07-24 10:02:19 +03:00
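For reference, consuming the FIPS-compliant image from a Job manifest only requires swapping the tag; a sketch following the shipped job.yaml conventions (the version shown is illustrative):

```yaml
# Hypothetical container spec excerpt using the <version>-ubi-fips tag pattern.
containers:
  - name: kube-bench
    image: docker.io/aquasec/kube-bench:v0.6.17-ubi-fips  # illustrative version
    command: ["kube-bench"]
```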
Andy Pitcher
aa16551811 Fix node.yaml - 4.1.7 and 4.1.8 audit by adding uniq (#1472) 2023-07-11 11:45:06 +03:00
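The problem fixed here is that a ps listing matching the kubelet more than once emits the CA file path, and therefore the stat output, multiple times. The corrected audit pipeline (as it appears in the node.yaml diffs further down) deduplicates with `uniq`:

```yaml
# Audit fragment for 4.1.7; `uniq` collapses duplicate CA file paths from the ps output.
audit: |
  CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
  if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
  if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
```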
Andy Pitcher
40cdc1bfbb Fix test_items in cis-1.7 - node - 4.2.12 (#1469)
Related issue: https://github.com/aquasecurity/kube-bench/issues/1468
2023-07-02 10:50:07 +03:00
dependabot[bot]
e2e353a81a build(deps): bump actions/setup-go from 3 to 4 (#1402)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3 to 4.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-06-24 19:42:03 +03:00
dependabot[bot]
a727d73e8a build(deps): bump golang from 1.19.4 to 1.20.4 (#1436)
Bumps golang from 1.19.4 to 1.20.4.

---
updated-dependencies:
- dependency-name: golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-10 18:07:26 +03:00
chenk
76c25b2db2 release: prepare v0.6.15 (#1455)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-06-06 17:40:44 +03:00
KiranBodipi
ca8743c1f7 add support for VMware Tanzu (TKGI) Benchmarks v1.2.53 (#1452)
* add support for VMware Tanzu (TKGI) Benchmarks v1.2.53
With this change, we are adding:
1. the latest Kubernetes CIS benchmarks for VMware Tanzu 1.2.53
2. logic so that kube-bench can auto-detect the VMware platform and execute the respective VMware TKGI compliance checks
3. a job-tkgi.yaml file to run the benchmark as a Job in a TKGI cluster
Reference document for checks: https://network.pivotal.io/products/p-compliance-scanner/#/releases/1248397
2023-06-01 16:37:50 +03:00
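Pinning the TKGI benchmark explicitly, rather than relying on platform auto-detection, would look roughly like this; a sketch, assuming kube-bench's `--benchmark` flag and the tkgi-1.2.53 config set added here (image tag illustrative):

```yaml
# Hypothetical excerpt of job-tkgi.yaml: run a specific benchmark version.
containers:
  - name: kube-bench
    image: docker.io/aquasec/kube-bench:latest
    command: ["kube-bench", "--benchmark", "tkgi-1.2.53"]
```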
dependabot[bot]
84f80b59b8 build(deps): bump alpine from 3.17 to 3.18 (#1443)
Bumps alpine from 3.17 to 3.18.

---
updated-dependencies:
- dependency-name: alpine
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-26 13:41:30 +03:00
Huang Huang
60dde65d72 support CIS Amazon Elastic Kubernetes Service (EKS) Benchmark v1.2.0 (#1449)
closes #1448
2023-05-21 17:53:58 +03:00
Huang Huang
124c57c6f4 support CIS Kubernetes Benchmark v1.7.0 (#1424) 2023-05-21 15:46:16 +03:00
Huang Huang
e41755ba90 cis-1.24: fix wrong tests for 1.1.1 and 4.2.9 (#1423)
fixes #1410
fixes #1421
2023-05-21 11:39:51 +03:00
dependabot[bot]
6de03bbd7d build(deps): bump github.com/aws/aws-sdk-go-v2 from 1.17.6 to 1.18.0 (#1433)
Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.17.6 to 1.18.0.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.17.6...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-05-20 18:45:31 +03:00
chenk
c2880848f0 release: prepare v0.6.14 (#1446)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-05-18 10:32:39 +03:00
wangxiaoer
968ee5814e replace with constant (#1445) 2023-05-16 11:41:49 +03:00
chenk
29c8f16167 release: prepare v0.6.14-rc (#1442)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-05-15 15:34:00 +03:00
Devendra Turkar
b0e49c8789 fix: ignore the error from findConfigFile (#1440)
When we try to access a file in a directory that is not present, we get a different error.
There is no standard error method to check the message, so a string match was added for this case.
2023-05-15 15:01:30 +03:00
dependabot[bot]
e38c829dbc build(deps): bump gorm.io/gorm from 1.24.2 to 1.25.1 (#1437)
Bumps [gorm.io/gorm](https://github.com/go-gorm/gorm) from 1.24.2 to 1.25.1.
- [Release notes](https://github.com/go-gorm/gorm/releases)
- [Commits](https://github.com/go-gorm/gorm/compare/v1.24.2...v1.25.1)

---
updated-dependencies:
- dependency-name: gorm.io/gorm
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-05-13 19:37:45 +03:00
chenk
8098489433 release: prepare v0.6.13 (#1429)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-04-24 11:02:19 +03:00
Murali Paluru
b43f58dcda add darwin builds (#1428) 2023-04-18 21:15:05 +03:00
chenk
dd6573f3ed release: prepare v0.6.13-rc2 (#1426)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-04-17 16:19:37 +03:00
Devendra Turkar
0ff5dd0b8e chore: Add license file for ubi image (#1425) 2023-04-17 16:07:31 +03:00
chenk
124a8b3a5a release: prepare v0.6.13-rc (#1416)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-04-10 13:59:13 +03:00
Rayan Das
c3b6871766 Fix version in policies.yaml (#1415) 2023-04-07 17:33:52 +03:00
Devendra Turkar
96c6b385ef chore: publish ubi based image (#1412)
* chore: publish ubi based image

- added publish step to publish ubi image
- updated base image for alpine based dockerfile

* chore: update pipeline image to ubuntu-latest
2023-04-05 13:02:36 +03:00
dependabot[bot]
9e41099cec build(deps): bump github.com/aws/aws-sdk-go-v2/service/securityhub (#1397)
Bumps [github.com/aws/aws-sdk-go-v2/service/securityhub](https://github.com/aws/aws-sdk-go-v2) from 1.23.5 to 1.29.1.
- [Release notes](https://github.com/aws/aws-sdk-go-v2/releases)
- [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/CHANGELOG.md)
- [Commits](https://github.com/aws/aws-sdk-go-v2/compare/service/ecs/v1.23.5...service/s3/v1.29.1)

---
updated-dependencies:
- dependency-name: github.com/aws/aws-sdk-go-v2/service/securityhub
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chenk <hen.keinan@gmail.com>
2023-03-25 12:34:54 +03:00
Jack Henschel
0decc8a53f docs: Clarify how to run Job on OpenShift (#1401)
Signed-off-by: Jack Henschel <jackdev@mailbox.org>
2023-03-18 19:30:19 +02:00
dependabot[bot]
7aeb6c3977 build(deps): bump github.com/fatih/color from 1.13.0 to 1.14.1 (#1363)
Bumps [github.com/fatih/color](https://github.com/fatih/color) from 1.13.0 to 1.14.1.
- [Release notes](https://github.com/fatih/color/releases)
- [Commits](https://github.com/fatih/color/compare/v1.13.0...v1.14.1)

---
updated-dependencies:
- dependency-name: github.com/fatih/color
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-03-11 09:29:38 +02:00
chenk
7d0d8ca993 release: prepare v0.6.12 (#1387)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-02-23 13:30:56 +02:00
chenk
823f3e1064 release: prepare v0.6.12-rc (#1385)
Signed-off-by: chenk <hen.keinan@gmail.com>
2023-02-23 09:09:31 +02:00
Devendra Turkar
fc72a8a620 bugfix: false negative when audit_config file not found (#1376)
In the case of RKE, the env error comes with exit status 1, so an OR condition was added to match the error text as well.

resolve: #1364
2023-02-14 10:32:02 +02:00
152 changed files with 33147 additions and 1080 deletions

View File

@@ -14,56 +14,56 @@ on:
- "LICENSE"
- "NOTICE"
env:
GO_VERSION: "1.19"
GO_VERSION: "1.22.7"
KIND_VERSION: "v0.11.1"
KIND_IMAGE: "kindest/node:v1.21.1@sha256:69860bda5563ac81e3c0057d654b5253219618a22ec3a346306239bba8cfa1a6"
jobs:
lint:
name: Lint
runs-on: ubuntu-18.04
runs-on: ubuntu-latest
steps:
- name: Setup Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: yaml-lint
uses: ibiqlik/action-yamllint@v3
- name: Setup golangci-lint
uses: golangci/golangci-lint-action@v3
uses: golangci/golangci-lint-action@v6
with:
version: latest
version: v1.57.2
args: --verbose
unit:
name: Unit tests
runs-on: ubuntu-18.04
runs-on: ubuntu-latest
steps:
- name: Setup Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Run unit tests
run: make tests
- name: Upload code coverage
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
with:
file: ./coverage.txt
e2e:
name: E2e tests
runs-on: ubuntu-18.04
runs-on: ubuntu-latest
steps:
- name: Setup Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Setup Kubernetes cluster (KIND)
uses: engineerd/setup-kind@v0.5.0
uses: engineerd/setup-kind@v0.6.2
with:
version: ${{ env.KIND_VERSION }}
image: ${{ env.KIND_IMAGE }}
@@ -83,19 +83,19 @@ jobs:
expected_result: PASSED
release:
name: Release snapshot
runs-on: ubuntu-18.04
runs-on: ubuntu-latest
needs: [e2e, unit]
steps:
- name: Setup Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Dry-run release snapshot
uses: goreleaser/goreleaser-action@v4
uses: goreleaser/goreleaser-action@v6
with:
distribution: goreleaser
version: v1.7.0

View File

@@ -13,14 +13,14 @@ on:
jobs:
deploy:
name: Deploy documentation
runs-on: ubuntu-18.04
runs-on: ubuntu-latest
steps:
- name: Checkout main
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
persist-credentials: true
- uses: actions/setup-python@v4
- uses: actions/setup-python@v5
with:
python-version: 3.x
- run: |

View File

@@ -9,47 +9,51 @@ env:
ALIAS: aquasecurity
DOCKERHUB_ALIAS: aquasec
REP: kube-bench
jobs:
publish:
name: Publish
runs-on: ubuntu-18.04
runs-on: ubuntu-latest
steps:
- name: Check Out Repo
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
- name: Cache Docker layers
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildxarch-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildxarch-
- name: Login to Docker Hub
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USER }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to ECR
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: public.ecr.aws
username: ${{ secrets.ECR_ACCESS_KEY_ID }}
password: ${{ secrets.ECR_SECRET_ACCESS_KEY }}
- name: Get version
id: get_version
uses: crazy-max/ghaction-docker-meta@v4
uses: crazy-max/ghaction-docker-meta@v5
with:
images: ${{ env.REP }}
tag-semver: |
{{version}}
- name: Extract variables from makefile (kubectl)
id: extract_vars
run: |
echo "KUBECTL_VERSION=$(grep -oP '^KUBECTL_VERSION\s*\?=\s*\K.*' makefile)" >> $GITHUB_ENV
- name: Build and push - Docker/ECR
id: docker_build
uses: docker/build-push-action@v3
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64,linux/ppc64le,linux/s390x
@@ -57,6 +61,7 @@ jobs:
push: true
build-args: |
KUBEBENCH_VERSION=${{ steps.get_version.outputs.version }}
KUBECTL_VERSION=${{ env.KUBECTL_VERSION }}
tags: |
${{ env.DOCKERHUB_ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}
public.ecr.aws/${{ env.ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}
@@ -64,5 +69,43 @@ jobs:
public.ecr.aws/${{ env.ALIAS }}/${{ env.REP }}:latest
cache-from: type=local,src=/tmp/.buildx-cache/release
cache-to: type=local,mode=max,dest=/tmp/.buildx-cache/release
- name: Build and push ubi image - Docker/ECR
id: docker_build_ubi
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64,linux/ppc64le,linux/s390x
builder: ${{ steps.buildx.outputs.name }}
push: true
file: Dockerfile.ubi
build-args: |
KUBEBENCH_VERSION=${{ steps.get_version.outputs.version }}
KUBECTL_VERSION=${{ env.KUBECTL_VERSION }}
tags: |
${{ env.DOCKERHUB_ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}-ubi
public.ecr.aws/${{ env.ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}-ubi
cache-from: type=local,src=/tmp/.buildx-cache/release
cache-to: type=local,mode=max,dest=/tmp/.buildx-cache/release
- name: Image digest
run: echo ${{ steps.docker_build.outputs.digest }}
- name: Build and push fips ubi image - Docker/ECR
id: docker_build_fips_ubi
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64,linux/ppc64le,linux/s390x
builder: ${{ steps.buildx.outputs.name }}
push: true
file: Dockerfile.fips.ubi
build-args: |
KUBEBENCH_VERSION=${{ steps.get_version.outputs.version }}
KUBECTL_VERSION=${{ env.KUBECTL_VERSION }}
tags: |
${{ env.DOCKERHUB_ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}-ubi-fips
public.ecr.aws/${{ env.ALIAS }}/${{ env.REP }}:${{ steps.get_version.outputs.version }}-ubi-fips
cache-from: type=local,src=/tmp/.buildx-cache/release
cache-to: type=local,mode=max,dest=/tmp/.buildx-cache/release
- name: Image digest
run: echo ${{ steps.docker_build.outputs.digest }}

View File

@@ -5,27 +5,27 @@ on:
tags:
- "v*"
env:
GO_VERSION: "1.19"
GO_VERSION: "1.22.7"
KIND_VERSION: "v0.11.1"
KIND_IMAGE: "kindest/node:v1.21.1@sha256:69860bda5563ac81e3c0057d654b5253219618a22ec3a346306239bba8cfa1a6"
jobs:
release:
name: Release
runs-on: ubuntu-18.04
runs-on: ubuntu-latest
steps:
- name: Setup Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
- name: Checkout code
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Run unit tests
run: make tests
- name: Setup Kubernetes cluster (KIND)
uses: engineerd/setup-kind@v0.5.0
uses: engineerd/setup-kind@v0.6.2
with:
version: ${{ env.KIND_VERSION }}
image: ${{ env.KIND_IMAGE }}
@@ -44,7 +44,7 @@ jobs:
second_file_path: integration/testdata/Expected_output.data
expected_result: PASSED
- name: Release
uses: goreleaser/goreleaser-action@v4
uses: goreleaser/goreleaser-action@v6
with:
distribution: goreleaser
version: v1.7.0

View File

@@ -2,12 +2,18 @@
project_name: kube-bench
env:
- GO111MODULE=on
- CGO_ENABLED=0
- KUBEBENCH_CFG=/etc/kube-bench/cfg
builds:
- main: main.go
binary: kube-bench
tags:
- osusergo
- netgo
- static_build
goos:
- linux
- darwin
goarch:
- amd64
- arm
@@ -18,6 +24,9 @@ builds:
- 6
- 7
ldflags:
- "-s"
- "-w"
- "-extldflags '-static'"
- "-X github.com/aquasecurity/kube-bench/cmd.KubeBenchVersion={{.Version}}"
- "-X github.com/aquasecurity/kube-bench/cmd.cfgDir={{.Env.KUBEBENCH_CFG}}"
# Archive customization

View File

@@ -1,4 +1,4 @@
FROM golang:1.19.4 AS build
FROM golang:1.23.3 AS build
WORKDIR /go/src/github.com/aquasecurity/kube-bench/
COPY makefile makefile
COPY go.mod go.sum ./
@@ -9,11 +9,22 @@ COPY internal/ internal/
ARG KUBEBENCH_VERSION
RUN make build && cp kube-bench /go/bin/kube-bench
FROM alpine:3.17.0 AS run
# Add kubectl to run policies checks
ARG KUBECTL_VERSION TARGETARCH
RUN wget -O /usr/local/bin/kubectl "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl"
RUN wget -O kubectl.sha256 "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl.sha256"
# Verify kubectl sha256sum
RUN /bin/bash -c 'echo "$(<kubectl.sha256) /usr/local/bin/kubectl" | sha256sum -c -'
RUN chmod +x /usr/local/bin/kubectl
FROM alpine:3.20.3 AS run
WORKDIR /opt/kube-bench/
# add GNU ps for -C, -o cmd, and --no-headers support
# add GNU ps for -C, -o cmd, --no-headers support and add findutils to get GNU xargs
# https://github.com/aquasecurity/kube-bench/issues/109
RUN apk --no-cache add procps
# https://github.com/aquasecurity/kube-bench/issues/1656
RUN apk --no-cache add procps findutils
# Upgrading apk-tools to remediate CVE-2021-36159 - https://snyk.io/vuln/SNYK-ALPINE314-APKTOOLS-1533752
# https://github.com/aquasecurity/kube-bench/issues/943
@@ -32,8 +43,10 @@ RUN apk add jq
ENV PATH=$PATH:/usr/local/mount-from-host/bin
COPY --from=build /go/bin/kube-bench /usr/local/bin/kube-bench
COPY --from=build /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY entrypoint.sh .
COPY cfg/ cfg/
COPY helper_scripts/check_files_owner_in_dir.sh /go/bin
ENTRYPOINT ["./entrypoint.sh"]
CMD ["install"]

Dockerfile.fips.ubi (new file, 59 lines)
View File

@@ -0,0 +1,59 @@
FROM golang:1.23.3 AS build
WORKDIR /go/src/github.com/aquasecurity/kube-bench/
COPY makefile makefile
COPY go.mod go.sum ./
COPY main.go .
COPY check/ check/
COPY cmd/ cmd/
COPY internal/ internal/
ARG KUBEBENCH_VERSION
RUN make build-fips && cp kube-bench /go/bin/kube-bench
# Add kubectl to run policies checks
ARG KUBECTL_VERSION TARGETARCH
RUN wget -O /usr/local/bin/kubectl "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl"
RUN wget -O kubectl.sha256 "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl.sha256"
# Verify kubectl sha256sum
RUN /bin/bash -c 'echo "$(<kubectl.sha256) /usr/local/bin/kubectl" | sha256sum -c -'
RUN chmod +x /usr/local/bin/kubectl
# ubi9-minimal base image to build with UBI standards
FROM registry.access.redhat.com/ubi9/ubi-minimal as run
RUN microdnf install -y yum findutils openssl \
&& yum -y update-minimal --security --sec-severity=Moderate --sec-severity=Important --sec-severity=Critical \
&& yum update -y \
&& yum install -y glibc \
&& yum update -y glibc \
&& yum install -y procps \
&& yum update -y procps \
&& yum install jq -y \
&& yum clean all \
&& microdnf remove yum || rpm -e -v yum \
&& microdnf clean all
WORKDIR /opt/kube-bench/
ENV PATH=$PATH:/usr/local/mount-from-host/bin
COPY LICENSE /licenses/LICENSE
COPY --from=build /go/bin/kube-bench /usr/local/bin/kube-bench
COPY --from=build /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY entrypoint.sh .
COPY cfg/ cfg/
COPY helper_scripts/check_files_owner_in_dir.sh /go/bin
ENTRYPOINT ["./entrypoint.sh"]
CMD ["install"]
# Build-time metadata as defined at http://label-schema.org
ARG BUILD_DATE
ARG VCS_REF
LABEL org.label-schema.build-date=$BUILD_DATE \
org.label-schema.name="kube-bench" \
org.label-schema.description="Run the CIS Kubernetes Benchmark tests" \
org.label-schema.url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.vcs-ref=$VCS_REF \
org.label-schema.vcs-url="https://github.com/aquasecurity/kube-bench" \
org.label-schema.schema-version="1.0"

View File

@@ -1,4 +1,4 @@
FROM golang:1.19.4 AS build
FROM golang:1.23.3 AS build
WORKDIR /go/src/github.com/aquasecurity/kube-bench/
COPY makefile makefile
COPY go.mod go.sum ./
@@ -9,11 +9,19 @@ COPY internal/ internal/
ARG KUBEBENCH_VERSION
RUN make build && cp kube-bench /go/bin/kube-bench
# Add kubectl to run policies checks
ARG KUBECTL_VERSION TARGETARCH
RUN wget -O /usr/local/bin/kubectl "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl"
RUN wget -O kubectl.sha256 "https://dl.k8s.io/release/v${KUBECTL_VERSION}/bin/linux/${TARGETARCH}/kubectl.sha256"
# Verify kubectl sha256sum
RUN /bin/bash -c 'echo "$(<kubectl.sha256) /usr/local/bin/kubectl" | sha256sum -c -'
RUN chmod +x /usr/local/bin/kubectl
# ubi8-minimal base image for build with ubi standards
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.7 as run
FROM registry.access.redhat.com/ubi9/ubi-minimal as run
RUN microdnf install yum findutils openssl\
RUN microdnf install -y yum findutils openssl \
&& yum -y update-minimal --security --sec-severity=Moderate --sec-severity=Important --sec-severity=Critical \
&& yum update -y \
&& yum install -y glibc \
@@ -29,9 +37,12 @@ WORKDIR /opt/kube-bench/
ENV PATH=$PATH:/usr/local/mount-from-host/bin
COPY LICENSE /licenses/LICENSE
COPY --from=build /go/bin/kube-bench /usr/local/bin/kube-bench
COPY --from=build /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY entrypoint.sh .
COPY cfg/ cfg/
COPY helper_scripts/check_files_owner_in_dir.sh /go/bin
ENTRYPOINT ["./entrypoint.sh"]
CMD ["install"]

View File

@@ -24,7 +24,12 @@ Tests are configured with YAML files, making this tool easy to update as test specifications evolve.
![Kubernetes Bench for Security](/docs/images/output.png "Kubernetes Bench for Security")
### Quick start
## CIS Scanning as part of Trivy and the Trivy Operator
[Trivy](https://github.com/aquasecurity/trivy), the all-in-one cloud native security scanner, can be deployed as a [Kubernetes Operator](https://github.com/aquasecurity/trivy-operator) inside a cluster.
Both the [Trivy CLI](https://github.com/aquasecurity/trivy) and the [Trivy Operator](https://github.com/aquasecurity/trivy-operator) support CIS Kubernetes Benchmark scanning, among several other features.
## Quick start
There are multiple ways to run kube-bench.
You can run kube-bench inside a pod, but it will need access to the host's PID namespace in order to check the running processes, as well as access to some directories on the host where config files and other files are stored.
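Concretely, that means the pod spec needs `hostPID: true` plus read-only host-path mounts for the config directories; a minimal sketch along the lines of the shipped job.yaml (paths illustrative):

```yaml
# Hypothetical minimal pod template: host PID access plus a read-only config mount.
spec:
  hostPID: true
  containers:
    - name: kube-bench
      image: docker.io/aquasec/kube-bench:latest
      command: ["kube-bench"]
      volumeMounts:
        - name: etc-kubernetes
          mountPath: /etc/kubernetes
          readOnly: true
  volumes:
    - name: etc-kubernetes
      hostPath:
        path: /etc/kubernetes
  restartPolicy: Never
```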

View File

@@ -25,14 +25,16 @@ groups:
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
@@ -45,8 +47,6 @@ groups:
compare:
op: bitmask
value: "644"
- flag: "$proxykubeconfig"
set: false
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
@@ -60,8 +60,6 @@ groups:
bin_op: or
test_items:
- flag: root:root
- flag: "$proxykubeconfig"
set: false
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
@@ -97,7 +95,7 @@ groups:
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
@@ -114,7 +112,7 @@ groups:
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
@@ -379,7 +377,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to
If using a Kubelet config file, edit the file to set tlsCipherSuites: to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

View File

@@ -25,14 +25,16 @@ groups:
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
@@ -46,8 +48,6 @@ groups:
compare:
op: bitmask
value: "644"
- flag: "$proxykubeconfig"
set: false
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
@@ -61,8 +61,6 @@ groups:
bin_op: or
test_items:
- flag: root:root
- flag: "$proxykubeconfig"
set: false
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
@@ -98,7 +96,7 @@ groups:
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
@@ -115,7 +113,7 @@ groups:
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
@@ -450,7 +448,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to
If using a Kubelet config file, edit the file to set tlsCipherSuites: to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

View File

@@ -24,14 +24,16 @@ groups:
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
@@ -45,8 +47,6 @@ groups:
compare:
op: bitmask
value: "644"
- flag: "$proxykubeconfig"
set: false
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
@@ -60,8 +60,6 @@ groups:
bin_op: or
test_items:
- flag: root:root
- flag: "$proxykubeconfig"
set: false
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
@@ -97,7 +95,7 @@ groups:
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
@@ -114,7 +112,7 @@ groups:
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
@@ -449,7 +447,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

View File

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml
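As an illustration, such a file redefines only the entries that differ from cfg/config.yaml for this benchmark version; a sketch whose keys follow the config.yaml structure (values are illustrative):

```yaml
# Hypothetical override: point this version's checks at non-default file locations.
master:
  apiserver:
    confs:
      - /etc/kubernetes/manifests/kube-apiserver.yaml
node:
  kubelet:
    svc:
      - /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```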

View File

@@ -0,0 +1,46 @@
---
controls:
version: "cis-1.24"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false
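For 3.2.1 and 3.2.2 together, a policy along these lines would cover the areas listed above; a minimal sketch, not a vetted production policy:

```yaml
# Hypothetical minimal audit policy: Metadata-only for sensitive reads, fuller
# logging for workload modifications, Metadata as the catch-all baseline.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
      - group: "apps"
        resources: ["deployments"]
  - level: Metadata
```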

View File

@@ -0,0 +1,121 @@
---
controls:
version: "cis-1.24"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
- flag: "--key-file"
env: "ETCD_KEY_FILE"
remediation: |
Not applicable. MicroK8s uses dqlite, and communication with this service is done through a
local socket (/var/snap/microk8s/current/var/kubernetes/backend/kine.sock:12379) accessible
to users with root permissions.
scored: false
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Not applicable. MicroK8s uses dqlite, and communication with this service is done through a
local socket (/var/snap/microk8s/current/var/kubernetes/backend/kine.sock:12379) accessible
to users with root permissions.
scored: false
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Not applicable. MicroK8s uses dqlite, and communication with this service is done through a
local socket (/var/snap/microk8s/current/var/kubernetes/backend/kine.sock:12379) accessible
to users with root permissions.
scored: false
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)"
audit: "if test -e /var/snap/microk8s/current/var/kubernetes/backend/cluster.crt && test -e /var/snap/microk8s/current/var/kubernetes/backend/cluster.key; then echo 'certs-found'; fi"
tests:
test_items:
- flag: "certs-found"
remediation: |
The certificate pair for dqlite and TLS peer communication is
/var/snap/microk8s/current/var/kubernetes/backend/cluster.crt and
/var/snap/microk8s/current/var/kubernetes/backend/cluster.key.
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/cat $etcdconf | /bin/grep enable-tls || true"
tests:
bin_op: or
test_items:
- flag: "--enable-tls"
compare:
op: eq
value: true
- flag: "--enable-tls"
set: false
remediation: |
MicroK8s uses dqlite, and peer communication uses TLS if --enable-tls is set in
/var/snap/microk8s/current/args/k8s-dqlite (true by default).
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Not applicable. MicroK8s uses dqlite, and TLS peer communication uses the certificates
created upon snap creation.
scored: false
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
remediation: |
Not applicable. MicroK8s uses dqlite, and TLS peer communication uses the certificates
created upon snap creation.
scored: false

View File

@@ -0,0 +1,948 @@
---
controls:
version: "cis-1.24"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 644 $apiserverconf
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $apiserverconf
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $controllermanagerconf
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $controllermanagerconf
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $schedulerconf
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $schedulerconf
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then stat -c permissions=%a $etcdconf; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $etcdconf
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then stat -c %U:%G $etcdconf; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $etcdconf
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)"
audit: |
find /var/snap/microk8s/current/args/cni-network/10-calico.conflist -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/snap/microk8s/current/args/cni-network/10-calico.conflist -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root <path/to/cni/files>
scored: false
# Etcd is not running on MicroK8s master nodes
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: |
DATA_DIR='/var/snap/microk8s/current/var/kubernetes/backend/'
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c permissions=%a "$DATA_DIR"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "700"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/snap/microk8s/current/var/kubernetes/backend/
scored: true
# Etcd is not running on MicroK8s master nodes
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: |
DATA_DIR='/var/snap/microk8s/current/var/kubernetes/backend/'
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c %U:%G "$DATA_DIR"
tests:
test_items:
- flag: "root:root"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown root:root /var/snap/microk8s/current/var/kubernetes/backend/
scored: true
- id: 1.1.13
text: "Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e /var/snap/microk8s/current/credentials/client.config; then stat -c permissions=%a /var/snap/microk8s/current/credentials/client.config; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /var/snap/microk8s/current/credentials/client.config
scored: true
- id: 1.1.14
text: "Ensure that the admin.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e /var/snap/microk8s/current/credentials/client.config; then stat -c %U:%G /var/snap/microk8s/current/credentials/client.config; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /var/snap/microk8s/current/credentials/client.config
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $schedulerkubeconfig
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $controllermanagerkubeconfig
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c %U:%G $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "find /var/snap/microk8s/current/certs/ | xargs stat -c %U:%G"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /var/snap/microk8s/current/certs/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)"
audit: "find /var/snap/microk8s/current/certs/ -name '*.crt' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /var/snap/microk8s/current/certs/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "find /var/snap/microk8s/current/certs/ -name '*.key' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /var/snap/microk8s/current/certs/*.key
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
- flag: "--anonymous-auth"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--anonymous-auth=false
scored: false
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the control plane node and remove the --token-auth-file=<filename> parameter.
scored: true
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is not set (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: "DenyServiceExternalIPs"
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and remove the `DenyServiceExternalIPs`
from enabled admission plugins.
scored: true
- id: 1.2.4
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the control plane node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
Follow the Kubernetes documentation and setup the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the control plane node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
scored: true
- id: 1.2.6
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.
scored: true
- id: 1.2.9
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
- id: 1.2.10
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.12
text: "Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "SecurityContextDeny"
- flag: "--enable-admission-plugins"
compare:
op: has
value: "PodSecurityPolicy"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...
scored: false
- id: 1.2.13
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.14
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.15
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure the NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.16
text: "Ensure that the --secure-port argument is not set to 0 (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--secure-port"
compare:
op: gt
value: 0
- flag: "--secure-port"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.
scored: true
- id: 1.2.17
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.18
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log
scored: true
- id: 1.2.19
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30
scored: true
- id: 1.2.20
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10
scored: true
- id: 1.2.21
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100
scored: true
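Taken together, checks 1.2.18 through 1.2.21 amount to adding four flags to the API server command. A sketch of the relevant fragment of a kube-apiserver static pod manifest, using the example values from the remediations above (the log path is illustrative):
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --audit-log-path=/var/log/apiserver/audit.log   # 1.2.18
        - --audit-log-maxage=30                           # 1.2.19
        - --audit-log-maxbackup=10                        # 1.2.20
        - --audit-log-maxsize=100                         # 1.2.21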
- id: 1.2.22
text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
audit: "cat $apiserverconf | grep -v grep"
type: manual
remediation: |
Edit the API server pod specification file $apiserverconf
and, if needed, set the --request-timeout parameter to an appropriate value.
For example, --request-timeout=300s
scored: false
- id: 1.2.23
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.24
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=<filename>
scored: true
# MicroK8s does not use etcd. The API server talks to a local dqlite instance.
- id: 1.2.25
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
- flag: "--etcd-keyfile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: false
- id: 1.2.26
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
- flag: "--tls-private-key-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.27
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.28
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: false
- id: 1.2.29
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=</path/to/EncryptionConfig/File>
scored: false
- id: 1.2.30
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: |
ENCRYPTION_PROVIDER_CONFIG=$(cat $apiserverconf | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
tests:
test_items:
- flag: "provider"
compare:
op: valid_elements
value: "aescbc,kms,secretbox"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms, or secretbox as the encryption provider.
scored: false
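A minimal sketch of such an EncryptionConfig file, assuming aescbc as the provider; the key name and the decision to encrypt only Secrets are illustrative:
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1                           # illustrative key name
              secret: <base64-encoded 32-byte key>
      - identity: {}   # fallback so data written before encryption was enabled stays readable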
- id: 1.2.31
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "cat $apiserverconf | grep -v grep"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
scored: false
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "cat $controllermanagerconf | grep -v grep"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "cat $controllermanagerconf | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "cat $controllermanagerconf | grep -v grep"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "cat $controllermanagerconf | grep -v grep"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "cat $controllermanagerconf | grep -v grep"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "cat $controllermanagerconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "cat $controllermanagerconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value is set for the --bind-address parameter.
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "cat $schedulerconf | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "cat $schedulerconf | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value is set for the --bind-address parameter.
scored: true


@@ -0,0 +1,458 @@
---
controls:
version: "cis-1.24"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi' "
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi' "
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)"
audit: "/bin/sh -c 'if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi' "
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: false
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)"
audit: "/bin/sh -c 'if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi' "
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi' "
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi' "
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command to modify the file permissions of the --client-ca-file.
chmod 600 <filename>
scored: false
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: false
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Manual)"
audit: "/bin/sh -c 'if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi' "
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 600 $kubeletconf
scored: false
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)"
audit: "/bin/sh -c 'if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi' "
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: false
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: "{.authentication.anonymous.enabled}"
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
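For check 4.2.1 above and checks 4.2.2 and 4.2.3 below, the Kubelet config file route boils down to a few fields of a KubeletConfiguration. A sketch, assuming a hypothetical CA path:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false                  # 4.2.1
  x509:
    clientCAFile: /path/to/ca.crt   # 4.2.3 (hypothetical path)
authorization:
  mode: Webhook                     # 4.2.2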
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: "{.authorization.mode}"
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: "{.authentication.x509.clientCAFile}"
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Manual)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: "{.readOnlyPort}"
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: "{.readOnlyPort}"
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: "{.streamingConnectionIdleTimeout}"
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: "{.streamingConnectionIdleTimeout}"
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.6
text: "Ensure that the --protect-kernel-defaults argument is set to true (Automated)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --protect-kernel-defaults
path: "{.protectKernelDefaults}"
compare:
op: eq
value: true
remediation: |
If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.7
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: "{.makeIPTablesUtilChains}"
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: "{.makeIPTablesUtilChains}"
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.8
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
audit: "cat $kubeletconf"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.9
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: "{.eventRecordQPS}"
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --event-qps parameter to an appropriate level in the KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.10
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cert-file
path: "{.tlsCertFile}"
- flag: --tls-private-key-file
path: "{.tlsPrivateKeyFile}"
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.11
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: "{.rotateCertificates}"
compare:
op: eq
value: true
- flag: --rotate-certificates
path: "{.rotateCertificates}"
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.12
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: "{.featureGates.RotateKubeletServerCertificate}"
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: "{.featureGates.RotateKubeletServerCertificate}"
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.13
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "cat $kubeletconf"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: "{range .tlsCipherSuites[:]}{}{','}{end}"
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false


@@ -0,0 +1,269 @@
---
controls:
version: "cis-1.24"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
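A sketch of a remediated default service account; the namespace is illustrative, and the change has to be applied in every namespace:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: my-namespace   # illustrative; repeat per namespace
automountServiceAccountToken: false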
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
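One built-in option is Pod Security Admission, which is enabled per namespace through labels. A sketch, assuming the restricted profile and an illustrative namespace name:
apiVersion: v1
kind: Namespace
metadata:
  name: my-app   # illustrative
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted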
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Automated)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Automated)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
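A common starting point is a default-deny ingress policy per namespace, which selects every pod and allows nothing until more specific policies are added. A sketch; the names are illustrative:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # illustrative
  namespace: my-namespace      # illustrative; apply per namespace
spec:
  podSelector: {}              # matches every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed, so all ingress is denied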
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
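A sketch of the file-based approach: the Secret is mounted as a read-only volume and the application reads the files under the mount path. All names and the image are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: secret-as-file-demo   # illustrative
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: app-credentials   # illustrative Secret name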
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
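In context, the snippet above sits under the pod (or container) spec. A minimal sketch of a complete pod definition; the name and image are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo   # illustrative
spec:
  securityContext:     # pod-level; can also be set per container
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image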
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false


@@ -9,18 +9,18 @@ groups:
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)"
text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 644 $apiserverconf
For example, chmod 600 $apiserverconf
scored: true
- id: 1.1.2


@@ -24,14 +24,16 @@ groups:
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
@@ -45,8 +47,6 @@ groups:
compare:
op: bitmask
value: "600"
- flag: "$proxykubeconfig"
set: false
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
@@ -60,8 +60,6 @@ groups:
bin_op: or
test_items:
- flag: root:root
- flag: "$proxykubeconfig"
set: false
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
@@ -97,7 +95,7 @@ groups:
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
@@ -114,7 +112,7 @@ groups:
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
@@ -350,8 +348,12 @@ groups:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: eq
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
@@ -449,7 +451,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `TLSCipherSuites` to
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file


@@ -1,6 +1,6 @@
---
controls:
version: "cis-1.25"
version: "cis-1.24"
id: 5
text: "Kubernetes Policies"
type: "policies"


@@ -25,16 +25,17 @@ groups:
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Scored)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
set: true
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
@@ -48,8 +49,6 @@ groups:
compare:
op: bitmask
value: "644"
- flag: "$proxykubeconfig"
set: false
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
@@ -64,8 +63,6 @@ groups:
test_items:
- flag: root:root
set: true
- flag: "$proxykubeconfig"
set: false
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
@@ -106,7 +103,7 @@ groups:
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Scored)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
@@ -124,7 +121,7 @@ groups:
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Scored)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
@@ -475,7 +472,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to
If using a Kubelet config file, edit the file to set tlsCipherSuites: to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file


@@ -25,10 +25,12 @@ groups:
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
@@ -98,7 +100,7 @@ groups:
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
@@ -115,7 +117,7 @@ groups:
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}')
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
@@ -450,7 +452,7 @@ groups:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set TLSCipherSuites: to
If using a Kubelet config file, edit the file to set tlsCipherSuites: to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file

cfg/cis-1.7/config.yaml (new file, 2 lines)

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml


@@ -0,0 +1,60 @@
---
controls:
version: "cis-1.7"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.1.2
text: "Service account token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.
scored: false
- id: 3.1.3
text: "Bootstrap token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
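A minimal sketch of such a policy, which also anticipates check 3.2.2 below by keeping Secrets, ConfigMaps, and TokenReviews at the Metadata level:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata   # never log Secret payloads
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
      - group: "authentication.k8s.io"
        resources: ["tokenreviews"]
  - level: Metadata   # baseline catch-all for everything else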
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas:
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false

cfg/cis-1.7/etcd.yaml (new file, 135 lines)

@@ -0,0 +1,135 @@
---
controls:
version: "cis-1.7"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
- flag: "--key-file"
env: "ETCD_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/cert-file>
--key-file=</path/to/key-file>
scored: true
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
env: "ETCD_PEER_CERT_FILE"
- flag: "--peer-key-file"
env: "ETCD_PEER_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--peer-client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: false

cfg/cis-1.7/master.yaml (new file, 946 lines)

@@ -0,0 +1,946 @@
---
controls:
version: "cis-1.7"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 600 $apiserverconf
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $apiserverconf
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $controllermanagerconf
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $controllermanagerconf
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $schedulerconf
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $schedulerconf
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c permissions=%a; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $etcdconf
scored: true
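# Note (editorial): `use_multiple_values: true` makes each line of the audit
# output be evaluated separately, so every file matched by the find command
# above must individually satisfy the permissions test.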
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c %U:%G; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $etcdconf
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root <path/to/cni/files>
scored: false
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c permissions=%a "$DATA_DIR"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "700"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/etcd
scored: true
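# Note (editorial): the audit script above prefers the --data-dir value of the
# running etcd process and falls back to the $etcddatadir default only when no
# such directory exists.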
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c %U:%G "$DATA_DIR"
tests:
test_items:
- flag: "etcd:etcd"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
scored: true
- id: 1.1.13
text: "Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c permissions=%a /etc/kubernetes/admin.conf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /etc/kubernetes/admin.conf
scored: true
- id: 1.1.14
text: "Ensure that the admin.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c %U:%G /etc/kubernetes/admin.conf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /etc/kubernetes/admin.conf
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $schedulerkubeconfig
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $controllermanagerkubeconfig
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c %U:%G $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "find /etc/kubernetes/pki/ | xargs stat -c %U:%G"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /etc/kubernetes/pki/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.crt' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.key' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.key
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--anonymous-auth=false
scored: false
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the control plane node and remove the --token-auth-file=<filename> parameter.
scored: true
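# Note (editorial): `set: false`, as used in 1.2.2 above, asserts that the flag
# does not appear in the audit output at all. Combined with `bin_op: or` in
# later checks, it encodes "flag absent, or present with a safe value".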
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "DenyServiceExternalIPs"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and add the `DenyServiceExternalIPs` plugin
to the enabled admission plugins, for example --enable-admission-plugins=DenyServiceExternalIPs.
scored: false
- id: 1.2.4
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the control plane node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the control plane node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
scored: true
- id: 1.2.6
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.
scored: true
- id: 1.2.9
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
- id: 1.2.10
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.12
text: "Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "SecurityContextDeny"
- flag: "--enable-admission-plugins"
compare:
op: has
value: "PodSecurityPolicy"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...
scored: false
- id: 1.2.13
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.14
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.15
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure the NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.16
text: "Ensure that the --secure-port argument is not set to 0 - NoteThis recommendation is obsolete and will be deleted per the consensus process (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--secure-port"
compare:
op: gt
value: 0
- flag: "--secure-port"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.
scored: false
- id: 1.2.17
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.18
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log
scored: true
- id: 1.2.19
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30
scored: true
- id: 1.2.20
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10
scored: true
- id: 1.2.21
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it to 100 MB, --audit-log-maxsize=100
scored: true
- id: 1.2.22
text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
type: manual
remediation: |
Edit the API server pod specification file $apiserverconf
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s
scored: false
- id: 1.2.23
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.24
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=<filename>
scored: true
- id: 1.2.25
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
- flag: "--etcd-keyfile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: true
- id: 1.2.26
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
- flag: "--tls-private-key-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.27
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.28
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: true
- id: 1.2.29
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=</path/to/EncryptionConfig/File>
scored: false
- id: 1.2.30
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: |
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep $apiserverbin | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
tests:
test_items:
- flag: "provider"
compare:
op: valid_elements
value: "aescbc,kms,secretbox"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
scored: false
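# A minimal EncryptionConfiguration sketch for 1.2.30 (editorial illustration;
# the key name and secret are placeholders, and the file lives wherever
# --encryption-provider-config points):
# apiVersion: apiserver.config.k8s.io/v1
# kind: EncryptionConfiguration
# resources:
#   - resources: ["secrets"]
#     providers:
#       - aescbc:
#           keys:
#             - name: key1
#               secret: <base64-encoded 32-byte key>
#       - identity: {}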
- id: 1.2.31
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
scored: false
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value for the --bind-address parameter.
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value for the --bind-address parameter.
scored: true

cfg/cis-1.7/node.yaml Normal file

@@ -0,0 +1,455 @@
---
controls:
version: "cis-1.7"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: false
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command to modify the file permissions of the
--client-ca-file chmod 600 <filename>
scored: false
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: false
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 600 $kubeletconf
scored: false
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: false
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
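# Note (editorial): the `path` entries in this group are JSONPath expressions
# evaluated against the kubelet config file read by audit_config. A config
# satisfying 4.2.1 would contain, for example:
# apiVersion: kubelet.config.k8s.io/v1beta1
# kind: KubeletConfiguration
# authentication:
#   anonymous:
#     enabled: false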
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.7
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
audit: "/bin/ps -fC $kubeletbin "
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.8
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --event-qps parameter in the KUBELET_SYSTEM_PODS_ARGS variable to an appropriate level.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.11
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
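# Note (editorial): the config-file equivalent of the feature gate in 4.2.11
# would be, for example:
# featureGates:
#   RotateKubeletServerCertificate: true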
- id: 4.2.12
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.13
text: "Ensure that a limit is set on pod PIDs (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --pod-max-pids
path: '{.podPidsLimit}'
remediation: |
Decide on an appropriate level for this parameter and set it,
either via the --pod-max-pids command line parameter or the `podPidsLimit` configuration file setting.
scored: false

cfg/cis-1.7/policies.yaml Normal file

@@ -0,0 +1,304 @@
---
controls:
version: "cis-1.7"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value:
automountServiceAccountToken: false
scored: false
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.1.9
text: "Minimize access to create persistent volumes (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to PersistentVolume objects in the cluster.
scored: false
- id: 5.1.10
text: "Minimize access to the proxy sub-resource of nodes (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the proxy sub-resource of node objects.
scored: false
- id: 5.1.11
text: "Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
scored: false
- id: 5.1.12
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
scored: false
- id: 5.1.13
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the token sub-resource of serviceaccount objects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
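# One way to satisfy 5.2.1 with Pod Security Admission is to label each user
# namespace (editorial sketch; the namespace name and enforcement level are
# examples only):
# apiVersion: v1
# kind: Namespace
# metadata:
#   name: my-app
#   labels:
#     pod-security.kubernetes.io/enforce: baseline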
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Automated)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with a range of UIDs that does not include 0 is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Automated)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and set up image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false

cfg/cis-1.8/config.yaml Normal file

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml
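## For example, an override placed here might repoint a component's manifest
## path (editorial sketch with hypothetical values; keys mirror cfg/config.yaml):
## master:
##   apiserver:
##     confs:
##       - /etc/kubernetes/manifests/kube-apiserver.yaml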

cfg/cis-1.8/controlplane.yaml Normal file

@@ -0,0 +1,60 @@
---
controls:
version: "cis-1.8"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.1.2
text: "Service account token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.
scored: false
- id: 3.1.3
text: "Bootstrap token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false

135
cfg/cis-1.8/etcd.yaml Normal file

@@ -0,0 +1,135 @@
---
controls:
version: "cis-1.8"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
- flag: "--key-file"
env: "ETCD_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/cert-file>
--key-file=</path/to/key-file>
scored: true
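For example, in a kubeadm-style static pod manifest the flags would appear in the etcd command line as below (the pki paths are kubeadm defaults, used here as assumptions):
spec:
  containers:
  - command:
    - etcd
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt   # assumed kubeadm default path
    - --key-file=/etc/kubernetes/pki/etcd/server.key    # assumed kubeadm default path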
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
env: "ETCD_PEER_CERT_FILE"
- flag: "--peer-key-file"
env: "ETCD_PEER_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--peer-client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: false

928
cfg/cis-1.8/master.yaml Normal file

@@ -0,0 +1,928 @@
---
controls:
version: "cis-1.8"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 600 $apiserverconf
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $apiserverconf
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $controllermanagerconf
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $controllermanagerconf
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $schedulerconf
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $schedulerconf
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c permissions=%a; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $etcdconf
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c %U:%G; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $etcdconf
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root <path/to/cni/files>
scored: false
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c permissions=%a "$DATA_DIR"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "700"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/etcd
scored: true
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c %U:%G "$DATA_DIR"
tests:
test_items:
- flag: "etcd:etcd"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
scored: true
- id: 1.1.13
text: "Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c permissions=%a /etc/kubernetes/admin.conf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /etc/kubernetes/admin.conf
scored: true
- id: 1.1.14
text: "Ensure that the admin.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c %U:%G /etc/kubernetes/admin.conf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /etc/kubernetes/admin.conf
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $schedulerkubeconfig
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $controllermanagerkubeconfig
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c %U:%G $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "find /etc/kubernetes/pki/ | xargs stat -c %U:%G"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /etc/kubernetes/pki/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.crt' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.key' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.key
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--anonymous-auth=false
scored: false
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the control plane node and remove the --token-auth-file=<filename> parameter.
scored: true
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "DenyServiceExternalIPs"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and add the `DenyServiceExternalIPs` plugin
to the enabled admission plugins, for example --enable-admission-plugins=DenyServiceExternalIPs.
scored: false
- id: 1.2.4
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the control plane node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the control plane node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
scored: true
- id: 1.2.6
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.
scored: true
- id: 1.2.9
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
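A sketch of the two files involved, following the upstream EventRateLimit documentation (the file name eventconfig.yaml and the limits are illustrative assumptions):
# admission control config, passed via --admission-control-config-file
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: EventRateLimit
  path: eventconfig.yaml   # hypothetical file name
---
# eventconfig.yaml: the EventRateLimit configuration itself
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Namespace   # rate-limit event creation per namespace
  qps: 50           # illustrative values
  burst: 100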
- id: 1.2.10
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.12
text: "Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "SecurityContextDeny"
- flag: "--enable-admission-plugins"
compare:
op: has
value: "PodSecurityPolicy"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...
scored: false
- id: 1.2.13
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.14
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.15
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.16
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.17
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log
scored: true
- id: 1.2.18
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30
scored: true
- id: 1.2.19
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10
scored: true
- id: 1.2.20
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100
scored: true
- id: 1.2.21
text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
type: manual
remediation: |
Edit the API server pod specification file $apiserverconf
and set the below parameter as appropriate, if needed.
For example, --request-timeout=300s
scored: false
- id: 1.2.22
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.23
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=<filename>
scored: true
- id: 1.2.24
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
- flag: "--etcd-keyfile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: true
- id: 1.2.25
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
- flag: "--tls-private-key-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.26
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.27
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: true
- id: 1.2.28
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=</path/to/EncryptionConfig/File>
scored: false
- id: 1.2.29
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: |
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep $apiserverbin | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
tests:
test_items:
- flag: "provider"
compare:
op: valid_elements
value: "aescbc,kms,secretbox"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
scored: false
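A minimal sketch of such a file using the aescbc provider (the key name is a placeholder; the secret must be a base64-encoded 32-byte key):
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1                        # hypothetical key name
        secret: <base64-encoded-32-byte-key>
  - identity: {}                          # fallback so existing unencrypted data stays readable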
- id: 1.2.30
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
scored: false
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true

455
cfg/cis-1.8/node.yaml Normal file

@@ -0,0 +1,455 @@
---
controls:
version: "cis-1.8"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: false
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command to modify the file permissions of the --client-ca-file.
chmod 600 <filename>
scored: false
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: false
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 600 $kubeletconf
scored: true
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
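A kubelet config-file sketch covering this check together with the related authorization check 4.2.2 (only the two fields below are required by the benchmark; everything else in the file is left to your setup):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false   # 4.2.1: reject anonymous requests
authorization:
  mode: Webhook      # 4.2.2: delegate authorization to the API server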
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.7
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
audit: "/bin/ps -fC $kubeletbin "
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.8
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --event-qps parameter to an appropriate level in the KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.11
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.12
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.13
text: "Ensure that a limit is set on pod PIDs (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --pod-max-pids
path: '{.podPidsLimit}'
remediation: |
Decide on an appropriate level for this parameter and set it,
either via the --pod-max-pids command line parameter or the `podPidsLimit` configuration file setting.
scored: false
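A config-file sketch (the limit of 4096 is an arbitrary illustrative value; size it to your workloads):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 4096   # illustrative value only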

304
cfg/cis-1.8/policies.yaml Normal file

@@ -0,0 +1,304 @@
---
controls:
version: "cis-1.8"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
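One way to enumerate the bindings before deciding is a command along these lines (a sketch; adjust the columns to taste):
kubectl get clusterrolebindings -o custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name | grep cluster-admin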
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
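For the default service account in each namespace, the modified object would look roughly like this (the namespace is a placeholder):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: <namespace>
automountServiceAccountToken: false   # stop auto-mounting the token into pods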
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.1.9
text: "Minimize access to create persistent volumes (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to PersistentVolume objects in the cluster.
scored: false
- id: 5.1.10
text: "Minimize access to the proxy sub-resource of nodes (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the proxy sub-resource of node objects.
scored: false
- id: 5.1.11
text: "Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
scored: false
- id: 5.1.12
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
scored: false
- id: 5.1.13
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the token sub-resource of serviceaccount objects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
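With Pod Security Admission, one sketch is to label each user-workload namespace (the namespace name and the chosen level are illustrative; baseline or restricted may fit depending on the workload):
apiVersion: v1
kind: Namespace
metadata:
  name: example-app                                  # placeholder namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted   # enforce the restricted level
    pod-security.kubernetes.io/warn: restricted      # also warn on violations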
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Automated)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
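# Illustrative container securityContext that such a policy would require
# (the UID is hypothetical; any non-zero UID works):
#
#   securityContext:
#     runAsNonRoot: true
#     runAsUser: 10001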
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
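# Illustrative container securityContext dropping the capability (dropping
# ALL is the stricter variant):
#
#   securityContext:
#     capabilities:
#       drop: ["NET_RAW"]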
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Automated)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a policy which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
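# Illustrative default-deny ingress policy giving a namespace a baseline
# NetworkPolicy (the namespace name is hypothetical):
#
#   apiVersion: networking.k8s.io/v1
#   kind: NetworkPolicy
#   metadata:
#     name: default-deny-ingress
#     namespace: user-workloads
#   spec:
#     podSelector: {}
#     policyTypes: ["Ingress"]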
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
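# Illustrative Pod fragment mounting a Secret as files rather than exposing
# it through `env`/`envFrom` (names and image are hypothetical):
#
#   spec:
#     containers:
#       - name: app
#         image: registry.example.com/app:latest
#         volumeMounts:
#           - name: creds
#             mountPath: /etc/creds
#             readOnly: true
#     volumes:
#       - name: creds
#         secret:
#           secretName: app-creds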
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false

cfg/cis-1.9/config.yaml Normal file

@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml

cfg/cis-1.9/controlplane.yaml Normal file

@@ -0,0 +1,62 @@
---
controls:
version: "cis-1.9"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.1.2
text: "Service account token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.
scored: false
- id: 3.1.3
text: "Bootstrap token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas:
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false
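# Illustrative minimal audit policy covering the areas above (the rule set is
# a sketch, not the benchmark's required policy):
#
#   apiVersion: audit.k8s.io/v1
#   kind: Policy
#   rules:
#     # Secrets, ConfigMaps and TokenReviews: Metadata only, to avoid logging payloads.
#     - level: Metadata
#       resources:
#         - group: ""
#           resources: ["secrets", "configmaps"]
#         - group: "authentication.k8s.io"
#           resources: ["tokenreviews"]
#     # Pod/Deployment changes and exec/proxy-style sub-resources.
#     - level: RequestResponse
#       resources:
#         - group: ""
#           resources: ["pods", "pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
#         - group: "apps"
#           resources: ["deployments"]
#     # Everything else at the basic Metadata level.
#     - level: Metadata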

cfg/cis-1.9/etcd.yaml Normal file

@@ -0,0 +1,135 @@
---
controls:
version: "cis-1.9"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
- flag: "--key-file"
env: "ETCD_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/ca-file>
--key-file=</path/to/key-file>
scored: true
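# Illustrative fragment of /etc/kubernetes/manifests/etcd.yaml with the two
# flags set (the paths shown are kubeadm defaults; yours may differ):
#
#   spec:
#     containers:
#       - command:
#           - etcd
#           - --cert-file=/etc/kubernetes/pki/etcd/server.crt
#           - --key-file=/etc/kubernetes/pki/etcd/server.key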
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
env: "ETCD_PEER_CERT_FILE"
- flag: "--peer-key-file"
env: "ETCD_PEER_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--peer-client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: false

cfg/cis-1.9/master.yaml Normal file

@@ -0,0 +1,919 @@
---
controls:
version: "cis-1.9"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 600 $apiserverconf
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $apiserverconf
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $controllermanagerconf
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $controllermanagerconf
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $schedulerconf
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $schedulerconf
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c permissions=%a; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $etcdconf
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c %U:%G; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $etcdconf
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root <path/to/cni/files>
scored: false
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c permissions=%a "$DATA_DIR"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "700"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/etcd
scored: true
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: |
DATA_DIR=''
for d in $(ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%'); do
if test -d "$d"; then DATA_DIR="$d"; fi
done
if ! test -d "$DATA_DIR"; then DATA_DIR=$etcddatadir; fi
stat -c %U:%G "$DATA_DIR"
tests:
test_items:
- flag: "etcd:etcd"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
scored: true
- id: 1.1.13
text: "Ensure that the default administrative credential file permissions are set to 600 (Automated)"
audit: |
for adminconf in /etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf; do if test -e $adminconf; then stat -c "permissions=%a %n" $adminconf; fi; done
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chmod 600 /etc/kubernetes/super-admin.conf
scored: true
- id: 1.1.14
text: "Ensure that the default administrative credential file ownership is set to root:root (Automated)"
audit: |
for adminconf in /etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf; do if test -e $adminconf; then stat -c "ownership=%U:%G %n" $adminconf; fi; done
use_multiple_values: true
tests:
test_items:
- flag: "ownership"
compare:
op: eq
value: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chown root:root /etc/kubernetes/super-admin.conf
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $schedulerkubeconfig
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $controllermanagerkubeconfig
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c %U:%G $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "find /etc/kubernetes/pki/ | xargs stat -c %U:%G"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /etc/kubernetes/pki/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.crt' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "find /etc/kubernetes/pki/ -name '*.key' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.key
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--anonymous-auth=false
scored: false
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the control plane node and remove the --token-auth-file=<filename> parameter.
scored: true
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "DenyServiceExternalIPs"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and add the `DenyServiceExternalIPs` plugin
to the enabled admission plugins, for example --enable-admission-plugins=...,DenyServiceExternalIPs,...
scored: false
- id: 1.2.4
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the control plane node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
Follow the Kubernetes documentation and setup the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the control plane node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
scored: true
- id: 1.2.6
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.
scored: true
- id: 1.2.9
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
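# Illustrative pair of files for this check (paths are hypothetical). The
# admission configuration file referenced by --admission-control-config-file:
#
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: AdmissionConfiguration
#   plugins:
#     - name: EventRateLimit
#       path: /etc/kubernetes/eventconfig.yaml
#
# and the EventRateLimit configuration it points to:
#
#   apiVersion: eventratelimit.admission.k8s.io/v1alpha1
#   kind: Configuration
#   limits:
#     - type: Namespace
#       qps: 50
#       burst: 100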
- id: 1.2.10
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.12
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.13
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.14
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.15
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.16
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log
scored: true
- id: 1.2.17
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30
scored: true
- id: 1.2.18
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10
scored: true
- id: 1.2.19
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100
scored: true
- id: 1.2.20
text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
type: "manual"
remediation: |
Edit the API server pod specification file $apiserverconf
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s
scored: false
- id: 1.2.21
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.22
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=<filename>
scored: true
- id: 1.2.23
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
- flag: "--etcd-keyfile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: true
- id: 1.2.24
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
- flag: "--tls-private-key-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.25
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.26
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: true
- id: 1.2.27
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=</path/to/EncryptionConfig/File>
scored: false
- id: 1.2.28
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: |
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep $apiserverbin | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
tests:
test_items:
- flag: "provider"
compare:
op: valid_elements
value: "aescbc,kms,secretbox"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
scored: false
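# Illustrative EncryptionConfiguration using aescbc (the key material is a
# placeholder; generate your own random 32-byte key):
#
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: EncryptionConfiguration
#   resources:
#     - resources: ["secrets"]
#       providers:
#         - aescbc:
#             keys:
#               - name: key1
#                 secret: <base64-encoded 32-byte key>
#         - identity: {}   # fallback for reading not-yet-encrypted data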
- id: 1.2.29
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
scored: false
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value for the --bind-address parameter.
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf file
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value for the --bind-address parameter.
scored: true

cfg/cis-1.9/node.yaml Normal file

@@ -0,0 +1,478 @@
---
controls:
version: "cis-1.9"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: false
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c permissions=%a $CAFILE; fi
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command to modify the file permissions of the
--client-ca-file chmod 600 <filename>
scored: false
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: |
CAFILE=$(ps -ef | grep kubelet | grep -v apiserver | grep -- --client-ca-file= | awk -F '--client-ca-file=' '{print $2}' | awk '{print $1}' | uniq)
if test -z $CAFILE; then CAFILE=$kubeletcafile; fi
if test -e $CAFILE; then stat -c %U:%G $CAFILE; fi
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: false
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 600 $kubeletconf
scored: true
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
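# Illustrative Kubelet config fragment for this and the following check
# (the file is typically /var/lib/kubelet/config.yaml, i.e. $kubeletconf):
#
#   apiVersion: kubelet.config.k8s.io/v1beta1
#   kind: KubeletConfiguration
#   authentication:
#     anonymous:
#       enabled: false
#   authorization:
#     mode: Webhook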
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.7
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
audit: "/bin/ps -fC $kubeletbin"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.8
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --event-qps parameter in the KUBELET_SYSTEM_PODS_ARGS variable to an appropriate level.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true`, or
remove the setting altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.11
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.12
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.13
text: "Ensure that a limit is set on pod PIDs (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --pod-max-pids
path: '{.podPidsLimit}'
remediation: |
Decide on an appropriate level for this parameter and set it,
either via the --pod-max-pids command line parameter or the `podPidsLimit` configuration file setting.
scored: false
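# Illustrative sketch, not part of the benchmark: in a kubelet config file the limit is
# the podPidsLimit field, e.g.
#   podPidsLimit: 4096
# or, as a command line flag, --pod-max-pids=4096 (4096 is an example value; pick a
# level appropriate for your workloads).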
- id: 4.3
text: "kube-proxy"
checks:
- id: 4.3.1
text: "Ensure that the kube-proxy metrics service is bound to localhost (Automated)"
audit: "/bin/ps -fC $proxybin"
audit_config: "/bin/sh -c 'if test -e $proxykubeconfig; then cat $proxykubeconfig; fi'"
tests:
bin_op: or
test_items:
- flag: "--metrics-bind-address"
path: '{.metricsBindAddress}'
compare:
op: has
value: "127.0.0.1"
- flag: "--metrics-bind-address"
path: '{.metricsBindAddress}'
set: false
remediation: |
Modify or remove any values which bind the metrics service to a non-localhost address.
The default value is 127.0.0.1:10249.
scored: true
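# Illustrative sketch, not part of the benchmark: in a kube-proxy configuration file the
# compliant setting looks like
#   metricsBindAddress: 127.0.0.1:10249
# which keeps the metrics endpoint bound to localhost.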

cfg/cis-1.9/policies.yaml

@@ -0,0 +1,405 @@
---
controls:
version: "cis-1.9"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
audit: |
kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name --no-headers | while read -r role_name role_binding subject
do
if [[ "${role_name}" != "cluster-admin" && "${role_binding}" == "cluster-admin" ]]; then
is_compliant="false"
else
is_compliant="true"
fi;
echo "**role_name: ${role_name} role_binding: ${role_binding} subject: ${subject} is_compliant: ${is_compliant}"
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role: kubectl delete clusterrolebinding [name]
Condition: is_compliant is false if role_name is not cluster-admin and role_binding is cluster-admin.
scored: true
- id: 5.1.2
text: "Minimize access to secrets (Automated)"
audit: "echo \"canGetListWatchSecretsAsSystemAuthenticated: $(kubectl auth can-i get,list,watch secrets --all-namespaces --as=system:authenticated)\""
tests:
test_items:
- flag: "canGetListWatchSecretsAsSystemAuthenticated"
compare:
op: eq
value: no
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: true
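# Illustrative usage note: the audit above can be reproduced by hand with
#   kubectl auth can-i get,list,watch secrets --all-namespaces --as=system:authenticated
# which should print "no" on a compliant cluster.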
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
audit: |
# Check Roles
kubectl get roles --all-namespaces -o custom-columns=ROLE_NAMESPACE:.metadata.namespace,ROLE_NAME:.metadata.name --no-headers | while read -r role_namespace role_name
do
role_rules=$(kubectl get role -n "${role_namespace}" "${role_name}" -o=json | jq -c '.rules')
if echo "${role_rules}" | grep -q "\[\"\*\"\]"; then
role_is_compliant="false"
else
role_is_compliant="true"
fi;
echo "**role_name: ${role_name} role_namespace: ${role_namespace} role_rules: ${role_rules} role_is_compliant: ${role_is_compliant}"
done
# Check ClusterRoles
kubectl get clusterroles -o custom-columns=CLUSTERROLE_NAME:.metadata.name --no-headers | while read -r clusterrole_name
do
clusterrole_rules=$(kubectl get clusterrole "${clusterrole_name}" -o=json | jq -c '.rules')
if echo "${clusterrole_rules}" | grep -q "\[\"\*\"\]"; then
clusterrole_is_compliant="false"
else
clusterrole_is_compliant="true"
fi;
echo "**clusterrole_name: ${clusterrole_name} clusterrole_rules: ${clusterrole_rules} clusterrole_is_compliant: ${clusterrole_is_compliant}"
done
use_multiple_values: true
tests:
bin_op: or
test_items:
- flag: "role_is_compliant"
compare:
op: eq
value: true
set: true
- flag: "clusterrole_is_compliant"
compare:
op: eq
value: true
set: true
remediation: |
Where possible replace any use of wildcards ["*"] in roles and clusterroles with specific
objects or actions.
Condition: role_is_compliant is false if ["*"] is found in rules.
Condition: clusterrole_is_compliant is false if ["*"] is found in rules.
scored: true
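# Illustrative sketch, not part of the benchmark: a wildcard-free rule names resources
# and verbs explicitly, e.g.
#   rules:
#   - apiGroups: [""]
#     resources: ["pods"]
#     verbs: ["get", "list"]
# instead of resources: ["*"] / verbs: ["*"].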
- id: 5.1.4
text: "Minimize access to create pods (Automated)"
audit: |
echo "canCreatePodsAsSystemAuthenticated: $(kubectl auth can-i create pods --all-namespaces --as=system:authenticated)"
tests:
test_items:
- flag: "canCreatePodsAsSystemAuthenticated"
compare:
op: eq
value: no
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: true
- id: 5.1.5
text: "Ensure that default service accounts are not actively used (Automated)"
audit: |
kubectl get serviceaccount --all-namespaces --field-selector metadata.name=default -o=json | jq -r '.items[] | " namespace: \(.metadata.namespace), kind: \(.kind), name: \(.metadata.name), automountServiceAccountToken: \(.automountServiceAccountToken | if . == null then "notset" else . end )"' | xargs -L 1
use_multiple_values: true
tests:
test_items:
- flag: "automountServiceAccountToken"
compare:
op: eq
value: false
set: true
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
`automountServiceAccountToken: false`.
scored: true
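# Illustrative sketch, not part of the benchmark: one way to apply this per namespace is
#   kubectl patch serviceaccount default -n <namespace> \
#     -p '{"automountServiceAccountToken": false}'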
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
audit: |
kubectl get pods --all-namespaces -o custom-columns=POD_NAMESPACE:.metadata.namespace,POD_NAME:.metadata.name,POD_SERVICE_ACCOUNT:.spec.serviceAccount,POD_IS_AUTOMOUNTSERVICEACCOUNTTOKEN:.spec.automountServiceAccountToken --no-headers | while read -r pod_namespace pod_name pod_service_account pod_is_automountserviceaccounttoken
do
# Retrieve automountServiceAccountToken's value for ServiceAccount and Pod, set to notset if null or <none>.
svacc_is_automountserviceaccounttoken=$(kubectl get serviceaccount -n "${pod_namespace}" "${pod_service_account}" -o json | jq -r '.automountServiceAccountToken' | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
pod_is_automountserviceaccounttoken=$(echo "${pod_is_automountserviceaccounttoken}" | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
if [ "${svacc_is_automountserviceaccounttoken}" = "false" ] && ( [ "${pod_is_automountserviceaccounttoken}" = "false" ] || [ "${pod_is_automountserviceaccounttoken}" = "notset" ] ); then
is_compliant="true"
elif [ "${svacc_is_automountserviceaccounttoken}" = "true" ] && [ "${pod_is_automountserviceaccounttoken}" = "false" ]; then
is_compliant="true"
else
is_compliant="false"
fi
echo "**namespace: ${pod_namespace} pod_name: ${pod_name} service_account: ${pod_service_account} pod_is_automountserviceaccounttoken: ${pod_is_automountserviceaccounttoken} svacc_is_automountServiceAccountToken: ${svacc_is_automountserviceaccounttoken} is_compliant: ${is_compliant}"
done
use_multiple_values: true
tests:
test_items:
- flag: "is_compliant"
compare:
op: eq
value: true
remediation: |
Modify the definition of ServiceAccounts and Pods which do not need to mount service
account tokens to disable it, with `automountServiceAccountToken: false`.
If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.
Condition: Pod is_compliant to true when
- ServiceAccount is automountServiceAccountToken: false and Pod is automountServiceAccountToken: false or notset
- ServiceAccount is automountServiceAccountToken: true and Pod is automountServiceAccountToken: false
scored: true
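# Illustrative sketch, not part of the benchmark: the Pod-level override looks like
#   spec:
#     automountServiceAccountToken: false
# which, as noted above, takes precedence over the ServiceAccount setting.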
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.1.9
text: "Minimize access to create persistent volumes (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to PersistentVolume objects in the cluster.
scored: false
- id: 5.1.10
text: "Minimize access to the proxy sub-resource of nodes (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the proxy sub-resource of node objects.
scored: false
- id: 5.1.11
text: "Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
scored: false
- id: 5.1.12
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
scored: false
- id: 5.1.13
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the token sub-resource of serviceaccount objects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
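# Illustrative sketch, not part of the benchmark: Pod Security Admission is enabled per
# namespace with labels, e.g.
#   kubectl label namespace <namespace> pod-security.kubernetes.io/enforce=baseline
# (baseline is an example level; restricted is stricter).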
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Manual)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Manual)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a policy which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
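# Illustrative sketch, not part of the benchmark: a minimal default-deny ingress policy
# for a namespace is
#   apiVersion: networking.k8s.io/v1
#   kind: NetworkPolicy
#   metadata:
#     name: default-deny-ingress
#     namespace: <namespace>
#   spec:
#     podSelector: {}
#     policyTypes: ["Ingress"]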
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is shown below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false
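# Illustrative usage note: resources left in the default namespace can be listed with
#   kubectl get all -n default
# and anything beyond the built-in kubernetes Service should be migrated to a dedicated namespace.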


@@ -36,6 +36,7 @@ master:
- /var/snap/microk8s/current/args/kube-apiserver
- /etc/origin/master/master-config.yaml
- /etc/kubernetes/manifests/talos-kube-apiserver.yaml
- /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
defaultconf: /etc/kubernetes/manifests/kube-apiserver.yaml
scheduler:
@@ -53,6 +54,7 @@ master:
- /var/snap/microk8s/current/args/kube-scheduler
- /etc/origin/master/scheduler.json
- /etc/kubernetes/manifests/talos-kube-scheduler.yaml
- /var/lib/rancher/rke2/agent/pod-manifests/kube-scheduler.yaml
defaultconf: /etc/kubernetes/manifests/kube-scheduler.yaml
kubeconfig:
- /etc/kubernetes/scheduler.conf
@@ -77,6 +79,7 @@ master:
- /var/snap/kube-controller-manager/current/args
- /var/snap/microk8s/current/args/kube-controller-manager
- /etc/kubernetes/manifests/talos-kube-controller-manager.yaml
- /var/lib/rancher/rke2/agent/pod-manifests/kube-controller-manager.yaml
defaultconf: /etc/kubernetes/manifests/kube-controller-manager.yaml
kubeconfig:
- /etc/kubernetes/controller-manager.conf
@@ -92,6 +95,7 @@ master:
datadirs:
- /var/lib/etcd/default.etcd
- /var/lib/etcd/data.etcd
- /var/lib/rancher/k3s/server/db/etcd
confs:
- /etc/kubernetes/manifests/etcd.yaml
- /etc/kubernetes/manifests/etcd.yml
@@ -101,6 +105,8 @@ master:
- /var/snap/etcd/common/etcd.conf.yaml
- /var/snap/microk8s/current/args/etcd
- /usr/lib/systemd/system/etcd.service
- /var/lib/rancher/rke2/server/db/etcd/config
- /var/lib/rancher/k3s/server/db/etcd/config
defaultconf: /etc/kubernetes/manifests/etcd.yaml
defaultdatadir: /var/lib/etcd/default.etcd
@@ -132,6 +138,9 @@ node:
- "/etc/kubernetes/certs/ca.crt"
- "/etc/kubernetes/cert/ca.pem"
- "/var/snap/microk8s/current/certs/ca.crt"
- "/var/lib/rancher/rke2/agent/server.crt"
- "/var/lib/rancher/rke2/agent/client-ca.crt"
- "/var/lib/rancher/k3s/agent/client-ca.crt"
svc:
# These paths must also be included
# in the 'confs' property below
@@ -151,13 +160,17 @@ node:
- "/var/lib/kubelet/kubeconfig"
- "/etc/kubernetes/kubelet-kubeconfig"
- "/etc/kubernetes/kubelet/kubeconfig"
- "/etc/kubernetes/ssl/kubecfg-kube-node.yaml"
- "/var/snap/microk8s/current/credentials/kubelet.config"
- "/etc/kubernetes/kubeconfig-kubelet"
- "/var/lib/rancher/rke2/agent/kubelet.kubeconfig"
- "/var/lib/rancher/k3s/agent/kubelet.kubeconfig"
confs:
- "/etc/kubernetes/kubelet-config.yaml"
- "/var/lib/kubelet/config.yaml"
- "/var/lib/kubelet/config.yml"
- "/etc/kubernetes/kubelet/kubelet-config.json"
- "/etc/kubernetes/kubelet/config.json"
- "/etc/kubernetes/kubelet/config"
- "/home/kubernetes/kubelet-config.yaml"
- "/home/kubernetes/kubelet-config.yml"
@@ -177,6 +190,7 @@ node:
- "/etc/systemd/system/snap.kubelet.daemon.service"
- "/etc/systemd/system/snap.microk8s.daemon-kubelet.service"
- "/etc/kubernetes/kubelet.yaml"
defaultconf: "/var/lib/kubelet/config.yaml"
defaultsvc: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
defaultkubeconfig: "/etc/kubernetes/kubelet.conf"
@@ -200,8 +214,11 @@ node:
- "/etc/kubernetes/kubelet-kubeconfig"
- "/etc/kubernetes/kubelet-kubeconfig.conf"
- "/etc/kubernetes/kubelet/config"
- "/etc/kubernetes/ssl/kubecfg-kube-proxy.yaml"
- "/var/lib/kubelet/kubeconfig"
- "/var/snap/microk8s/current/credentials/proxy.config"
- "/var/lib/rancher/rke2/agent/kubeproxy.kubeconfig"
- "/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig"
svc:
- "/lib/systemd/system/kube-proxy.service"
- "/etc/systemd/system/snap.microk8s.daemon-proxy.service"
@@ -218,6 +235,7 @@ etcd:
datadirs:
- /var/lib/etcd/default.etcd
- /var/lib/etcd/data.etcd
- /var/lib/rancher/k3s/server/db/etcd
confs:
- /etc/kubernetes/manifests/etcd.yaml
- /etc/kubernetes/manifests/etcd.yml
@@ -227,6 +245,8 @@ etcd:
- /var/snap/etcd/common/etcd.conf.yaml
- /var/snap/microk8s/current/args/etcd
- /usr/lib/systemd/system/etcd.service
- /var/lib/rancher/rke2/agent/pod-manifests/etcd.yaml
- /var/lib/rancher/k3s/server/db/etcd/config
defaultconf: /etc/kubernetes/manifests/etcd.yaml
defaultdatadir: /var/lib/etcd/default.etcd
@@ -258,16 +278,34 @@ version_mapping:
"1.22": "cis-1.23"
"1.23": "cis-1.23"
"1.24": "cis-1.24"
"1.25": "cis-1.7"
"1.26": "cis-1.8"
"1.27": "cis-1.9"
"1.28": "cis-1.9"
"1.29": "cis-1.9"
"eks-1.0.1": "eks-1.0.1"
"eks-1.1.0": "eks-1.1.0"
"eks-1.2.0": "eks-1.2.0"
"gke-1.0": "gke-1.0"
"gke-1.2.0": "gke-1.2.0"
"gke-1.6.0": "gke-1.6.0"
"ocp-3.10": "rh-0.7"
"ocp-3.11": "rh-0.7"
"ocp-4.0": "rh-1.0"
"aks-1.0": "aks-1.0"
"ack-1.0": "ack-1.0"
"cis-1.6-k3s": "cis-1.6-k3s"
"cis-1.24-microk8s": "cis-1.24-microk8s"
"tkgi-1.2.53": "tkgi-1.2.53"
"k3s-cis-1.7": "k3s-cis-1.7"
"k3s-cis-1.23": "k3s-cis-1.23"
"k3s-cis-1.24": "k3s-cis-1.24"
"rke-cis-1.7": "rke-cis-1.7"
"rke-cis-1.23": "rke-cis-1.23"
"rke-cis-1.24": "rke-cis-1.24"
"rke2-cis-1.7": "rke2-cis-1.7"
"rke2-cis-1.23": "rke2-cis-1.23"
"rke2-cis-1.24": "rke2-cis-1.24"
target_mapping:
"cis-1.5":
@@ -306,6 +344,30 @@ target_mapping:
- "controlplane"
- "etcd"
- "policies"
"cis-1.24-microk8s":
- "master"
- "etcd"
- "node"
- "controlplane"
- "policies"
"cis-1.7":
- "master"
- "node"
- "controlplane"
- "etcd"
- "policies"
"cis-1.8":
- "master"
- "node"
- "controlplane"
- "etcd"
- "policies"
"cis-1.9":
- "master"
- "node"
- "controlplane"
- "etcd"
- "policies"
"gke-1.0":
- "master"
- "node"
@@ -319,6 +381,12 @@ target_mapping:
- "controlplane"
- "policies"
- "managedservices"
"gke-1.6.0":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"eks-1.0.1":
- "master"
- "node"
@@ -331,6 +399,12 @@ target_mapping:
- "controlplane"
- "policies"
- "managedservices"
"eks-1.2.0":
- "master"
- "node"
- "controlplane"
- "policies"
- "managedservices"
"rh-0.7":
- "master"
- "node"
@@ -358,3 +432,69 @@ target_mapping:
- "controlplane"
- "policies"
- "managedservices"
"tkgi-1.2.53":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"k3s-cis-1.7":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"k3s-cis-1.8":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"k3s-cis-1.23":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"k3s-cis-1.24":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"rke-cis-1.7":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"rke-cis-1.23":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"rke-cis-1.24":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"rke2-cis-1.7":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"rke2-cis-1.23":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"
"rke2-cis-1.24":
- "master"
- "etcd"
- "controlplane"
- "node"
- "policies"


@@ -0,0 +1,9 @@
---
## Version-specific settings that override the values in cfg/config.yaml
## These settings are required if you are using the --asff option to report findings to AWS Security Hub
## AWS account number is required.
AWS_ACCOUNT: "<AWS_ACCT_NUMBER>"
## AWS region is required.
AWS_REGION: "<AWS_REGION>"
## EKS Cluster ARN is required.
CLUSTER_ARN: "<AWS_CLUSTER_ARN>"


@@ -0,0 +1,14 @@
---
controls:
version: "eks-1.2.0"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 2.1
text: "Logging"
checks:
- id: 2.1.1
text: "Enable audit logs (Manual)"
remediation: "Enable control plane logging for API Server, Audit, Authenticator, Controller Manager, and Scheduler."
scored: false
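# Illustrative sketch (assumes a configured AWS CLI): control plane logging can be
# enabled with
#   aws eks update-cluster-config --region <region_code> --name <cluster_name> \
#     --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'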


@@ -0,0 +1,154 @@
---
controls:
version: "eks-1.2.0"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning using Amazon ECR image scanning or a third-party provider (Manual)"
type: "manual"
remediation: |
To utilize AWS ECR for image scanning, please follow the steps below:
To create a repository configured for scan on push (AWS CLI):
aws ecr create-repository --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE
To edit the settings of an existing repository (AWS CLI):
aws ecr put-image-scanning-configuration --repository-name $REPO_NAME --image-scanning-configuration scanOnPush=true --region $REGION_CODE
Use the following steps to start a manual image scan using the AWS Management Console.
Open the Amazon ECR console at https://console.aws.amazon.com/ecr/repositories.
From the navigation bar, choose the Region to create your repository in.
In the navigation pane, choose Repositories.
On the Repositories page, choose the repository that contains the image to scan.
On the Images page, select the image to scan and then choose Scan.
scored: false
- id: 5.1.2
text: "Minimize user access to Amazon ECR (Manual)"
type: "manual"
remediation: |
Before you use IAM to manage access to Amazon ECR, you should understand what IAM features
are available to use with Amazon ECR. To get a high-level view of how Amazon ECR and other
AWS services work with IAM, see AWS Services That Work with IAM in the IAM User Guide.
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Amazon ECR (Manual)"
type: "manual"
remediation: |
You can use your Amazon ECR images with Amazon EKS, but you need to satisfy the following prerequisites.
The Amazon EKS worker node IAM role (NodeInstanceRole) that you use with your worker nodes must possess
the following IAM policy permissions for Amazon ECR.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer",
"ecr:GetAuthorizationToken"
],
"Resource": "*"
}
]
}
scored: false
- id: 5.1.4
text: "Minimize Container Registries to only those approved (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.2
text: "Identity and Access Management (IAM)"
checks:
- id: 5.2.1
text: "Prefer using dedicated Amazon EKS Service Accounts (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.3
text: "AWS Key Management Service (KMS)"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted using Customer Master Keys (CMKs) managed in AWS KMS (Manual)"
type: "manual"
remediation: |
This process can only be performed during Cluster Creation.
Enable 'Secrets Encryption' during Amazon EKS cluster creation as described
in the links within the 'References' section.
scored: false
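# Illustrative sketch, not part of the benchmark: at creation time, Secrets encryption is
# requested with an encryption-config block, e.g.
#   aws eks create-cluster --name <cluster_name> --role-arn <cluster_role_arn> \
#     --resources-vpc-config subnetIds=<subnet_ids> \
#     --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"<kms_key_arn>"}}]'
# where <kms_key_arn> is the ARN of the CMK managed in AWS KMS.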
- id: 5.4
text: "Cluster Networking"
checks:
- id: 5.4.1
text: "Restrict Access to the Control Plane Endpoint (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.4.2
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.4.3
text: "Ensure clusters are created with Private Nodes (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.4.4
text: "Ensure Network Policy is Enabled and set as appropriate (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.4.5
text: "Encrypt traffic to HTTPS load balancers with TLS certificates (Manual)"
type: "manual"
remediation: "No remediation"
scored: false
- id: 5.5
text: "Authentication and Authorization"
checks:
- id: 5.5.1
text: "Manage Kubernetes RBAC users with AWS IAM Authenticator for Kubernetes (Manual)"
type: "manual"
remediation: |
Refer to the 'Managing users or IAM roles for your cluster' in Amazon EKS documentation.
scored: false
- id: 5.6
text: "Other Cluster Configurations"
checks:
- id: 5.6.1
text: "Consider Fargate for running untrusted workloads (Manual)"
type: "manual"
remediation: |
Create a Fargate profile for your cluster. Before you can schedule pods running on Fargate
in your cluster, you must define a Fargate profile that specifies which pods should use
Fargate when they are launched. For more information, see AWS Fargate profile.
Note: If you created your cluster with eksctl using the --fargate option, then a Fargate profile has
already been created for your cluster with selectors for all pods in the kube-system
and default namespaces. Use the following procedure to create Fargate profiles for
any other namespaces you would like to use with Fargate.
scored: false
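# Illustrative sketch (assumes eksctl is installed): a Fargate profile for an additional
# namespace can be created with
#   eksctl create fargateprofile --cluster <cluster_name> \
#     --name <profile_name> --namespace <namespace>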


@@ -0,0 +1,6 @@
---
controls:
version: "eks-1.2.0"
id: 1
text: "Control Plane Components"
type: "master"

cfg/eks-1.2.0/node.yaml

@@ -0,0 +1,330 @@
---
controls:
version: "eks-1.2.0"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletkubeconfig
scored: false
- id: 3.1.2
text: "Ensure that the kubelet kubeconfig file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: false
- id: 3.1.3
text: "Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 $kubeletconf
scored: false
- id: 3.1.4
text: "Ensure that the kubelet configuration file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: false
- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the Anonymous Auth is Not Enabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
set: true
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to
false.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
set: true
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set authorization: mode to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.3
text: "Ensure that a Client CA File is Configured (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
If using a Kubelet config file, edit the file to set authentication: x509: clientCAFile to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.4
text: "Ensure that the --read-only-port is disabled (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set readOnlyPort to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: true
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set streamingConnectionIdleTimeout to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.6
text: "Ensure that the --protect-kernel-defaults argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --protect-kernel-defaults
path: '{.protectKernelDefaults}'
set: true
compare:
op: eq
value: true
remediation: |
If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.7
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated) "
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: true
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set makeIPTablesUtilChains: true.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.8
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
audit: "/bin/ps -fC $kubeletbin "
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.9
text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: gte
value: 0
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --event-qps parameter in the KUBELET_SYSTEM_PODS_ARGS variable to an appropriate level.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.10
text: "Ensure that the --rotate-certificates argument is not present or is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: true
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to add the line rotateCertificates: true or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.2.11
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: true
compare:
op: eq
value: true
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 3.3
text: "Container Optimized OS"
checks:
- id: 3.3.1
text: "Prefer using a container-optimized OS when possible (Manual)"
remediation: "No remediation"
scored: false

cfg/eks-1.2.0/policies.yaml

@@ -0,0 +1,213 @@
---
controls:
version: "eks-1.2.0"
id: 4
text: "Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
- id: 4.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
scored: false
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 4.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 4.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
- id: 4.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 4.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 4.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 4.2
text: "Pod Security Policies"
checks:
- id: 4.2.1
text: "Minimize the admission of privileged containers (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that
the .spec.privileged field is omitted or set to false.
scored: false
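# Illustrative sketch, not part of the benchmark: a minimal PodSecurityPolicy that blocks
# privileged containers (the other rule fields are required by the PSP schema; PSPs were
# removed in Kubernetes 1.25, which postdates this benchmark):
#   apiVersion: policy/v1beta1
#   kind: PodSecurityPolicy
#   metadata:
#     name: restrict-privileged
#   spec:
#     privileged: false
#     seLinux: {rule: RunAsAny}
#     runAsUser: {rule: RunAsAny}
#     supplementalGroups: {rule: RunAsAny}
#     fsGroup: {rule: RunAsAny}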
- id: 4.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostPID field is omitted or set to false.
scored: false
- id: 4.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostIPC field is omitted or set to false.
scored: false
- id: 4.2.4
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.hostNetwork field is omitted or set to false.
scored: false
- id: 4.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.allowPrivilegeEscalation field is omitted or set to false.
scored: false
- id: 4.2.6
text: "Minimize the admission of root containers (Automated)"
type: "manual"
remediation: |
Create a PSP as described in the Kubernetes documentation, ensuring that the
.spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of
UIDs not including 0.
scored: false
- id: 4.2.7
text: "Minimize the admission of containers with added capabilities (Manual)"
type: "manual"
remediation: |
Ensure that allowedCapabilities is not present in PSPs for the cluster unless
it is set to an empty array.
scored: false
- id: 4.2.8
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 4.3
text: "CNI Plugin"
checks:
- id: 4.3.1
text: "Ensure CNI plugin supports network policies (Manual)"
type: "manual"
remediation: |
As with RBAC policies, network policies should adhere to the policy of least privileged
access. Start by creating a deny all policy that restricts all inbound and outbound traffic
from a namespace or create a global policy using Calico.
scored: false
- id: 4.3.2
text: "Ensure that all Namespaces have Network Policies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
- id: 4.4
text: "Secrets Management"
checks:
- id: 4.4.1
text: "Prefer using secrets as files over secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 4.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.5
text: "Extensible Admission Control"
checks: []
- id: 4.6
text: "General Policies"
checks:
- id: 4.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.6.2
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 4.6.3
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false


@@ -0,0 +1,9 @@
---
## Version-specific settings that override the values in cfg/config.yaml
node:
proxy:
defaultkubeconfig: "/var/lib/kubelet/kubeconfig"
kubelet:
defaultconf: "/etc/kubernetes/kubelet/kubelet-config.yaml"


@@ -0,0 +1,20 @@
---
controls:
version: "gke-1.6.0"
id: 2
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 2.1
text: "Authentication and Authorization"
checks:
- id: 2.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
You can remediate the availability of client certificates in your GKE cluster. See
Recommendation 5.8.1.
scored: false


@@ -0,0 +1,617 @@
---
controls:
version: "gke-1.6.0"
id: 5
text: "Managed Services"
type: "managedservices"
groups:
- id: 5.1
text: "Image Registry and Image Scanning"
checks:
- id: 5.1.1
text: "Ensure Image Vulnerability Scanning is enabled (Automated)"
type: "manual"
remediation: |
For Images Hosted in GCR:
Using Command Line:
gcloud services enable containeranalysis.googleapis.com
For Images Hosted in AR:
Using Command Line:
gcloud services enable containerscanning.googleapis.com
scored: false
- id: 5.1.2
text: "Minimize user access to Container Image repositories (Manual)"
type: "manual"
remediation: |
For Images Hosted in AR:
Using Command Line:
gcloud artifacts repositories set-iam-policy <repository-name> <path-to-policy-file> \
--location <repository-location>
To learn how to configure policy files see: https://cloud.google.com/artifact-registry/docs/access-control#grant
For Images Hosted in GCR:
Using Command Line:
To change roles at the GCR bucket level:
Firstly, run the following if read permissions are required:
gsutil iam ch <type>:<email_address>:objectViewer gs://artifacts.<project_id>.appspot.com
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role> gs://artifacts.<project_id>.appspot.com
where:
<type> can be one of the following:
user, if the <email_address> is a Google account.
serviceAccount, if <email_address> specifies a Service account.
<email_address> can be one of the following:
a Google account (for example, someone@example.com).
a Cloud IAM service account.
To modify roles defined at the project level and subsequently inherited within the GCR
bucket, or the Service Account User role, extract the IAM policy file, modify it
accordingly and apply it using:
gcloud projects set-iam-policy <project_id> <policy_file>
scored: false
- id: 5.1.3
text: "Minimize cluster access to read-only for Container Image repositories (Manual)"
type: "manual"
remediation: |
For Images Hosted in AR:
Using Command Line:
Add artifactregistry.reader role
gcloud artifacts repositories add-iam-policy-binding <repository> \
--location=<repository-location> \
--member='serviceAccount:<email-address>' \
--role='roles/artifactregistry.reader'
Remove any roles other than artifactregistry.reader
gcloud artifacts repositories remove-iam-policy-binding <repository> \
--location <repository-location> \
--member='serviceAccount:<email-address>' \
--role='<role-name>'
For Images Hosted in GCR:
For an account explicitly granted to the bucket:
Firstly add read access to the Kubernetes Service Account:
gsutil iam ch <type>:<email_address>:objectViewer gs://artifacts.<project_id>.appspot.com
where:
<type> can be one of the following:
user, if the <email_address> is a Google account.
serviceAccount, if <email_address> specifies a Service account.
<email_address> can be one of the following:
a Google account (for example, someone@example.com).
a Cloud IAM service account.
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role> gs://artifacts.<project_id>.appspot.com
For an account that inherits access to the GCR Bucket through Project level
permissions, modify the Projects IAM policy file accordingly, then upload it using:
gcloud projects set-iam-policy <project_id> <policy_file>
scored: false
- id: 5.1.4
text: "Ensure only trusted container images are used (Manual)"
type: "manual"
remediation: |
Using Command Line:
Update the cluster to enable Binary Authorization:
gcloud container clusters update <cluster_name> --enable-binauthz
Create a Binary Authorization Policy using the Binary Authorization Policy Reference:
https://cloud.google.com/binary-authorization/docs/policy-yaml-reference for guidance.
Import the policy file into Binary Authorization:
gcloud container binauthz policy import <yaml_policy>
scored: false
- id: 5.2
text: "Identity and Access Management (IAM)"
checks:
- id: 5.2.1
text: "Ensure GKE clusters are not running using the Compute Engine default service account (Automated))"
type: "manual"
remediation: |
Using Command Line:
To create a minimally privileged service account:
gcloud iam service-accounts create <node_sa_name> \
--display-name "GKE Node Service Account"
export NODE_SA_EMAIL=$(gcloud iam service-accounts list \
--format='value(email)' --filter='displayName:GKE Node Service Account')
Grant the following roles to the service account:
export PROJECT_ID=$(gcloud config get-value project)
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/monitoring.viewer
gcloud projects add-iam-policy-binding <project_id> --member \
serviceAccount:<node_sa_email> --role roles/logging.logWriter
To create a new Node pool using the Service account, run the following command:
gcloud container node-pools create <node_pool> \
--service-account=<sa_name>@<project_id>.iam.gserviceaccount.com \
--cluster=<cluster_name> --zone <compute_zone>
Note: The workloads will need to be migrated to the new Node pool, and the old node
pools that use the default service account should be deleted to complete the
remediation.
scored: false
- id: 5.2.2
text: "Prefer using dedicated GCP Service Accounts and Workload Identity (Manual)"
type: "manual"
remediation: |
Using Command Line:
gcloud container clusters update <cluster_name> --zone <cluster_zone> \
--workload-pool <project_id>.svc.id.goog
Note that existing Node pools are unaffected. New Node pools default to
--workload-metadata-from-node=GKE_METADATA_SERVER.
Then, modify existing Node pools to enable GKE_METADATA_SERVER:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <cluster_zone> --workload-metadata=GKE_METADATA
Workloads may need to be modified in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
Also consider the effects on the availability of hosted workloads as Node pools
are updated. It may be more appropriate to create new Node Pools.
scored: false
- id: 5.3
text: "Cloud Key Management Service (Cloud KMS)"
checks:
- id: 5.3.1
text: "Ensure Kubernetes Secrets are encrypted using keys managed in Cloud KMS (Automated)"
type: "manual"
remediation: |
To create a key:
Create a key ring:
gcloud kms keyrings create <ring_name> --location <location> --project \
<key_project_id>
Create a key:
gcloud kms keys create <key_name> --location <location> --keyring <ring_name> \
--purpose encryption --project <key_project_id>
Grant the Kubernetes Engine Service Agent service account the Cloud KMS
CryptoKey Encrypter/Decrypter role:
gcloud kms keys add-iam-policy-binding <key_name> --location <location> \
--keyring <ring_name> --member serviceAccount:<service_account_name> \
--role roles/cloudkms.cryptoKeyEncrypterDecrypter --project <key_project_id>
To create a new cluster with Application-layer Secrets Encryption:
gcloud container clusters create <cluster_name> --cluster-version=latest \
--zone <zone> \
--database-encryption-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name> \
--project <cluster_project_id>
To enable on an existing cluster:
gcloud container clusters update <cluster_name> --zone <zone> \
--database-encryption-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name> \
--project <cluster_project_id>
scored: false
- id: 5.4
text: "Node Metadata"
checks:
- id: 5.4.1
text: "Ensure the GKE Metadata Server is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
gcloud container clusters update <cluster_name> --identity-namespace=<project_id>.svc.id.goog
Note that existing Node pools are unaffected. New Node pools default to
--workload-metadata-from-node=GKE_METADATA_SERVER.
To modify an existing Node pool to enable GKE Metadata Server:
gcloud container node-pools update <node_pool_name> --cluster=<cluster_name> \
--workload-metadata-from-node=GKE_METADATA_SERVER
Workloads may need modification in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.
scored: false
- id: 5.5
text: "Node Configuration and Maintenance"
checks:
- id: 5.5.1
text: "Ensure Container-Optimized OS (cos_containerd) is used for GKE node images (Automated)"
type: "manual"
remediation: |
Using Command Line:
To set the node image to cos for an existing cluster's Node pool:
gcloud container clusters upgrade <cluster_name> --image-type cos_containerd \
--zone <compute_zone> --node-pool <node_pool_name>
scored: false
- id: 5.5.2
text: "Ensure Node Auto-Repair is enabled for GKE nodes (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable node auto-repair for an existing cluster's Node pool:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --enable-autorepair
scored: false
- id: 5.5.3
text: "Ensure Node Auto-Upgrade is enabled for GKE nodes (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable node auto-upgrade for an existing cluster's Node pool, run the following
command:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> \
--zone <cluster_zone> --enable-autoupgrade
scored: false
- id: 5.5.4
text: "When creating New Clusters - Automate GKE version management using Release Channels (Automated)"
type: "manual"
remediation: |
Using Command Line:
Create a new cluster by running the following command:
gcloud container clusters create <cluster_name> --zone <cluster_zone> \
--release-channel <release_channel>
where <release_channel> is stable or regular, according to requirements.
scored: false
- id: 5.5.5
text: "Ensure Shielded GKE Nodes are Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To migrate an existing cluster, the flag --enable-shielded-nodes needs to be
specified in the cluster update command:
gcloud container clusters update <cluster_name> --zone <cluster_zone> \
--enable-shielded-nodes
scored: false
- id: 5.5.6
text: "Ensure Integrity Monitoring for Shielded GKE Nodes is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To create a Node pool within the cluster with Integrity Monitoring enabled, run the
following command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --shielded-integrity-monitoring
Workloads from existing non-conforming Node pools will need to be migrated to the
newly created Node pool, then delete non-conforming Node pools to complete the
remediation
scored: false
- id: 5.5.7
text: "Ensure Secure Boot for Shielded GKE Nodes is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To create a Node pool within the cluster with Secure Boot enabled, run the following
command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> \
--zone <compute_zone> --shielded-secure-boot
Migrate workloads from existing non-conforming Node pools to the newly created
Node pool, then delete the non-conforming pools.
scored: false
- id: 5.6
text: "Cluster Networking"
checks:
- id: 5.6.1
text: "Enable VPC Flow Logs and Intranode Visibility (Automated)"
type: "manual"
remediation: |
Using Command Line:
1. Find the subnetwork name associated with the cluster.
gcloud container clusters describe <cluster_name> \
--region <cluster_region> --format json | jq '.subnetwork'
2. Update the subnetwork to enable VPC Flow Logs.
gcloud compute networks subnets update <subnet_name> --enable-flow-logs
scored: false
- id: 5.6.2
text: "Ensure use of VPC-native clusters (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable Alias IP on a new cluster, run the following command:
gcloud container clusters create <cluster_name> --zone <compute_zone> \
--enable-ip-alias
If using Autopilot configuration mode:
gcloud container clusters create-auto <cluster_name> \
--zone <compute_zone>
scored: false
- id: 5.6.3
text: "Ensure Control Plane Authorized Networks is Enabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To enable Control Plane Authorized Networks for an existing cluster, run the following
command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--enable-master-authorized-networks
Along with this, you can list authorized networks using the --master-authorized-networks
flag which contains a list of up to 20 external networks that are allowed to
connect to your cluster's control plane through HTTPS. You provide these networks as
a comma-separated list of addresses in CIDR notation (such as 90.90.100.0/24).
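For example, to allow two placeholder ranges (adjust the CIDRs to your environment):
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--enable-master-authorized-networks \
--master-authorized-networks 90.90.100.0/24,10.10.0.0/16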
scored: false
- id: 5.6.4
text: "Ensure clusters are created with Private Endpoint Enabled and Public Access Disabled (Manual)"
type: "manual"
remediation: |
Using Command Line:
Create a cluster with a Private Endpoint enabled and Public Access disabled by including
the --enable-private-endpoint flag within the cluster create command:
gcloud container clusters create <cluster_name> --enable-private-endpoint
Setting this flag also requires the setting of --enable-private-nodes, --enable-ip-alias
and --master-ipv4-cidr=<master_cidr_range>.
scored: false
- id: 5.6.5
text: "Ensure clusters are created with Private Nodes (Manual)"
type: "manual"
remediation: |
Using Command Line:
To create a cluster with Private Nodes enabled, include the --enable-private-nodes
flag within the cluster create command:
gcloud container clusters create <cluster_name> --enable-private-nodes
Setting this flag also requires the setting of --enable-ip-alias and
--master-ipv4-cidr=<master_cidr_range>.
scored: false
- id: 5.6.6
text: "Consider firewalling GKE worker nodes (Manual)"
type: "manual"
remediation: |
Using Command Line:
Use the following command to generate firewall rules, setting the variables as
appropriate:
gcloud compute firewall-rules create <firewall_rule_name> \
--network <network> --priority <priority> --direction <direction> \
--action <action> --target-tags <tag> \
--target-service-accounts <service_account> \
--source-ranges <source_cidr_range> --source-tags <source_tags> \
--source-service-accounts <source_service_account> \
--destination-ranges <destination_cidr_range> --rules <rules>
scored: false
- id: 5.6.7
text: "Ensure use of Google-managed SSL Certificates (Automated)"
type: "manual"
remediation: |
If services of type: LoadBalancer are discovered, consider replacing the Service with
an Ingress.
To configure the Ingress and use Google-managed SSL certificates, follow the
instructions as listed at: https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs.
scored: false
- id: 5.7
text: "Logging"
checks:
- id: 5.7.1
text: "Ensure Logging and Cloud Monitoring is Enabled (Automated)"
type: "manual"
remediation: |
To enable Logging for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--logging=<components_to_be_logged>
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--logging
for a list of available components for logging.
To enable Cloud Monitoring for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--monitoring=<components_to_be_monitored>
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--monitoring
for a list of available components for Cloud Monitoring.
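For example, assuming the SYSTEM and WORKLOAD components should be collected:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--logging=SYSTEM,WORKLOAD --monitoring=SYSTEM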
scored: false
- id: 5.7.2
text: "Enable Linux auditd logging (Manual)"
type: "manual"
remediation: |
Using Command Line:
Download the example manifests:
curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-tools/master/os-audit/cos-auditd-logging.yaml > cos-auditd-logging.yaml
Edit the example manifests if needed. Then, deploy them:
kubectl apply -f cos-auditd-logging.yaml
Verify that the logging Pods have started. If a different Namespace was defined in the
manifests, replace cos-auditd with the name of the namespace being used:
kubectl get pods --namespace=cos-auditd
scored: false
- id: 5.8
text: "Authentication and Authorization"
checks:
- id: 5.8.1
text: "Ensure authentication using Client Certificates is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
Create a new cluster without a Client Certificate:
gcloud container clusters create [CLUSTER_NAME] \
--no-issue-client-certificate
scored: false
- id: 5.8.2
text: "Manage Kubernetes RBAC users with Google Groups for GKE (Manual)"
type: "manual"
remediation: |
Using Command Line:
Follow the G Suite Groups instructions at: https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#google-groups-for-gke.
Then, create a cluster with:
gcloud container clusters create <cluster_name> --security-group <security_group_name>
Finally create Roles, ClusterRoles, RoleBindings, and ClusterRoleBindings that
reference the G Suite Groups.
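As an illustration, a ClusterRoleBinding granting the built-in view role to a placeholder group might look like:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-group-view
subjects:
- kind: Group
  name: gke-viewers@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io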
scored: false
- id: 5.8.3
text: "Ensure Legacy Authorization (ABAC) is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To disable Legacy Authorization for an existing cluster, run the following command:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--no-enable-legacy-authorization
scored: false
- id: 5.9
text: "Storage"
checks:
- id: 5.9.1
text: "Enable Customer-Managed Encryption Keys (CMEK) for GKE Persistent Disks (PD) (Manual)"
type: "manual"
remediation: |
Using Command Line:
Follow the instructions detailed at: https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek.
scored: false
- id: 5.9.2
text: "Enable Customer-Managed Encryption Keys (CMEK) for Boot Disks (Automated)"
type: "manual"
remediation: |
Using Command Line:
Create a new node pool using customer-managed encryption keys for the node boot
disk, of <disk_type> either pd-standard or pd-ssd:
gcloud container node-pools create <cluster_name> --disk-type <disk_type> \
--boot-disk-kms-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>
Create a cluster using customer-managed encryption keys for the node boot disk, of
<disk_type> either pd-standard or pd-ssd:
gcloud container clusters create <cluster_name> --disk-type <disk_type> \
--boot-disk-kms-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>
scored: false
- id: 5.10
text: "Other Cluster Configurations"
checks:
- id: 5.10.1
text: "Ensure Kubernetes Web UI is Disabled (Automated)"
type: "manual"
remediation: |
Using Command Line:
To disable the Kubernetes Dashboard on an existing cluster, run the following
command:
gcloud container clusters update <cluster_name> --zone <zone> \
--update-addons=KubernetesDashboard=DISABLED
scored: false
- id: 5.10.2
text: "Ensure that Alpha clusters are not used for production workloads (Automated)"
type: "manual"
remediation: |
Using Command Line:
Upon creating a new cluster:
gcloud container clusters create [CLUSTER_NAME] \
--zone [COMPUTE_ZONE]
Do not use the --enable-kubernetes-alpha argument.
scored: false
- id: 5.10.3
text: "Consider GKE Sandbox for running untrusted workloads (Manual)"
type: "manual"
remediation: |
Using Command Line:
To enable GKE Sandbox on an existing cluster, a new Node pool must be created,
which can be done using:
gcloud container node-pools create <node_pool_name> --zone <compute-zone> \
--cluster <cluster_name> --image-type=cos_containerd --sandbox="type=gvisor"
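Pods then opt in to the sandbox by requesting the gvisor RuntimeClass in their spec, for example:
spec:
  runtimeClassName: gvisor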
scored: false
- id: 5.10.4
text: "Ensure use of Binary Authorization (Automated)"
type: "manual"
remediation: |
Using Command Line:
Update the cluster to enable Binary Authorization:
gcloud container clusters update <cluster_name> --zone <compute_zone> \
--binauthz-evaluation-mode=<evaluation_mode>
Example:
gcloud container clusters update $CLUSTER_NAME --zone $COMPUTE_ZONE \
--binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
See: https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--binauthz-evaluation-mode
for more details around the evaluation modes available.
Create a Binary Authorization Policy using the Binary Authorization Policy Reference:
https://cloud.google.com/binary-authorization/docs/policy-yaml-reference for guidance.
Import the policy file into Binary Authorization:
gcloud container binauthz policy import <yaml_policy>
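A minimal illustrative policy file (the project ID is a placeholder) might look like:
defaultAdmissionRule:
  evaluationMode: ALWAYS_ALLOW
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
globalPolicyEvaluationMode: ENABLE
name: projects/<project_id>/policy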
scored: false
- id: 5.10.5
text: "Enable Security Posture (Manual)"
type: "manual"
remediation: |
Enable security posture via the UI, the gcloud CLI, or the API. See:
https://cloud.google.com/kubernetes-engine/docs/how-to/protect-workload-configuration
scored: false

@@ -0,0 +1,6 @@
---
controls:
version: "gke-1.6.0"
id: 1
text: "Control Plane Components"
type: "master"

cfg/gke-1.6.0/node.yaml

@@ -0,0 +1,506 @@
---
controls:
version: "gke-1.6.0"
id: 3
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 3.1
text: "Worker Node Configuration Files"
checks:
- id: 3.1.1
text: "Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $proxykubeconfig
scored: true
- id: 3.1.2
text: "Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example:
chown root:root $proxykubeconfig
scored: true
- id: 3.1.3
text: "Ensure that the kubelet configuration file has permissions set to 600 (Manual)"
audit: '/bin/sh -c ''if test -e /home/kubernetes/kubelet-config.yaml; then stat -c permissions=%a /home/kubernetes/kubelet-config.yaml; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the kubelet config file location)
chmod 600 /home/kubernetes/kubelet-config.yaml
scored: true
- id: 3.1.4
text: "Ensure that the kubelet configuration file ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e /home/kubernetes/kubelet-config.yaml; then stat -c %U:%G /home/kubernetes/kubelet-config.yaml; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root /home/kubernetes/kubelet-config.yaml
scored: true
- id: 3.2
text: "Kubelet"
checks:
- id: 3.2.1
text: "Ensure that the Anonymous Auth is Not Enabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /home/kubernetes/kubelet-config.yaml
Disable Anonymous Authentication by setting the following parameter:
"authentication": { "anonymous": { "enabled": false } }
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--anonymous-auth=false
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Enable Webhook Authentication by setting the following parameter:
"authentication": { "webhook": { "enabled": true } }
Next, set the Authorization Mode to Webhook by setting the following parameter:
"authorization": { "mode": "Webhook }
Finer detail of the authentication and authorization fields can be found in the
Kubelet Configuration documentation (https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/).
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--authentication-token-webhook
--authorization-mode=Webhook
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.3
text: "Ensure that a Client CA File is Configured (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Configure the client certificate authority file by setting the following parameter
appropriately:
"authentication": { "x509": {"clientCAFile": <path/to/client-ca-file> } }"
Remediation Method 2:
If using executable arguments, edit the kubelet service file on each worker node and
ensure the below parameters are part of the KUBELET_ARGS variable string.
For systems using systemd, such as the Amazon EKS Optimised Amazon Linux or
Bottlerocket AMIs, this file can be found at
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf. Otherwise,
you may need to look up documentation for your chosen operating system to determine
which service manager is configured:
--client-ca-file=<path/to/client-ca-file>
For Both Remediation Steps:
Based on your system, restart the kubelet service and check the service status.
The following example is for operating systems using systemd, such as the Amazon
EKS Optimised Amazon Linux or Bottlerocket AMIs, and invokes the systemctl
command. If systemctl is not available then you will need to look up documentation for
your chosen operating system to determine which service manager is configured:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.4
text: "Ensure that the --read-only-port argument is disabled (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
bin_op: or
remediation: |
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to 0
"readOnlyPort": 0
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--read-only-port=0
For each remediation:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.yaml file
/etc/kubernetes/kubelet-config.yaml and set the below parameter to a non-zero
value in the format of #h#m#s
"streamingConnectionIdleTimeout": "4h0m0s"
You should ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not
specify a --streaming-connection-idle-timeout argument because it would
override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--streaming-connection-idle-timeout=4h0m0s
Remediation Method 3:
If using the api configz endpoint, consider searching for the status of
"streamingConnectionIdleTimeout": by extracting the live configuration from the
nodes running kubelet.
See the detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),
and then rerun the curl statement from the audit process to check for kubelet
configuration changes:
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from
"kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-utils-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"makeIPTablesUtilChains": true
Ensure that /etc/systemd/system/kubelet.service.d/10-kubelet-args.conf
does not set the --make-iptables-util-chains argument because that would
override your Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--make-iptables-util-chains=true
Remediation Method 3:
If using the api configz endpoint, consider searching for the status of
"makeIPTablesUtilChains": true by extracting the live configuration from the nodes
running kubelet.
See the detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),
and then rerun the curl statement from the audit process to check for kubelet
configuration changes:
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from
"kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediations:
Based on your system, restart the kubelet service and check status
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true
- id: 3.2.7
text: "Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
set: true
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
on each worker node and set the below parameter in the KUBELET_SYSTEM_PODS_ARGS variable:
--event-qps=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 3.2.8
text: "Ensure that the --rotate-certificates argument is not present or is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.yaml file
/etc/kubernetes/kubelet/kubelet-config.yaml and set the below parameter to
true
"RotateCertificate":true
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set the --rotate-certificates
executable argument to false because this would override the Kubelet
config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-certificates=true
scored: true
- id: 3.2.9
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/cat /home/kubernetes/kubelet-config.yaml"
tests:
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: eq
value: true
remediation: |
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.yaml file
/etc/kubernetes/kubelet-config.yaml and set the below parameter to true
"featureGates": {
"RotateKubeletServerCertificate":true
},
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set
the --rotate-kubelet-server-certificate executable argument to false because
this would override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-kubelet-server-certificate=true
Remediation Method 3:
If using the api configz endpoint, consider searching for the status of
"RotateKubeletServerCertificate": by extracting the live configuration from the
nodes running kubelet.
See the detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster (https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/),
and then rerun the curl statement from the audit process to check for kubelet
configuration changes:
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001 (example host and port number)
export NODE_NAME=gke-cluster-1-pool1-5e572947-r2hg (example node name from
"kubectl get nodes")
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz"
For all three remediation methods:
Restart the kubelet service and check status. The example below is for when using
systemctl to manage services:
systemctl daemon-reload
systemctl restart kubelet.service
systemctl status kubelet -l
scored: true

cfg/gke-1.6.0/policies.yaml

@@ -0,0 +1,238 @@
---
controls:
version: "gke-1.6.0"
id: 4
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 4.1
text: "RBAC and Service Accounts"
checks:
- id: 4.1.1
text: "Ensure that the cluster-admin role is only used where required (Automated)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
- id: 4.1.2
text: "Minimize access to secrets (Automated)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to secret objects in the cluster.
scored: false
- id: 4.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Automated)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 4.1.4
text: "Ensure that default service accounts are not actively used (Automated)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific
access to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
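For example, a sketch of patching the default service account in a single namespace:
kubectl patch serviceaccount default -n <namespace> \
-p '{"automountServiceAccountToken": false}'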
scored: false
- id: 4.1.5
text: "Ensure that Service Account Tokens are only mounted where necessary (Automated)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 4.1.6
text: "Avoid use of system:masters group (Automated)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 4.1.7
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 4.1.8
text: "Avoid bindings to system:anonymous (Automated)"
type: "manual"
remediation: |
Identify all clusterrolebindings and rolebindings to the user system:anonymous.
Check if they are used and review the permissions associated with the binding using the
commands in the Audit section above or refer to GKE documentation
(https://cloud.google.com/kubernetes-engine/docs/best-practices/rbac#detect-prevent-default).
Strongly consider replacing unsafe bindings with an authenticated, user-defined group.
Where possible, bind to non-default, user-defined groups with least-privilege roles.
If there are any unsafe bindings to the user system:anonymous, proceed to delete them
after consideration for cluster operations with only necessary, safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: false
- id: 4.1.9
text: "Avoid non-default bindings to system:unauthenticated (Automated)"
type: "manual"
remediation: |
Identify all non-default clusterrolebindings and rolebindings to the group
system:unauthenticated. Check if they are used and review the permissions
associated with the binding using the commands in the Audit section above or refer to
GKE documentation (https://cloud.google.com/kubernetes-engine/docs/best-practices/rbac#detect-prevent-default).
Strongly consider replacing non-default, unsafe bindings with an authenticated,
user-defined group. Where possible, bind to non-default, user-defined groups with
least-privilege roles.
If there are any non-default, unsafe bindings to the group system:unauthenticated,
proceed to delete them after consideration for cluster operations with only necessary,
safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: false
- id: 4.1.10
text: "Avoid non-default bindings to system:authenticated (Automated)"
type: "manual"
remediation: |
Identify all non-default clusterrolebindings and rolebindings to the group
system:authenticated. Check if they are used and review the permissions associated
with the binding using the commands in the Audit section above or refer to GKE
documentation.
Strongly consider replacing non-default, unsafe bindings with an authenticated,
user-defined group. Where possible, bind to non-default, user-defined groups with
least-privilege roles.
If there are any non-default, unsafe bindings to the group system:authenticated,
proceed to delete them after consideration for cluster operations with only necessary,
safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
scored: false
- id: 4.2
text: "Pod Security Standards"
checks:
- id: 4.2.1
text: "Ensure that the cluster enforces Pod Security Standard Baseline profile or stricter for all namespaces. (Manual)"
type: "manual"
remediation: |
Ensure that Pod Security Admission is in place for every namespace which contains
user workloads.
Run the following command to enforce the Baseline profile in a namespace:
kubectl label namespace <namespace> pod-security.kubernetes.io/enforce=baseline
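For example, to also warn and audit at the same level (the namespace name is a placeholder):
kubectl label namespace <namespace> \
pod-security.kubernetes.io/enforce=baseline \
pod-security.kubernetes.io/warn=baseline \
pod-security.kubernetes.io/audit=baseline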
scored: false
- id: 4.3
text: "Network Policies and CNI"
checks:
- id: 4.3.1
text: "Ensure that the CNI in use supports Network Policies (Manual)"
type: "manual"
remediation: |
To use a CNI plugin with Network Policy, enable Network Policy in GKE, and the CNI plugin
will be updated. See Recommendation 5.6.7.
scored: false
- id: 4.3.2
text: "Ensure that all Namespaces have Network Policies defined (Automated)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as needed.
See: https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy#creating_a_network_policy
for more information.
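A minimal illustrative default-deny ingress policy for a namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: <namespace>
spec:
  podSelector: {}
  policyTypes:
  - Ingress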
scored: false
- id: 4.4
text: "Secrets Management"
checks:
- id: 4.4.1
text: "Prefer using secrets as files over secrets as environment variables (Automated)"
type: "manual"
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
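As a sketch, a Pod-spec fragment mounting a hypothetical Secret as a read-only file volume:
volumes:
- name: app-secrets
  secret:
    secretName: example-secret
containers:
- name: app
  image: example/app
  volumeMounts:
  - name: app-secrets
    mountPath: /etc/secrets
    readOnly: true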
scored: false
- id: 4.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 4.5
text: "Extensible Admission Control"
checks:
- id: 4.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
Also see recommendation 5.10.4.
scored: false
- id: 4.6
text: "General Policies"
checks:
- id: 4.6.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 4.6.2
text: "Ensure that the seccomp profile is set to RuntimeDefault in your pod definitions (Automated)"
type: "manual"
remediation: |
Use security context to enable the RuntimeDefault seccomp profile in your pod
definitions. An example is as below:
{
"namespace": "kube-system",
"name": "metrics-server-v0.7.0-dbcc8ddf6-gz7d4",
"seccompProfile": "RuntimeDefault"
}
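In a Pod manifest this corresponds to a securityContext fragment such as:
securityContext:
  seccompProfile:
    type: RuntimeDefault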
scored: false
- id: 4.6.3
text: "Apply Security Context to Your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Google
Container-Optimized OS Benchmark.
scored: false
- id: 4.6.4
text: "The default namespace should not be used (Automated)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false

@@ -0,0 +1,47 @@
---
## Version-specific settings that override the values in cfg/config.yaml
master:
components:
- apiserver
- scheduler
- controllermanager
- etcd
- policies
apiserver:
bins:
- containerd
scheduler:
bins:
- containerd
controllermanager:
bins:
- containerd
etcd:
bins:
- containerd
datadirs:
- /var/lib/rancher/k3s/server/db/etcd
node:
components:
- kubelet
- proxy
kubelet:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubelet.kubeconfig
defaultcafile: /var/lib/rancher/k3s/agent/client-ca.crt
proxy:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
policies:
components:
- policies

@@ -0,0 +1,47 @@
---
controls:
version: "k3s-cis-1.23"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'"
type: "manual"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
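A minimal illustrative policy that records request metadata for all requests:
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata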
scored: false
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas:
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false

cfg/k3s-cis-1.23/etcd.yaml

@@ -0,0 +1,137 @@
---
controls:
version: "k3s-cis-1.23"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "grep -A 4 'client-transport-security' $etcdconf | grep -E 'cert-file|key-file'"
tests:
bin_op: and
test_items:
- flag: "cert-file"
set: true
- flag: "key-file"
set: true
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/ca-file>
--key-file=</path/to/key-file>
scored: true
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "grep -A 4 'client-transport-security' $etcdconf | grep 'client-cert-auth'"
tests:
bin_op: or
test_items:
- flag: "--client-cert-auth"
set: true
- flag: "client-cert-auth"
compare:
op: eq
value: true
set: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "if grep -q '^auto-tls' $etcdconf;then grep '^auto-tls' $etcdconf;else echo 'notset';fi"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
set: false
- flag: "--auto-tls"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)"
audit: "grep -A 4 'peer-transport-security' $etcdconf | grep -E 'cert-file|key-file'"
tests:
bin_op: and
test_items:
- flag: "cert-file"
set: true
- flag: "key-file"
set: true
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "grep -A 4 'peer-transport-security' $etcdconf | grep 'client-cert-auth'"
tests:
bin_op: or
test_items:
- flag: "--client-cert-auth"
set: true
- flag: "client-cert-auth"
compare:
op: eq
value: true
set: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "if grep -q '^peer-auto-tls' $etcdconf;then grep '^peer-auto-tls' $etcdconf;else echo 'notset';fi"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
set: false
- flag: "--peer-auto-tls"
compare:
op: eq
value: false
set: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: "if grep -q 'trusted-ca-file' $etcdconf;then grep 'trusted-ca-file' $etcdconf;else echo 'notset';fi"
tests:
test_items:
- flag: "trusted-ca-file"
set: true
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: false

@@ -0,0 +1,986 @@
---
controls:
version: "k3s-cis-1.23"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 644 $apiserverconf
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
type: "skip"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $apiserverconf
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 644 $controllermanagerconf
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
type: "skip"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $controllermanagerconf
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 644 $schedulerconf
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
type: "skip"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $schedulerconf
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then stat -c permissions=%a $etcdconf; fi'"
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 644 $etcdconf
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $etcdconf; then stat -c %U:%G $etcdconf; fi'"
type: "skip"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $etcdconf
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)"
type: "skip"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 644 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
type: "skip"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root <path/to/cni/files>
scored: false
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: "stat -c %a $etcddatadir"
tests:
test_items:
- flag: "700"
compare:
op: eq
value: "700"
set: true
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/etcd
scored: true
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%' | xargs stat -c %U:%G
type: "skip"
tests:
test_items:
- flag: "etcd:etcd"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
scored: true
- id: 1.1.13
text: "Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/admin.kubeconfig'"
type: "skip"
tests:
test_items:
- flag: "600"
compare:
op: eq
value: "600"
set: true
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
scored: true
- id: 1.1.14
text: "Ensure that the admin.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: "root:root"
set: true
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /etc/kubernetes/admin.conf
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 644 $schedulerkubeconfig
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/controller.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/controller.kubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 644 $controllermanagerkubeconfig
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "stat -c %U:%G /var/lib/rancher/k3s/server/tls"
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: "root:root"
set: true
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "find /var/lib/rancher/k3s/server/tls | xargs stat -c %U:%G"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /etc/kubernetes/pki/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive (Manual)"
audit: "stat -c %n\ %a /var/lib/rancher/k3s/server/tls/*.crt"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 644 /etc/kubernetes/pki/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "stat -c %n\ %a /var/lib/rancher/k3s/server/tls/*.key"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod -R 600 /etc/kubernetes/pki/*.key
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'"
type: manual
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--anonymous-auth=false
scored: false
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the control plane node and remove the --token-auth-file=<filename> parameter.
scored: true
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: "DenyServiceExternalIPs"
set: true
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and remove the `DenyServiceExternalIPs`
from enabled admission plugins.
scored: true
- id: 1.2.4
text: "Ensure that the --kubelet-https argument is set to true (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-https'"
type: "skip"
tests:
bin_op: or
test_items:
- flag: "--kubelet-https"
compare:
op: eq
value: true
- flag: "--kubelet-https"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and remove the --kubelet-https parameter.
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the control plane node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.6
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
Follow the Kubernetes documentation and setup the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the control plane node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.9
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.
scored: true
- id: 1.2.10
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
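# A minimal sketch of the admission configuration referenced above; the file
# path and the limit values are placeholders, not recommended defaults:
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: AdmissionConfiguration
#   plugins:
#     - name: EventRateLimit
#       path: /path/to/eventconfig.yaml
# where eventconfig.yaml could contain:
#   apiVersion: eventratelimit.admission.k8s.io/v1alpha1
#   kind: Configuration
#   limits:
#     - type: Server
#       qps: 50
#       burst: 100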
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.12
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.13
text: "Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "SecurityContextDeny"
- flag: "--enable-admission-plugins"
compare:
op: has
value: "PodSecurityPolicy"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...
scored: false
- id: 1.2.14
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.15
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.16
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.17
text: "Ensure that the --secure-port argument is not set to 0 (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'secure-port'"
tests:
bin_op: or
test_items:
- flag: "--secure-port"
compare:
op: gt
value: 0
- flag: "--secure-port"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.
scored: true
- id: 1.2.18
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.19
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log
scored: true
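# Although this check is skipped for K3s, audit logging can still be enabled.
# A sketch using the K3s config file (the log path is an example only):
#   # /etc/rancher/k3s/config.yaml
#   kube-apiserver-arg:
#     - "audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log"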
- id: 1.2.20
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30
scored: true
- id: 1.2.21
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10
scored: true
- id: 1.2.22
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100
scored: true
- id: 1.2.23
text: "Ensure that the --request-timeout argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
- flag: "--request-timeout"
remediation: |
Edit the API server pod specification file $apiserverconf
and set the below parameter as appropriate, if needed.
For example, --request-timeout=300s
scored: true
- id: 1.2.24
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.25
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -v grep"
type: "skip"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=<filename>
scored: true
- id: 1.2.26
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep -m1 'Running kube-apiserver'"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
set: true
- flag: "--etcd-keyfile"
set: true
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: true
- id: 1.2.27
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep -A1 'Running kube-apiserver' | tail -n2"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
set: true
- flag: "--tls-private-key-file"
set: true
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.28
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.29
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: true
- id: 1.2.30
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=</path/to/EncryptionConfig/File>
scored: false
- id: 1.2.31
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: "grep aescbc /path/to/encryption-config.json"
type: "manual"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
scored: false
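# A minimal sketch of an EncryptionConfig file using the aescbc provider;
# the key name is illustrative and the secret must be a locally generated,
# base64-encoded 32-byte value:
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: EncryptionConfiguration
#   resources:
#     - resources: ["secrets"]
#       providers:
#         - aescbc:
#             keys:
#               - name: key1
#                 secret: <base64-encoded-32-byte-key>
#         - identity: {}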
- id: 1.2.32
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
scored: false
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'RotateKubeletServerCertificate'"
type: "skip"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
set: true
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-scheduler' | tail -n1"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
set: true
remediation: |
Edit the Scheduler pod specification file $schedulerconf file
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
set: true
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true
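# Usage sketch: assuming kube-bench is installed on a K3s server node, the
# control plane checks above could be run with:
#   kube-bench run --targets master --benchmark k3s-cis-1.23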

cfg/k3s-cis-1.23/node.yaml Normal file

@@ -0,0 +1,490 @@
---
controls:
version: "k3s-cis-1.23"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 644 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
type: "skip"
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)"
audit: 'stat -c %a /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig'
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $proxykubeconfig
scored: false
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: false
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)"
audit: 'stat -c %a /var/lib/rancher/k3s/agent/kubelet.kubeconfig '
tests:
test_items:
- flag: "644"
compare:
op: eq
value: "644"
set: true
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: 'stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig'
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: "root:root"
set: true
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Manual)"
audit: "stat -c %a /var/lib/rancher/k3s/server/tls/server-ca.crt"
tests:
test_items:
- flag: "644"
compare:
op: eq
value: "644"
set: true
- flag: "640"
compare:
op: eq
value: "640"
set: true
- flag: "600"
compare:
op: eq
value: "600"
set: true
- flag: "444"
compare:
op: eq
value: "444"
set: true
- flag: "440"
compare:
op: eq
value: "440"
set: true
- flag: "400"
compare:
op: eq
value: "400"
set: true
- flag: "000"
compare:
op: eq
value: "000"
set: true
bin_op: or
remediation: |
Run the following command to modify the file permissions of the
--client-ca-file.
chmod 644 <filename>
scored: true
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Manual)"
audit: "stat -c %U:%G /var/lib/rancher/k3s/server/tls/client-ca.crt"
tests:
test_items:
- flag: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: false
- id: 4.1.9
text: "Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 644 $kubeletconf
scored: true
- id: 4.1.10
text: "Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
type: "skip"
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'' '
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
set: true
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
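# Note on the audit above: if the journal contains a kube-apiserver invocation
# (a K3s server node), the flag is read from that line; otherwise the K3s
# default of --anonymous-auth=false is echoed so that agent-only nodes pass.
# Checks 4.2.2 and 4.2.3 below follow the same pattern.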
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode" | grep -v grep; else echo "--authorization-mode=Webhook"; fi'' '
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
set: true
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file" | grep -v grep; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'' '
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Ensure that the --read-only-port argument is set to 0 (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kubelet' | tail -n1 | grep 'read-only-port' "
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kubelet' | tail -n1 | grep 'streaming-connection-idle-timeout'"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.6
text: "Ensure that the --protect-kernel-defaults argument is set to true (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kubelet' | tail -n1 | grep 'protect-kernel-defaults'"
type: "skip"
tests:
test_items:
- flag: --protect-kernel-defaults
path: '{.protectKernelDefaults}'
compare:
op: eq
value: true
set: true
remediation: |
If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.7
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kubelet' | tail -n1 | grep 'make-iptables-util-chains'"
type: "skip"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
set: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.8
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
audit: "/bin/ps -fC $kubeletbin "
type: "skip"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.9
text: "Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Manual)"
audit: "/bin/ps -fC $kubeletbin"
type: "manual"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.10
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --tls-cert-file
path: '/var/lib/rancher/k3s/agent/serving-kubelet.crt'
- flag: --tls-private-key-file
path: '/var/lib/rancher/k3s/agent/serving-kubelet.key'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.11
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
type: "skip"
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
remove the setting altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.12
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: "/bin/ps -fC $kubeletbin"
type: "skip"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
- id: 4.2.13
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -fC $kubeletbin"
type: "manual"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: false

cfg/k3s-cis-1.23/policies.yaml Normal file

@@ -0,0 +1,269 @@
---
controls:
version: "k3s-cis-1.23"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
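# A sketch of opting a default service account out of token automounting;
# the namespace name is illustrative:
#   apiVersion: v1
#   kind: ServiceAccount
#   metadata:
#     name: default
#     namespace: my-namespace
#   automountServiceAccountToken: false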
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
- id: 5.2.2
text: "Minimize the admission of privileged containers (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Automated)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Automated)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
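# A minimal default-deny sketch that gives a namespace a NetworkPolicy; the
# policy and namespace names are illustrative:
#   apiVersion: networking.k8s.io/v1
#   kind: NetworkPolicy
#   metadata:
#     name: default-deny-ingress
#     namespace: my-namespace
#   spec:
#     podSelector: {}
#     policyTypes:
#       - Ingress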
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
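# A pod spec fragment showing a Secret consumed as a mounted file rather than
# an environment variable; all names are illustrative:
#   volumes:
#     - name: creds
#       secret:
#         secretName: db-credentials
#   containers:
#     - name: app
#       volumeMounts:
#         - name: creds
#           mountPath: /etc/creds
#           readOnly: true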
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and set up image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false

cfg/k3s-cis-1.23/config.yaml Normal file

@@ -0,0 +1,58 @@
---
## Version-specific settings that override the values in cfg/config.yaml
master:
components:
- apiserver
- scheduler
- controllermanager
- etcd
- policies
apiserver:
bins:
- containerd
scheduler:
bins:
- containerd
kubeconfig:
- /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig
controllermanager:
bins:
- containerd
kubeconfig:
- /var/lib/rancher/k3s/server/cred/controller.kubeconfig
etcd:
bins:
- containerd
etcd:
components:
- etcd
etcd:
confs: /var/lib/rancher/k3s/server/db/etcd/config
node:
components:
- kubelet
- proxy
kubelet:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubelet.kubeconfig
defaultcafile: /var/lib/rancher/k3s/agent/client-ca.crt
proxy:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
policies:
components:
- policies
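# These overrides are layered on top of cfg/config.yaml when this benchmark
# is selected, for example:
#   kube-bench run --benchmark k3s-cis-1.23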

cfg/k3s-cis-1.24/controlplane.yaml Normal file

@@ -0,0 +1,46 @@
---
controls:
version: "k3s-cis-1.24"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false

cfg/k3s-cis-1.24/etcd.yaml Normal file

@@ -0,0 +1,144 @@
---
controls:
version: "k3s-cis-1.24"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: and
test_items:
- path: "{.client-transport-security.cert-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/server-client.crt"
- path: "{.client-transport-security.key-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/server-client.key"
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates cert and key files for etcd.
These are located in /var/lib/rancher/k3s/server/tls/etcd/.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use custom cert and key files.
scored: false
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
test_items:
- path: "{.client-transport-security.client-cert-auth}"
compare:
op: eq
value: true
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s sets the --client-cert-auth parameter to true.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to disable client certificate authentication.
scored: false
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: or
test_items:
- path: "{.client-transport-security.auto-tls}"
compare:
op: eq
value: false
- path: "{.client-transport-security.auto-tls}"
set: false
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s does not set the --auto-tls parameter.
If this check fails, edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
client-transport-security:
auto-tls: false
scored: false
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: and
test_items:
- path: "{.peer-transport-security.cert-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt"
- path: "{.peer-transport-security.key-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key"
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates peer cert and key files for etcd.
These are located in /var/lib/rancher/k3s/server/tls/etcd/.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use custom peer cert and key files.
scored: false
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
test_items:
- path: "{.peer-transport-security.client-cert-auth}"
compare:
op: eq
value: true
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s sets the --peer-client-cert-auth parameter to true.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to disable peer client certificate authentication.
scored: false
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: or
test_items:
- path: "{.peer-transport-security.auto-tls}"
compare:
op: eq
value: false
- path: "{.peer-transport-security.auto-tls}"
set: false
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s does not set the --peer-auto-tls parameter.
If this check fails, edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
peer-transport-security:
auto-tls: false
scored: false
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Automated)"
audit_config: "cat $etcdconf"
tests:
test_items:
- path: "{.peer-transport-security.trusted-ca-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt"
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates a unique certificate authority for etcd.
This is located at /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use a shared certificate authority.
scored: false
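# For reference, a compliant embedded-etcd $etcdconf fragment that the checks
# above parse might look like this (paths match the K3s defaults cited above):
#   client-transport-security:
#     cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
#     key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
#     client-cert-auth: true
#   peer-transport-security:
#     cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
#     key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
#     client-cert-auth: true
#     trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt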

cfg/k3s-cis-1.24/master.yaml Normal file
File diff suppressed because it is too large

cfg/k3s-cis-1.24/node.yaml Normal file

@@ -0,0 +1,427 @@
---
controls:
version: "k3s-cis-1.24"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
type: "skip"
tests:
test_items:
- flag: root:root
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: true
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)"
audit: 'stat -c %U:%G $proxykubeconfig'
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: true
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: 'stat -c %U:%G $kubeletkubeconfig'
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: "root:root"
set: true
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)"
audit: "stat -c permissions=%a $kubeletcafile"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
set: true
remediation: |
Run the following command to modify the file permissions of the
--client-ca-file.
chmod 600 $kubeletcafile
scored: true
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Automated)"
audit: "stat -c %U:%G $kubeletcafile"
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root $kubeletcafile
scored: true
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
type: "skip"
tests:
test_items:
- flag: root:root
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'' '
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
set: true
remediation: |
By default, K3s sets the --anonymous-auth to false. If you have set this to a different value, you
should set it back to false. If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "anonymous-auth=true"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="anonymous-auth=true"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode"; else echo "--authorization-mode=Webhook"; fi'' '
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
set: true
remediation: |
By default, K3s does not set the --authorization-mode to AlwaysAllow.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "authorization-mode=AlwaysAllow"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="authorization-mode=AlwaysAllow"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file"; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'' '
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
set: true
remediation: |
By default, K3s automatically provides the client ca certificate for the Kubelet.
It is generated and located at /var/lib/rancher/k3s/agent/client-ca.crt
scored: true
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
By default, K3s sets the --read-only-port to 0. If you have set this to a different value, you
should set it back to 0. If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "read-only-port=XXXX"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="read-only-port=XXXX"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter to an appropriate value.
kubelet-arg:
- "streaming-connection-idle-timeout=5m"
If using the command line, run K3s with --kubelet-arg="streaming-connection-idle-timeout=5m".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
- id: 4.2.6
text: "Ensure that the --protect-kernel-defaults argument is set to true (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --protect-kernel-defaults
path: '{.protectKernelDefaults}'
compare:
op: eq
value: true
set: true
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter.
protect-kernel-defaults: true
If using the command line, run K3s with --protect-kernel-defaults=true.
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
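# Note: with protect-kernel-defaults=true the kubelet refuses to start if the
# host sysctls differ from the Kubernetes defaults. A sketch of the usual
# prerequisite sysctls (values from upstream kubelet guidance; verify for your
# distribution):
#   vm.overcommit_memory=1
#   vm.panic_on_oom=0
#   kernel.panic=10
#   kernel.panic_on_oops=1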
- id: 4.2.7
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
set: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter.
kubelet-arg:
- "make-iptables-util-chains=true"
If using the command line, run K3s with --kubelet-arg="make-iptables-util-chains=true".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.8
text: "Ensure that the --hostname-override argument is not set (Automated)"
type: "skip"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Not Applicable.
By default, K3s does set the --hostname-override argument. Per CIS guidelines, this is permitted to comply
with cloud providers that require this flag so that hostnames match node names.
scored: true
- id: 4.2.9
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: eq
value: 0
remediation: |
By default, K3s sets the event-qps to 0. Should you wish to change this and are
using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter to an appropriate value.
kubelet-arg:
- "event-qps=<value>"
If using the command line, run K3s with --kubelet-arg="event-qps=<value>".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
- id: 4.2.10
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --tls-cert-file
path: '/var/lib/rancher/k3s/agent/serving-kubelet.crt'
- flag: --tls-private-key-file
path: '/var/lib/rancher/k3s/agent/serving-kubelet.key'
remediation: |
By default, K3s automatically provides the TLS certificate and private key for the Kubelet.
They are generated and located at /var/lib/rancher/k3s/agent/serving-kubelet.crt and /var/lib/rancher/k3s/agent/serving-kubelet.key
If for some reason you need to provide your own certificate and key, you can set the
below parameters in the K3s config file /etc/rancher/k3s/config.yaml.
kubelet-arg:
- "tls-cert-file=<path/to/tls-cert-file>"
- "tls-private-key-file=<path/to/tls-private-key-file>"
scored: true
- id: 4.2.11
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
By default, K3s does not set the --rotate-certificates argument. If you have set this flag with a value of `false`, you should either set it to `true` or completely remove the flag.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any rotate-certificates parameter.
If using the command line, remove the K3s flag --kubelet-arg="rotate-certificates".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.12
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
By default, K3s does not set the RotateKubeletServerCertificate feature gate.
If you have enabled this feature gate, you should remove it.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any feature-gate=RotateKubeletServerCertificate parameter.
If using the command line, remove the K3s flag --kubelet-arg="feature-gate=RotateKubeletServerCertificate".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.13
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/cat $kubeletconf"
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a K3s config file /etc/rancher/k3s/config.yaml, edit the file to set `tlsCipherSuites` to
kubelet-arg:
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
or to a subset of these values.
If using the command line, add the K3s flag --kubelet-arg="tls-cipher-suites=<same values as above>"
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
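# A minimal sketch of running this worker-node benchmark with kube-bench,
# assuming the files ship under the default cfg/ directory:
#   kube-bench run --targets node --benchmark k3s-cis-1.24
# --benchmark selects this profile explicitly instead of letting kube-bench
# auto-detect one from the Kubernetes version.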

@@ -0,0 +1,269 @@
---
controls:
version: "k3s-cis-1.24"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
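# A sketch of enumerating the bindings to review first (plain kubectl,
# nothing benchmark-specific):
#   kubectl get clusterrolebindings -o wide | grep cluster-admin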
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Automated)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Automated)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "manual"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is shown below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
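# A fuller, hypothetical Pod manifest illustrating the remediation above; the
# name `app` and image `nginx` are placeholders, not part of the benchmark:
#   apiVersion: v1
#   kind: Pod
#   metadata:
#     name: app
#   spec:
#     securityContext:
#       seccompProfile:
#         type: RuntimeDefault
#     containers:
#       - name: app
#         image: nginx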
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "manual"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false

@@ -0,0 +1,64 @@
---
## Version-specific settings that override the values in cfg/config.yaml
master:
components:
- apiserver
- kubelet
- scheduler
- controllermanager
- etcd
- policies
apiserver:
bins:
- containerd
kubelet:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubelet.kubeconfig
defaultcafile: /var/lib/rancher/k3s/agent/client-ca.crt
scheduler:
bins:
- containerd
kubeconfig:
- /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig
controllermanager:
bins:
- containerd
kubeconfig:
- /var/lib/rancher/k3s/server/cred/controller.kubeconfig
etcd:
bins:
- containerd
etcd:
components:
- etcd
etcd:
confs: /var/lib/rancher/k3s/server/db/etcd/config
node:
components:
- kubelet
- proxy
kubelet:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubelet.kubeconfig
defaultcafile: /var/lib/rancher/k3s/agent/client-ca.crt
proxy:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
policies:
components:
- policies
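# How these overrides are consumed (a sketch of kube-bench's variable
# substitution, per the comments in cfg/config.yaml): kube-bench searches the
# configured kubeconfig candidates and falls back to defaultkubeconfig and
# defaultcafile, and the result is substituted for $kubeletkubeconfig and
# $kubeletcafile in the audits above. So on a node,
#   stat -c %U:%G $kubeletkubeconfig
# effectively runs
#   stat -c %U:%G /var/lib/rancher/k3s/agent/kubelet.kubeconfig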

@@ -0,0 +1,60 @@
---
controls:
version: "k3s-cis-1.7"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.1.2
text: "Service account token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.
scored: false
- id: 3.1.3
text: "Bootstrap token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
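# A minimal audit policy that would satisfy the check above (a sketch using
# the upstream audit.k8s.io/v1 API; the file path is illustrative):
#   # /var/lib/rancher/k3s/server/audit.yaml
#   apiVersion: audit.k8s.io/v1
#   kind: Policy
#   rules:
#     - level: Metadata
# K3s would then be pointed at it, e.g. in /etc/rancher/k3s/config.yaml:
#   kube-apiserver-arg:
#     - "audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml"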
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false

cfg/k3s-cis-1.7/etcd.yaml (new file)

@@ -0,0 +1,144 @@
---
controls:
version: "k3s-cis-1.7"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: and
test_items:
- path: "{.client-transport-security.cert-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/server-client.crt"
- path: "{.client-transport-security.key-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/server-client.key"
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates cert and key files for etcd.
These are located in /var/lib/rancher/k3s/server/tls/etcd/.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use custom cert and key files.
scored: false
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
test_items:
- path: "{.client-transport-security.client-cert-auth}"
compare:
op: eq
value: true
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s sets the --client-cert-auth parameter to true.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to disable client certificate authentication.
scored: false
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: or
test_items:
- path: "{.client-transport-security.auto-tls}"
compare:
op: eq
value: false
- path: "{.client-transport-security.auto-tls}"
set: false
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s does not set the --auto-tls parameter.
If this check fails, edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
client-transport-security:
auto-tls: false
scored: false
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: and
test_items:
- path: "{.peer-transport-security.cert-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt"
- path: "{.peer-transport-security.key-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key"
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates peer cert and key files for etcd.
These are located in /var/lib/rancher/k3s/server/tls/etcd/.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use custom peer cert and key files.
scored: false
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
test_items:
- path: "{.peer-transport-security.client-cert-auth}"
compare:
op: eq
value: true
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s sets the --peer-client-cert-auth parameter to true.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to disable peer client certificate authentication.
scored: false
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: or
test_items:
- path: "{.peer-transport-security.auto-tls}"
compare:
op: eq
value: false
- path: "{.peer-transport-security.auto-tls}"
set: false
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s does not set the --peer-auto-tls parameter.
If this check fails, edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
peer-transport-security:
auto-tls: false
scored: false
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Automated)"
audit_config: "cat $etcdconf"
tests:
test_items:
- path: "{.peer-transport-security.trusted-ca-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt"
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates a unique certificate authority for etcd.
This is located at /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use a shared certificate authority.
scored: false
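# For reference, the shape of the embedded-etcd config file these JSONPath
# tests parse, reconstructed from the paths and default values named above:
#   client-transport-security:
#     cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
#     key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
#     client-cert-auth: true
#   peer-transport-security:
#     cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
#     key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
#     client-cert-auth: true
#     trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt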

cfg/k3s-cis-1.7/master.yaml (new file, 1000 lines)
File diff suppressed because it is too large.

cfg/k3s-cis-1.7/node.yaml (new file)

@@ -0,0 +1,422 @@
---
controls:
version: "k3s-cis-1.7"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: true
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: true
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: 'stat -c %U:%G $kubeletkubeconfig'
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)"
audit: "stat -c permissions=%a $kubeletcafile"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command to modify the file permissions of the --client-ca-file:
chmod 600 $kubeletcafile
scored: true
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Automated)"
audit: "stat -c %U:%G $kubeletcafile"
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root $kubeletcafile
scored: true
- id: 4.1.9
text: "Ensure that the kubelet --config configuration file has permissions set to 600 or more restrictive (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.10
text: "Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'' '
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
By default, K3s sets the --anonymous-auth to false. If you have set this to a different value, you
should set it back to false. If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "anonymous-auth=true"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="anonymous-auth=true"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode"; else echo "--authorization-mode=Webhook"; fi'' '
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
By default, K3s does not set the --authorization-mode to AlwaysAllow.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "authorization-mode=AlwaysAllow"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="authorization-mode=AlwaysAllow"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file"; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'' '
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
By default, K3s automatically provides the client ca certificate for the Kubelet.
It is generated and located at /var/lib/rancher/k3s/agent/client-ca.crt
scored: true
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
By default, K3s sets the --read-only-port to 0. If you have set this to a different value, you
should set it back to 0. If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "read-only-port=XXXX"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="read-only-port=XXXX"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter to an appropriate value.
kubelet-arg:
- "streaming-connection-idle-timeout=5m"
If using the command line, run K3s with --kubelet-arg="streaming-connection-idle-timeout=5m".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
- id: 4.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter.
kubelet-arg:
- "make-iptables-util-chains=true"
If using the command line, run K3s with --kubelet-arg="make-iptables-util-chains=true".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.7
text: "Ensure that the --hostname-override argument is not set (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
type: "skip"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Not Applicable.
By default, K3s does set the --hostname-override argument. Per CIS guidelines, this is permitted to comply
with cloud providers that require this flag so that hostnames match node names.
scored: true
- id: 4.2.8
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
By default, K3s sets the event-qps to 0. Should you wish to change this and are
using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter to an appropriate value.
kubelet-arg:
- "event-qps=<value>"
If using the command line, run K3s with --kubelet-arg="event-qps=<value>".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
- id: 4.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --tls-cert-file
path: '/var/lib/rancher/k3s/agent/serving-kubelet.crt'
- flag: --tls-private-key-file
path: '/var/lib/rancher/k3s/agent/serving-kubelet.key'
remediation: |
By default, K3s automatically provides the TLS certificate and private key for the Kubelet.
They are generated and located at /var/lib/rancher/k3s/agent/serving-kubelet.crt and /var/lib/rancher/k3s/agent/serving-kubelet.key
If for some reason you need to provide your own certificate and key, you can set the
below parameters in the K3s config file /etc/rancher/k3s/config.yaml.
kubelet-arg:
- "tls-cert-file=<path/to/tls-cert-file>"
- "tls-private-key-file=<path/to/tls-private-key-file>"
scored: true
- id: 4.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
By default, K3s does not set the --rotate-certificates argument. If you have set this flag with a value of `false`, you should either set it to `true` or completely remove the flag.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any rotate-certificates parameter.
If using the command line, remove the K3s flag --kubelet-arg="rotate-certificates".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.11
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
By default, K3s does not set the RotateKubeletServerCertificate feature gate.
If you have enabled this feature gate, you should remove it.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any feature-gate=RotateKubeletServerCertificate parameter.
If using the command line, remove the K3s flag --kubelet-arg="feature-gate=RotateKubeletServerCertificate".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.12
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a K3s config file /etc/rancher/k3s/config.yaml, edit the file to set `tlsCipherSuites` to
kubelet-arg:
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
or to a subset of these values.
If using the command line, add the K3s flag --kubelet-arg="tls-cipher-suites=<same values as above>"
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
- id: 4.2.13
text: "Ensure that a limit is set on pod PIDs (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --pod-max-pids
path: '{.podPidsLimit}'
remediation: |
Decide on an appropriate level for this parameter and set it.
If using a K3s config file /etc/rancher/k3s/config.yaml, edit the file to set `podPidsLimit` to
kubelet-arg:
- "pod-max-pids=<value>"
scored: false
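# Putting the kubelet-arg remediations from this section together, a sketch of
# a hardened /etc/rancher/k3s/config.yaml (values are the examples given in
# the remediations above, not mandates):
#   kubelet-arg:
#     - "streaming-connection-idle-timeout=5m"
#     - "make-iptables-util-chains=true"
#     - "pod-max-pids=<value>"
#     - "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,..."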

@@ -0,0 +1,300 @@
---
controls:
version: "k3s-cis-1.7"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.1.9
text: "Minimize access to create persistent volumes (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to PersistentVolume objects in the cluster.
scored: false
- id: 5.1.10
text: "Minimize access to the proxy sub-resource of nodes (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the proxy sub-resource of node objects.
scored: false
- id: 5.1.11
text: "Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
scored: false
- id: 5.1.12
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects
scored: false
- id: 5.1.13
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the token sub-resource of serviceaccount objects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: true
- id: 5.2.7
text: "Minimize the admission of root containers (Automated)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Automated)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is shown below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false

@@ -0,0 +1,54 @@
---
## Version-specific settings that override the values in cfg/config.yaml
master:
components:
- apiserver
- kubelet
- scheduler
- controllermanager
- etcd
- policies
apiserver:
bins:
- containerd
kubelet:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubelet.kubeconfig
defaultcafile: /var/lib/rancher/k3s/agent/client-ca.crt
scheduler:
bins:
- containerd
kubeconfig:
- /var/lib/rancher/k3s/server/cred/scheduler.kubeconfig
controllermanager:
bins:
- containerd
kubeconfig:
- /var/lib/rancher/k3s/server/cred/controller.kubeconfig
etcd:
bins:
- containerd
etcd:
confs: /var/lib/rancher/k3s/server/db/etcd/config
node:
components:
- kubelet
- proxy
kubelet:
bins:
- containerd
confs:
- /var/lib/rancher/k3s/agent/kubelet.kubeconfig
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubelet.kubeconfig
defaultcafile: /var/lib/rancher/k3s/agent/client-ca.crt
proxy:
bins:
- containerd
defaultkubeconfig: /var/lib/rancher/k3s/agent/kubeproxy.kubeconfig
policies:
components:
- policies

@@ -0,0 +1,62 @@
---
controls:
version: "k3s-cis-1.8"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.1.2
text: "Service account token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.
scored: false
- id: 3.1.3
text: "Bootstrap token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'audit-policy-file'"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: false
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false

cfg/k3s-cis-1.8/etcd.yaml (new file)

@@ -0,0 +1,144 @@
---
controls:
version: "k3s-cis-1.8"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: and
test_items:
- path: "{.client-transport-security.cert-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/server-client.crt"
- path: "{.client-transport-security.key-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/server-client.key"
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates cert and key files for etcd.
These are located in /var/lib/rancher/k3s/server/tls/etcd/.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use custom cert and key files.
scored: false
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
test_items:
- path: "{.client-transport-security.client-cert-auth}"
compare:
op: eq
value: true
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s sets the --client-cert-auth parameter to true.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to disable client certificate authentication.
scored: false
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: or
test_items:
- path: "{.client-transport-security.auto-tls}"
compare:
op: eq
value: false
- path: "{.client-transport-security.auto-tls}"
set: false
remediation: |
If running with SQLite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s does not set the --auto-tls parameter.
If this check fails, edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
client-transport-security:
auto-tls: false
scored: false
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: and
test_items:
- path: "{.peer-transport-security.cert-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt"
- path: "{.peer-transport-security.key-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key"
remediation: |
If running with sqlite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates peer cert and key files for etcd.
These are located in /var/lib/rancher/k3s/server/tls/etcd/.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use custom peer cert and key files.
scored: false
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
test_items:
- path: "{.peer-transport-security.client-cert-auth}"
compare:
op: eq
value: true
remediation: |
If running with sqlite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s sets the --peer-client-cert-auth parameter to true.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to disable peer client certificate authentication.
scored: false
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit_config: "cat $etcdconf"
tests:
bin_op: or
test_items:
- path: "{.peer-transport-security.auto-tls}"
compare:
op: eq
value: false
- path: "{.peer-transport-security.auto-tls}"
set: false
remediation: |
If running with sqlite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s does not set the --peer-auto-tls parameter.
If this check fails, edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
peer-transport-security:
auto-tls: false
scored: false
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Automated)"
audit_config: "cat $etcdconf"
tests:
test_items:
- path: "{.peer-transport-security.trusted-ca-file}"
compare:
op: eq
value: "/var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt"
remediation: |
If running with sqlite or an external DB, etcd checks are Not Applicable.
When running with embedded-etcd, K3s generates a unique certificate authority for etcd.
This is located at /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt.
If this check fails, ensure that the configuration file $etcdconf
has not been modified to use a shared certificate authority.
scored: false
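# For reference, a hedged sketch of what a compliant embedded-etcd
# configuration file ($etcdconf, typically
# /var/lib/rancher/k3s/server/db/etcd/config) looks like; the values mirror
# the defaults that checks 2.1-2.7 above compare against, and auto-tls is
# simply left unset.
# client-transport-security:
#   cert-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.crt
#   key-file: /var/lib/rancher/k3s/server/tls/etcd/server-client.key
#   client-cert-auth: true
# peer-transport-security:
#   cert-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt
#   key-file: /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.key
#   client-cert-auth: true
#   trusted-ca-file: /var/lib/rancher/k3s/server/tls/etcd/peer-ca.crt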

cfg/k3s-cis-1.8/master.yaml Normal file

@@ -0,0 +1,985 @@
---
controls:
version: "k3s-cis-1.8"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
By default, K3s embeds the API server within the k3s process. There is no API server pod specification file.
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Not Applicable.
By default, K3s embeds the API server within the k3s process. There is no API server pod specification file.
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
By default, K3s embeds the controller manager within the k3s process. There is no controller manager pod specification file.
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Not Applicable.
By default, K3s embeds the controller manager within the k3s process. There is no controller manager pod specification file.
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
By default, K3s embeds the scheduler within the k3s process. There is no scheduler pod specification file.
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Not Applicable.
By default, K3s embeds the scheduler within the k3s process. There is no scheduler pod specification file.
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c permissions=%a; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
By default, K3s embeds etcd within the k3s process. There is no etcd pod specification file.
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c %U:%G; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Not Applicable.
By default, K3s embeds etcd within the k3s process. There is no etcd pod specification file.
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Automated)"
audit: find /var/lib/cni/networks -type f ! -name lock 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
By default, K3s sets the CNI file permissions to 600.
Note that for many CNIs, a lock file is created with permissions 750. This is expected and can be ignored.
If you modify your CNI configuration, ensure that the permissions are set to 600.
For example, chmod 600 /var/lib/cni/networks/<filename>
scored: true
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Automated)"
audit: find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root /var/lib/cni/networks/<filename>
scored: true
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: |
if [ "$(journalctl -m -u k3s | grep -m1 'Managed etcd cluster' | wc -l)" -gt 0 ]; then
stat -c permissions=%a /var/lib/rancher/k3s/server/db/etcd
else
echo "permissions=700"
fi
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "700"
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/rancher/k3s/server/db/etcd
scored: true
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
audit: ps -ef | grep $etcdbin | grep -- --data-dir | sed 's%.*data-dir[= ]\([^ ]*\).*%\1%' | xargs stat -c %U:%G
type: "skip"
tests:
test_items:
- flag: "etcd:etcd"
remediation: |
Not Applicable.
For K3s, etcd is embedded within the k3s process. There is no separate etcd process.
Therefore the etcd data directory ownership is managed by the k3s process and should be root:root.
scored: true
- id: 1.1.13
text: "Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c permissions=%a /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /var/lib/rancher/k3s/server/cred/admin.kubeconfig
scored: true
- id: 1.1.14
text: "Ensure that the admin.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e /var/lib/rancher/k3s/server/cred/admin.kubeconfig; then stat -c %U:%G /var/lib/rancher/k3s/server/cred/admin.kubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: "root:root"
set: true
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /var/lib/rancher/k3s/server/cred/admin.kubeconfig
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $schedulerkubeconfig
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $controllermanagerkubeconfig
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
audit: "stat -c %U:%G $controllermanagerkubeconfig"
tests:
test_items:
- flag: "root:root"
compare:
op: eq
value: "root:root"
set: true
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "stat -c %U:%G /var/lib/rancher/k3s/server/tls"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /var/lib/rancher/k3s/server/tls
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)"
audit: "/bin/sh -c 'stat -c permissions=%a /var/lib/rancher/k3s/server/tls/*.crt'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod -R 600 /var/lib/rancher/k3s/server/tls/*.crt
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)"
audit: "/bin/sh -c 'stat -c permissions=%a /var/lib/rancher/k3s/server/tls/*.key'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the master node.
For example,
chmod -R 600 /var/lib/rancher/k3s/server/tls/*.key
scored: true
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'anonymous-auth'"
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
remediation: |
By default, K3s sets the --anonymous-auth argument to false.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove anything similar to below.
kube-apiserver-arg:
- "anonymous-auth=true"
scored: true
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove anything similar to below.
kube-apiserver-arg:
- "token-auth-file=<path>"
scored: true
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is not set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: "DenyServiceExternalIPs"
set: true
- flag: "--enable-admission-plugins"
set: false
remediation: |
By default, K3s does not set DenyServiceExternalIPs.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "enable-admission-plugins=DenyServiceExternalIPs"
scored: true
- id: 1.2.4
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
By default, K3s automatically provides the kubelet client certificate and key.
They are generated and located at /var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt and /var/lib/rancher/k3s/server/tls/client-kube-apiserver.key
If for some reason you need to provide your own certificate and key, you can set the
below parameters in the K3s config file /etc/rancher/k3s/config.yaml.
kube-apiserver-arg:
- "kubelet-client-certificate=<path/to/client-cert-file>"
- "kubelet-client-key=<path/to/client-key-file>"
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'kubelet-certificate-authority'"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
remediation: |
By default, K3s automatically provides the kubelet CA cert file, at /var/lib/rancher/k3s/server/tls/server-ca.crt.
If for some reason you need to provide your own ca certificate, look at using the k3s certificate command line tool.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "kubelet-certificate-authority=<path/to/ca-cert-file>"
scored: true
- id: 1.2.6
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
By default, K3s does not set the --authorization-mode to AlwaysAllow.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "authorization-mode=AlwaysAllow"
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
By default, K3s sets the --authorization-mode to Node and RBAC.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml,
ensure that you are not overriding authorization-mode.
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'authorization-mode'"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
By default, K3s sets the --authorization-mode to Node and RBAC.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml,
ensure that you are not overriding authorization-mode.
scored: true
- id: 1.2.9
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the K3s config file /etc/rancher/k3s/config.yaml and set the below parameters.
kube-apiserver-arg:
- "enable-admission-plugins=...,EventRateLimit,..."
- "admission-control-config-file=<path/to/configuration/file>"
scored: false
- id: 1.2.10
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
By default, K3s does not set the --enable-admission-plugins to AlwaysAdmit.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "enable-admission-plugins=AlwaysAdmit"
scored: true
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Permissive, per CIS guidelines,
"This setting could impact offline or isolated clusters, which have images pre-loaded and
do not have access to a registry to pull in-use images. This setting is not appropriate for
clusters which use this configuration."
Edit the K3s config file /etc/rancher/k3s/config.yaml and set the below parameter.
kube-apiserver-arg:
- "enable-admission-plugins=...,AlwaysPullImages,..."
scored: false
- id: 1.2.12
text: "Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)"
type: "skip"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "SecurityContextDeny"
- flag: "--enable-admission-plugins"
compare:
op: has
value: "PodSecurityPolicy"
remediation: |
Not Applicable.
Enabling Pod Security Policy is no longer supported on K3s v1.25+ and will cause applications to unexpectedly fail.
scored: false
- id: 1.2.13
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
By default, K3s does not set the --disable-admission-plugins to anything.
Follow the documentation and create ServiceAccount objects as per your environment.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "disable-admission-plugins=ServiceAccount"
scored: true
- id: 1.2.14
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
By default, K3s does not set the --disable-admission-plugins to anything.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "disable-admission-plugins=...,NamespaceLifecycle,..."
scored: true
- id: 1.2.15
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'enable-admission-plugins'"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
By default, K3s sets the --enable-admission-plugins to NodeRestriction.
If using the K3s config file /etc/rancher/k3s/config.yaml, check that you are not overriding the admission plugins.
If you are, include NodeRestriction in the list.
kube-apiserver-arg:
- "enable-admission-plugins=...,NodeRestriction,..."
scored: true
- id: 1.2.16
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'profiling'"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
By default, K3s sets the --profiling argument to false.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "profiling=true"
scored: true
- id: 1.2.17
text: "Ensure that the --audit-log-path argument is set (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the K3s config file /etc/rancher/k3s/config.yaml and set the audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
kube-apiserver-arg:
- "audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log"
scored: false
- id: 1.2.18
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and
set the audit-log-maxage parameter to 30 or as an appropriate number of days, for example,
kube-apiserver-arg:
- "audit-log-maxage=30"
scored: false
- id: 1.2.19
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and
set the audit-log-maxbackup parameter to 10 or to an appropriate value. For example,
kube-apiserver-arg:
- "audit-log-maxbackup=10"
scored: false
- id: 1.2.20
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and
set the audit-log-maxsize parameter to an appropriate size in MB. For example,
kube-apiserver-arg:
- "audit-log-maxsize=100"
scored: false
- id: 1.2.21
text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
test_items:
- flag: "--request-timeout"
remediation: |
Permissive, per CIS guidelines,
"it is recommended to set this limit as appropriate and change the default limit of 60 seconds only if needed".
Edit the K3s config file /etc/rancher/k3s/config.yaml
and set the below parameter if needed. For example,
kube-apiserver-arg:
- "request-timeout=300s"
scored: false
- id: 1.2.22
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
By default, K3s does not set the --service-account-lookup argument.
Edit the K3s config file /etc/rancher/k3s/config.yaml and set the service-account-lookup. For example,
kube-apiserver-arg:
- "service-account-lookup=true"
Alternatively, you can delete the service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.23
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
K3s automatically generates and sets the service account key file.
It is located at /var/lib/rancher/k3s/server/tls/service.key.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "service-account-key-file=<path>"
scored: true
- id: 1.2.24
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: |
if [ "$(journalctl -m -u k3s | grep -m1 'Managed etcd cluster' | wc -l)" -gt 0 ]; then
journalctl -m -u k3s | grep -m1 'Running kube-apiserver' | tail -n1
else
echo "--etcd-certfile AND --etcd-keyfile"
fi
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
set: true
- flag: "--etcd-keyfile"
set: true
remediation: |
K3s automatically generates and sets the etcd certificate and key files.
They are located at /var/lib/rancher/k3s/server/tls/etcd/client.crt and /var/lib/rancher/k3s/server/tls/etcd/client.key.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "etcd-certfile=<path>"
- "etcd-keyfile=<path>"
scored: true
- id: 1.2.25
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep -A1 'Running kube-apiserver' | tail -n2"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
set: true
- flag: "--tls-private-key-file"
set: true
remediation: |
By default, K3s automatically generates and provides the TLS certificate and private key for the apiserver.
They are generated and located at /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt and /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "tls-cert-file=<path>"
- "tls-private-key-file=<path>"
scored: true
- id: 1.2.26
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'client-ca-file'"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
By default, K3s automatically provides the client certificate authority file.
It is generated and located at /var/lib/rancher/k3s/server/tls/client-ca.crt.
If for some reason you need to provide your own ca certificate, look at using the k3s certificate command line tool.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "client-ca-file=<path>"
scored: true
- id: 1.2.27
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'etcd-cafile'"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
By default, K3s automatically provides the etcd certificate authority file.
It is generated and located at /var/lib/rancher/k3s/server/tls/etcd/server-ca.crt.
If for some reason you need to provide your own ca certificate, look at using the k3s certificate command line tool.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-apiserver-arg:
- "etcd-cafile=<path>"
scored: true
- id: 1.2.28
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'encryption-provider-config'"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
K3s can be configured to use encryption providers to encrypt secrets at rest.
Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and set the below parameter.
secrets-encryption: true
Secrets encryption can then be managed with the k3s secrets-encrypt command line tool.
If needed, you can find the generated encryption config at /var/lib/rancher/k3s/server/cred/encryption-config.json.
scored: false
- id: 1.2.29
text: "Ensure that encryption providers are appropriately configured (Manual)"
audit: |
ENCRYPTION_PROVIDER_CONFIG=$(journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -o 'providers\"\:\[.*\]' $ENCRYPTION_PROVIDER_CONFIG | grep -o "[A-Za-z]*" | head -2 | tail -1 | sed 's/^/provider=/'; fi
tests:
test_items:
- flag: "provider"
compare:
op: valid_elements
value: "aescbc,kms,secretbox"
remediation: |
K3s can be configured to use encryption providers to encrypt secrets at rest. K3s will utilize the aescbc provider.
Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node and set the below parameter.
secrets-encryption: true
Secrets encryption can then be managed with the k3s secrets-encrypt command line tool.
If needed, you can find the generated encryption config at /var/lib/rancher/k3s/server/cred/encryption-config.json
scored: false
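# A sketch of the structure the provider check above inspects, assuming the
# upstream EncryptionConfiguration API. The K3s-generated file at
# /var/lib/rancher/k3s/server/cred/encryption-config.json is JSON with
# equivalent fields; the key material below is a placeholder.
# apiVersion: apiserver.config.k8s.io/v1
# kind: EncryptionConfiguration
# resources:
#   - resources: ["secrets"]
#     providers:
#       - aescbc:
#           keys:
#             - name: aescbckey
#               secret: <base64-encoded 32-byte key>
#       - identity: {}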
- id: 1.2.30
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-apiserver' | tail -n1 | grep 'tls-cipher-suites'"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
remediation: |
By default, the K3s kube-apiserver complies with this test. Changes to these values may cause regression; therefore, ensure that all apiserver clients support the new TLS configuration before applying it in production deployments.
If a custom TLS configuration is required, consider also creating a custom version of this rule that aligns with your requirements.
If this check fails, remove any custom configuration around `tls-cipher-suites` or update the /etc/rancher/k3s/config.yaml file to match the default by adding the following:
kube-apiserver-arg:
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
scored: true
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Manual)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'terminated-pod-gc-threshold'"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the K3s config file /etc/rancher/k3s/config.yaml on the control plane node
and set the --terminated-pod-gc-threshold to an appropriate threshold,
kube-controller-manager-arg:
- "terminated-pod-gc-threshold=10"
scored: false
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'profiling'"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
By default, K3s sets the --profiling argument to false.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-controller-manager-arg:
- "profiling=true"
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'use-service-account-credentials'"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
By default, K3s sets the --use-service-account-credentials argument to true.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-controller-manager-arg:
- "use-service-account-credentials=false"
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'service-account-private-key-file'"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
By default, K3s automatically provides the service account private key file.
It is generated and located at /var/lib/rancher/k3s/server/tls/service.current.key.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-controller-manager-arg:
- "service-account-private-key-file=<path>"
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1 | grep 'root-ca-file'"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
By default, K3s automatically provides the root CA file.
It is generated and located at /var/lib/rancher/k3s/server/tls/server-ca.crt.
If for some reason you need to provide your own ca certificate, look at using the k3s certificate command line tool.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-controller-manager-arg:
- "root-ca-file=<path>"
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
By default, K3s does not set the RotateKubeletServerCertificate feature gate.
If you have set this feature gate to false, you should remove that setting.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-controller-manager-arg:
- "feature-gates=RotateKubeletServerCertificate=false"
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-controller-manager' | tail -n1"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
set: true
- flag: "--bind-address"
set: false
remediation: |
By default, K3s sets the --bind-address argument to 127.0.0.1
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-controller-manager-arg:
- "bind-address=<IP>"
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'profiling'"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
set: true
remediation: |
By default, K3s sets the --profiling argument to false.
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-scheduler-arg:
- "profiling=true"
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "journalctl -m -u k3s | grep 'Running kube-scheduler' | tail -n1 | grep 'bind-address'"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
set: true
- flag: "--bind-address"
set: false
remediation: |
By default, K3s sets the --bind-address argument to 127.0.0.1
If this check fails, edit the K3s config file /etc/rancher/k3s/config.yaml and remove any lines like below.
kube-scheduler-arg:
- "bind-address=<IP>"
scored: true
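# Pulling the tunable control plane items above together, a hedged sketch of
# a server-side /etc/rancher/k3s/config.yaml; every value is illustrative.
# Note that overriding enable-admission-plugins replaces the K3s default
# list, so NodeRestriction must be kept, as check 1.2.15 requires.
# secrets-encryption: true
# kube-apiserver-arg:
#   - "enable-admission-plugins=NodeRestriction,EventRateLimit"
#   - "admission-control-config-file=<path/to/configuration/file>"
#   - "audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log"
#   - "audit-log-maxage=30"
#   - "audit-log-maxbackup=10"
#   - "audit-log-maxsize=100"
#   - "request-timeout=300s"
#   - "service-account-lookup=true"
# kube-controller-manager-arg:
#   - "terminated-pod-gc-threshold=10"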

cfg/k3s-cis-1.8/node.yaml Normal file

@@ -0,0 +1,422 @@
---
controls:
version: "k3s-cis-1.8"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet service file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: true
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: true
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: 'stat -c %U:%G $kubeletkubeconfig'
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)"
audit: "stat -c permissions=%a $kubeletcafile"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command to modify the file permissions of the --client-ca-file.
chmod 600 $kubeletcafile
scored: true
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Automated)"
audit: "stat -c %U:%G $kubeletcafile"
tests:
test_items:
- flag: root:root
compare:
op: eq
value: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root $kubeletcafile
scored: true
- id: 4.1.9
text: "Ensure that the kubelet --config configuration file has permissions set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
type: "skip"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.1.10
text: "Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
type: "skip"
tests:
test_items:
- flag: root:root
remediation: |
Not Applicable.
The kubelet is embedded in the k3s process. There is no kubelet config file; all configuration is passed in as arguments at runtime.
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'' '
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
By default, K3s sets the --anonymous-auth to false. If you have set this to a different value, you
should set it back to false. If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "anonymous-auth=true"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="anonymous-auth=true"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "authorization-mode"; else echo "--authorization-mode=Webhook"; fi'' '
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
By default, K3s does not set the --authorization-mode to AlwaysAllow.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "authorization-mode=AlwaysAllow"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="authorization-mode=AlwaysAllow"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: '/bin/sh -c ''if test $(journalctl -m -u k3s | grep "Running kube-apiserver" | wc -l) -gt 0; then journalctl -m -u k3s | grep "Running kube-apiserver" | tail -n1 | grep "client-ca-file"; else echo "--client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt"; fi'' '
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
By default, K3s automatically provides the client ca certificate for the Kubelet.
It is generated and located at /var/lib/rancher/k3s/agent/client-ca.crt
scored: true
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
By default, K3s sets the --read-only-port to 0. If you have set this to a different value, you
should set it back to 0. If using the K3s config file /etc/rancher/k3s/config.yaml, remove any lines similar to below.
kubelet-arg:
- "read-only-port=XXXX"
If using the command line, edit the K3s service file and remove the below argument.
--kubelet-arg="read-only-port=XXXX"
Based on your system, restart the k3s service. For example,
systemctl daemon-reload
systemctl restart k3s.service
scored: true
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter to an appropriate value.
kubelet-arg:
- "streaming-connection-idle-timeout=5m"
If using the command line, run K3s with --kubelet-arg="streaming-connection-idle-timeout=5m".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
- id: 4.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter.
kubelet-arg:
- "make-iptables-util-chains=true"
If using the command line, run K3s with --kubelet-arg="make-iptables-util-chains=true".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.7
text: "Ensure that the --hostname-override argument is not set (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
type: "skip"
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Not Applicable.
By default, K3s does set the --hostname-override argument. Per CIS guidelines, this is to comply
with cloud providers that require this flag to ensure that the hostname matches the node name.
scored: true
- id: 4.2.8
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
By default, K3s sets the event-qps to 0. Should you wish to change this,
if using the K3s config file /etc/rancher/k3s/config.yaml, set the following parameter to an appropriate value.
kubelet-arg:
- "event-qps=<value>"
If using the command line, run K3s with --kubelet-arg="event-qps=<value>".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
- id: 4.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
tests:
test_items:
- flag: --tls-cert-file
path: '/var/lib/rancher/k3s/agent/serving-kubelet.crt'
- flag: --tls-private-key-file
path: '/var/lib/rancher/k3s/agent/serving-kubelet.key'
remediation: |
By default, K3s automatically provides the TLS certificate and private key for the Kubelet.
They are generated and located at /var/lib/rancher/k3s/agent/serving-kubelet.crt and /var/lib/rancher/k3s/agent/serving-kubelet.key
If for some reason you need to provide your own certificate and key, you can set the
below parameters in the K3s config file /etc/rancher/k3s/config.yaml.
kubelet-arg:
- "tls-cert-file=<path/to/tls-cert-file>"
- "tls-private-key-file=<path/to/tls-private-key-file>"
scored: true
- id: 4.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
By default, K3s does not set the --rotate-certificates argument. If you have set this flag with a value of `false`, you should either set it to `true` or completely remove the flag.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any rotate-certificates parameter.
If using the command line, remove the K3s flag --kubelet-arg="rotate-certificates".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.11
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Automated)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/cat $kubeletconf"
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
By default, K3s does not set the RotateKubeletServerCertificate feature gate.
If you have enabled this feature gate, you should remove it.
If using the K3s config file /etc/rancher/k3s/config.yaml, remove any feature-gate=RotateKubeletServerCertificate parameter.
If using the command line, remove the K3s flag --kubelet-arg="feature-gate=RotateKubeletServerCertificate".
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: true
- id: 4.2.12
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a K3s config file /etc/rancher/k3s/config.yaml, edit the file to set the TLS cipher suites (the kubelet's `tlsCipherSuites`) to
kubelet-arg:
- "tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305"
or to a subset of these values.
If using the command line, add the K3s flag --kubelet-arg="tls-cipher-suites=<same values as above>"
Based on your system, restart the k3s service. For example,
systemctl restart k3s.service
scored: false
- id: 4.2.13
text: "Ensure that a limit is set on pod PIDs (Manual)"
audit: "journalctl -m -u k3s -u k3s-agent | grep 'Running kubelet' | tail -n1"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --pod-max-pids
path: '{.podPidsLimit}'
remediation: |
Decide on an appropriate level for this parameter and set it.
If using a K3s config file /etc/rancher/k3s/config.yaml, edit the file to set the pod PIDs limit (the kubelet's `podPidsLimit`) to
kubelet-arg:
- "pod-max-pids=<value>"
scored: false
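# Pulling the tunable kubelet items above together, a hedged sketch of an
# agent-side /etc/rancher/k3s/config.yaml; values are illustrative and the
# PID limit placeholder must be chosen per environment.
# kubelet-arg:
#   - "streaming-connection-idle-timeout=5m"
#   - "make-iptables-util-chains=true"
#   - "pod-max-pids=<value>"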

cfg/k3s-cis-1.8/policies.yaml Normal file

@@ -0,0 +1,300 @@
---
controls:
version: "k3s-cis-1.8"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]
scored: false
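# A sketch of one way to enumerate the bindings referenced above, using only
# kubectl's built-in custom-columns output:
# kubectl get clusterrolebindings \
#   -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name' \
#   | grep cluster-admin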
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
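# A sketch of applying the remediation above with kubectl patch, run once
# per namespace (the namespace placeholder is illustrative):
# kubectl patch serviceaccount default -n <namespace> \
#   -p '{"automountServiceAccountToken": false}'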
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.1.9
text: "Minimize access to create persistent volumes (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to PersistentVolume objects in the cluster.
scored: false
- id: 5.1.10
text: "Minimize access to the proxy sub-resource of nodes (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the proxy sub-resource of node objects.
scored: false
- id: 5.1.11
text: "Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
scored: false
- id: 5.1.12
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects
scored: false
- id: 5.1.13
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the token sub-resource of serviceaccount objects.
scored: false
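Two hedged sketches for the manual checks above (5.1.1 and 5.1.5), assuming kubectl access with rights to read RBAC objects; the namespace name is illustrative:

  # 5.1.1: list every ClusterRoleBinding that grants cluster-admin, with its subjects
  kubectl get clusterrolebindings -o jsonpath='{range .items[?(@.roleRef.name=="cluster-admin")]}{.metadata.name}{"\t"}{.subjects}{"\n"}{end}'
  # 5.1.5: stop the default ServiceAccount in one namespace from automounting its token
  kubectl patch serviceaccount default -n my-namespace -p '{"automountServiceAccountToken": false}'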
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads (a labeling sketch follows this group).
scored: false
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: true
- id: 5.2.7
text: "Minimize the admission of root containers (Automated)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Automated)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Automated)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
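A hedged sketch for check 5.2.1 using the built-in Pod Security Admission labels; the namespace name and the chosen level are illustrative:

  # Enforce the 'restricted' Pod Security Standard on one workload namespace
  kubectl label namespace my-namespace pod-security.kubernetes.io/enforce=restricted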
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
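One way to satisfy 5.3.2, sketched under the assumption that the CNI in use enforces NetworkPolicies (see 5.3.1); the namespace name is illustrative:

  # Default-deny ingress for every pod in the namespace; per-workload allow rules come on top
  kubectl -n my-namespace apply -f - <<'EOF'
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-deny-ingress
  spec:
    podSelector: {}
    policyTypes:
    - Ingress
  EOF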
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables (a mounting sketch follows this group).
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
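A sketch of the file-mount pattern behind 5.4.1; the namespace, pod, image and Secret names are all illustrative:

  # Consume a Secret as read-only files instead of environment variables
  kubectl -n my-namespace apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-as-file-demo
  spec:
    containers:
    - name: app
      image: registry.example.com/app:latest
      volumeMounts:
      - name: creds
        mountPath: /etc/creds
        readOnly: true
    volumes:
    - name: creds
      secret:
        secretName: app-credentials
  EOF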
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false

View File

@@ -11,16 +11,17 @@ groups:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Manual)"
audit: |
# For --cert-file
for i in $(oc get pods -oname -n openshift-etcd)
do
oc exec -n openshift-etcd -c etcd $i -- ps -o command= -C etcd | sed 's/.*\(--cert-file=[^ ]*\).*/\1/'
done 2>/dev/null
# For --key-file
for i in $(oc get pods -oname -n openshift-etcd)
do
oc exec -n openshift-etcd -c etcd $i -- ps -o command= -C etcd | sed 's/.*\(--key-file=[^ ]*\).*/\1/'
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching file found on the current node."
else
# Execute the stat command
oc exec -n openshift-etcd -c etcd "$POD_NAME" -- ps -o command= -C etcd | sed 's/.*\(--cert-file=[^ ]*\).*/\1/'
oc exec -n openshift-etcd -c etcd "$POD_NAME" -- ps -o command= -C etcd | sed 's/.*\(--key-file=[^ ]*\).*/\1/'
fi
use_multiple_values: true
tests:
test_items:
@@ -36,10 +37,16 @@ groups:
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Manual)"
audit: |
for i in $(oc get pods -oname -n openshift-etcd)
do
oc exec -n openshift-etcd -c etcd $i -- ps -o command= -C etcd | sed 's/.*\(--client-cert-auth=[^ ]*\).*/\1/'
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching file found on the current node."
else
# Execute the stat command
oc exec -n openshift-etcd -c etcd "$POD_NAME" -- ps -o command= -C etcd | sed 's/.*\(--client-cert-auth=[^ ]*\).*/\1/'
fi
use_multiple_values: true
tests:
test_items:
@@ -55,10 +62,16 @@ groups:
text: "Ensure that the --auto-tls argument is not set to true (Manual)"
audit: |
# Returns 0 if found, 1 if not found
for i in $(oc get pods -oname -n openshift-etcd)
do
oc exec -n openshift-etcd -c etcd $i -- ps -o command= -C etcd | grep -- --auto-tls=true 2>/dev/null ; echo exit_code=$?
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching file found on the current node."
else
# Execute the stat command
oc exec -n openshift-etcd -c etcd "$POD_NAME" -- ps -o command= -C etcd | grep -- --auto-tls=true 2>/dev/null ; echo exit_code=$?
fi
use_multiple_values: true
tests:
test_items:
@@ -67,22 +80,23 @@ groups:
op: eq
value: "1"
remediation: |
This setting is managed by the cluster etcd operator. No remediation required.e
This setting is managed by the cluster etcd operator. No remediation required.
scored: false
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are set as appropriate (Manual)"
audit: |
# For --peer-cert-file
for i in $(oc get pods -oname -n openshift-etcd)
do
oc exec -n openshift-etcd -c etcd $i -- ps -o command= -C etcd | sed 's/.*\(--peer-cert-file=[^ ]*\).*/\1/'
done 2>/dev/null
# For --peer-key-file
for i in $(oc get pods -oname -n openshift-etcd)
do
oc exec -n openshift-etcd -c etcd $i -- ps -o command= -C etcd | sed 's/.*\(--peer-key-file=[^ ]*\).*/\1/'
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching file found on the current node."
else
# Execute the stat command
oc exec -n openshift-etcd -c etcd "$POD_NAME" -- ps -o command= -C etcd | sed 's/.*\(--peer-cert-file=[^ ]*\).*/\1/'
oc exec -n openshift-etcd -c etcd "$POD_NAME" -- ps -o command= -C etcd | sed 's/.*\(--peer-key-file=[^ ]*\).*/\1/'
fi
use_multiple_values: true
tests:
test_items:
@@ -97,10 +111,16 @@ groups:
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Manual)"
audit: |
for i in $(oc get pods -oname -n openshift-etcd)
do
oc exec -n openshift-etcd -c etcd $i -- ps -o command= -C etcd | sed 's/.*\(--peer-client-cert-auth=[^ ]*\).*/\1/'
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching file found on the current node."
else
# Execute the stat command
oc exec -n openshift-etcd -c etcd "$POD_NAME" -- ps -o command= -C etcd | sed 's/.*\(--peer-client-cert-auth=[^ ]*\).*/\1/'
fi
use_multiple_values: true
tests:
test_items:
@@ -116,10 +136,16 @@ groups:
text: "Ensure that the --peer-auto-tls argument is not set to true (Manual)"
audit: |
# Returns 0 if found, 1 if not found
for i in $(oc get pods -oname -n openshift-etcd)
do
oc exec -n openshift-etcd -c etcd $i -- ps -o command= -C etcd | grep -- --peer-auto-tls=true 2>/dev/null ; echo exit_code=$?
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching file found on the current node."
else
# Execute the stat command
oc exec -n openshift-etcd -c etcd "$POD_NAME" -- ps -o command= -C etcd | grep -- --peer-auto-tls=true 2>/dev/null ; echo exit_code=$?
fi
use_multiple_values: true
tests:
test_items:
@@ -134,14 +160,17 @@ groups:
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Manual)"
audit: |
for i in $(oc get pods -oname -n openshift-etcd)
do
oc exec -n openshift-etcd -c etcd $i -- ps -o command= -C etcd | sed 's/.*\(--trusted-ca-file=[^ ]*\).*/\1/'
done 2>/dev/null
for i in $(oc get pods -oname -n openshift-etcd)
do
oc exec -n openshift-etcd -c etcd $i -- ps -o command= -C etcd | sed 's/.*\(--peer-trusted-ca-file=[^ ]*\).*/\1/'
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching file found on the current node."
else
# Execute the stat command
oc exec -n openshift-etcd -c etcd "$POD_NAME" -- ps -o command= -C etcd | sed 's/.*\(--trusted-ca-file=[^ ]*\).*/\1/'
oc exec -n openshift-etcd -c etcd "$POD_NAME" -- ps -o command= -C etcd | sed 's/.*\(--peer-trusted-ca-file=[^ ]*\).*/\1/'
fi
use_multiple_values: true
tests:
test_items:

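The etcd hunks above all apply the same refactor: instead of looping over every etcd pod in the cluster, the audit resolves the single pod running on the current node. A generic sketch of the pattern (namespace, label and command are placeholders; kube-bench runs the audit inside a pod, so $HOSTNAME resolves to that pod's name):

  NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
  POD_NAME=$(oc get pods -n <namespace> -l <label> --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
  if [ -z "$POD_NAME" ]; then
    echo "No matching pods found on the current node."
  else
    oc exec -n <namespace> "$POD_NAME" -- <command>
  fi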
View File

@@ -11,10 +11,18 @@ groups:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Manual)"
audit: |
for i in $( oc get pods -n openshift-kube-apiserver -l app=openshift-kube-apiserver -o name )
do
oc exec -n openshift-kube-apiserver $i -- stat -c "$i %n permissions=%a" /etc/kubernetes/static-pod-resources/kube-apiserver-pod.yaml;
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-apiserver namespace
POD_NAME=$(oc get pods -n openshift-kube-apiserver -l app=openshift-kube-apiserver --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-apiserver "$POD_NAME" -- stat -c "$POD_NAME %n permissions=%a" /etc/kubernetes/static-pod-resources/kube-apiserver-pod.yaml
fi
use_multiple_values: true
tests:
test_items:
@@ -29,11 +37,18 @@ groups:
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Manual)"
audit: |
for i in $( oc get pods -n openshift-kube-apiserver -l app=openshift-kube-apiserver -o name )
do
oc exec -n openshift-kube-apiserver $i -- \
stat -c "$i %n %U:%G" /etc/kubernetes/static-pod-resources/kube-apiserver-pod.yaml
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-apiserver namespace
POD_NAME=$(oc get pods -n openshift-kube-apiserver -l app=openshift-kube-apiserver --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-apiserver "$POD_NAME" -- stat -c "$POD_NAME %n %U:%G" /etc/kubernetes/static-pod-resources/kube-apiserver-pod.yaml
fi
use_multiple_values: true
tests:
test_items:
@@ -45,10 +60,18 @@ groups:
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Manual)"
audit: |
for i in $( oc get pods -n openshift-kube-controller-manager -o name -l app=kube-controller-manager)
do
oc exec -n openshift-kube-controller-manager $i -- stat -c "$i %n permissions=%a" /etc/kubernetes/static-pod-resources/kube-controller-manager-pod.yaml;
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-controller-manager namespace
POD_NAME=$(oc get pods -n openshift-kube-controller-manager -l app=kube-controller-manager --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-controller-manager "$POD_NAME" -- stat -c "$POD_NAME %n permissions=%a" /etc/kubernetes/static-pod-resources/kube-controller-manager-pod.yaml
fi
use_multiple_values: true
tests:
test_items:
@@ -63,11 +86,18 @@ groups:
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Manual)"
audit: |
for i in $( oc get pods -n openshift-kube-controller-manager -o name -l app=kube-controller-manager)
do
oc exec -n openshift-kube-controller-manager $i -- \
stat -c "$i %n %U:%G" /etc/kubernetes/static-pod-resources/kube-controller-manager-pod.yaml
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-controller-manager namespace
POD_NAME=$(oc get pods -n openshift-kube-controller-manager -l app=kube-controller-manager --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-controller-manager "$POD_NAME" -- stat -c "$POD_NAME %n %U:%G" /etc/kubernetes/static-pod-resources/kube-controller-manager-pod.yaml
fi
use_multiple_values: true
tests:
test_items:
@@ -79,10 +109,18 @@ groups:
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Manual)"
audit: |
for i in $( oc get pods -n openshift-kube-scheduler -l app=openshift-kube-scheduler -o name )
do
oc exec -n openshift-kube-scheduler $i -- stat -c "$i %n permissions=%a" /etc/kubernetes/static-pod-resources/kube-scheduler-pod.yaml;
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-scheduler namespace
POD_NAME=$(oc get pods -n openshift-kube-scheduler -l app=openshift-kube-scheduler --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-scheduler "$POD_NAME" -- stat -c "$POD_NAME %n permissions=%a" /etc/kubernetes/static-pod-resources/kube-scheduler-pod.yaml
fi
use_multiple_values: true
tests:
test_items:
@@ -97,11 +135,18 @@ groups:
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Manual))"
audit: |
for i in $( oc get pods -n openshift-kube-scheduler -l app=openshift-kube-scheduler -o name )
do
oc exec -n openshift-kube-scheduler $i -- \
stat -c "$i %n %U:%G" /etc/kubernetes/static-pod-resources/kube-scheduler-pod.yaml
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-scheduler namespace
POD_NAME=$(oc get pods -n openshift-kube-scheduler -l app=openshift-kube-scheduler --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-scheduler "$POD_NAME" -- stat -c "$POD_NAME %n %U:%G" /etc/kubernetes/static-pod-resources/kube-scheduler-pod.yaml
fi
use_multiple_values: true
tests:
test_items:
@@ -113,10 +158,18 @@ groups:
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Manual))"
audit: |
for i in $( oc get pods -n openshift-etcd -l app=etcd -o name | grep etcd )
do
oc rsh -n openshift-etcd $i stat -c "$i %n permissions=%a" /etc/kubernetes/manifests/etcd-pod.yaml
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc rsh -n openshift-etcd "$POD_NAME" stat -c "$POD_NAME %n permissions=%a" /etc/kubernetes/manifests/etcd-pod.yaml
fi
use_multiple_values: true
tests:
test_items:
@@ -131,10 +184,18 @@ groups:
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Manual)"
audit: |
for i in $( oc get pods -n openshift-etcd -l app=etcd -o name | grep etcd )
do
oc rsh -n openshift-etcd $i stat -c "$i %n %U:%G" /etc/kubernetes/manifests/etcd-pod.yaml
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc rsh -n openshift-etcd "$POD_NAME" stat -c "$POD_NAME %n %U:%G" /etc/kubernetes/manifests/etcd-pod.yaml
fi
use_multiple_values: true
tests:
test_items:
@@ -146,16 +207,41 @@ groups:
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Manual)"
audit: |
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# For CNI multus
for i in $(oc get pods -n openshift-multus -l app=multus -oname); do oc exec -n openshift-multus $i -- /bin/bash -c "stat -c \"$i %n permissions=%a\" /host/etc/cni/net.d/*.conf"; done 2>/dev/null
for i in $(oc get pods -n openshift-multus -l app=multus -oname); do oc exec -n openshift-multus $i -- /bin/bash -c "stat -c \"$i %n permissions=%a\" /host/var/run/multus/cni/net.d/*.conf"; done 2>/dev/null
# Get the pod name in the openshift-multus namespace
POD_NAME=$(oc get pods -n openshift-multus -l app=multus --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-multus "$POD_NAME" -- /bin/bash -c "stat -c \"$i %n permissions=%a\" /host/etc/cni/net.d/*.conf"; 2>/dev/null
oc exec -n openshift-multus "$POD_NAME" -- /bin/bash -c "stat -c \"$i %n permissions=%a\" /host/var/run/multus/cni/net.d/*.conf"; 2>/dev/null
fi
# For SDN pods
for i in $(oc get pods -n openshift-sdn -l app=sdn -oname); do oc exec -n openshift-sdn $i -- find /var/lib/cni/networks/openshift-sdn -type f -exec stat -c "$i %n permissions=%a" {} \;; done 2>/dev/null
for i in $(oc get pods -n openshift-sdn -l app=sdn -oname); do oc exec -n openshift-sdn $i -- find /var/run/openshift-sdn -type f -exec stat -c "$i %n permissions=%a" {} \;; done 2>/dev/null
POD_NAME=$(oc get pods -n openshift-sdn -l app=sdn --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-sdn "$POD_NAME" -- find /var/lib/cni/networks/openshift-sdn -type f -exec stat -c "$i %n permissions=%a" {} \; 2>/dev/null
oc exec -n openshift-sdn "$POD_NAME" -- find /var/run/openshift-sdn -type f -exec stat -c "$i %n permissions=%a" {} \; 2>/dev/null
fi
# For OVS pods
for i in $(oc get pods -n openshift-sdn -l app=ovs -oname); do oc exec -n openshift-sdn $i -- find /var/run/openvswitch -type f -exec stat -c "$i %n permissions=%a" {} \;; done 2>/dev/null
for i in $(oc get pods -n openshift-sdn -l app=ovs -oname); do oc exec -n openshift-sdn $i -- find /etc/openvswitch -type f -exec stat -c "$i %n permissions=%a" {} \;; done 2>/dev/null
for i in $(oc get pods -n openshift-sdn -l app=ovs -oname); do oc exec -n openshift-sdn $i -- find /run/openvswitch -type f -exec stat -c "$i %n permissions=%a" {} \;; done 2>/dev/null
POD_NAME=$(oc get pods -n openshift-sdn -l app=ovs --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-sdn "$POD_NAME" -- find /var/run/openvswitch -type f -exec stat -c "$i %n permissions=%a" {} \; 2>/dev/null
oc exec -n openshift-sdn "$POD_NAME" -- find /etc/openvswitch -type f -exec stat -c "$i %n permissions=%a" {} \; 2>/dev/null
oc exec -n openshift-sdn "$POD_NAME" -- find /run/openvswitch -type f -exec stat -c "$i %n permissions=%a" {} \; 2>/dev/null
fi
use_multiple_values: true
tests:
test_items:
@@ -170,17 +256,40 @@ groups:
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: |
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# For CNI multus
for i in $(oc get pods -n openshift-multus -l app=multus -oname); do oc exec -n openshift-multus $i -- /bin/bash -c "stat -c \""$i %n %U:%G\" /host/etc/cni/net.d/*.conf"; done 2>/dev/null
for i in $(oc get pods -n openshift-multus -l app=multus -oname); do oc exec -n openshift-multus $i -- /bin/bash -c "stat -c \""$i %n %U:%G\" /host/var/run/multus/cni/net.d/*.conf"; done 2>/dev/null
# Get the pod name in the openshift-multus namespace
POD_NAME=$(oc get pods -n openshift-multus -l app=multus --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-multus "$POD_NAME" -- /bin/bash -c "stat -c '$i %n %U:%G' /host/etc/cni/net.d/*.conf" 2>/dev/null
oc exec -n openshift-multus $i -- /bin/bash -c "stat -c '$i %n %U:%G' /host/var/run/multus/cni/net.d/*.conf" 2>/dev/null
fi
# For SDN pods
for i in $(oc get pods -n openshift-sdn -l app=sdn -oname); do oc exec -n openshift-sdn $i -- find /var/lib/cni/networks/openshift-sdn -type f -exec stat -c "$i %n %U:%G" {} \;; done 2>/dev/null
for i in $(oc get pods -n openshift-sdn -l app=sdn -oname); do oc exec -n openshift-sdn $i -- find /var/run/openshift-sdn -type f -exec stat -c "$i %n %U:%G"{} \;; done 2>/dev/null
POD_NAME=$(oc get pods -n openshift-sdn -l app=sdn --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-sdn "$POD_NAME" -- find /var/lib/cni/networks/openshift-sdn -type f -exec stat -c "$i %n %U:%G" {} \; 2>/dev/null
oc exec -n openshift-sdn "$POD_NAME" -- find /var/run/openshift-sdn -type f -exec stat -c "$i %n %U:%G" {} \; 2>/dev/null
fi
# For OVS pods in 4.5
for i in $(oc get pods -n openshift-sdn -l app=ovs -oname); do oc exec -n openshift-sdn $i -- find /var/run/openvswitch -type f -exec stat -c "$i %n %U:%G" {} \;; done 2>/dev/null
for i in $(oc get pods -n openshift-sdn -l app=ovs -oname); do oc exec -n openshift-sdn $i -- find /etc/openvswitch -type f -exec stat -c "$i %n %U:%G" {} \;; done 2>/dev/null
for i in $(oc get pods -n openshift-sdn -l app=ovs -oname); do oc exec -n openshift-sdn $i -- find /run/openvswitch -type f -exec stat -c "$i %n %U:%G" {} \;; done 2>/dev/null
# For OVS pods in 4.6 TBD
POD_NAME=$(oc get pods -n openshift-sdn -l app=ovs --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-sdn "$POD_NAME" -- find /var/run/openvswitch -type f -exec stat -c "$i %n %U:%G" {} \; 2>/dev/null
oc exec -n openshift-sdn "$POD_NAME" -- find /etc/openvswitch -type f -exec stat -c "$i %n %U:%G" {} \; 2>/dev/null
oc exec -n openshift-sdn "$POD_NAME" -- find /run/openvswitch -type f -exec stat -c "$i %n %U:%G" {} \; 2>/dev/null
fi
use_multiple_values: true
tests:
test_items:
@@ -192,7 +301,18 @@ groups:
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Manual)"
audit: |
for i in $(oc get pods -n openshift-etcd -l app=etcd -oname); do oc exec -n openshift-etcd -c etcd $i -- stat -c "$i %n permissions=%a" /var/lib/etcd/member; done
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-etcd "$POD_NAME" -- stat -c "$POD_NAME %n permissions=%a" /var/lib/etcd/member
fi
use_multiple_values: true
tests:
test_items:
@@ -207,7 +327,18 @@ groups:
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Manual)"
audit: |
for i in $(oc get pods -n openshift-etcd -l app=etcd -oname); do oc exec -n openshift-etcd -c etcd $i -- stat -c "$i %n %U:%G" /var/lib/etcd/member; done
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-etcd namespace
POD_NAME=$(oc get pods -n openshift-etcd -l app=etcd --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-etcd "$POD_NAME" -- stat -c "$POD_NAME %n %U:%G" /var/lib/etcd/member
fi
use_multiple_values: true
tests:
test_items:
@@ -219,10 +350,8 @@ groups:
- id: 1.1.13
text: "Ensure that the admin.conf file permissions are set to 644 or more restrictive (Manual))"
audit: |
for i in $(oc get nodes -o name)
do
oc debug $i -- chroot /host stat -c "$i %n permissions=%a" /etc/kubernetes/kubeconfig
done 2>/dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host stat -c "$NODE_NAME %n permissions=%a" /etc/kubernetes/kubeconfig 2> /dev/null
use_multiple_values: true
tests:
test_items:
@@ -237,10 +366,8 @@ groups:
- id: 1.1.14
text: "Ensure that the admin.conf file ownership is set to root:root (Manual)"
audit: |
for i in $(oc get nodes -o name)
do
oc debug $i -- chroot /host stat -c "$i %n %U:%G" /etc/kubernetes/kubeconfig
done 2>/dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host stat -c "$NODE_NAME %n %U:%G" /etc/kubernetes/kubeconfig 2> /dev/null
use_multiple_values: true
tests:
test_items:
@@ -252,10 +379,18 @@ groups:
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 644 or more restrictive (Manual)"
audit: |
for i in $(oc get pods -n openshift-kube-scheduler -l app=openshift-kube-scheduler -o name)
do
oc exec -n openshift-kube-scheduler $i -- stat -c "$i %n permissions=%a" /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-scheduler namespace
POD_NAME=$(oc get pods -n openshift-kube-scheduler -l app=openshift-kube-scheduler --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-scheduler "$POD_NAME" -- stat -c "$POD_NAME %n permissions=%a" /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig
fi
use_multiple_values: true
tests:
test_items:
@@ -270,10 +405,18 @@ groups:
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Manual)"
audit: |
for i in $(oc get pods -n openshift-kube-scheduler -l app=openshift-kube-scheduler -o name)
do
oc exec -n openshift-kube-scheduler $i -- stat -c "$i %n %U:%G" /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-scheduler namespace
POD_NAME=$(oc get pods -n openshift-kube-scheduler -l app=openshift-kube-scheduler --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-scheduler "$POD_NAME" -- stat -c "$POD_NAME %n %U:%G" /etc/kubernetes/static-pod-resources/configmaps/scheduler-kubeconfig/kubeconfig
fi
use_multiple_values: true
tests:
test_items:
@@ -285,10 +428,18 @@ groups:
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive (Manual)"
audit: |
for i in $(oc get pods -n openshift-kube-controller-manager -l app=kube-controller-manager -o name)
do
oc exec -n openshift-kube-controller-manager $i -- stat -c "$i %n permissions=%a" /etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-controller-manager namespace
POD_NAME=$(oc get pods -n openshift-kube-controller-manager -l app=kube-controller-manager --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-controller-manager "$POD_NAME" -- stat -c "$POD_NAME %n permissions=%a" /etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig
fi
use_multiple_values: true
tests:
test_items:
@@ -303,10 +454,18 @@ groups:
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Manual)"
audit: |
for i in $(oc get pods -n openshift-kube-controller-manager -l app=kube-controller-manager -o name)
do
oc exec -n openshift-kube-controller-manager $i -- stat -c "$i %n %U:%G" /etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig
done 2>/dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-controller-manager namespace
POD_NAME=$(oc get pods -n openshift-kube-controller-manager -l app=kube-controller-manager --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-controller-manager "$POD_NAME" -- stat -c "$POD_NAME %n %U:%G" /etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig
fi
use_multiple_values: true
tests:
test_items:
@@ -319,15 +478,22 @@ groups:
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Manual)"
audit: |
# Should return root:root for all files and directories
for i in $(oc -n openshift-kube-apiserver get pod -l app=openshift-kube-apiserver -o jsonpath='{.items[*].metadata.name}')
do
# echo $i static-pod-certs
oc exec -n openshift-kube-apiserver $i -c kube-apiserver -- find /etc/kubernetes/static-pod-certs -type d -wholename '*/secrets*' -exec stat -c "$i %n %U:%G" {} \;
oc exec -n openshift-kube-apiserver $i -c kube-apiserver -- find /etc/kubernetes/static-pod-certs -type f -wholename '*/secrets*' -exec stat -c "$i %n %U:%G" {} \;
# echo $i static-pod-resources
oc exec -n openshift-kube-apiserver $i -c kube-apiserver -- find /etc/kubernetes/static-pod-resources -type d -wholename '*/secrets*' -exec stat -c "$i %n %U:%G" {} \;
oc exec -n openshift-kube-apiserver $i -c kube-apiserver -- find /etc/kubernetes/static-pod-resources -type f -wholename '*/secrets*' -exec stat -c "$i %n %U:%G" {} \;
done
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-apiserver namespace
POD_NAME=$(oc get pods -n openshift-kube-apiserver -l app=openshift-kube-apiserver --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# echo $POD_NAME static-pod-certs
oc exec -n openshift-kube-apiserver "$POD_NAME" -c kube-apiserver -- find /etc/kubernetes/static-pod-certs -type d -wholename '*/secrets*' -exec stat -c "$POD_NAME %n %U:%G" {} \;
oc exec -n openshift-kube-apiserver "$POD_NAME" -c kube-apiserver -- find /etc/kubernetes/static-pod-certs -type f -wholename '*/secrets*' -exec stat -c "$POD_NAME %n %U:%G" {} \;
# echo $POD_NAME static-pod-resources
oc exec -n openshift-kube-apiserver "$POD_NAME" -c kube-apiserver -- find /etc/kubernetes/static-pod-resources -type d -wholename '*/secrets*' -exec stat -c "$POD_NAME %n %U:%G" {} \;
oc exec -n openshift-kube-apiserver "$POD_NAME" -c kube-apiserver -- find /etc/kubernetes/static-pod-resources -type f -wholename '*/secrets*' -exec stat -c "$POD_NAME %n %U:%G" {} \;
fi
use_multiple_values: true
tests:
test_items:
@@ -339,11 +505,18 @@ groups:
- id: 1.1.20
text: "Ensure that the OpenShift PKI certificate file permissions are set to 644 or more restrictive (Manual)"
audit: |
for i in $(oc -n openshift-kube-apiserver get pod -l app=openshift-kube-apiserver -o jsonpath='{.items[*].metadata.name}')
do
# echo $i static-pod-certs
oc exec -n openshift-kube-apiserver $i -c kube-apiserver -- find /etc/kubernetes/static-pod-certs -type f -wholename '*/secrets/*.crt' -exec stat -c "$i %n permissions=%a" {} \;
done
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-apiserver namespace
POD_NAME=$(oc get pods -n openshift-kube-apiserver -l app=openshift-kube-apiserver --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-apiserver "$POD_NAME" -c kube-apiserver -- find /etc/kubernetes/static-pod-certs -type f -wholename '*/secrets/*.crt' -exec stat -c "$POD_NAME %n permissions=%a" {} \;
fi
use_multiple_values: true
tests:
test_items:
@@ -358,11 +531,18 @@ groups:
- id: 1.1.21
text: "Ensure that the OpenShift PKI key file permissions are set to 600 (Manual)"
audit: |
for i in $(oc -n openshift-kube-apiserver get pod -l app=openshift-kube-apiserver -o jsonpath='{.items[*].metadata.name}')
do
# echo $i static-pod-certs
oc exec -n openshift-kube-apiserver $i -c kube-apiserver -- find /etc/kubernetes/static-pod-certs -type f -wholename '*/secrets/*.key' -exec stat -c "$i %n permissions=%a" {} \;
done
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-kube-apiserver namespace
POD_NAME=$(oc get pods -n openshift-kube-apiserver -l app=openshift-kube-apiserver --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-kube-apiserver "$POD_NAME" -c kube-apiserver -- find /etc/kubernetes/static-pod-certs -type f -wholename '*/secrets/*.key' -exec stat -c "$POD_NAME %n permissions=%a" {} \;
fi
use_multiple_values: true
tests:
test_items:
@@ -532,11 +712,9 @@ groups:
oc get configmap config -n openshift-kube-apiserver -ojson | jq -r '.data["config.yaml"]' | jq '.apiServerArguments'
oc get kubeapiservers.operator.openshift.io cluster -o json | jq '.spec.observedConfig.apiServerArguments'
# For OCP 4.5 and earlier verify that authorization-mode is not used
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host cat /etc/kubernetes/kubelet.conf | grep authorization-mode
oc debug node/${node} -- chroot /host ps -aux | grep kubelet | grep authorization-mode
done
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host cat /etc/kubernetes/kubelet.conf | grep authorization-mode 2> /dev/null
oc debug node/$NODE_NAME -- chroot /host ps -aux | grep kubelet | grep authorization-mode 2> /dev/null
#Check that no overrides are configured
oc get kubeapiservers.operator.openshift.io cluster -o json | jq -r '.spec.unsupportedConfigOverrides'
audit_config: |
@@ -864,7 +1042,6 @@ groups:
remediation: |
Follow the documentation for log forwarding. Forwarding logs to third party systems
https://docs.openshift.com/container-platform/4.5/logging/cluster-logging-external.html
scored: false
- id: 1.2.24
@@ -1070,6 +1247,12 @@ groups:
- id: 1.2.35
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
type: manual
audit: |
# verify cipher suites
oc get cm -n openshift-authentication v4-0-config-system-cliconfig -o jsonpath='{.data.v4\-0\-config\-system\-cliconfig}' | jq .servingInfo
oc get kubeapiservers.operator.openshift.io cluster -o json | jq '.spec.observedConfig.servingInfo'
oc get openshiftapiservers.operator.openshift.io cluster -o json | jq '.spec.observedConfig.servingInfo'
oc describe --namespace=openshift-ingress-operator ingresscontroller/default
remediation: |
Verify that the tlsSecurityProfile is set to the value you chose.
Note: The HAProxy Ingress controller image does not support TLS 1.3
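A hedged companion to the cipher review above, assuming the cluster-wide profile lives on apiservers.config.openshift.io/cluster:

  # Show the configured TLS security profile (unset usually means the Intermediate default)
  oc get apiserver cluster -o jsonpath='{.spec.tlsSecurityProfile}'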

View File

@@ -11,11 +11,8 @@ groups:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)"
audit: |
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host stat -c "$node %n permissions=%a" /etc/systemd/system/kubelet.service
done 2> /dev/null
use_multiple_values: true
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host stat -c "$NODE_NAME %n permissions=%a" /etc/systemd/system/kubelet.service 2> /dev/null
tests:
test_items:
- flag: "permissions"
@@ -30,11 +27,8 @@ groups:
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
audit: |
# Should return root:root for each node
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host stat -c "$node %n %U:%G" /etc/systemd/system/kubelet.service
done 2> /dev/null
use_multiple_values: true
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host stat -c "$NODE_NAME %n %U:%G" /etc/systemd/system/kubelet.service 2> /dev/null
tests:
test_items:
- flag: root:root
@@ -45,11 +39,17 @@ groups:
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)"
audit: |
for i in $(oc get pods -n openshift-sdn -l app=sdn -oname)
do
oc exec -n openshift-sdn $i -- stat -Lc "$i %n permissions=%a" /config/kube-proxy-config.yaml
done 2> /dev/null
use_multiple_values: true
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-sdn namespace
POD_NAME=$(oc get pods -n openshift-sdn -l app=sdn --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-sdn "$POD_NAME" -- stat -Lc "$i %n permissions=%a" /config/kube-proxy-config.yaml 2>/dev/null
fi
tests:
bin_op: or
test_items:
@@ -65,10 +65,17 @@ groups:
- id: 4.1.4
text: "Ensure that the proxy kubeconfig file ownership is set to root:root (Manual)"
audit: |
for i in $(oc get pods -n openshift-sdn -l app=sdn -oname)
do
oc exec -n openshift-sdn $i -- stat -Lc "$i %n %U:%G" /config/kube-proxy-config.yaml
done 2> /dev/null
# Get the node name where the pod is running
NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
# Get the pod name in the openshift-sdn namespace
POD_NAME=$(oc get pods -n openshift-sdn -l app=sdn --field-selector spec.nodeName="$NODE_NAME" -o jsonpath='{.items[0].metadata.name}' 2>/dev/null)
if [ -z "$POD_NAME" ]; then
echo "No matching pods found on the current node."
else
# Execute the stat command
oc exec -n openshift-sdn "$POD_NAME" -- stat -Lc "$i %n %U:%G" /config/kube-proxy-config.yaml 2>/dev/null
fi
use_multiple_values: true
tests:
bin_op: or
@@ -82,10 +89,8 @@ groups:
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Manual)"
audit: |
# Check permissions
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host stat -c "$node %n permissions=%a" /etc/kubernetes/kubelet.conf
done 2> /dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host stat -c "$NODE_NAME %n permissions=%a" /etc/kubernetes/kubelet.conf 2> /dev/null
use_multiple_values: true
tests:
test_items:
@@ -100,10 +105,8 @@ groups:
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Manual)"
audit: |
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host stat -c "$node %n %U:%G" /etc/kubernetes/kubelet.conf
done 2> /dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host stat -c "$NODE_NAME %n %U:%G" /etc/kubernetes/kubelet.conf 2> /dev/null
use_multiple_values: true
tests:
test_items:
@@ -115,10 +118,8 @@ groups:
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Automated)"
audit: |
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host stat -c "$node %n permissions=%a" /etc/kubernetes/kubelet-ca.crt
done 2> /dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host stat -c "$NODE_NAME %n permissions=%a" /etc/kubernetes/kubelet-ca.crt 2> /dev/null
use_multiple_values: true
tests:
test_items:
@@ -133,10 +134,8 @@ groups:
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Automated)"
audit: |
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host stat -c "$node %n %U:%G" /etc/kubernetes/kubelet-ca.crt
done 2> /dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host stat -c "$NODE_NAME %n %U:%G" /etc/kubernetes/kubelet-ca.crt 2> /dev/null
use_multiple_values: true
tests:
test_items:
@@ -148,10 +147,8 @@ groups:
- id: 4.1.9
text: "Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)"
audit: |
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host stat -c "$node %n permissions=%a" /var/lib/kubelet/kubeconfig
done 2> /dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host stat -c "$NODE_NAME %n permissions=%a" /var/lib/kubelet/kubeconfig 2> /dev/null
use_multiple_values: true
tests:
test_items:
@@ -166,10 +163,8 @@ groups:
- id: 4.1.10
text: "Ensure that the kubelet configuration file ownership is set to root:root (Automated)"
audit: |
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host stat -c "$node %n %U:%G" /var/lib/kubelet/kubeconfig
done 2> /dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host stat -c "$NODE_NAME %n %U:%G" /var/lib/kubelet/kubeconfig 2> /dev/null
use_multiple_values: true
tests:
test_items:
@@ -184,10 +179,8 @@ groups:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: |
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host grep -B4 -A1 anonymous /etc/kubernetes/kubelet.conf
done
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host grep -B4 -A1 anonymous /etc/kubernetes/kubelet.conf 2> /dev/null
use_multiple_values: true
tests:
test_items:
@@ -205,10 +198,8 @@ groups:
audit: |
POD=$(oc -n openshift-kube-apiserver get pod -l app=openshift-kube-apiserver -o jsonpath='{.items[0].metadata.name}')
TOKEN=$(oc whoami -t)
for name in $(oc get nodes -ojsonpath='{.items[*].metadata.name}')
do
oc exec -n openshift-kube-apiserver $POD -- curl -sS https://172.25.0.1/api/v1/nodes/$name/proxy/configz -k -H "Authorization:Bearer $TOKEN" | jq -r '.kubeletconfig.authorization.mode'
done
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc exec -n openshift-kube-apiserver $POD -- curl -sS https://172.25.0.1/api/v1/nodes/$NODE_NAME/proxy/configz -k -H "Authorization:Bearer $TOKEN" | jq -r '.kubeletconfig.authorization.mode' 2> /dev/null
use_multiple_values: true
tests:
test_items:
@@ -220,17 +211,12 @@ groups:
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: |
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host grep clientCAFile /etc/kubernetes/kubelet.conf
done 2> /dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host grep clientCAFile /etc/kubernetes/kubelet.conf 2> /dev/null
use_multiple_values: true
tests:
test_items:
- flag: "clientCAFile"
compare:
op: eq
value: "/etc/kubernetes/kubelet-ca.crt"
- flag: '"clientCAFile": "/etc/kubernetes/kubelet-ca.crt"'
remediation: |
None required. Changing the clientCAFile value is unsupported.
scored: true
@@ -258,18 +244,13 @@ groups:
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: |
# Should return 1 for each node
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host ps -ef | grep kubelet | grep streaming-connection-idle-timeout
echo exit_code=$?
done 2>/dev/null
# Should return 1 for each node
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host grep streamingConnectionIdleTimeout /etc/kubernetes/kubelet.conf
echo exit_code=$?
done 2>/dev/null
# Should return 1 for the node
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/${NODE_NAME} -- chroot /host ps -ef | grep kubelet | grep streaming-connection-idle-timeout 2> /dev/null
echo exit_code=$?
# Should return 1 for the node
oc debug node/${NODE_NAME} -- chroot /host grep streamingConnectionIdleTimeout /etc/kubernetes/kubelet.conf 2> /dev/null
echo exit_code=$?
use_multiple_values: true
tests:
bin_op: or
@@ -278,6 +259,10 @@ groups:
compare:
op: noteq
value: 0
- flag: streamingConnectionIdleTimeout
compare:
op: noteq
value: 0s
- flag: "exit_code"
compare:
op: eq
@@ -290,10 +275,8 @@ groups:
- id: 4.2.6
text: "Ensure that the --protect-kernel-defaults argument is not set (Manual)"
audit: |
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}');
do
oc debug node/${node} -- chroot /host more /etc/kubernetes/kubelet.conf;
done
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/$NODE_NAME -- chroot /host more /etc/kubernetes/kubelet.conf 2> /dev/null
tests:
test_items:
- flag: protectKernelDefaults
@@ -348,10 +331,8 @@ groups:
- id: 4.2.9
text: "Ensure that the kubeAPIQPS [--event-qps] argument is set to 0 or a level which ensures appropriate event capture (Manual)"
audit: |
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}');
do
oc debug node/${node} -- chroot /host more /etc/kubernetes/kubelet.conf;
done
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/${NODE_NAME} -- chroot /host cat /etc/kubernetes/kubelet.conf;
oc get machineconfig 01-worker-kubelet -o yaml | grep --color kubeAPIQPS%3A%2050
oc get machineconfig 01-master-kubelet -o yaml | grep --color kubeAPIQPS%3A%2050
type: "manual"
@@ -364,7 +345,12 @@ groups:
- id: 4.2.10
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: |
oc get configmap config -n openshift-kube-apiserver -ojson | jq -r '.data["config.yaml"]' | jq '.kubeletClientInfo'
oc get configmap config -n openshift-kube-apiserver -o json \
| jq -r '.data["config.yaml"]' \
| jq -r '.apiServerArguments |
.["kubelet-client-certificate"][0],
.["kubelet-client-key"][0]
'
tests:
bin_op: and
test_items:
@@ -379,15 +365,10 @@ groups:
text: "Ensure that the --rotate-certificates argument is not set to false (Manual)"
audit: |
#Verify the rotateKubeletClientCertificate feature gate is not set to false
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot /host cat /etc/kubernetes/kubelet.conf | grep RotateKubeletClientCertificate
done 2> /dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/${NODE_NAME} -- chroot /host cat /etc/kubernetes/kubelet.conf | grep RotateKubeletClientCertificate 2> /dev/null
# Verify the rotateCertificates argument is set to true
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot host grep rotate /etc/kubernetes/kubelet.conf;
done 2> /dev/null
oc debug node/${NODE_NAME} -- chroot /host grep rotate /etc/kubernetes/kubelet.conf 2> /dev/null
use_multiple_values: true
tests:
bin_op: or
@@ -410,24 +391,19 @@ groups:
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
audit: |
#Verify the rotateKubeletServerCertificate feature gate is on
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}');
do
oc debug node/${node} -- chroot /host grep RotateKubeletServerCertificate /etc/kubernetes/kubelet.conf;
done 2> /dev/null
NODE_NAME=$(oc get pod $HOSTNAME -o=jsonpath='{.spec.nodeName}')
oc debug node/${NODE_NAME} -- chroot /host grep RotateKubeletServerCertificate /etc/kubernetes/kubelet.conf 2> /dev/null
# Verify the rotateCertificates argument is set to true
for node in $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
do
oc debug node/${node} -- chroot host grep rotate /etc/kubernetes/kubelet.conf;
done 2> /dev/null
oc debug node/${NODE_NAME} -- chroot /host grep rotate /etc/kubernetes/kubelet.conf 2> /dev/null
use_multiple_values: true
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
- flag: rotateCertificates
compare:
op: eq
value: true
- flag: rotateCertificates
- flag: RotateKubeletServerCertificate
compare:
op: eq
value: true

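The node-level hunks above share a second pattern: resolve the current node from the kube-bench pod, then inspect host files through a debug pod instead of iterating over all nodes. A generic sketch (the target path is a placeholder):

  NODE_NAME=$(oc get pod "$HOSTNAME" -o=jsonpath='{.spec.nodeName}')
  oc debug node/"$NODE_NAME" -- chroot /host stat -c "$NODE_NAME %n permissions=%a" <path> 2>/dev/null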
View File

@@ -11,6 +11,12 @@ groups:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
audit: |
#To get a list of users and service accounts with the cluster-admin role
oc get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].kind |
grep cluster-admin
#To verify that kubeadmin is removed, no results should be returned
oc get secrets kubeadmin -n kube-system
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
@@ -29,6 +35,15 @@ groups:
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
audit: |
#needs verification
oc get roles --all-namespaces -o yaml
for i in $(oc get roles -A -o jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'); do oc describe role -n "${i%%/*}" "${i##*/}"; done
#Retrieve the cluster roles defined in the cluster and review for wildcards
oc get clusterroles -o yaml
for i in $(oc get clusterroles -o jsonpath='{.items[*].metadata.name}'); do oc describe clusterrole ${i}; done
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
@@ -63,10 +78,7 @@ groups:
text: "Minimize the admission of privileged containers (Manual)"
audit: |
# needs verification
for i in `oc get scc --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'`;
do
echo "$i"; oc describe scc $i | grep "Allow Privileged";
done
oc get scc -o=custom-columns=NAME:.metadata.name,allowPrivilegedContainer:.allowPrivilegedContainer
tests:
test_items:
- flag: "false"
@@ -78,10 +90,7 @@ groups:
- id: 5.2.2
text: "Minimize the admission of containers wishing to share the host process ID namespace (Manual)"
audit: |
for i in `oc get scc --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'`;
do
echo "$i"; oc describe scc $i | grep "Allow Host PID";
done
oc get scc -o=custom-columns=NAME:.metadata.name,allowHostPID:.allowHostPID
tests:
test_items:
- flag: "false"
@@ -93,10 +102,7 @@ groups:
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host IPC namespace (Manual)"
audit: |
for i in `oc get scc --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'`;
do
echo "$i"; oc describe scc $i | grep "Allow Host IPC";
done
oc get scc -o=custom-columns=NAME:.metadata.name,allowHostIPC:.allowHostIPC
tests:
test_items:
- flag: "false"
@@ -108,10 +114,7 @@ groups:
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host network namespace (Manual)"
audit: |
for i in `oc get scc --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'`;
do
echo "$i"; oc describe scc $i | grep "Allow Host Network";
done
oc get scc -o=custom-columns=NAME:.metadata.name,allowHostNetwork:.allowHostNetwork
tests:
test_items:
- flag: "false"
@@ -123,10 +126,7 @@ groups:
- id: 5.2.5
text: "Minimize the admission of containers with allowPrivilegeEscalation (Manual)"
audit: |
for i in `oc get scc --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'`;
do
echo "$i"; oc describe scc $i | grep "Allow Privilege Escalation";
done
oc get scc -o=custom-columns=NAME:.metadata.name,allowPrivilegeEscalation:.allowPrivilegeEscalation
tests:
test_items:
- flag: "false"
@@ -138,18 +138,10 @@ groups:
- id: 5.2.6
text: "Minimize the admission of root containers (Manual)"
audit: |
# needs verification
for i in `oc get scc --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'`;
do
echo "$i";
oc describe scc $i | grep "Run As User Strategy";
done
# needs verification # | awk 'NR>1 {gsub("map\\[type:", "", $2); gsub("\\]$", "", $2); print $1 ":" $2}'
oc get scc -o=custom-columns=NAME:.metadata.name,runAsUser:.runAsUser.type
#For SCCs with MustRunAs verify that the range of UIDs does not include 0
for i in `oc get scc --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'`;
do
echo "$i";
oc describe scc $i | grep "\sUID";
done
oc get scc -o=custom-columns=NAME:.metadata.name,uidRangeMin:.runAsUser.uidRangeMin,uidRangeMax:.runAsUser.uidRangeMax
tests:
bin_op: or
test_items:
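
For the MustRunAs strategy, the uidRangeMin/uidRangeMax columns above still have to be reviewed by eye. A hedged jq sketch that lists only SCCs whose MustRunAs range would permit UID 0 (treating an absent uidRangeMin as 0):
oc get scc -o json | jq -r '.items[]
  | select(.runAsUser.type == "MustRunAs")
  | select((.runAsUser.uidRangeMin // 0) == 0)
  | .metadata.name'
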
@@ -168,11 +160,7 @@ groups:
text: "Minimize the admission of containers with the NET_RAW capability (Manual)"
audit: |
# needs verification
for i in `oc get scc --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'`;
do
echo "$i";
oc describe scc $i | grep "Required Drop Capabilities";
done
oc get scc -o=custom-columns=NAME:.metadata.name,requiredDropCapabilities:.requiredDropCapabilities
tests:
bin_op: or
test_items:
@@ -213,6 +201,9 @@ groups:
- id: 5.3.2
text: "Ensure that all Namespaces have Network Policies defined (Manual)"
type: "manual"
audit: |
#Run the following command and review the NetworkPolicy objects created in the cluster.
oc get networkpolicy --all-namespaces
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
@@ -223,6 +214,10 @@ groups:
- id: 5.4.1
text: "Prefer using secrets as files over secrets as environment variables (Manual)"
type: "manual"
audit: |
#Run the following command to find references to objects which use environment variables defined from secrets.
oc get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A
remediation: |
If possible, rewrite application code to read secrets from mounted secret files, rather than
from environment variables.
@@ -252,6 +247,10 @@ groups:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
audit: |
#Run the following command and review the namespaces created in the cluster.
oc get namespaces
#Ensure that these namespaces are the ones you need and are adequately administered as per your requirements.
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
@@ -277,6 +276,11 @@ groups:
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "manual"
audit: |
#Run this command to list objects in default namespace
oc project default
oc get all
#The only entries there should be system-managed resources such as the kubernetes and openshift services
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.


@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml
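
kube-bench layers this file over the defaults in cfg/config.yaml at run time, so any key set here wins for this benchmark only. For example (a sketch, assuming the kube-bench binary is on PATH):
# Select the RKE CIS 1.23 checks explicitly instead of relying on auto-detection
kube-bench run --benchmark rke-cis-1.23
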


@@ -0,0 +1,46 @@
---
controls:
version: "rke-cis-1.23"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: true
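
$apiserverbin is not expanded by a shell on the node; kube-bench substitutes it from cfg/config.yaml (plus the version-specific override above) before running the audit. Assuming the default kube-apiserver binary name, the audit resolves to roughly:
/bin/ps -ef | grep kube-apiserver | grep -v grep
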
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false

cfg/rke-cis-1.23/etcd.yaml Normal file

@@ -0,0 +1,149 @@
---
controls:
version: "rke-cis-1.23"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
set: true
- flag: "--key-file"
env: "ETCD_KEY_FILE"
set: true
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/cert-file>
--key-file=</path/to/key-file>
scored: true
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--client-cert-auth"
set: true
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
set: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
env: "ETCD_PEER_CERT_FILE"
set: true
- flag: "--peer-key-file"
env: "ETCD_PEER_KEY_FILE"
set: true
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-client-cert-auth"
set: true
- flag: "--peer-client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
set: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
set: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
set: true
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: true
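
Each etcd test above carries an env key: when the flag is absent from the process command line, kube-bench falls back to the matching variable in the etcd process environment. A rough shell equivalent of that fallback for check 2.7 (assumes read access to /proc and a single etcd process):
pid=$(pgrep -f etcd | head -n1)
# Command-line check first, then the environment-variable fallback
tr '\0' '\n' < /proc/${pid}/cmdline | grep -- '--trusted-ca-file' \
  || tr '\0' '\n' < /proc/${pid}/environ | grep '^ETCD_TRUSTED_CA_FILE='
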

cfg/rke-cis-1.23/master.yaml Normal file

File diff suppressed because it is too large

cfg/rke-cis-1.23/node.yaml Normal file

@@ -0,0 +1,466 @@
---
controls:
version: "rke-cis-1.23"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 644 or more restrictive (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service.
All configuration is passed in as arguments at container run time.
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service.
All configuration is passed in as arguments at container run time.
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 644 or more restrictive (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $proxykubeconfig
scored: true
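
op: bitmask means the observed mode may be 644 or anything more restrictive: the check fails only if a permission bit outside 644 is set. The same test in plain shell (a sketch; $proxykubeconfig stands in for the real path and GNU stat is assumed):
mode=$(stat -c %a "$proxykubeconfig")
# Pass when mode & ~0644 has no bits set, i.e. nothing looser than 644
[ $(( 0$mode & ~0644 & 0777 )) -eq 0 ] && echo PASS || echo FAIL
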
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: true
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 644 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e /node$kubeletkubeconfig; then stat -c permissions=%a /node$kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 644 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e /node$kubeletkubeconfig; then stat -c %U:%G /node$kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Automated)"
audit: "stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Run the following command to modify the file permissions of the --client-ca-file.
chmod 644 <filename>
scored: true
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Automated)"
audit: "stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem"
tests:
test_items:
- flag: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: true
- id: 4.1.9
text: "Ensure that the kubelet --config configuration file has permissions set to 644 or more restrictive (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "644"
remediation: |
Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet.
All configuration is passed in as arguments at container run time.
scored: true
- id: 4.1.10
text: "Ensure that the kubelet --config configuration file ownership is set to root:root (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet.
All configuration is passed in as arguments at container run time.
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
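
The audit/audit_config pairing above is the standard kube-bench pattern: the flag is searched for in the process line printed by audit, and if it is absent, the path expression is evaluated against the config that audit_config prints. A rough manual equivalent for 4.2.1 (a sketch; $kubeletconf is the variable from cfg/config.yaml):
ps -fC kubelet | grep -o -- '--anonymous-auth=[^ ]*' \
  || grep -A 2 'anonymous:' "$kubeletconf"
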
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Ensure that the --read-only-port argument is set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.6
text: "Ensure that the --protect-kernel-defaults argument is set to true (Automated)"
type: "skip"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --protect-kernel-defaults
path: '{.protectKernelDefaults}'
compare:
op: eq
value: true
remediation: |
If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
System level configurations are required prior to provisioning the cluster in order for this argument to be set to true.
scored: true
- id: 4.2.7
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.8
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
type: "skip"
audit: "/bin/ps -fC $kubeletbin "
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
Clusters provisioned by RKE set the --hostname-override flag to avoid hostname configuration errors.
scored: false
- id: 4.2.9
text: "Ensure that the --event-qps argument is set to 0 or a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--event-qps=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.10
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
type: "skip"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
scored: false
- id: 4.2.11
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.12
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
type: "skip"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Clusters provisioned by RKE handle certificate rotation directly through RKE.
scored: false
- id: 4.2.13
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
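
op: valid_elements passes when every cipher in the observed comma-separated list is a member of the allowed set (a subset check, not equality). A small shell sketch of the same logic with a stand-in observed value:
allowed="TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
observed="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"   # stand-in for the node's value
for c in ${observed//,/ }; do
  case ",$allowed," in
    *",$c,"*) ;;                  # cipher is in the allowed set
    *) echo "FAIL: $c"; exit 1 ;;
  esac
done
echo PASS
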


@@ -0,0 +1,269 @@
---
controls:
version: "rke-cis-1.23"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role :
kubectl delete clusterrolebinding [name]
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
type: "skip"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
type: "skip"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
type: "skip"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Manual)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs` with the range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Manual)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "skip"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and setup image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "skip"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: false


@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml


@@ -0,0 +1,46 @@
---
controls:
version: "rke-cis-1.24"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: true
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false

cfg/rke-cis-1.24/etcd.yaml Normal file

@@ -0,0 +1,149 @@
---
controls:
version: "rke-cis-1.24"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
set: true
- flag: "--key-file"
env: "ETCD_KEY_FILE"
set: true
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/cert-file>
--key-file=</path/to/key-file>
scored: true
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--client-cert-auth"
set: true
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
set: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
env: "ETCD_PEER_CERT_FILE"
set: true
- flag: "--peer-key-file"
env: "ETCD_PEER_KEY_FILE"
set: true
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-cert-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-client-cert-auth"
set: true
- flag: "--peer-client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
set: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
set: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
set: true
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: true

cfg/rke-cis-1.24/master.yaml Normal file

File diff suppressed because it is too large

cfg/rke-cis-1.24/node.yaml Normal file

@@ -0,0 +1,459 @@
---
controls:
version: "rke-cis-1.24"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 $kubeletsvc
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: true
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: true
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c "if test -e /node/etc/kubernetes/ssl/kube-ca.pem; then stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
- flag: "File not found"
remediation: |
Run the following command to modify the file permissions of the --client-ca-file.
chmod 600 <filename>
scored: true
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Automated)"
audit: '/bin/sh -c "if test -e /node/etc/kubernetes/ssl/kube-ca.pem; then stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem; else echo \"File not found\"; fi"'
tests:
bin_op: or
test_items:
- flag: root:root
- flag: "File not found"
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: true
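
Unlike the 1.23 variant of this check, the audit above prints the literal sentinel "File not found" when the CA file is absent, and the second test_item matches that sentinel, so a missing file counts as a pass rather than an empty, failing result. The control flow, spelled out:
if test -e /node/etc/kubernetes/ssl/kube-ca.pem; then
  stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem   # matched against root:root
else
  echo "File not found"                                # matched by the second test_item
fi
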
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet.
All configuration is passed in as arguments at container run time.
scored: true
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet.
All configuration is passed in as arguments at container run time.
scored: true
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.6
text: "Ensure that the --protect-kernel-defaults argument is set to true (Automated)"
type: "skip"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --protect-kernel-defaults
path: '{.protectKernelDefaults}'
compare:
op: eq
value: true
remediation: |
If using a Kubelet config file, edit the file to set `protectKernelDefaults` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
System level configurations are required prior to provisioning the cluster in order for this argument to be set to true.
scored: true
- id: 4.2.7
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.8
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
type: "manual"
audit: "/bin/ps -fC $kubeletbin "
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
Clusters provisioned by RKE set the --hostname-override flag to avoid hostname configuration errors.
scored: false
- id: 4.2.9
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: eq
value: 0
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--event-qps=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
- id: 4.2.10
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
scored: true
- id: 4.2.11
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
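# Illustrative sketch (not part of the benchmark): the check passes when
# `rotateCertificates` is true or unset. If a kubelet config file is used,
# this fragment keeps rotation enabled explicitly:
#   apiVersion: kubelet.config.k8s.io/v1beta1
#   kind: KubeletConfiguration
#   rotateCertificates: true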
- id: 4.2.12
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
type: "manual"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Clusters provisioned by RKE handle certificate rotation directly through RKE.
scored: false
- id: 4.2.13
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
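# Illustrative sketch (not part of the benchmark): `tlsCipherSuites` is a list
# in the kubelet config file, so the cipher names from the remediation appear
# as YAML sequence entries, for example (subset shown):
#   apiVersion: kubelet.config.k8s.io/v1beta1
#   kind: KubeletConfiguration
#   tlsCipherSuites:
#     - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
#     - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256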


@@ -0,0 +1,327 @@
---
controls:
version: "rke-cis-1.24"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower-privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
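# Illustrative sketch (not part of the benchmark): the remediation's
# `automountServiceAccountToken: false` is set on the ServiceAccount object,
# e.g. for the default account in a namespace (namespace name is a placeholder):
#   apiVersion: v1
#   kind: ServiceAccount
#   metadata:
#     name: default
#     namespace: my-namespace
#   automountServiceAccountToken: false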
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
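# Illustrative sketch (not part of the benchmark): with Pod Security Admission,
# one way to restrict privileged containers in a namespace is an enforce label
# (the namespace name is a placeholder; the baseline profile disallows
# privileged containers):
#   apiVersion: v1
#   kind: Namespace
#   metadata:
#     name: my-namespace
#     labels:
#       pod-security.kubernetes.io/enforce: baseline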
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
audit: |
kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostPID == null) or (.spec.hostPID == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
tests:
bin_op: or
test_items:
- flag: "kubectl: not found"
- flag: "jq: not found"
- flag: count
compare:
op: gt
value: 0
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
scored: true
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
audit: |
kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostIPC == null) or (.spec.hostIPC == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
tests:
bin_op: or
test_items:
- flag: "kubectl: not found"
- flag: "jq: not found"
- flag: count
compare:
op: gt
value: 0
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
scored: true
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
audit: |
kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.hostNetwork == null) or (.spec.hostNetwork == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
tests:
bin_op: or
test_items:
- flag: "kubectl: not found"
- flag: "jq: not found"
- flag: count
compare:
op: gt
value: 0
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
scored: true
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)"
audit: |
kubectl get psp -o json | jq .items[] | jq -r 'select((.spec.allowPrivilegeEscalation == null) or (.spec.allowPrivilegeEscalation == false))' | jq .metadata.name | wc -l | xargs -I {} echo '--count={}'
tests:
bin_op: or
test_items:
- flag: "kubectl: not found"
- flag: "jq: not found"
- flag: count
compare:
op: gt
value: 0
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: true
- id: 5.2.7
text: "Minimize the admission of root containers (Manual)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs`, with a UID range that does not include 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Manual)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "skip"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
scored: false
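# Illustrative sketch (not part of the benchmark): a common starting point is a
# default-deny NetworkPolicy per namespace (namespace name is a placeholder):
#   apiVersion: networking.k8s.io/v1
#   kind: NetworkPolicy
#   metadata:
#     name: default-deny-ingress
#     namespace: my-namespace
#   spec:
#     podSelector: {}
#     policyTypes:
#       - Ingress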
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and set up image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Automated)"
audit: |
#!/bin/bash
set -eE
handle_error() {
echo "false"
}
trap 'handle_error' ERR
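# count resources in the default namespace, ignoring the built-in 'kubernetes' service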
count=$(kubectl get all -n default -o json | jq .items[] | jq -r 'select((.metadata.name!="kubernetes"))' | jq .metadata.name | wc -l)
if [[ ${count} -gt 0 ]]; then
echo "false"
exit
fi
echo "true"
tests:
bin_op: or
test_items:
- flag: "kubectl: not found"
- flag: "jq: not found"
- flag: "true"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
scored: true


@@ -0,0 +1,2 @@
---
## Version-specific settings that override the values in cfg/config.yaml


@@ -0,0 +1,60 @@
---
controls:
version: "rke-cis-1.7"
id: 3
text: "Control Plane Configuration"
type: "controlplane"
groups:
- id: 3.1
text: "Authentication and Authorization"
checks:
- id: 3.1.1
text: "Client certificate authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
scored: false
- id: 3.1.2
text: "Service account token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of service account tokens.
scored: false
- id: 3.1.3
text: "Bootstrap token authentication should not be used for users (Manual)"
type: "manual"
remediation: |
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be implemented
in place of bootstrap tokens.
scored: false
- id: 3.2
text: "Logging"
checks:
- id: 3.2.1
text: "Ensure that a minimal audit policy is created (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-policy-file"
set: true
remediation: |
Create an audit policy file for your cluster.
scored: true
- id: 3.2.2
text: "Ensure that the audit policy covers key security concerns (Manual)"
type: "manual"
remediation: |
Review the audit policy provided for the cluster and ensure that it covers
at least the following areas,
- Access to Secrets managed by the cluster. Care should be taken to only
log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in
order to avoid risk of logging sensitive data.
- Modification of Pod and Deployment objects.
- Use of `pods/exec`, `pods/portforward`, `pods/proxy` and `services/proxy`.
For most requests, minimally logging at the Metadata level is recommended
(the most basic level of logging).
scored: false
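# Illustrative sketch (not part of the benchmark): an audit policy fragment
# covering the areas above, with Metadata-level logging for Secrets, ConfigMaps
# and TokenReviews, and request logging for the listed pod subresources:
#   apiVersion: audit.k8s.io/v1
#   kind: Policy
#   rules:
#     - level: Metadata
#       resources:
#         - group: ""
#           resources: ["secrets", "configmaps"]
#         - group: "authentication.k8s.io"
#           resources: ["tokenreviews"]
#     - level: Request
#       resources:
#         - group: ""
#           resources: ["pods/exec", "pods/portforward", "pods/proxy", "services/proxy"]
#     - level: Metadata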

cfg/rke-cis-1.7/etcd.yaml

@@ -0,0 +1,136 @@
---
controls:
version: "rke-cis-1.7"
id: 2
text: "Etcd Node Configuration"
type: "etcd"
groups:
- id: 2
text: "Etcd Node Configuration"
checks:
- id: 2.1
text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--cert-file"
env: "ETCD_CERT_FILE"
- flag: "--key-file"
env: "ETCD_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure TLS encryption.
Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml
on the master node and set the below parameters.
--cert-file=</path/to/ca-file>
--key-file=</path/to/key-file>
scored: true
- id: 2.2
text: "Ensure that the --client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--client-cert-auth"
env: "ETCD_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--client-cert-auth="true"
scored: true
- id: 2.3
text: "Ensure that the --auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
set: false
- flag: "--auto-tls"
env: "ETCD_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --auto-tls parameter or set it to false.
--auto-tls=false
scored: true
- id: 2.4
text: "Ensure that the --peer-cert-file and --peer-key-file arguments are
set as appropriate (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--peer-cert-file"
env: "ETCD_PEER_CERT_FILE"
- flag: "--peer-key-file"
env: "ETCD_PEER_KEY_FILE"
remediation: |
Follow the etcd service documentation and configure peer TLS encryption as appropriate
for your etcd cluster.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameters.
--peer-client-file=</path/to/peer-cert-file>
--peer-key-file=</path/to/peer-key-file>
scored: true
- id: 2.5
text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--peer-client-cert-auth"
env: "ETCD_PEER_CLIENT_CERT_AUTH"
compare:
op: eq
value: true
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and set the below parameter.
--peer-client-cert-auth=true
scored: true
- id: 2.6
text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
set: false
- flag: "--peer-auto-tls"
env: "ETCD_PEER_AUTO_TLS"
compare:
op: eq
value: false
remediation: |
Edit the etcd pod specification file $etcdconf on the master
node and either remove the --peer-auto-tls parameter or set it to false.
--peer-auto-tls=false
scored: true
- id: 2.7
text: "Ensure that a unique Certificate Authority is used for etcd (Automated)"
audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep"
tests:
test_items:
- flag: "--trusted-ca-file"
env: "ETCD_TRUSTED_CA_FILE"
set: true
remediation: |
[Manual test]
Follow the etcd documentation and create a dedicated certificate authority setup for the
etcd service.
Then, edit the etcd pod specification file $etcdconf on the
master node and set the below parameter.
--trusted-ca-file=</path/to/ca-file>
scored: true

cfg/rke-cis-1.7/master.yaml

@@ -0,0 +1,998 @@
---
controls:
version: "rke-cis-1.7"
id: 1
text: "Control Plane Security Configuration"
type: "master"
groups:
- id: 1.1
text: "Control Plane Node Configuration Files"
checks:
- id: 1.1.1
text: "Ensure that the API server pod specification file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c permissions=%a $apiserverconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the
control plane node.
For example, chmod 600 $apiserverconf
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.2
text: "Ensure that the API server pod specification file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $apiserverconf; then stat -c %U:%G $apiserverconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $apiserverconf
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for kube-apiserver.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.3
text: "Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c permissions=%a $controllermanagerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $controllermanagerconf
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for kube-controller-manager.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.4
text: "Ensure that the controller manager pod specification file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $controllermanagerconf; then stat -c %U:%G $controllermanagerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $controllermanagerconf
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for kube-controller-manager.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.5
text: "Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c permissions=%a $schedulerconf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 $schedulerconf
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for kube-scheduler.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.6
text: "Ensure that the scheduler pod specification file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $schedulerconf; then stat -c %U:%G $schedulerconf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root $schedulerconf
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for kube-scheduler.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.7
text: "Ensure that the etcd pod specification file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c permissions=%a; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $etcdconf
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for etcd.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.8
text: "Ensure that the etcd pod specification file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $etcdconf; then find $etcdconf -name '*etcd*' | xargs stat -c %U:%G; fi'"
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $etcdconf
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for etcd.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.9
text: "Ensure that the Container Network Interface file permissions are set to 600 or more restrictive (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c permissions=%a
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c permissions=%a
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 <path/to/cni/files>
scored: false
- id: 1.1.10
text: "Ensure that the Container Network Interface file ownership is set to root:root (Manual)"
audit: |
ps -ef | grep $kubeletbin | grep -- --cni-conf-dir | sed 's%.*cni-conf-dir[= ]\([^ ]*\).*%\1%' | xargs -I{} find {} -mindepth 1 | xargs --no-run-if-empty stat -c %U:%G
find /var/lib/cni/networks -type f 2> /dev/null | xargs --no-run-if-empty stat -c %U:%G
use_multiple_values: true
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root <path/to/cni/files>
scored: false
- id: 1.1.11
text: "Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)"
audit: stat -c %a /node/var/lib/etcd
tests:
test_items:
- flag: "700"
compare:
op: eq
value: "700"
set: true
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above). For example,
chmod 700 /var/lib/etcd
scored: true
- id: 1.1.12
text: "Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)"
type: "skip"
audit: "stat -c %U:%G /node/var/lib/etcd"
tests:
test_items:
- flag: "etcd:etcd"
set: true
remediation: |
On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
from the command 'ps -ef | grep etcd'.
Run the below command (based on the etcd data directory found above).
For example, chown etcd:etcd /var/lib/etcd
Permissive - A system service account is required for etcd data directory ownership.
Refer to Rancher's hardening guide for more details on how to configure this ownership.
scored: true
- id: 1.1.13
text: "Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c permissions=%a /etc/kubernetes/admin.conf; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chmod 600 /etc/kubernetes/admin.conf
Not Applicable - Clusters provisioned by RKE do not store the default Kubernetes kubeconfig credentials file on the nodes.
scored: true
- id: 1.1.14
text: "Ensure that the admin.conf file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e /etc/kubernetes/admin.conf; then stat -c %U:%G /etc/kubernetes/admin.conf; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /etc/kubernetes/admin.conf
Not Applicable - Clusters provisioned by RKE do not store the default Kubernetes kubeconfig credentials file on the nodes.
scored: true
- id: 1.1.15
text: "Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c permissions=%a $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $schedulerkubeconfig
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the scheduler.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.16
text: "Ensure that the scheduler.conf file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $schedulerkubeconfig; then stat -c %U:%G $schedulerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $schedulerkubeconfig
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the scheduler.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.17
text: "Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c permissions=%a $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chmod 600 $controllermanagerkubeconfig
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the controller-manager.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.18
text: "Ensure that the controller-manager.conf file ownership is set to root:root (Automated)"
type: "skip"
audit: "/bin/sh -c 'if test -e $controllermanagerkubeconfig; then stat -c %U:%G $controllermanagerkubeconfig; fi'"
tests:
test_items:
- flag: "root:root"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown root:root $controllermanagerkubeconfig
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the controller-manager.
All configuration is passed in as arguments at container run time.
scored: true
- id: 1.1.19
text: "Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)"
audit: "check_files_owner_in_dir.sh /node/etc/kubernetes/ssl"
tests:
test_items:
- flag: "true"
compare:
op: eq
value: "true"
set: true
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
chown -R root:root /etc/kubernetes/pki/
scored: true
- id: 1.1.20
text: "Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Manual)"
audit: "find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
find /node/etc/kubernetes/ssl/ -name '*.pem' ! -name '*key.pem' -exec chmod -R 600 {} +
scored: false
- id: 1.1.21
text: "Ensure that the Kubernetes PKI key file permissions are set to 600 (Manual)"
audit: "find /node/etc/kubernetes/ssl/ -name '*key.pem' | xargs stat -c permissions=%a"
use_multiple_values: true
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on the control plane node.
For example,
find /node/etc/kubernetes/ssl/ -name '*key.pem' -exec chmod -R 600 {} +
scored: false
- id: 1.2
text: "API Server"
checks:
- id: 1.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--anonymous-auth"
compare:
op: eq
value: false
set: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--anonymous-auth=false
scored: true
- id: 1.2.2
text: "Ensure that the --token-auth-file parameter is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--token-auth-file"
set: false
remediation: |
Follow the documentation and configure alternate mechanisms for authentication. Then,
edit the API server pod specification file $apiserverconf
on the control plane node and remove the --token-auth-file=<filename> parameter.
scored: true
- id: 1.2.3
text: "Ensure that the --DenyServiceExternalIPs is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: "DenyServiceExternalIPs"
set: true
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and remove the `DenyServiceExternalIPs`
from enabled admission plugins.
scored: true
- id: 1.2.4
text: "Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--kubelet-client-certificate"
- flag: "--kubelet-client-key"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the
apiserver and kubelets. Then, edit API server pod specification file
$apiserverconf on the control plane node and set the
kubelet client certificate and key parameters as below.
--kubelet-client-certificate=<path/to/client-certificate-file>
--kubelet-client-key=<path/to/client-key-file>
scored: true
- id: 1.2.5
text: "Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)"
type: "skip"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--kubelet-certificate-authority"
set: true
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between
the apiserver and kubelets. Then, edit the API server pod specification file
$apiserverconf on the control plane node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
Permissive - When generating serving certificates, functionality could break in conjunction with hostname overrides which are required for certain cloud providers.
scored: true
- id: 1.2.6
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: nothave
value: "AlwaysAllow"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to values other than AlwaysAllow.
One such example could be as below.
--authorization-mode=RBAC
scored: true
- id: 1.2.7
text: "Ensure that the --authorization-mode argument includes Node (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "Node"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
scored: true
- id: 1.2.8
text: "Ensure that the --authorization-mode argument includes RBAC (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--authorization-mode"
compare:
op: has
value: "RBAC"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --authorization-mode parameter to a value that includes RBAC,
for example `--authorization-mode=Node,RBAC`.
scored: true
- id: 1.2.9
text: "Ensure that the admission control plugin EventRateLimit is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "EventRateLimit"
remediation: |
Follow the Kubernetes documentation and set the desired limits in a configuration file.
Then, edit the API server pod specification file $apiserverconf
and set the below parameters.
--enable-admission-plugins=...,EventRateLimit,...
--admission-control-config-file=<path/to/configuration/file>
scored: false
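# Illustrative sketch (not part of the benchmark): the admission control config
# file referenced by --admission-control-config-file can wire in EventRateLimit
# like this (the limit values and the eventconfig.yaml path are placeholders):
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: AdmissionConfiguration
#   plugins:
#     - name: EventRateLimit
#       path: eventconfig.yaml
# where eventconfig.yaml contains, for example:
#   apiVersion: eventratelimit.admission.k8s.io/v1alpha1
#   kind: Configuration
#   limits:
#     - type: Server
#       qps: 50
#       burst: 100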
- id: 1.2.10
text: "Ensure that the admission control plugin AlwaysAdmit is not set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: nothave
value: AlwaysAdmit
- flag: "--enable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --enable-admission-plugins parameter, or set it to a
value that does not include AlwaysAdmit.
scored: true
- id: 1.2.11
text: "Ensure that the admission control plugin AlwaysPullImages is set (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "AlwaysPullImages"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
AlwaysPullImages.
--enable-admission-plugins=...,AlwaysPullImages,...
scored: false
- id: 1.2.12
text: "Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)"
type: "skip"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "SecurityContextDeny"
- flag: "--enable-admission-plugins"
compare:
op: has
value: "PodSecurityPolicy"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to include
SecurityContextDeny, unless PodSecurityPolicy is already in place.
--enable-admission-plugins=...,SecurityContextDeny,...
Permissive - Enabling Pod Security Policy can cause applications to unexpectedly fail.
scored: false
- id: 1.2.13
text: "Ensure that the admission control plugin ServiceAccount is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "ServiceAccount"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Follow the documentation and create ServiceAccount objects as per your environment.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and ensure that the --disable-admission-plugins parameter is set to a
value that does not include ServiceAccount.
scored: true
- id: 1.2.14
text: "Ensure that the admission control plugin NamespaceLifecycle is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--disable-admission-plugins"
compare:
op: nothave
value: "NamespaceLifecycle"
- flag: "--disable-admission-plugins"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --disable-admission-plugins parameter to
ensure it does not include NamespaceLifecycle.
scored: true
- id: 1.2.15
text: "Ensure that the admission control plugin NodeRestriction is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--enable-admission-plugins"
compare:
op: has
value: "NodeRestriction"
remediation: |
Follow the Kubernetes documentation and configure NodeRestriction plug-in on kubelets.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --enable-admission-plugins parameter to a
value that includes NodeRestriction.
--enable-admission-plugins=...,NodeRestriction,...
scored: true
- id: 1.2.16
text: "Ensure that the --secure-port argument is not set to 0 - NoteThis recommendation is obsolete and will be deleted per the consensus process (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--secure-port"
compare:
op: gt
value: 0
- flag: "--secure-port"
set: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and either remove the --secure-port parameter or
set it to a different (non-zero) desired port.
scored: true
- id: 1.2.17
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.2.18
text: "Ensure that the --audit-log-path argument is set (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-path"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-path parameter to a suitable path and
file where you would like audit logs to be written, for example,
--audit-log-path=/var/log/apiserver/audit.log
scored: true
- id: 1.2.19
text: "Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxage"
compare:
op: gte
value: 30
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxage parameter to 30
or as an appropriate number of days, for example,
--audit-log-maxage=30
scored: true
- id: 1.2.20
text: "Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxbackup"
compare:
op: gte
value: 10
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxbackup parameter to 10 or to an appropriate
value. For example,
--audit-log-maxbackup=10
scored: true
- id: 1.2.21
text: "Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--audit-log-maxsize"
compare:
op: gte
value: 100
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --audit-log-maxsize parameter to an appropriate size in MB.
For example, to set it as 100 MB, --audit-log-maxsize=100
scored: true
- id: 1.2.22
text: "Ensure that the --request-timeout argument is set as appropriate (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
type: manual
remediation: |
Edit the API server pod specification file $apiserverconf
and set the below parameter as appropriate and if needed.
For example, --request-timeout=300s
scored: false
- id: 1.2.23
text: "Ensure that the --service-account-lookup argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--service-account-lookup"
set: false
- flag: "--service-account-lookup"
compare:
op: eq
value: true
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the below parameter.
--service-account-lookup=true
Alternatively, you can delete the --service-account-lookup parameter from this file so
that the default takes effect.
scored: true
- id: 1.2.24
text: "Ensure that the --service-account-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-key-file"
remediation: |
Edit the API server pod specification file $apiserverconf
on the control plane node and set the --service-account-key-file parameter
to the public key file for service accounts. For example,
--service-account-key-file=<filename>
scored: true
- id: 1.2.25
text: "Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--etcd-certfile"
- flag: "--etcd-keyfile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate and key file parameters.
--etcd-certfile=<path/to/client-certificate-file>
--etcd-keyfile=<path/to/client-key-file>
scored: true
- id: 1.2.26
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
bin_op: and
test_items:
- flag: "--tls-cert-file"
- flag: "--tls-private-key-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the TLS certificate and private key file parameters.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
scored: true
- id: 1.2.27
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--client-ca-file"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection on the apiserver.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the client certificate authority file.
--client-ca-file=<path/to/client-ca-file>
scored: true
- id: 1.2.28
text: "Ensure that the --etcd-cafile argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--etcd-cafile"
remediation: |
Follow the Kubernetes documentation and set up the TLS connection between the apiserver and etcd.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the etcd certificate authority file parameter.
--etcd-cafile=<path/to/ca-file>
scored: true
- id: 1.2.29
text: "Ensure that the --encryption-provider-config argument is set as appropriate (Manual)"
type: "skip"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--encryption-provider-config"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
Then, edit the API server pod specification file $apiserverconf
on the control plane node and set the --encryption-provider-config parameter to the path of that file.
For example, --encryption-provider-config=</path/to/EncryptionConfig/File>
Permissive - Enabling encryption changes how data can be recovered as data is encrypted.
scored: false
- id: 1.2.30
text: "Ensure that encryption providers are appropriately configured (Manual)"
type: "skip"
audit: |
ENCRYPTION_PROVIDER_CONFIG=$(ps -ef | grep $apiserverbin | grep -- --encryption-provider-config | sed 's%.*encryption-provider-config[= ]\([^ ]*\).*%\1%')
if test -e $ENCRYPTION_PROVIDER_CONFIG; then grep -A1 'providers:' $ENCRYPTION_PROVIDER_CONFIG | tail -n1 | grep -o "[A-Za-z]*" | sed 's/^/provider=/'; fi
tests:
test_items:
- flag: "provider"
compare:
op: valid_elements
value: "aescbc,kms,secretbox"
remediation: |
Follow the Kubernetes documentation and configure an EncryptionConfig file.
In this file, choose aescbc, kms or secretbox as the encryption provider.
Permissive - Enabling encryption changes how data can be recovered as data is encrypted.
scored: false
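# Illustrative sketch (not part of the benchmark): an EncryptionConfiguration
# using aescbc as the first-listed provider (the key material is a placeholder):
#   apiVersion: apiserver.config.k8s.io/v1
#   kind: EncryptionConfiguration
#   resources:
#     - resources: ["secrets"]
#       providers:
#         - aescbc:
#             keys:
#               - name: key1
#                 secret: <base64-encoded 32-byte key>
#         - identity: {}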
- id: 1.2.31
text: "Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Manual)"
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
test_items:
- flag: "--tls-cipher-suites"
compare:
op: valid_elements
value: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
remediation: |
Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml
on the control plane node and set the below parameter.
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,
TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384
scored: false
- id: 1.3
text: "Controller Manager"
checks:
- id: 1.3.1
text: "Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--terminated-pod-gc-threshold"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --terminated-pod-gc-threshold to an appropriate threshold,
for example, --terminated-pod-gc-threshold=10
scored: true
- id: 1.3.2
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.3.3
text: "Ensure that the --use-service-account-credentials argument is set to true (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--use-service-account-credentials"
compare:
op: noteq
value: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node to set the below parameter.
--use-service-account-credentials=true
scored: true
- id: 1.3.4
text: "Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--service-account-private-key-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --service-account-private-key-file parameter
to the private key file for service accounts.
--service-account-private-key-file=<filename>
scored: true
- id: 1.3.5
text: "Ensure that the --root-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
test_items:
- flag: "--root-ca-file"
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --root-ca-file parameter to the certificate bundle file.
--root-ca-file=<path/to/file>
scored: true
- id: 1.3.6
text: "Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)"
type: "skip"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--feature-gates"
compare:
op: nothave
value: "RotateKubeletServerCertificate=false"
set: true
- flag: "--feature-gates"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and set the --feature-gates parameter to include RotateKubeletServerCertificate=true.
--feature-gates=RotateKubeletServerCertificate=true
Clusters provisioned by RKE handle certificate rotation directly through RKE.
scored: true
- id: 1.3.7
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $controllermanagerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Controller Manager pod specification file $controllermanagerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true
- id: 1.4
text: "Scheduler"
checks:
- id: 1.4.1
text: "Ensure that the --profiling argument is set to false (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
test_items:
- flag: "--profiling"
compare:
op: eq
value: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf file
on the control plane node and set the below parameter.
--profiling=false
scored: true
- id: 1.4.2
text: "Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)"
audit: "/bin/ps -ef | grep $schedulerbin | grep -v grep"
tests:
bin_op: or
test_items:
- flag: "--bind-address"
compare:
op: eq
value: "127.0.0.1"
- flag: "--bind-address"
set: false
remediation: |
Edit the Scheduler pod specification file $schedulerconf
on the control plane node and ensure the correct value for the --bind-address parameter
scored: true

cfg/rke-cis-1.7/node.yaml

@@ -0,0 +1,462 @@
---
controls:
version: "rke-cis-1.7"
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
- id: 4.1
text: "Worker Node Configuration Files"
checks:
- id: 4.1.1
text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chmod 600 $kubeletsvc
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service.
All configuration is passed in as arguments at container run time.
scored: true
- id: 4.1.2
text: "Ensure that the kubelet service file ownership is set to root:root (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletsvc; then stat -c %U:%G $kubeletsvc; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletsvc
Not Applicable - Clusters provisioned by RKE don't require or maintain a configuration file for the kubelet service.
All configuration is passed in as arguments at container run time.
scored: true
- id: 4.1.3
text: "If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c permissions=%a $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: "permissions"
set: true
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $proxykubeconfig
scored: true
- id: 4.1.4
text: "If proxy kubeconfig file exists ensure ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $proxykubeconfig; then stat -c %U:%G $proxykubeconfig; fi'' '
tests:
bin_op: or
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example, chown root:root $proxykubeconfig
scored: true
- id: 4.1.5
text: "Ensure that the --kubeconfig kubelet.conf file permissions are set to 600 or more restrictive (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c permissions=%a $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chmod 600 $kubeletkubeconfig
scored: true
- id: 4.1.6
text: "Ensure that the --kubeconfig kubelet.conf file ownership is set to root:root (Automated)"
audit: '/bin/sh -c ''if test -e $kubeletkubeconfig; then stat -c %U:%G $kubeletkubeconfig; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the below command (based on the file location on your system) on each worker node.
For example,
chown root:root $kubeletkubeconfig
scored: true
- id: 4.1.7
text: "Ensure that the certificate authorities file permissions are set to 600 or more restrictive (Automated)"
audit: "stat -c permissions=%a /node/etc/kubernetes/ssl/kube-ca.pem"
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command to modify the file permissions of the
--client-ca-file chmod 600 <filename>
scored: true
- id: 4.1.8
text: "Ensure that the client certificate authorities file ownership is set to root:root (Automated)"
audit: "stat -c %U:%G /node/etc/kubernetes/ssl/kube-ca.pem"
tests:
test_items:
- flag: root:root
remediation: |
Run the following command to modify the ownership of the --client-ca-file.
chown root:root <filename>
scored: true
- id: 4.1.9
text: "If the kubelet config.yaml configuration file is being used validate permissions set to 600 or more restrictive (Automated)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c permissions=%a $kubeletconf; fi'' '
tests:
test_items:
- flag: "permissions"
compare:
op: bitmask
value: "600"
remediation: |
Run the following command (using the config file location identified in the Audit step)
chmod 600 $kubeletconf
Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
All configuration is passed in as arguments at container run time.
scored: true
- id: 4.1.10
text: "If the kubelet config.yaml configuration file is being used validate file ownership is set to root:root (Manual)"
type: "skip"
audit: '/bin/sh -c ''if test -e $kubeletconf; then stat -c %U:%G $kubeletconf; fi'' '
tests:
test_items:
- flag: root:root
remediation: |
Run the following command (using the config file location identified in the Audit step)
chown root:root $kubeletconf
Not Applicable - Clusters provisioned by RKE do not require or maintain a configuration file for the kubelet.
All configuration is passed in as arguments at container run time.
scored: false
- id: 4.2
text: "Kubelet"
checks:
- id: 4.2.1
text: "Ensure that the --anonymous-auth argument is set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: "--anonymous-auth"
path: '{.authentication.anonymous.enabled}'
compare:
op: eq
value: false
remediation: |
If using a Kubelet config file, edit the file to set `authentication: anonymous: enabled` to
`false`.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
`--anonymous-auth=false`
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
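# Illustrative only: a kubelet config fragment (in the file referenced by $kubeletconf)
# that would satisfy the `{.authentication.anonymous.enabled}` test above:
#
#   authentication:
#     anonymous:
#       enabled: false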
- id: 4.2.2
text: "Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --authorization-mode
path: '{.authorization.mode}'
compare:
op: nothave
value: AlwaysAllow
remediation: |
If using a Kubelet config file, edit the file to set `authorization.mode` to Webhook. If
using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--authorization-mode=Webhook
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
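# Illustrative kubelet config fragment satisfying the `{.authorization.mode}` test above:
#
#   authorization:
#     mode: Webhook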
- id: 4.2.3
text: "Ensure that the --client-ca-file argument is set as appropriate (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --client-ca-file
path: '{.authentication.x509.clientCAFile}'
remediation: |
If using a Kubelet config file, edit the file to set `authentication.x509.clientCAFile` to
the location of the client CA file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_AUTHZ_ARGS variable.
--client-ca-file=<path/to/client-ca-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
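# Illustrative kubelet config fragment; the CA path is an example taken from the RKE
# file locations used elsewhere in this benchmark, not a required location:
#
#   authentication:
#     x509:
#       clientCAFile: /etc/kubernetes/ssl/kube-ca.pem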
- id: 4.2.4
text: "Verify that the --read-only-port argument is set to 0 (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
bin_op: or
test_items:
- flag: "--read-only-port"
path: '{.readOnlyPort}'
compare:
op: eq
value: 0
- flag: "--read-only-port"
path: '{.readOnlyPort}'
set: false
remediation: |
If using a Kubelet config file, edit the file to set `readOnlyPort` to 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
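# Illustrative kubelet config fragment; leaving `readOnlyPort` unset also passes,
# since the second test item above accepts the flag being absent:
#
#   readOnlyPort: 0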
- id: 4.2.5
text: "Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
compare:
op: noteq
value: 0
- flag: --streaming-connection-idle-timeout
path: '{.streamingConnectionIdleTimeout}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `streamingConnectionIdleTimeout` to a
value other than 0.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--streaming-connection-idle-timeout=5m
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
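# Illustrative kubelet config fragment using the value suggested in the remediation:
#
#   streamingConnectionIdleTimeout: 5m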
- id: 4.2.6
text: "Ensure that the --make-iptables-util-chains argument is set to true (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
compare:
op: eq
value: true
- flag: --make-iptables-util-chains
path: '{.makeIPTablesUtilChains}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `makeIPTablesUtilChains` to `true`.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove the --make-iptables-util-chains argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
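# Illustrative kubelet config fragment; leaving the key unset also passes the check,
# as the second test item above accepts the flag being absent:
#
#   makeIPTablesUtilChains: true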
- id: 4.2.7
text: "Ensure that the --hostname-override argument is not set (Manual)"
# This is one of those properties that can only be set as a command line argument.
# To check if the property is set as expected, we need to parse the kubelet command
# instead of reading the Kubelet Configuration file.
type: "skip"
audit: "/bin/ps -fC $kubeletbin "
tests:
test_items:
- flag: --hostname-override
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and remove the --hostname-override argument from the
KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
Not Applicable - Clusters provisioned by RKE set the --hostname-override parameter to avoid hostname configuration errors.
scored: false
- id: 4.2.8
text: "Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --event-qps
path: '{.eventRecordQPS}'
compare:
op: gte
value: 0
- flag: --event-qps
path: '{.eventRecordQPS}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `eventRecordQPS` to an appropriate level.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: false
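# Illustrative kubelet config fragment; 5 is only an example level, pick a value
# appropriate for your environment's event volume:
#
#   eventRecordQPS: 5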
- id: 4.2.9
text: "Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Manual)"
type: "skip"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --tls-cert-file
path: '{.tlsCertFile}'
- flag: --tls-private-key-file
path: '{.tlsPrivateKeyFile}'
remediation: |
If using a Kubelet config file, edit the file to set `tlsCertFile` to the location
of the certificate file to use to identify this Kubelet, and `tlsPrivateKeyFile`
to the location of the corresponding private key file.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
Permissive - When generating serving certificates, functionality could break in conjunction with hostname overrides, which are required for certain cloud providers.
scored: false
- id: 4.2.10
text: "Ensure that the --rotate-certificates argument is not set to false (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --rotate-certificates
path: '{.rotateCertificates}'
compare:
op: eq
value: true
- flag: --rotate-certificates
path: '{.rotateCertificates}'
set: false
bin_op: or
remediation: |
If using a Kubelet config file, edit the file to set `rotateCertificates` to `true` or
remove it altogether to use the default value.
If using command line arguments, edit the kubelet service file
$kubeletsvc on each worker node and
remove --rotate-certificates=false argument from the KUBELET_CERTIFICATE_ARGS
variable.
Based on your system, restart the kubelet service. For example,
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
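# Illustrative kubelet config fragment:
#
#   rotateCertificates: true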
- id: 4.2.11
text: "Verify that the RotateKubeletServerCertificate argument is set to true (Manual)"
type: "skip"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
bin_op: or
test_items:
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
compare:
op: nothave
value: false
- flag: RotateKubeletServerCertificate
path: '{.featureGates.RotateKubeletServerCertificate}'
set: false
remediation: |
Edit the kubelet service file $kubeletsvc
on each worker node and set the below parameter in KUBELET_CERTIFICATE_ARGS variable.
--feature-gates=RotateKubeletServerCertificate=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Not Applicable - Clusters provisioned by RKE handle certificate rotation directly through RKE.
scored: false
- id: 4.2.12
text: "Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Automated)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --tls-cipher-suites
path: '{range .tlsCipherSuites[:]}{}{'',''}{end}'
compare:
op: valid_elements
value: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
remediation: |
If using a Kubelet config file, edit the file to set `tlsCipherSuites` to
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
or to a subset of these values.
If using executable arguments, edit the kubelet service file
$kubeletsvc on each worker node and
set the --tls-cipher-suites parameter as follows, or to a subset of these values.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
scored: true
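# Illustrative kubelet config fragment; per the `valid_elements` comparison above,
# any subset of the suites listed in the remediation is accepted:
#
#   tlsCipherSuites:
#     - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
#     - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256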
- id: 4.2.13
text: "Ensure that a limit is set on pod PIDs (Manual)"
audit: "/bin/ps -fC $kubeletbin"
audit_config: "/bin/sh -c 'if test -e $kubeletconf; then /bin/cat $kubeletconf; fi' "
tests:
test_items:
- flag: --pod-max-pids
path: '{.podPidsLimit}'
remediation: |
Decide on an appropriate level for this parameter and set it,
either via the --pod-max-pids command line parameter or the podPidsLimit configuration file setting.
scored: false
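# Illustrative kubelet config fragment; 4096 is an example limit, not a recommendation:
#
#   podPidsLimit: 4096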

309 cfg/rke-cis-1.7/policies.yaml Normal file

@@ -0,0 +1,309 @@
---
controls:
version: "rke-cis-1.7"
id: 5
text: "Kubernetes Policies"
type: "policies"
groups:
- id: 5.1
text: "RBAC and Service Accounts"
checks:
- id: 5.1.1
text: "Ensure that the cluster-admin role is only used where required (Manual)"
type: "manual"
remediation: |
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
kubectl delete clusterrolebinding [name]
scored: false
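# Illustrative audit command (not part of the check itself) to list the bindings
# that reference cluster-admin before removing any:
#
#   kubectl get clusterrolebindings -o wide | grep cluster-admin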
- id: 5.1.2
text: "Minimize access to secrets (Manual)"
type: "manual"
remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: false
- id: 5.1.3
text: "Minimize wildcard use in Roles and ClusterRoles (Manual)"
type: "manual"
remediation: |
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
scored: false
- id: 5.1.4
text: "Minimize access to create pods (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to pod objects in the cluster.
scored: false
- id: 5.1.5
text: "Ensure that default service accounts are not actively used. (Manual)"
type: "manual"
remediation: |
Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
scored: false
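# Illustrative manifest fragment for a default service account with token
# automounting disabled:
#
#   apiVersion: v1
#   kind: ServiceAccount
#   metadata:
#     name: default
#   automountServiceAccountToken: false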
- id: 5.1.6
text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)"
type: "manual"
remediation: |
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
scored: false
- id: 5.1.7
text: "Avoid use of system:masters group (Manual)"
type: "manual"
remediation: |
Remove the system:masters group from all users in the cluster.
scored: false
- id: 5.1.8
text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)"
type: "manual"
remediation: |
Where possible, remove the impersonate, bind and escalate rights from subjects.
scored: false
- id: 5.1.9
text: "Minimize access to create persistent volumes (Manual)"
type: "manual"
remediation: |
Where possible, remove create access to PersistentVolume objects in the cluster.
scored: false
- id: 5.1.10
text: "Minimize access to the proxy sub-resource of nodes (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the proxy sub-resource of node objects.
scored: false
- id: 5.1.11
text: "Minimize access to the approval sub-resource of certificatesigningrequests objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the approval sub-resource of certificatesigningrequest objects.
scored: false
- id: 5.1.12
text: "Minimize access to webhook configuration objects (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the validatingwebhookconfigurations or mutatingwebhookconfigurations objects.
scored: false
- id: 5.1.13
text: "Minimize access to the service account token creation (Manual)"
type: "manual"
remediation: |
Where possible, remove access to the token sub-resource of serviceaccount objects.
scored: false
- id: 5.2
text: "Pod Security Standards"
checks:
- id: 5.2.1
text: "Ensure that the cluster has at least one active policy control mechanism in place (Manual)"
type: "manual"
remediation: |
Ensure that either Pod Security Admission or an external policy control system is in place
for every namespace which contains user workloads.
scored: false
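# Illustrative: Pod Security Admission can be enabled per namespace with a label;
# `baseline` is an example enforcement level, not a recommendation:
#
#   kubectl label --overwrite ns <namespace> pod-security.kubernetes.io/enforce=baseline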
- id: 5.2.2
text: "Minimize the admission of privileged containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of privileged containers.
scored: false
- id: 5.2.3
text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)"
type: "skip"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostPID` containers.
Permissive - Enabling Pod Security Policy can cause applications to unexpectedly fail.
scored: false
- id: 5.2.4
text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)"
type: "skip"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostIPC` containers.
Permissive - Enabling Pod Security Policy can cause applications to unexpectedly fail.
scored: false
- id: 5.2.5
text: "Minimize the admission of containers wishing to share the host network namespace (Automated)"
type: "skip"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of `hostNetwork` containers.
Permissive - Enabling Pod Security Policy can cause applications to unexpectedly fail.
scored: false
- id: 5.2.6
text: "Minimize the admission of containers with allowPrivilegeEscalation (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `.spec.allowPrivilegeEscalation` set to `true`.
scored: false
- id: 5.2.7
text: "Minimize the admission of root containers (Manual)"
type: "manual"
remediation: |
Create a policy for each namespace in the cluster, ensuring that either `MustRunAsNonRoot`
or `MustRunAs`, with a range of UIDs not including 0, is set.
scored: false
- id: 5.2.8
text: "Minimize the admission of containers with the NET_RAW capability (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with the `NET_RAW` capability.
scored: false
- id: 5.2.9
text: "Minimize the admission of containers with added capabilities (Manual)"
type: "manual"
remediation: |
Ensure that `allowedCapabilities` is not present in policies for the cluster unless
it is set to an empty array.
scored: false
- id: 5.2.10
text: "Minimize the admission of containers with capabilities assigned (Manual)"
type: "manual"
remediation: |
Review the use of capabilities in applications running on your cluster. Where a namespace
contains applications which do not require any Linux capabilities to operate, consider adding
a PSP which forbids the admission of containers which do not drop all capabilities.
scored: false
- id: 5.2.11
text: "Minimize the admission of Windows HostProcess containers (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
scored: false
- id: 5.2.12
text: "Minimize the admission of HostPath volumes (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers with `hostPath` volumes.
scored: false
- id: 5.2.13
text: "Minimize the admission of containers which use HostPorts (Manual)"
type: "manual"
remediation: |
Add policies to each namespace in the cluster which has user workloads to restrict the
admission of containers which use `hostPort` sections.
scored: false
- id: 5.3
text: "Network Policies and CNI"
checks:
- id: 5.3.1
text: "Ensure that the CNI in use supports NetworkPolicies (Manual)"
type: "manual"
remediation: |
If the CNI plugin in use does not support network policies, consideration should be given to
making use of a different plugin, or finding an alternate mechanism for restricting traffic
in the Kubernetes cluster.
scored: false
- id: 5.3.2
text: "Ensure that all Namespaces have NetworkPolicies defined (Manual)"
type: "skip"
remediation: |
Follow the documentation and create NetworkPolicy objects as you need them.
Permissive - Enabling Network Policies can prevent certain applications from communicating with each other.
scored: false
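# Illustrative default-deny NetworkPolicy, applied per namespace (the name is an example):
#
#   apiVersion: networking.k8s.io/v1
#   kind: NetworkPolicy
#   metadata:
#     name: default-deny
#   spec:
#     podSelector: {}
#     policyTypes:
#       - Ingress
#       - Egress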
- id: 5.4
text: "Secrets Management"
checks:
- id: 5.4.1
text: "Prefer using Secrets as files over Secrets as environment variables (Manual)"
type: "manual"
remediation: |
If possible, rewrite application code to read Secrets from mounted secret files, rather than
from environment variables.
scored: false
- id: 5.4.2
text: "Consider external secret storage (Manual)"
type: "manual"
remediation: |
Refer to the Secrets management options offered by your cloud provider or a third-party
secrets management solution.
scored: false
- id: 5.5
text: "Extensible Admission Control"
checks:
- id: 5.5.1
text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and set up image provenance.
scored: false
- id: 5.7
text: "General Policies"
checks:
- id: 5.7.1
text: "Create administrative boundaries between resources using namespaces (Manual)"
type: "manual"
remediation: |
Follow the documentation and create namespaces for objects in your deployment as you need
them.
scored: false
- id: 5.7.2
text: "Ensure that the seccomp profile is set to docker/default in your Pod definitions (Manual)"
type: "manual"
remediation: |
Use `securityContext` to enable the docker/default seccomp profile in your pod definitions.
An example is as below:
securityContext:
seccompProfile:
type: RuntimeDefault
scored: false
- id: 5.7.3
text: "Apply SecurityContext to your Pods and Containers (Manual)"
type: "manual"
remediation: |
Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a
suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker
Containers.
scored: false
- id: 5.7.4
text: "The default namespace should not be used (Manual)"
type: "skip"
remediation: |
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
Permissive - Kubernetes provides a default namespace.
scored: false

Some files were not shown because too many files have changed in this diff.