Compare commits


177 Commits

Author SHA1 Message Date
David Wertenteil
f1514d6e76 Merge pull request #1002 from kubescape/hot-fix-windows-url
update opa-utils pkg with URL parsing fixed
2023-01-03 22:39:07 +02:00
David Wertenteil
3a038c9a0e update opa-utils pkg with URL parsing fixed 2023-01-03 22:24:37 +02:00
David Wertenteil
b4bdf4d860 Release (#1000)
* fixed flaky loop(cautils): loadpolicy getter

We should not inject pointers to the variable iterated over by the
"range" operator.

Signed-off-by: Frédéric BIDON <fredbi@yahoo.com>
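A minimal Go sketch of the range-variable pitfall this bullet describes (hypothetical names, not taken from the Kubescape code); with the Go versions used here (pre-1.22), the loop variable is reused on every iteration:

```
package main

import "fmt"

type policy struct{ name string }

func main() {
	policies := []policy{{"nsa"}, {"mitre"}, {"cis"}}

	// Buggy pattern: &p is the address of the single, reused loop variable,
	// so every stored pointer ends up referring to the last element ("cis").
	var bad []*policy
	for _, p := range policies {
		bad = append(bad, &p)
	}

	// Fix: point at the slice element itself (or copy into a fresh variable).
	var good []*policy
	for i := range policies {
		good = append(good, &policies[i])
	}

	fmt.Println(bad[0].name, good[0].name) // cis nsa
}
```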

* fixed more flaky pointers in loops (registryadaptors, opaprocessor)

Signed-off-by: Frédéric BIDON <fredbi@yahoo.com>

* fixed more flaky pointers in loops (resultshandling)

Signed-off-by: Frédéric BIDON <fredbi@yahoo.com>

* enabled golangci linter in CI

Signed-off-by: Frédéric BIDON <fredbi@yahoo.com>

* fixed linting issues with minimal linters config

Signed-off-by: Frédéric BIDON <fredbi@yahoo.com>

* bump go version to 1.19

* English and typos

* Support AKS parser (#994)

* support GKE parser

* update go mod

* support GKE parser

* update go mod

* update k8s-interface pkg

* Added KS design.drawio

* revert k8s.io to v0.25.3

* ran go mod tidy

* update sign-up url

* [wip] Adding CreateAccount support

* revert to docs URL

* update opa-utils pkg

* Print attack tree (optional, with argument) (#997)

* Print attack tree with the argument


Signed-off-by: Frédéric BIDON <fredbi@yahoo.com>
Co-authored-by: Frédéric BIDON <frederic@oneconcern.com>
Co-authored-by: Frédéric BIDON <fredbi@yahoo.com>
Co-authored-by: Oshrat Nir <45561829+Oshratn@users.noreply.github.com>
Co-authored-by: Amir Malka <amirm@armosec.io>
Co-authored-by: David Wertenteil <dwertent@armosec.io>
2023-01-03 11:30:09 +02:00
David Wertenteil
08e7108dc0 Merge pull request #991 from kubescape/dev
Release
2022-12-22 18:12:37 +02:00
David Wertenteil
108a2d6dd8 Merge pull request #962 from anubhav06/gitlab-scan
added GitLab repo scanning support
2022-12-22 17:29:57 +02:00
David Wertenteil
2c28286bb1 update httphandler go mod 2022-12-22 17:07:47 +02:00
David Wertenteil
79858b7ed7 Merge pull request #975 from kooomix/dev
control scan and download only by id
2022-12-22 16:56:40 +02:00
David Wertenteil
bb2e83eb3b update go-git pkg 2022-12-22 16:55:11 +02:00
David Wertenteil
282a29b971 Merge pull request #990 from dwertent/cloud-name-breakdown
update config scanning path
2022-12-22 16:37:27 +02:00
David Wertenteil
60b9edc463 update config scanning path 2022-12-22 16:14:44 +02:00
David Wertenteil
0f9a5e3127 Merge pull request #989 from dwertent/cloud-name-breakdown
Breakdown cloud-cluster name
2022-12-22 16:03:31 +02:00
David Wertenteil
7c79c14363 Update core/pkg/resultshandling/results.go 2022-12-22 15:00:59 +02:00
Vlad Klokun
fe84225252 feat: notify about writing to an output file in PrettyPrinter 2022-12-22 15:00:59 +02:00
Vlad Klokun
56da8d8d92 style: tidy up the PDFPrinter
- Shorten receiver names
- Modify comments to follow Go Doc convention
2022-12-22 15:00:59 +02:00
Vlad Klokun
f135e95d2c style: shorten receiver names in JUnitPrinter 2022-12-22 15:00:59 +02:00
Vlad Klokun
db34183fc1 style: shorten receiver names in JSONPrinter 2022-12-22 15:00:59 +02:00
Vlad Klokun
8f3af71c84 style: shorten receiver names in HTML Printer 2022-12-22 15:00:59 +02:00
Vlad Klokun
116aee0c9c style: shorten receiver names in PrettyPrinter 2022-12-22 15:00:59 +02:00
Vlad Klokun
e5d44f741d docs: clarify new meaning of the --format CLI flag 2022-12-22 15:00:59 +02:00
Vlad Klokun
f005cb7f80 feat: always print to (T)UI using PrettyPrinter
Prior to this change, `pretty-printer` was a special type of Printer
that wrote output to `Stdout`, unless explicitly asked to write to a
given file. Kubescape used `pretty-printer` as an output format by
default. This behavior created the following inconsistencies:
- When invoked as `kubescape scan`, Kubescape would use `pretty-printer`
  by default, and it would output the scan results in the
  `pretty-printer` format to `Stdout`.
- When invoked as `kubescape scan --format=pretty-printer`, the behavior
  would be as above.
- When invoked as `kubescape scan --format=FORMAT`, where `FORMAT` is any
  format except for `pretty-printer`, Kubescape would write the results
  to a sensible default file for the selected format. This is in
  contrast to how `--format=pretty-printer` would still output to
  `os.Stdout`, and not an output file.
- When invoked as `kubescape scan --format=ANY_FORMAT --output=FILENAME`, where
  `ANY_FORMAT` is any format, including `pretty-printer`, Kubescape
  would write the results to the provided `FILENAME` in the given
  `ANY_FORMAT`, and not write any results to `Stdout`.

The aforementioned situation complicates life for users running
Kubescape in CI, where Kubescape would skip writing the results to
`Stdout` and only write to the provided output file.

Moreover, with the addition of support for multiple output formats and,
hence, files, this introduces the following ambiguity:
- When invoked as `kubescape scan --format=json,pdf,pretty-printer
  --output=FILENAME`, should Kubescape treat `pretty-printer` as a
  format for the output file, or just an instruction to also print the
  results to `Stdout`?

To fix these inconsistencies and ambiguities, this commit introduces the
following changes:

- Kubescape will always print results to `Stdout` using the
  PrettyPrinter format.

- The `--format` CLI flag will control the format(s) in which the results
  will be written to one or many *output* files. This breaks the
  previous behavior, in which running `kubescape scan
  --format=pretty-printer` did not produce an output file and only
  wrote to `Stdout`. After this change, the same invocation will still
  write to `Stdout`, but will also produce a `report.txt` file in the
  PrettyPrinter format.
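As a rough Go sketch of the behavior described above: pretty output always goes to `Stdout`, while each value passed to `--format` produces its own output file. The types and helpers below are hypothetical, not Kubescape's actual printer API.

```
package main

import (
	"fmt"
	"os"
	"strings"
)

// results is a stand-in for the scan results object (hypothetical).
type results struct{ summary string }

func prettyPrint(w *os.File, r *results) { fmt.Fprintln(w, r.summary) }

// writeFormat writes the results to a sensible default file per format,
// e.g. report.txt, report.json, report.pdf (sketch only).
func writeFormat(format string, r *results) error {
	ext := map[string]string{"pretty-printer": "txt", "json": "json", "pdf": "pdf"}[format]
	return os.WriteFile("report."+ext, []byte(r.summary), 0o644)
}

func main() {
	r := &results{summary: "scan summary ..."}

	// 1. Always print the results to Stdout with the pretty printer.
	prettyPrint(os.Stdout, r)

	// 2. --format only controls which output files are written,
	//    e.g. --format=json,pdf,pretty-printer
	for _, f := range strings.Split("json,pdf,pretty-printer", ",") {
		if err := writeFormat(f, r); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```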
2022-12-22 15:00:59 +02:00
Vlad Klokun
9ae9d35ccb style: GetFormatsFormats 2022-12-22 15:00:59 +02:00
Vlad Klokun
cb38a4e8a1 style: go fmt the project
- Fixes style inside the project with `go fmt`
2022-12-22 15:00:59 +02:00
Vlad Klokun
eb6d39be42 style: shorten receiver names in ResultsHandler 2022-12-22 15:00:59 +02:00
Vlad Klokun
3160d74c42 style: shorten receiver names for Prometheus printer 2022-12-22 15:00:59 +02:00
Vlad Klokun
5076c38482 refactor: tidy up printing to multiple outputs
This change:

- Simplifies printing to multiple outputs.
- Adds a comment on why we keep the Print → Score → Submit order when
  outputting results.
2022-12-22 15:00:59 +02:00
Vlad Klokun
73c55fe253 fix: revert the overridden ScanningTarget when submitting reports
Before this change, we used to override the scan info's `ScanningTarget` to
submit a result compatible with the Kubescape backend. However, we
forgot to change it back to the original value afterwards.

When printing scan results, if the correct order of events (Print →
Score → Submit) was not enforced, this broke the SARIF printer, leaving it
unable to output results due to an incorrect `basePath`.

This change reverts to the original `ScanningTarget` value after
submitting the results and fixes the SARIF printer.
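A short Go sketch of the save-and-restore pattern the commit describes; the field and values are illustrative, not Kubescape's actual definitions:

```
package main

import "fmt"

// scanInfo is an illustrative stand-in, not the real Kubescape type.
type scanInfo struct{ ScanningTarget string }

func submit(si *scanInfo) { fmt.Println("submitting as target:", si.ScanningTarget) }

func main() {
	si := &scanInfo{ScanningTarget: "local-git"}

	// Override only for the submission, since the backend expects a different target...
	original := si.ScanningTarget
	si.ScanningTarget = "repository"
	submit(si)

	// ...then restore the original value so later printers (e.g. SARIF,
	// which derives its basePath from the target) keep working.
	si.ScanningTarget = original
	fmt.Println("printing with target:", si.ScanningTarget)
}
```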
2022-12-22 15:00:59 +02:00
suhasgumma
f48f81c0b5 Add logs for some formats 2022-12-22 15:00:59 +02:00
Suhas Gumma
81c1c29b7c Update core/pkg/resultshandling/printer/printresults.go
Co-authored-by: Vlad Klokun <vladklokun@users.noreply.github.com>
2022-12-22 15:00:59 +02:00
suhasgumma
874aa38f68 Handle Output Extensions Gracefully 2022-12-22 15:00:59 +02:00
suhasgumma
b9caaf5025 Add logs for some formats 2022-12-22 15:00:59 +02:00
suhasgumma
61c120de0e Support getting outputs in multiple formats 2022-12-22 15:00:59 +02:00
kooomix
de3408bf57 minor fix 2022-12-22 14:09:27 +02:00
David Wertenteil
8d32032ec1 Merge branch 'cloud-name-breakdown' of github.com:dwertent/kubescape into cloud-name-breakdown 2022-12-22 13:33:23 +02:00
David Wertenteil
42ed787f7b update go mod in httphandler 2022-12-22 13:32:27 +02:00
David Wertenteil
ccdba85b3c Merge branch 'dev' into cloud-name-breakdown 2022-12-22 11:57:47 +02:00
David Wertenteil
c59f7691dc Breakdown cloud-cluster name 2022-12-22 11:43:45 +02:00
kooomix
cf87c2d30b Fixed test 2022-12-21 19:25:22 +02:00
kooomix
b547814dec DownloadInfo, PolicyIdentifier add Identity, remove ID and Name 2022-12-21 19:17:29 +02:00
kooomix
b476a72e04 Merge branch 'dev' of github.com:kooomix/kubescape into dev 2022-12-21 17:02:24 +02:00
kooomix
4f6f85710a Merge pull request #988 from kooomix/opa_utils-v.0.0.216
opa-utils v0.0.216
2022-12-21 16:06:34 +02:00
kooomix
47c23de160 opa-utils v0.0.216 2022-12-21 15:46:53 +02:00
kooomix
bc85844ec0 Merge branch 'kubescape:dev' into dev 2022-12-21 15:31:07 +02:00
kooomix
134d854722 opa-utils v0.0.216 2022-12-21 15:29:58 +02:00
David Wertenteil
e3522c19cc Merge pull request #986 from dwertent/master
Cosmetic changes
2022-12-20 10:33:21 +02:00
David Wertenteil
967fc3fe81 ignore resource if it is not found 2022-12-19 19:00:21 +02:00
David Wertenteil
896a0699ec remove image vuln warning 2022-12-18 13:45:43 +02:00
David Wertenteil
a53375204e remove --verbose flag from default 2022-12-18 13:44:12 +02:00
David Wertenteil
b1392361f8 remove emoji from display 2022-12-18 13:42:58 +02:00
David Wertenteil
7b4fbffae2 Merge pull request #976 from mkilchhofer/explicit_allowPrivilegeEscalation
chore: Explicitly set allowPrivilegeEscalation=true
2022-12-18 08:09:35 +02:00
David Wertenteil
34e7b9f2ad Merge pull request #978 from kubescape/maintainers
Update maintainers
2022-12-18 08:08:20 +02:00
David Wertenteil
f0080bdeae Merge pull request #979 from craigbox/code-of-conduct
Adopt CNCF Code of Conduct.
2022-12-18 08:07:39 +02:00
Craig Box
0eb27389da Adopt CNCF Code of Conduct.
Signed-off-by: Craig Box <craigb@armosec.io>
2022-12-16 21:46:31 +13:00
craigbox
2c5eed9ee2 Update maintainers
- Add self
- Remove e-mail addresses & job roles
2022-12-16 21:19:02 +13:00
David Wertenteil
2c1a5bd032 Merge pull request #977 from kubescape/revert-973-dev
Revert "Excluding controlPlaneInfo from error message in case no data recieved."
2022-12-15 17:48:32 +02:00
David Wertenteil
298f8346e9 validate downloaded framework 2022-12-15 17:13:14 +02:00
kooomix
1897c5a4ba Revert "Excluding controlPlaneInfo from error message in case no data received." 2022-12-15 16:17:39 +02:00
Marco Kilchhofer
57e435271e chore: Explicitly set allowPrivilegeEscalation=true
The value of allowPrivilegeEscalation followed the implicit default of Kubernetes:
> AllowPrivilegeEscalation is true always when the container is:
> 1) run as Privileged
> 2) has CAP_SYS_ADMIN

For users still using PodSecurityPolicy (or a follow-up product such as OPA Gatekeeper or
Kyverno), there may be mutating admission controllers that default this field to
`false` if unset. A value of `false` would then conflict with `privileged: true`.

Signed-off-by: Marco Kilchhofer <mkilchhofer@users.noreply.github.com>
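The same point expressed with the upstream `k8s.io/api/core/v1` types (a hedged Go sketch, assuming that module; the actual change was to the deployment manifest): setting the field explicitly means a mutating webhook cannot default it to `false` next to `privileged: true`.

```
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// Explicitly set allowPrivilegeEscalation so a mutating admission
	// controller cannot default it to false, which would conflict with
	// privileged: true (privileged containers always allow escalation).
	sc := corev1.SecurityContext{
		Privileged:               boolPtr(true),
		AllowPrivilegeEscalation: boolPtr(true),
	}
	fmt.Println(*sc.Privileged, *sc.AllowPrivilegeEscalation)
}
```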
2022-12-14 22:27:05 +01:00
kooomix
7e9b430347 test fix 2022-12-14 14:22:46 +02:00
kooomix
ca5b3e626b test fix 2022-12-14 14:08:32 +02:00
kooomix
3a404f29fa control scan by id 2022-12-14 13:42:52 +02:00
kooomix
16073d6872 download control only by id 2022-12-14 13:06:04 +02:00
Rotem Refael
dce563d2f5 Merge pull request #973 from kooomix/dev
Excluding controlPlaneInfo from error message in case no data received.
2022-12-14 11:02:55 +02:00
kooomix
8d556a5b84 minor 2022-12-14 10:48:01 +02:00
kooomix
a61063e5b8 revert opa-utils version 2022-12-14 10:45:24 +02:00
kooomix
94973867db Merge branch 'kubescape:dev' into dev 2022-12-14 10:23:11 +02:00
kooomix
214c2dcae8 patch to filter out "controlPlaneInfo" from error messages in case no data 2022-12-14 10:19:24 +02:00
David Wertenteil
72b36bf012 Merge pull request #968 from fredbi/chore/package-name
chore(style): renamed versioned packages to stick to idiomatic conventions
2022-12-13 16:52:57 +02:00
Frederic BIDON
4335e6ceac chore(style): renamed versioned packages to stick to idiomatic conventions
* fixes: #967

Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2022-12-13 14:27:21 +01:00
kooomix
b5f92a7d54 go mod tidy 2022-12-13 11:32:23 +02:00
kooomix
41ec75d264 update opa-utils v0.0.209 2022-12-13 11:20:17 +02:00
kooomix
6d6ad1f487 Merge pull request #963 from kooomix/outputs_to_get_controls_only_by_ids
All prints and outputs to get data only by control ID
2022-12-13 08:32:01 +02:00
kooomix
3ac33d21ac All prints and outputs to get data by control ID 2022-12-12 15:20:48 +02:00
Anubhav Gupta
04e4b37f6f added GitLab repo scanning support
Signed-off-by: Anubhav Gupta <mail.anubhav06@gmail.com>
2022-12-11 21:39:28 +05:30
David Wertenteil
3e5903de6a Merge pull request #961 from kubescape/dev
change linux runner to 20.04 instead of ubuntu-latest (#960)
2022-12-11 15:02:04 +02:00
Moshe Rappaport
04ea0fe524 change linux runner to 20.04 instead of ubuntu-latest (#960)
Co-authored-by: Amir Malka <amirm@armosec.io>
2022-12-11 14:20:28 +02:00
David Wertenteil
955d6751a9 Merge pull request #956 from kubescape/dev
Enhance `host-scanner`
2022-12-08 22:51:26 +02:00
David Wertenteil
30c43bff10 Merge pull request #958 from Moshe-Rappaport-CA/dev
Fix Junit format
2022-12-08 19:41:31 +02:00
Moshe-Rappaport-CA
e009244566 Fix Junit format 2022-12-08 17:56:16 +02:00
David Wertenteil
3d3cd2c2d8 Added Kubescape flow.drawio 2022-12-06 15:44:34 +02:00
David Wertenteil
f5498371ec Merge pull request #942 from kooomix/eran-dev
new host-scanner endpoint - cloudProviderInfo
2022-12-06 15:20:24 +02:00
David Wertenteil
c3b95bed8c Merge branch 'dev' into eran-dev 2022-12-06 14:17:49 +02:00
David Wertenteil
8ce7d6c0f6 Merge pull request #930 from JusteenR/issue929
Issue929
2022-12-06 14:15:35 +02:00
David Wertenteil
e875f429a9 Merge pull request #948 from YiscahLevySilas1/dev
Print host scanner version
2022-12-06 14:13:47 +02:00
David Wertenteil
b6beff0488 Merge pull request #946 from suhasgumma/dev
Fixed: CIS control link not working for html output format
2022-12-06 14:13:06 +02:00
David Wertenteil
60c69ac3f0 Merge pull request #950 from fredbi/fix-789
fix(giturlparse): fixes panic on unexpected gitlab remote URL
2022-12-06 14:12:25 +02:00
David Wertenteil
1fb9320421 Merge pull request #941 from dwertent/master
Updating examples
2022-12-06 14:11:07 +02:00
David Wertenteil
9a176f6667 remove tag latest 2022-12-06 11:42:34 +02:00
David Wertenteil
96ea9a9e42 fixed scanning example 2022-12-06 11:41:12 +02:00
David Wertenteil
e39fca0c11 do not build dev images 2022-12-06 11:05:21 +02:00
David Wertenteil
2ec035005d fixed echo command 2022-12-04 15:45:23 +02:00
Frederic BIDON
b734b3aef0 go mod tidy ancillary modules manifest
Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2022-12-04 12:39:34 +01:00
yiscah
0f5635f42d move parsing of version to GetVersion 2022-12-04 12:17:04 +02:00
Frederic BIDON
8557075b7c fix(giturlparse): fixes panic on unexpected gitlab remote URL
* replaced the dependency on github.com/armosec/go-git-url with
github.com/kubescape/go-git-url
* fixes #789

NOTE: this requires kubescape/go-git-url#2 to be merged and a new release
of that repo to be cut in order to finalize the dependency update.

Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2022-12-02 16:09:25 +01:00
David Wertenteil
bc0f0e7087 Merge branch 'master' of github.com:dwertent/kubescape 2022-12-02 02:31:14 +02:00
David Wertenteil
8ce5f9aea3 fixed typo 2022-12-02 02:30:35 +02:00
David Wertenteil
050f9d3a4e Update cmd/scan/framework.go
Co-authored-by: craigbox <craig.box@gmail.com>
2022-12-02 02:16:43 +02:00
David Wertenteil
a81bf0deb4 deprecate set-output 2022-12-02 01:43:45 +02:00
David Wertenteil
2059324c27 testing release 2022-12-02 01:35:57 +02:00
David Wertenteil
a09a0a1bca Merge pull request #9 from dwertent/fix-dev-image
run build only if secret is set
2022-12-02 01:32:26 +02:00
David Wertenteil
83712bb9f5 run build only if secret is set 2022-12-02 01:30:24 +02:00
David Wertenteil
728ae47b9a Merge pull request #8 from dwertent/fix-dev-image
Fix dev image
2022-12-02 00:56:12 +02:00
David Wertenteil
2a9b272a14 tagging only main image as latest 2022-12-02 00:54:03 +02:00
David Wertenteil
8662deac43 update repository scanning URL 2022-12-02 00:42:28 +02:00
yiscah
e42644bbd8 update hostscanner version 2022-12-01 08:57:58 +02:00
YiscahLevySilas1
07d30b6272 Merge branch 'kubescape:dev' into dev 2022-11-30 20:52:48 +02:00
yiscah
2a4f8543cc added logs of host scanner version 2022-11-30 20:51:45 +02:00
suhasgumma
186b293cce fix link for cis controls in html output 2022-11-30 01:23:45 +05:30
David Wertenteil
2bfe72f39d Merge pull request #944 from kooomix/dev
opa-utils adjustments + dataControlInputs support
2022-11-29 19:11:49 +02:00
kooomix
f99f955223 go mod tidy 2022-11-29 15:26:30 +02:00
kooomix
ec56e69a3c minor fix 2022-11-29 14:55:30 +02:00
kooomix
3942583b1d Merge pull request #1 from kooomix/dataControlInputs
update opa-utils functions
2022-11-29 14:35:08 +02:00
kooomix
a10b15ba4b update opa-utils functions 2022-11-29 14:29:33 +02:00
David Wertenteil
5003cbd7a8 Merge pull request #943 from suhasgumma/invalidformat
Handle Invalid Formats
2022-11-28 17:39:14 +02:00
kooomix
481a137c23 Update host-scanner image version to v1.0.38 2022-11-28 16:46:32 +02:00
suhasgumma
c3f7f0938d Handle Invalid Formats 2022-11-28 19:56:27 +05:30
kooomix
b1925fa38d Support in new host-scanner endpoint - cloudProviderInfo 2022-11-28 09:18:43 +02:00
David Wertenteil
d9f8a7a46f Merge pull request #918 from suhasgumma/dev1
Store Git Repo's root path as localRootPath
2022-11-27 16:25:24 +02:00
David Wertenteil
846a072bf9 Merge pull request #917 from suhasgumma/dev
Fixed: Wrong Relative Path When scanning Local Directory
2022-11-27 16:24:19 +02:00
kooomix
5dd7bbd8a7 Merge pull request #938 from kooomix/eran-dev
Added cloudProvider to postureControlInputs
2022-11-27 09:06:56 +02:00
kooomix
e1773acf24 Getting cloud provider from gitversion of discovered API version 2022-11-25 09:27:27 +02:00
kooomix
03a0f97669 Getting cluster name from context 2022-11-24 16:09:05 +02:00
David Wertenteil
917a3f41e8 Merge pull request #925 from amirmalka/dev
Omit raw resources flag in json output
2022-11-24 14:47:14 +02:00
David Wertenteil
3c8da1b299 supporting client type from env 2022-11-24 11:09:30 +02:00
David Wertenteil
c61c7edbd0 update examples 2022-11-24 11:06:37 +02:00
kooomix
53402d9a1c Added "CloudProvider" to postureControlInputs 2022-11-23 11:57:36 +02:00
David Wertenteil
de9278b388 Merge pull request #935 from mkilchhofer/bugfix/use_correct_directory
fix: filepath.Dir requires trailing slash
2022-11-23 10:49:16 +02:00
Marco Kilchhofer
4fef6200f8 fix: filepath.Dir requires trailing slash
Signed-off-by: Marco Kilchhofer <mkilchhofer@users.noreply.github.com>
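The Go behavior behind this fix, shown with hypothetical paths: `filepath.Dir` returns the parent directory unless the path already ends in a separator.

```
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Without a trailing slash, Dir drops the last element and returns the parent:
	fmt.Println(filepath.Dir("/work/chart"))  // /work
	// With a trailing slash, the directory itself is returned (cleaned):
	fmt.Println(filepath.Dir("/work/chart/")) // /work/chart
}
```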
2022-11-22 21:26:37 +01:00
JusteenR
81771b7bd7 Adding frameworks column to control command 2022-11-20 15:42:13 -08:00
Moshe Rappaport
2fee77c42c Merge pull request #928 from Moshe-Rappaport-CA/PER-633-support-loading-exceptions-from-cache-kubescape
PER-633 support loading exceptions from cache
2022-11-20 14:09:30 +02:00
Moshe-Rappaport-CA
968ecdb31d PER-633 support loading exceptions from cache 2022-11-20 12:22:15 +02:00
David Wertenteil
af7b36a88b Merge pull request #927 from Moshe-Rappaport-CA/PER-550-support-loading-attack-tracks-from-cache-kubescape
Per 550 support loading attack tracks from cache kubescape
2022-11-20 11:24:38 +02:00
Moshe-Rappaport-CA
6ad58d38e2 PER-550 Support loading attack tracks from cache 2022-11-17 16:31:51 +02:00
Moshe-Rappaport-CA
681b4ce155 stash 2022-11-17 10:49:36 +02:00
Moshe Rappaport
9d21ac1b16 Merge pull request #924 from Moshe-Rappaport-CA/dev
revert change in Junit format
2022-11-16 15:36:48 +02:00
Amir Malka
2b3fcca7e8 omit raw resources flag in json output 2022-11-16 12:15:17 +02:00
David Wertenteil
af8e786ab5 Merge pull request #914 from kubescape/dev
Closing issues
2022-11-16 10:59:59 +02:00
Moshe-Rappaport-CA
c8df1b8f1f Merge remote-tracking branch 'armo/dev' into dev 2022-11-15 17:34:45 +02:00
Moshe-Rappaport-CA
4f921ddf6f Revert PR #802 to the old junit format 2022-11-15 16:59:37 +02:00
David Wertenteil
4f5839870b Merge pull request #920 from amirmalka/dev
Fixed docker build to support ARM #919
2022-11-15 14:53:20 +02:00
Amir Malka
c0d7f51d6c test build flow 2022-11-15 13:29:35 +02:00
Amir Malka
a81d770360 fixed docker build to support arm 2022-11-15 10:57:29 +02:00
suhasgumma
f64d5eab50 Fix RootDir Info 2022-11-15 12:38:57 +05:30
suhasgumma
d773397fe9 replace src with RelSrc 2022-11-15 10:34:36 +05:30
suhasgumma
2e30995bfc Relative Path When scanning Local Repos 2022-11-15 10:22:04 +05:30
David Wertenteil
17a2547f18 Merge pull request #915 from kubescape/change-test-control-name
replace control 0006 by 0048
2022-11-14 14:56:34 +02:00
David Wertenteil
87a5cd66c8 replace control 0006 by 0048 2022-11-14 14:36:37 +02:00
David Wertenteil
9436ace64f continue when resource not found 2022-11-14 13:52:46 +02:00
David Wertenteil
fde00f6bd8 Merge pull request #909 from suhasgumma/dev
pretty-print Controls format made Pretty
2022-11-13 17:04:46 +02:00
David Wertenteil
04a72a069a Merge pull request #913 from dwertent/ignore-missing-resource
Do not exit on error
2022-11-13 16:04:15 +02:00
David Wertenteil
e2dcb5bc15 Merge pull request #912 from dwertent/dep-rbac-submit
Deprecate rbac submit
2022-11-13 16:03:51 +02:00
suhasgumma
c7040a257c Pretty Print frameworks and exceptions 2022-11-13 19:29:26 +05:30
suhasgumma
602dc00c65 Shift GetControlLink to cautils 2022-11-13 19:09:30 +05:30
David Wertenteil
0339691571 Merge pull request #911 from dwertent/adding-remidiation
Adding remediation
2022-11-13 15:12:21 +02:00
David Wertenteil
9e1f3ec131 remove from smoke test 2022-11-13 15:10:05 +02:00
David Wertenteil
b8589819dc Do not exit on error 2022-11-13 15:06:32 +02:00
David Wertenteil
a3e87f4c01 Updating json v1 deprecation message 2022-11-13 15:03:22 +02:00
David Wertenteil
21ab5a602e Deprecate rbac submit 2022-11-13 15:01:32 +02:00
David Wertenteil
5d97d7b4b2 adding Remediation to message 2022-11-13 14:55:52 +02:00
suhasgumma
d8d7d0b372 Updated and Used GetControlLink 2022-11-13 17:56:39 +05:30
suhasgumma
b8323d41fc Modified Link Convention for CIS Controls 2022-11-13 17:22:37 +05:30
suhasgumma
d0b5314201 Improve Code Quality 2022-11-13 15:39:04 +05:30
suhasgumma
547e36e73f Pretty Print Controls made Pretty 2022-11-13 14:29:30 +05:30
David Wertenteil
e593a772cb Merge pull request #908 from Moshe-Rappaport-CA/update-k8s-interface-version
Update k8s-interface version and rbac-utils
2022-11-13 09:31:00 +02:00
Moshe-Rappaport-CA
4da09529b6 Update rbac-utils tag 2022-11-10 18:56:28 +02:00
Moshe-Rappaport-CA
de375992e8 Fix go.mod in httphandler 2022-11-10 17:54:44 +02:00
Moshe-Rappaport-CA
0bc4a29881 Update k8s-interface version 2022-11-10 17:38:32 +02:00
David Wertenteil
9575c92713 Merge pull request #906 from suhasgumma/dev
Fixed: Empty Lines before printing Controls and Added Invalid Format Error
2022-11-10 11:27:22 +02:00
David Wertenteil
cf277874eb Merge pull request #907 from matthyx/ioutil
remove deprecated ioutil package
2022-11-10 11:23:10 +02:00
Matthias Bertschy
746e060402 remove deprecated ioutil package 2022-11-10 09:58:07 +01:00
suhasgumma
dd3a7c816e Invalid Format Error 2022-11-10 11:57:57 +05:30
suhasgumma
814bc3ab2c Solved: Empty Lines before printing Controls 2022-11-10 11:17:48 +05:30
David Wertenteil
dbaf6761df Merge pull request #905 from matthyx/900
900
2022-11-10 06:52:34 +02:00
Matthias Bertschy
580e45827d add IDs to controls list, deprecate id flag 2022-11-09 22:08:04 +01:00
David Wertenteil
f3b8de9d1f fixing readme (#899) 2022-11-08 12:02:52 +02:00
David Wertenteil
6e9a2f55fd Merge pull request #894 from kubescape/dev
Enhancing CLI capabilities and SARIF output
2022-11-06 15:40:00 +02:00
David Wertenteil
dd7a8fd0c1 Merge pull request #883 from kubescape/dev
Minor changes
2022-10-26 13:31:04 +03:00
David Wertenteil
3373b728b7 Merge pull request #877 from kubescape/dev
Enhance configuration usage
2022-10-24 12:00:27 +03:00
174 changed files with 2583 additions and 3416 deletions

54
.github/workflows/01-golang-lint.yaml vendored Normal file
View File

@@ -0,0 +1,54 @@
name: golangci-lint
on:
push:
branches:
- dev
pull_request:
types: [ edited, opened, synchronize, reopened ]
branches: [ master, dev ]
paths-ignore:
- '**.yaml'
- '**.md'
permissions:
contents: read
# Optional: allow read access to pull request. Use with `only-new-issues` option.
pull-requests: read
jobs:
golangci:
name: lint
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: 1.18
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Install libgit2
run: make libgit2
- name: golangci-lint
uses: golangci/golangci-lint-action@v3
with:
# Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
version: latest
# Optional: working directory, useful for monorepos
# working-directory: somedir
# Optional: golangci-lint command line arguments.
# args: --issues-exit-code=0
args: --timeout 10m --build-tags=static
#--new-from-rev dev
# Optional: show only new issues if it's a pull request. The default value is `false`.
only-new-issues: true
# Optional: if set to true then the all caching functionality will be complete disabled,
# takes precedence over all other caching options.
# skip-cache: true
# Optional: if set to true then the action don't cache or restore ~/go/pkg.
# skip-pkg-cache: true
# Optional: if set to true then the action don't cache or restore ~/.cache/go-build.
# skip-build-cache: true

View File

@@ -26,14 +26,24 @@ on:
type: boolean
description: 'support amd64/arm64'
secrets:
QUAYIO_REGISTRY_USERNAME:
required: true
QUAYIO_REGISTRY_PASSWORD:
required: true
jobs:
check-secret:
name: check if QUAYIO_REGISTRY_USERNAME & QUAYIO_REGISTRY_PASSWORD is set in github secrets
runs-on: ubuntu-latest
outputs:
is-secret-set: ${{ steps.check-secret-set.outputs.is-secret-set }}
steps:
- name: Check whether unity activation requests should be done
id: check-secret-set
env:
QUAYIO_REGISTRY_USERNAME: ${{ secrets.QUAYIO_REGISTRY_USERNAME }}
QUAYIO_REGISTRY_PASSWORD: ${{ secrets.QUAYIO_REGISTRY_PASSWORD }}
run: |
echo "is-secret-set=${{ env.QUAYIO_REGISTRY_USERNAME != '' && env.QUAYIO_REGISTRY_PASSWORD != '' }}" >> $GITHUB_OUTPUT
build-image:
needs: [check-secret]
if: needs.check-secret.outputs.is-secret-set == 'true'
name: Build image and upload to registry
runs-on: ubuntu-latest
permissions:
@@ -61,10 +71,10 @@ jobs:
- name: Build and push image
if: ${{ inputs.support_platforms }}
run: docker buildx build . --file build/Dockerfile --tag ${{ inputs.image_name }}:${{ inputs.image_tag }} --tag ${{ inputs.image_name }}:latest --build-arg image_version=${{ inputs.image_tag }} --build-arg client=${{ inputs.client }} --push --platform linux/amd64,linux/arm64
- name: Build and push image without amd64/arm64 support
if: ${{ !inputs.support_platforms }}
run: docker buildx build . --file build/Dockerfile --tag ${{ inputs.image_name }}:${{ inputs.image_tag }} --tag ${{ inputs.image_name }}:latest --build-arg image_version=${{ inputs.image_tag }} --build-arg client=${{ inputs.client }} --push
run: docker buildx build . --file build/Dockerfile --tag ${{ inputs.image_name }}:${{ inputs.image_tag }} --tag ${{ inputs.image_name }}:latest --build-arg image_version=${{ inputs.image_tag }} --build-arg client=${{ inputs.client }} --push
- name: Install cosign
uses: sigstore/cosign-installer@main
@@ -75,6 +85,5 @@ jobs:
env:
COSIGN_EXPERIMENTAL: "true"
run: |
cosign sign --force ${{ inputs.image_name }}:latest
cosign sign --force ${{ inputs.image_name }}:${{ inputs.image_tag }}
cosign sign --force ${{ inputs.image_name }}

View File

@@ -4,7 +4,6 @@ on:
push:
branches: [ master ]
paths-ignore:
# Do not run the pipeline if only Markdown files changed
- '**.md'
jobs:
test:
@@ -29,7 +28,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-20.04, macos-latest, windows-latest]
steps:
- uses: actions/checkout@v3
with:
@@ -38,7 +37,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.18
go-version: 1.19
- name: Install MSYS2 & libgit2 (Windows)
shell: cmd
@@ -56,8 +55,8 @@ jobs:
CGO_ENABLED: 1
run: python3 --version && python3 build.py
- name: Upload release binaries
id: upload-release-asset
- name: Upload release binaries (Windows / MacOS)
id: upload-release-asset-win-macos
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -66,9 +65,22 @@ jobs:
asset_path: build/${{ matrix.os }}/kubescape
asset_name: kubescape-${{ matrix.os }}
asset_content_type: application/octet-stream
if: matrix.os != 'ubuntu-20.04'
- name: Upload release hash
id: upload-release-hash
- name: Upload release binaries (Linux)
id: upload-release-asset-linux
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.create-release.outputs.upload_url }}
asset_path: build/ubuntu-latest/kubescape
asset_name: kubescape-ubuntu-latest
asset_content_type: application/octet-stream
if: matrix.os == 'ubuntu-20.04'
- name: Upload release hash (Windows / MacOS)
id: upload-release-hash-win-macos
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -77,15 +89,27 @@ jobs:
asset_path: build/${{ matrix.os }}/kubescape.sha256
asset_name: kubescape-${{ matrix.os }}-sha256
asset_content_type: application/octet-stream
if: matrix.os != 'ubuntu-20.04'
- name: Upload release hash (Linux)
id: upload-release-hash-linux
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ needs.create-release.outputs.upload_url }}
asset_path: build/ubuntu-latest/kubescape.sha256
asset_name: kubescape-ubuntu-latest-sha256
asset_content_type: application/octet-stream
if: matrix.os == 'ubuntu-20.04'
publish-image:
if: ${{ github.repository == 'kubescape/kubescape' }} # TODO
uses: ./.github/workflows/build-image.yaml
needs: create-release
with:
client: "image-release"
image_name: "quay.io/${{ github.repository_owner }}/kubescape"
image_tag: "v2.0.${{ github.run_number }}"
support_platforms: false
support_platforms: true
cosign: true
secrets: inherit

View File

@@ -13,14 +13,13 @@ jobs:
release: "v2.0.${{ github.run_number }}"
client: test
publish-dev-image:
if: ${{ github.repository == 'kubescape/kubescape' }} # TODO
uses: ./.github/workflows/build-image.yaml
needs: test
with:
client: "image-dev"
image_name: "quay.io/${{ github.repository_owner }}/kubescape"
image_tag: "dev-v2.0.${{ github.run_number }}"
support_platforms: false
cosign: true
secrets: inherit
# publish-dev-image:
# uses: ./.github/workflows/build-image.yaml
# needs: test
# with:
# client: "image-dev"
# image_name: "quay.io/${{ github.repository_owner }}/kubescape"
# image_tag: "dev-v2.0.${{ github.run_number }}"
# support_platforms: true
# cosign: true
# secrets: inherit

View File

@@ -19,14 +19,14 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
strategy:
matrix:
os: [ubuntu-latest, macos-latest, windows-latest]
os: [ubuntu-20.04, macos-latest, windows-latest]
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Cache Go modules (Linux)
if: matrix.os == 'ubuntu-latest'
if: matrix.os == 'ubuntu-20.04'
uses: actions/cache@v3
with:
path: |
@@ -61,7 +61,7 @@ jobs:
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: 1.18
go-version: 1.19
- name: Install MSYS2 & libgit2 (Windows)
shell: cmd
@@ -85,9 +85,16 @@ jobs:
CGO_ENABLED: 1
run: python3 --version && python3 build.py
- name: Smoke Testing
- name: Smoke Testing (Windows / MacOS)
env:
RELEASE: ${{ inputs.release }}
KUBESCAPE_SKIP_UPDATE_CHECK: "true"
run: python3 smoke_testing/init.py ${PWD}/build/${{ matrix.os }}/kubescape
if: matrix.os != 'ubuntu-20.04'
- name: Smoke Testing (Linux)
env:
RELEASE: ${{ inputs.release }}
KUBESCAPE_SKIP_UPDATE_CHECK: "true"
run: python3 smoke_testing/init.py ${PWD}/build/ubuntu-latest/kubescape
if: matrix.os == 'ubuntu-20.04'

3
.gitignore vendored
View File

@@ -5,4 +5,5 @@
*.pyc*
.idea
.history
ca.srl
ca.srl
*.out

58
.golangci.yml Normal file
View File

@@ -0,0 +1,58 @@
linters-settings:
govet:
check-shadowing: true
dupl:
threshold: 200
goconst:
min-len: 3
min-occurrences: 2
gocognit:
min-complexity: 65
linters:
enable:
- gosec
- staticcheck
- nolintlint
disable:
# temporarily disabled
- varcheck
- ineffassign
- unused
- typecheck
- errcheck
- govet
- gosimple
- deadcode
- gofmt
- goimports
- bodyclose
- dupl
- gocognit
- gocritic
- goimports
- nakedret
- revive
- stylecheck
- unconvert
- unparam
#- forbidigo # <- see later
# should remain disabled
- maligned
- lll
- gochecknoinits
- gochecknoglobals
issues:
exclude-rules:
- linters:
- revive
text: "var-naming"
- linters:
- revive
text: "type name will be used as (.+?) by other packages, and that stutters"
- linters:
- stylecheck
text: "ST1003"
run:
skip-dirs:
- git2go

View File

@@ -1,127 +1,3 @@
# Contributor Covenant Code of Conduct
## Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement [here](mailto:ben@armosec.io).
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
The Kubescape project follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).

View File

@@ -1,10 +1,11 @@
# Maintainers
The following table lists Kubescape project maintainers
The following table lists the Kubescape project maintainers:
| Name | GitHub | Email | Organization | Role | Added/Renewed On |
| --- | --- | --- | --- | --- | --- |
| [Ben Hirschberg](https://www.linkedin.com/in/benyamin-ben-hirschberg-66141890) | [@slashben](https://github.com/slashben) | ben@armosec.io | [ARMO](https://www.armosec.io/) | VP R&D | 2021-09-01 |
| [Rotem Refael](https://www.linkedin.com/in/rotem-refael) | [@rotemamsa](https://github.com/rotemamsa) | rrefael@armosec.io | [ARMO](https://www.armosec.io/) | Team Leader | 2021-10-11 |
| [David Wertenteil](https://www.linkedin.com/in/david-wertenteil-0ba277b9) | [@dwertent](https://github.com/dwertent) | dwertent@armosec.io | [ARMO](https://www.armosec.io/) | Kubescape CLI Developer | 2021-09-01 |
| [Bezalel Brandwine](https://www.linkedin.com/in/bezalel-brandwine) | [@Bezbran](https://github.com/Bezbran) | bbrandwine@armosec.io | [ARMO](https://www.armosec.io/) | Kubescape SaaS Developer | 2021-09-01 |
| Name | GitHub | Organization | Added/Renewed On |
| --- | --- | --- | --- |
| [Ben Hirschberg](https://www.linkedin.com/in/benyamin-ben-hirschberg-66141890) | [@slashben](https://github.com/slashben) | [ARMO](https://www.armosec.io/) | 2021-09-01 |
| [Rotem Refael](https://www.linkedin.com/in/rotem-refael) | [@rotemamsa](https://github.com/rotemamsa) | [ARMO](https://www.armosec.io/) | 2021-10-11 |
| [David Wertenteil](https://www.linkedin.com/in/david-wertenteil-0ba277b9) | [@dwertent](https://github.com/dwertent) | [ARMO](https://www.armosec.io/) | 2021-09-01 |
| [Bezalel Brandwine](https://www.linkedin.com/in/bezalel-brandwine) | [@Bezbran](https://github.com/Bezbran) | [ARMO](https://www.armosec.io/) | 2021-09-01 |
| [Craig Box](https://www.linkedin.com/in/crbnz/) | [@craigbox](https://github.com/craigbox) | [ARMO](https://www.armosec.io/) | 2022-10-31 |

View File

@@ -11,11 +11,11 @@
:sunglasses: [Want to contribute?](#being-a-part-of-the-team) :innocent:
Kubescape is a K8s open-source tool providing a Kubernetes single pane of glass, including risk analysis, security compliance, RBAC visualizer, and image vulnerability scanning.
Kubescape scans K8s clusters, YAML files, and HELM charts, detecting misconfigurations according to multiple frameworks (such as the [NSA-CISA](https://www.armosec.io/blog/kubernetes-hardening-guidance-summary-by-armo/?utm_source=github&utm_medium=repository), [MITRE ATT&CK®](https://www.microsoft.com/security/blog/2021/03/23/secure-containerized-environments-with-updated-threat-matrix-for-kubernetes/)), software vulnerabilities, and RBAC (role-based-access-control) violations at early stages of the CI/CD pipeline, calculates risk score instantly and shows risk trends over time.
Kubescape is an open-source Kubernetes security platform. A single pane of glass access to view risk analysis, security compliance, RBAC visualization, and image vulnerability scanning.
Kubescape scans Kubernetes clusters, YAML files, and Helm charts. It detects misconfigurations according to multiple frameworks (such as [NSA-CISA](https://www.armosec.io/blog/kubernetes-hardening-guidance-summary-by-armo/?utm_source=github&utm_medium=repository), [MITRE ATT&CK®](https://www.microsoft.com/security/blog/2021/03/23/secure-containerized-environments-with-updated-threat-matrix-for-kubernetes/) and [CIS Benchmark](https://www.armosec.io/blog/cis-kubernetes-benchmark-framework-scanning-tools-comparison/?utm_source=github&utm_medium=repository)). Kubescape also helps you find software vulnerabilities, and RBAC (role-based-access-control) violations at early stages of the CI/CD pipeline. It calculates your risk score instantly and shows risk trends over time.
It has become one of the fastest-growing Kubernetes tools among developers due to its easy-to-use CLI interface, flexible output formats, and automated scanning capabilities, saving Kubernetes users and admins precious time, effort, and resources.
Kubescape integrates natively with other DevOps tools, including Jenkins, CircleCI, Github workflows, Prometheus, and Slack, and supports multi-cloud K8s deployments like EKS, GKE, and AKS.
Kubescape is one of the fastest-growing Kubernetes security tools among developers. It saves Kubernetes users and admins precious time, effort, and resources with its easy-to-use CLI interface, flexible output formats, and automated scanning capabilities.
Kubescape integrates natively with other DevOps tools, including Jenkins, CircleCI, Github workflows, Prometheus, and Slack. It supports multi-cloud Kubernetes deployments like EKS, GKE, and AKS.
</br>
@@ -52,6 +52,9 @@ kubescape scan --enable-host-scan --verbose
</br>
## Architecture in short
[Component architecture](docs/architecture.drawio.svg)
### [CLI](#kubescape-cli)
<div align="center">
<img src="docs/ks-cli-arch.png" width="300" alt="cli-diagram">
@@ -69,12 +72,14 @@ kubescape scan --enable-host-scan --verbose
# Being a part of the team
## Community
We invite you to our community! We are excited about this project and want to return the love we get.
You are invited to our community! We are excited about this project and want to return the love we get.
We hold community meetings in [Zoom](https://us02web.zoom.us/j/84020231442) on the first Tuesday of every month at 14:00 GMT! :sunglasses:
We hold community meetings on [Zoom](https://us02web.zoom.us/j/84020231442) on the first Tuesday of every month at 14:00 GMT! :sunglasses:
Please make sure that you follow our [Code Of Conduct](https://github.com/kubescape/kubescape/blob/master/CODE_OF_CONDUCT.md).
## Contributions
[Want to contribute?](https://github.com/kubescape/kubescape/blob/master/CONTRIBUTING.md) Want to discuss something? Have an issue? Please make sure that you follow our [Code Of Conduct](https://github.com/kubescape/kubescape/blob/master/CODE_OF_CONDUCT.md) .
Want to discuss something? Have an issue? [Want to contribute?](https://github.com/kubescape/kubescape/blob/master/CONTRIBUTING.md)
* Feel free to pick a task from the [issues](https://github.com/kubescape/kubescape/issues?q=is%3Aissue+is%3Aopen+label%3A%22open+for+contribution%22), [roadmap](docs/roadmap.md) or suggest a feature of your own. [Contact us](MAINTAINERS.md) directly for more information :)
* [Open an issue](https://github.com/kubescape/kubescape/issues/new/choose) , we are trying to respond within 48 hours
@@ -220,6 +225,8 @@ kubescape scan *.yaml
```
#### Scan Kubernetes manifest files from a git repository
```
kubescape scan https://github.com/kubescape/kubescape
```
@@ -259,7 +266,7 @@ kubescape scan --format prometheus
kubescape scan --format html --output results.html
```
#### Scan with exceptions, objects with exceptions will be presented as `exclude` and not `fail`
#### Scan with exceptions. Objects with exceptions will be presented as `exclude` and not `fail`
[Full documentation](examples/exceptions/README.md)
```
kubescape scan --exceptions examples/exceptions/exclude-kube-namespaces.json
@@ -271,13 +278,13 @@ kubescape scan </path/to/directory>
```
> Kubescape will load the default value file
#### Scan Kustomize Directory
#### Scan a Kustomize Directory
```
kubescape scan </path/to/directory>
```
> Kubescape will generate Kubernetes Yaml Objects using 'Kustomize' file and scans them for security.
> Kubescape will generate Kubernetes YAML objects using a 'Kustomize' file and scan them for security.
### Offline/Air-gaped Environment Support
### Offline/Air-gapped Environment Support
[Video tutorial](https://youtu.be/IGXL9s37smM)
@@ -321,7 +328,7 @@ kubescape scan framework nsa --use-from /path/nsa.json
![Visual Studio Marketplace Downloads](https://img.shields.io/visual-studio-marketplace/d/kubescape.kubescape?label=VScode) ![Open VSX](https://img.shields.io/open-vsx/dt/kubescape/kubescape?label=openVSX&color=yellowgreen)
Scan the YAML files while writing them using the [vs code extension](https://github.com/armosec/vscode-kubescape/blob/master/README.md)
Scan the YAML files while writing them using the [VS Code extension](https://github.com/armosec/vscode-kubescape/blob/master/README.md)
## Lens Extension
@@ -403,15 +410,15 @@ View Kubescape scan results directly in [Lens IDE](https://k8slens.dev/) using k
<details>
<summary>Instructions to use the playground</summary>
* Apply changes you wish to make to the kubescape directory using text editors like `Vim`.
* Apply changes you wish to make to the Kubescape directory using text editors like `Vim`.
* [Build on Linux](https://github.com/kubescape/kubescape#build-on-linuxmacos)
* Now, you can use Kubescape just like a normal user. Instead of using `kubescape`, use `./kubescape`. (Make sure you are inside kubescape directory because the command will execute the binary named `kubescape` in `kubescape directory`)
* Now, you can use Kubescape like a regular user. Instead of using `kubescape`, use `./kubescape`. Make sure you are in the Kubescape directory because the command will execute the binary named `kubescape` in the `kubescape` directory.
</details>
## VS code configuration samples
## VS Code configuration samples
You can use the sample files below to setup your VS code environment for building and debugging purposes.
You can use the sample files below to setup your VS Code environment for building and debugging purposes.
<details><summary>.vscode/settings.json</summary>
@@ -458,11 +465,11 @@ You can use the sample files below to setup your VS code environment for buildin
## Technology
Kubescape is based on the [OPA engine](https://github.com/open-policy-agent/opa) and ARMO's posture controls.
The tools retrieve Kubernetes objects from the API server and run a set of [rego's snippets](https://www.openpolicyagent.org/docs/latest/policy-language/) developed by [ARMO](https://www.armosec.io?utm_source=github&utm_medium=repository).
The tools retrieve Kubernetes objects from the API server and runs a set of [Rego snippets](https://www.openpolicyagent.org/docs/latest/policy-language/) developed by [ARMO](https://www.armosec.io?utm_source=github&utm_medium=repository).
The results by default are printed in a pretty "console friendly" manner, but they can be retrieved in JSON format for further processing.
The results by default are printed in a "console friendly" manner, but they can be retrieved in JSON format for further processing.
Kubescape is an open source project, we welcome your feedback and ideas for improvement. We're also aiming to collaborate with the Kubernetes community to help make the tests more robust and complete as Kubernetes develops.
Kubescape is an open source project, we welcome your feedback and ideas for improvement. We are part of the Kubernetes community and aim to make the tests more robust and complete as Kubernetes develops.
## Thanks to all the contributors ❤️
<a href = "https://github.com/kubescape/kubescape/graphs/contributors">

View File

@@ -57,7 +57,7 @@ def main():
if client_name:
ldflags += " -X {}={}".format(client_var, client_name)
build_command = ["go", "build", "-tags=static", "-o", ks_file, "-ldflags" ,ldflags]
build_command = ["go", "build", "-buildmode=pie", "-tags=static", "-o", ks_file, "-ldflags" ,ldflags]
print("Building kubescape and saving here: {}".format(ks_file))
print("Build command: {}".format(" ".join(build_command)))

View File

@@ -1,4 +1,4 @@
FROM golang:1.18-alpine as builder
FROM golang:1.19-alpine as builder
ARG image_version
ARG client
@@ -12,7 +12,7 @@ ENV CGO_ENABLED=1
# Install required python/pip
ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache python3 git openssl-dev musl-dev gcc make cmake pkgconfig && ln -sf python3 /usr/bin/python
RUN apk add --update --no-cache python3 gcc make git libc-dev binutils-gold cmake pkgconfig && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools

View File

@@ -9,11 +9,11 @@ import (
var completionCmdExamples = `
# Enable BASH shell autocompletion
$ source <(kubescape completion bash)
# Enable BASH shell autocompletion
$ source <(kubescape completion bash)
$ echo 'source <(kubescape completion bash)' >> ~/.bashrc
# Enable ZSH shell autocompletion
# Enable ZSH shell autocompletion
$ source <(kubectl completion zsh)
$ echo 'source <(kubectl completion zsh)' >> "${fpath[1]}/_kubectl"
@@ -27,7 +27,7 @@ func GetCompletionCmd() *cobra.Command {
Example: completionCmdExamples,
DisableFlagsInUseLine: true,
ValidArgs: []string{"bash", "zsh", "fish", "powershell"},
Args: cobra.ExactValidArgs(1),
Args: cobra.MatchAll(cobra.ExactArgs(1), cobra.OnlyValidArgs),
Run: func(cmd *cobra.Command, args []string) {
switch strings.ToLower(args[0]) {
case "bash":

View File

@@ -24,8 +24,8 @@ var (
# Download the NSA framework. Run 'kubescape list frameworks' for all frameworks names
kubescape download framework nsa
# Download the "Allowed hostPath" control. Run 'kubescape list controls' for all controls names
kubescape download control "Allowed hostPath"
# Download the "C-0001" control. Run 'kubescape list controls --id' for all controls ids
kubescape download control "C-0001"
# Download the "C-0001" control. Run 'kubescape list controls --id' for all controls ids
kubescape download control C-0001
@@ -36,6 +36,8 @@ var (
# Download the configured controls-inputs
kubescape download controls-inputs
# Download the attack tracks
kubescape download attack-tracks
`
)
@@ -68,7 +70,9 @@ func GeDownloadCmd(ks meta.IKubescape) *cobra.Command {
}
downloadInfo.Target = args[0]
if len(args) >= 2 {
downloadInfo.Name = args[1]
downloadInfo.Identifier = args[1]
}
if err := ks.Download(&downloadInfo); err != nil {
logger.L().Fatal(err.Error())

View File

@@ -1,45 +0,0 @@
package fix
import (
"errors"
"github.com/kubescape/kubescape/v2/core/meta"
metav1 "github.com/kubescape/kubescape/v2/core/meta/datastructures/v1"
"github.com/spf13/cobra"
)
var fixCmdExamples = `
Fix command is for fixing kubernetes manifest files based on a scan command output.
Use with caution, this command will change your files in-place.
# Fix kubernetes YAML manifest files based on a scan command output (output.json)
1) kubescape scan --format json --format-version v2 --output output.json
2) kubescape fix output.json
`
func GetFixCmd(ks meta.IKubescape) *cobra.Command {
var fixInfo metav1.FixInfo
fixCmd := &cobra.Command{
Use: "fix <report output file>",
Short: "Fix misconfiguration in files",
Long: ``,
Example: fixCmdExamples,
RunE: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return errors.New("report output file is required")
}
fixInfo.ReportFile = args[0]
return ks.Fix(&fixInfo)
},
}
fixCmd.PersistentFlags().BoolVar(&fixInfo.NoConfirm, "no-confirm", false, "No confirmation will be given to the user before applying the fix (default false)")
fixCmd.PersistentFlags().BoolVar(&fixInfo.DryRun, "dry-run", false, "No changes will be applied (default false)")
fixCmd.PersistentFlags().BoolVar(&fixInfo.SkipUserValues, "skip-user-values", true, "Changes which involve user-defined values will be skipped")
return fixCmd
}

View File

@@ -20,11 +20,8 @@ var (
# List all supported frameworks names
kubescape list frameworks --account <account id>
# List all supported controls names
# List all supported controls names with ids
kubescape list controls
# List all supported controls ids
kubescape list controls --id
Control documentation:
https://hub.armosec.io/docs/controls
@@ -67,8 +64,8 @@ func GetListCmd(ks meta.IKubescape) *cobra.Command {
listCmd.PersistentFlags().StringVarP(&listPolicies.Credentials.Account, "account", "", "", "Kubescape SaaS account ID. Default will load account ID from cache")
listCmd.PersistentFlags().StringVarP(&listPolicies.Credentials.ClientID, "client-id", "", "", "Kubescape SaaS client ID. Default will load client ID from cache, read more - https://hub.armosec.io/docs/authentication")
listCmd.PersistentFlags().StringVarP(&listPolicies.Credentials.SecretKey, "secret-key", "", "", "Kubescape SaaS secret key. Default will load secret key from cache, read more - https://hub.armosec.io/docs/authentication")
listCmd.PersistentFlags().StringVar(&listPolicies.Format, "format", "pretty-print", "output format. supported: 'pretty-printer'/'json'")
listCmd.PersistentFlags().BoolVarP(&listPolicies.ListIDs, "id", "", false, "List control ID's instead of controls names")
listCmd.PersistentFlags().StringVar(&listPolicies.Format, "format", "pretty-print", "output format. supported: 'pretty-print'/'json'")
listCmd.PersistentFlags().MarkDeprecated("id", "Control ID's are included in list outpus")
return listCmd
}

View File

@@ -10,7 +10,6 @@ import (
"github.com/kubescape/kubescape/v2/cmd/config"
"github.com/kubescape/kubescape/v2/cmd/delete"
"github.com/kubescape/kubescape/v2/cmd/download"
"github.com/kubescape/kubescape/v2/cmd/fix"
"github.com/kubescape/kubescape/v2/cmd/list"
"github.com/kubescape/kubescape/v2/cmd/scan"
"github.com/kubescape/kubescape/v2/cmd/submit"
@@ -28,7 +27,7 @@ var rootInfo cautils.RootInfo
var ksExamples = `
# Scan command
kubescape scan --submit
kubescape scan
# List supported frameworks
kubescape list frameworks
@@ -79,7 +78,6 @@ func getRootCmd(ks meta.IKubescape) *cobra.Command {
rootCmd.AddCommand(version.GetVersionCmd())
rootCmd.AddCommand(config.GetConfigCmd(ks))
rootCmd.AddCommand(update.GetUpdateCmd())
rootCmd.AddCommand(fix.GetFixCmd(ks))
return rootCmd
}

View File

@@ -23,7 +23,7 @@ var (
kubescape scan control "privileged container"
# Scan list of controls separated with a comma
kubescape scan control "privileged container","allowed hostpath"
kubescape scan control "privileged container","HostPath mount"
# Scan list of controls using the control ID separated with a comma
kubescape scan control C-0058,C-0057
@@ -61,7 +61,7 @@ func getControlCmd(ks meta.IKubescape, scanInfo *cautils.ScanInfo) *cobra.Comman
if err := validateFrameworkScanInfo(scanInfo); err != nil {
return err
}
// flagValidationControl(scanInfo)
scanInfo.PolicyIdentifier = []cautils.PolicyIdentifier{}
@@ -109,7 +109,7 @@ func getControlCmd(ks meta.IKubescape, scanInfo *cautils.ScanInfo) *cobra.Comman
if results.GetRiskScore() > float32(scanInfo.FailThreshold) {
logger.L().Fatal("scan risk-score is above permitted threshold", helpers.String("risk-score", fmt.Sprintf("%.2f", results.GetRiskScore())), helpers.String("fail-threshold", fmt.Sprintf("%.2f", scanInfo.FailThreshold)))
}
enforceSeverityThresholds(&results.GetResults().SummaryDetails.SeverityCounters, scanInfo, terminateOnExceedingSeverity)
enforceSeverityThresholds(results.GetResults().SummaryDetails.GetResourcesSeverityCounters(), scanInfo, terminateOnExceedingSeverity)
return nil
},
@@ -120,6 +120,10 @@ func getControlCmd(ks meta.IKubescape, scanInfo *cautils.ScanInfo) *cobra.Comman
func validateControlScanInfo(scanInfo *cautils.ScanInfo) error {
severity := scanInfo.FailThresholdSeverity
if scanInfo.Submit && scanInfo.OmitRawResources {
return fmt.Errorf("you can use `omit-raw-resources` or `submit`, but not both")
}
if err := validateSeverity(severity); severity != "" && err != nil {
return err
}

View File

@@ -16,14 +16,13 @@ import (
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/meta"
"github.com/enescakir/emoji"
"github.com/spf13/cobra"
)
var (
frameworkExample = `
# Scan all frameworks and submit the results
kubescape scan framework all --submit
# Scan all frameworks
kubescape scan framework all
# Scan the NSA framework
kubescape scan framework nsa
@@ -35,7 +34,7 @@ var (
kubescape scan framework all
# Scan kubernetes YAML manifest files (single file or glob)
kubescape scan framework nsa *.yaml
kubescape scan framework nsa .
Run 'kubescape list frameworks' for the list of supported frameworks
`
@@ -113,13 +112,13 @@ func getFrameworkCmd(ks meta.IKubescape, scanInfo *cautils.ScanInfo) *cobra.Comm
logger.L().Fatal(err.Error())
}
if !scanInfo.VerboseMode {
cautils.SimpleDisplay(os.Stderr, "%s Run with '--verbose'/'-v' flag for detailed resources view\n\n", emoji.Detective)
cautils.SimpleDisplay(os.Stderr, "Run with '--verbose'/'-v' flag for detailed resources view\n\n")
}
if results.GetRiskScore() > float32(scanInfo.FailThreshold) {
logger.L().Fatal("scan risk-score is above permitted threshold", helpers.String("risk-score", fmt.Sprintf("%.2f", results.GetRiskScore())), helpers.String("fail-threshold", fmt.Sprintf("%.2f", scanInfo.FailThreshold)))
}
enforceSeverityThresholds(&results.GetData().Report.SummaryDetails.SeverityCounters, scanInfo, terminateOnExceedingSeverity)
enforceSeverityThresholds(results.GetData().Report.SummaryDetails.GetResourcesSeverityCounters(), scanInfo, terminateOnExceedingSeverity)
return nil
},
}
@@ -136,10 +135,10 @@ func countersExceedSeverityThreshold(severityCounters reportsummary.ISeverityCou
SeverityName string
GetFailedResources func() int
}{
{reporthandlingapis.SeverityLowString, severityCounters.NumberOfResourcesWithLowSeverity},
{reporthandlingapis.SeverityMediumString, severityCounters.NumberOfResourcesWithMediumSeverity},
{reporthandlingapis.SeverityHighString, severityCounters.NumberOfResourcesWithHighSeverity},
{reporthandlingapis.SeverityCriticalString, severityCounters.NumberOfResourcesWithCriticalSeverity},
{reporthandlingapis.SeverityLowString, severityCounters.NumberOfLowSeverity},
{reporthandlingapis.SeverityMediumString, severityCounters.NumberOfMediumSeverity},
{reporthandlingapis.SeverityHighString, severityCounters.NumberOfHighSeverity},
{reporthandlingapis.SeverityCriticalString, severityCounters.NumberOfCriticalSeverity},
}
targetSeverityIdx := 0
@@ -201,7 +200,9 @@ func validateFrameworkScanInfo(scanInfo *cautils.ScanInfo) error {
if 100 < scanInfo.FailThreshold || 0 > scanInfo.FailThreshold {
return fmt.Errorf("bad argument: out of range threshold")
}
if scanInfo.Submit && scanInfo.OmitRawResources {
return fmt.Errorf("you can use `omit-raw-resources` or `submit`, but not both")
}
severity := scanInfo.FailThresholdSeverity
if err := validateSeverity(severity); severity != "" && err != nil {
return err

View File
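
A minimal sketch of how the renamed counters feed the threshold check above; the call shape and return values are assumed from the TestExceedsSeverity table further down in this diff:

    // Hypothetical values, mirroring the tests: one failed High-severity
    // resource exceeds a "medium" fail threshold.
    counters := &reportsummary.SeverityCounters{HighSeverityCounter: 1}
    scanInfo := &cautils.ScanInfo{FailThresholdSeverity: "medium"}
    exceeds, err := countersExceedSeverityThreshold(counters, scanInfo)
    // exceeds == true, err == nil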

@@ -17,10 +17,10 @@ var scanCmdExamples = `
kubescape scan --enable-host-scan --verbose
# Scan kubernetes YAML manifest files
kubescape scan *.yaml
kubescape scan .
# Scan and save the results in the JSON format
kubescape scan --format json --output results.json
kubescape scan --format json --output results.json --format-version=v2
# Display all resources
kubescape scan --verbose
@@ -58,6 +58,7 @@ func GetScanCommand(ks meta.IKubescape) *cobra.Command {
},
PreRun: func(cmd *cobra.Command, args []string) {
k8sinterface.SetClusterContextName(scanInfo.KubeContext)
},
PostRun: func(cmd *cobra.Command, args []string) {
// TODO - revert context
@@ -65,6 +66,7 @@ func GetScanCommand(ks meta.IKubescape) *cobra.Command {
}
scanCmd.PersistentFlags().StringVarP(&scanInfo.Credentials.Account, "account", "", "", "Kubescape SaaS account ID. Default will load account ID from cache")
// scanCmd.PersistentFlags().BoolVar(&scanInfo.CreateAccount, "create-account", false, "Create a Kubescape SaaS account if the account ID is not found in cache. After creating the account, the account ID will be saved in cache. In addition, the scanning results will be uploaded to the Kubescape SaaS")
scanCmd.PersistentFlags().StringVarP(&scanInfo.Credentials.ClientID, "client-id", "", "", "Kubescape SaaS client ID. Default will load client ID from cache, read more - https://hub.armosec.io/docs/authentication")
scanCmd.PersistentFlags().StringVarP(&scanInfo.Credentials.SecretKey, "secret-key", "", "", "Kubescape SaaS secret key. Default will load secret key from cache, read more - https://hub.armosec.io/docs/authentication")
scanCmd.PersistentFlags().StringVarP(&scanInfo.KubeContext, "kube-context", "", "", "Kube context. Default will use the current-context")
@@ -76,7 +78,7 @@ func GetScanCommand(ks meta.IKubescape) *cobra.Command {
scanCmd.PersistentFlags().Float32VarP(&scanInfo.FailThreshold, "fail-threshold", "t", 100, "Failure threshold is the percent above which the command fails and returns exit code 1")
scanCmd.PersistentFlags().StringVar(&scanInfo.FailThresholdSeverity, "severity-threshold", "", "Severity threshold is the severity of failed controls at which the command fails and returns exit code 1")
scanCmd.PersistentFlags().StringVarP(&scanInfo.Format, "format", "f", "pretty-printer", `Output format. Supported formats: "pretty-printer", "json", "junit", "prometheus", "pdf", "html", "sarif"`)
scanCmd.PersistentFlags().StringVarP(&scanInfo.Format, "format", "f", "", `Output file format. Supported formats: "pretty-printer", "json", "junit", "prometheus", "pdf", "html", "sarif"`)
scanCmd.PersistentFlags().StringVar(&scanInfo.IncludeNamespaces, "include-namespaces", "", "scan specific namespaces. e.g: --include-namespaces ns-a,ns-b")
scanCmd.PersistentFlags().BoolVarP(&scanInfo.Local, "keep-local", "", false, "If you do not want your Kubescape results reported to configured backend.")
scanCmd.PersistentFlags().StringVarP(&scanInfo.Output, "output", "o", "", "Output file. Print output to file and not stdout")
@@ -88,11 +90,15 @@ func GetScanCommand(ks meta.IKubescape) *cobra.Command {
scanCmd.PersistentFlags().StringVar(&scanInfo.FormatVersion, "format-version", "v1", "Output object can be different between versions, this is for maintaining backward and forward compatibility. Supported:'v1'/'v2'")
scanCmd.PersistentFlags().StringVar(&scanInfo.CustomClusterName, "cluster-name", "", "Set the custom name of the cluster. Not same as the kube-context flag")
scanCmd.PersistentFlags().BoolVarP(&scanInfo.Submit, "submit", "", false, "Submit the scan results to Kubescape SaaS where you can see the results in a user-friendly UI, choose your preferred compliance framework, check risk results history and trends, manage exceptions, get remediation recommendations and much more. By default the results are not submitted")
scanCmd.PersistentFlags().BoolVarP(&scanInfo.OmitRawResources, "omit-raw-resources", "", false, "Omit raw resources from the output. By default the raw resources are included in the output")
scanCmd.PersistentFlags().BoolVarP(&scanInfo.PrintAttackTree, "print-attack-tree", "", false, "Print attack tree")
scanCmd.PersistentFlags().MarkDeprecated("silent", "use '--logger' flag instead. Flag will be removed at 1.May.2022")
// hidden flags
scanCmd.PersistentFlags().MarkHidden("host-scan-yaml") // this flag should be used very cautiously. We prefer users will not use it at all unless the DaemonSet can not run pods on the nodes
scanCmd.PersistentFlags().MarkHidden("omit-raw-resources")
scanCmd.PersistentFlags().MarkHidden("print-attack-tree")
// Retrieve --kubeconfig flag from https://github.com/kubernetes/kubectl/blob/master/pkg/cmd/cmd.go
scanCmd.PersistentFlags().AddGoFlag(flag.Lookup("kubeconfig"))

View File
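
With --format now defaulting to empty and ScanInfo.Formats() (later in this diff) splitting the value on commas, a single flag value can request several output printers; a minimal sketch, assuming the flag is bound to scanInfo.Format as above:

    // Hypothetical: the user passed --format json,sarif
    scanInfo := &cautils.ScanInfo{Format: "json,sarif"}
    for _, format := range scanInfo.Formats() { // ["json", "sarif"]
        fmt.Println(format) // getInterfaces builds one output printer per entry
    }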

@@ -24,91 +24,91 @@ func TestExceedsSeverity(t *testing.T) {
{
Description: "Critical failed resource should exceed Critical threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "critical"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithCriticalSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{CriticalSeverityCounter: 1},
Want: true,
},
{
Description: "Critical failed resource should exceed Critical threshold set as constant",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: apis.SeverityCriticalString},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithCriticalSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{CriticalSeverityCounter: 1},
Want: true,
},
{
Description: "High failed resource should not exceed Critical threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "critical"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithHighSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{HighSeverityCounter: 1},
Want: false,
},
{
Description: "Critical failed resource exceeds High threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "high"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithCriticalSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{CriticalSeverityCounter: 1},
Want: true,
},
{
Description: "High failed resource exceeds High threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "high"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithHighSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{HighSeverityCounter: 1},
Want: true,
},
{
Description: "Medium failed resource does not exceed High threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "high"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithMediumSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{MediumSeverityCounter: 1},
Want: false,
},
{
Description: "Critical failed resource exceeds Medium threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "medium"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithCriticalSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{CriticalSeverityCounter: 1},
Want: true,
},
{
Description: "High failed resource exceeds Medium threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "medium"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithHighSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{HighSeverityCounter: 1},
Want: true,
},
{
Description: "Medium failed resource exceeds Medium threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "medium"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithMediumSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{MediumSeverityCounter: 1},
Want: true,
},
{
Description: "Low failed resource does not exceed Medium threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "medium"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithLowSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{LowSeverityCounter: 1},
Want: false,
},
{
Description: "Critical failed resource exceeds Low threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "low"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithCriticalSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{CriticalSeverityCounter: 1},
Want: true,
},
{
Description: "High failed resource exceeds Low threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "low"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithHighSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{HighSeverityCounter: 1},
Want: true,
},
{
Description: "Medium failed resource exceeds Low threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "low"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithMediumSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{MediumSeverityCounter: 1},
Want: true,
},
{
Description: "Low failed resource exceeds Low threshold",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "low"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithLowSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{LowSeverityCounter: 1},
Want: true,
},
{
Description: "Unknown severity returns an error",
ScanInfo: &cautils.ScanInfo{FailThresholdSeverity: "unknown"},
SeverityCounters: &reportsummary.SeverityCounters{ResourcesWithLowSeverityCounter: 1},
SeverityCounters: &reportsummary.SeverityCounters{LowSeverityCounter: 1},
Want: false,
Error: ErrUnknownSeverity,
},
@@ -139,7 +139,7 @@ func Test_enforceSeverityThresholds(t *testing.T) {
}{
{
"Exceeding Critical severity counter should call the terminating function",
&reportsummary.SeverityCounters{ResourcesWithCriticalSeverityCounter: 1},
&reportsummary.SeverityCounters{CriticalSeverityCounter: 1},
&cautils.ScanInfo{FailThresholdSeverity: apis.SeverityCriticalString},
true,
},

View File

@@ -31,10 +31,11 @@ var (
// getRBACCmd represents the RBAC command
func getRBACCmd(ks meta.IKubescape, submitInfo *v1.Submit) *cobra.Command {
return &cobra.Command{
Use: "rbac",
Example: rbacExamples,
Short: "Submit cluster's Role-Based Access Control(RBAC)",
Long: ``,
Use: "rbac",
Deprecated: "This command is deprecated and will not be supported after 1/Jan/2023. Please use the 'scan' command instead.",
Example: rbacExamples,
Short: "Submit cluster's Role-Based Access Control(RBAC)",
Long: ``,
RunE: func(cmd *cobra.Command, args []string) error {
if err := flagValidationSubmit(submitInfo); err != nil {

View File

@@ -7,16 +7,21 @@ import (
)
var submitCmdExamples = `
# Submit Kubescape scan results file
kubescape submit results
# Submit exceptions file to Kubescape SaaS
kubescape submit exceptions
`
func GetSubmitCmd(ks meta.IKubescape) *cobra.Command {
var submitInfo metav1.Submit
submitCmd := &cobra.Command{
Use: "submit <command>",
Short: "Submit an object to the Kubescape SaaS version",
Long: ``,
Use: "submit <command>",
Short: "Submit an object to the Kubescape SaaS version",
Long: ``,
Example: submitCmdExamples,
Run: func(cmd *cobra.Command, args []string) {
},
}

View File

@@ -0,0 +1,12 @@
package cautils
import (
"fmt"
"strings"
)
func GetControlLink(controlID string) string {
// For CIS Controls, cis-1.1.3 will be transformed to cis-1-1-3 in documentation link.
docLinkID := strings.ReplaceAll(controlID, ".", "-")
return fmt.Sprintf("https://hub.armosec.io/docs/%s", strings.ToLower(docLinkID))
}

View File
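
A short usage sketch of the new helper; the control IDs are illustrative:

    // CIS-style IDs have their dots replaced so the docs URL stays valid.
    cautils.GetControlLink("cis-1.1.3") // "https://hub.armosec.io/docs/cis-1-1-3"
    cautils.GetControlLink("C-0006")    // "https://hub.armosec.io/docs/c-0006"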

@@ -470,10 +470,7 @@ func (c *ClusterConfig) updateConfigMap() error {
}
func updateConfigFile(configObj *ConfigObj) error {
if err := os.WriteFile(ConfigFileFullPath(), configObj.Config(), 0664); err != nil {
return err
}
return nil
return os.WriteFile(ConfigFileFullPath(), configObj.Config(), 0664) //nolint:gosec
}
func (c *ClusterConfig) updateConfigData(configMap *corev1.ConfigMap) {

View File

@@ -5,6 +5,7 @@ import (
"github.com/kubescape/k8s-interface/workloadinterface"
"github.com/kubescape/opa-utils/reporthandling"
apis "github.com/kubescape/opa-utils/reporthandling/apis"
"github.com/kubescape/opa-utils/reporthandling/attacktrack/v1alpha1"
"github.com/kubescape/opa-utils/reporthandling/results/v1/prioritization"
"github.com/kubescape/opa-utils/reporthandling/results/v1/resourcesresults"
reporthandlingv2 "github.com/kubescape/opa-utils/reporthandling/v2"
@@ -18,18 +19,21 @@ type OPASessionObj struct {
K8SResources *K8SResources // input k8s objects
ArmoResource *KSResources // input ARMO objects
AllPolicies *Policies // list of all frameworks
Policies []reporthandling.Framework // list of frameworks to scan
AllResources map[string]workloadinterface.IMetadata // all scanned resources, map[<resource ID>]<resource>
ResourcesResult map[string]resourcesresults.Result // resources scan results, map[<resource ID>]<resource result>
ResourceSource map[string]reporthandling.Source // resources sources, map[<resource ID>]<resource result>
ResourcesPrioritized map[string]prioritization.PrioritizedResource // resources prioritization information, map[<resource ID>]<prioritized resource>
Report *reporthandlingv2.PostureReport // scan results v2 - Remove
Exceptions []armotypes.PostureExceptionPolicy // list of exceptions to apply on scan results
RegoInputData RegoInputData // input passed to rego for scanning. map[<control name>][<input arguments>]
ResourceAttackTracks map[string]v1alpha1.IAttackTrack // resources attack tracks, map[<resource ID>]<attack track>
AttackTracks map[string]v1alpha1.IAttackTrack
Report *reporthandlingv2.PostureReport // scan results v2 - Remove
RegoInputData RegoInputData // input passed to rego for scanning. map[<control name>][<input arguments>]
Metadata *reporthandlingv2.Metadata
InfoMap map[string]apis.StatusInfo // Map errors of resources to StatusInfo
ResourceToControlsMap map[string][]string // map[<apigroup/apiversion/resource>] = [<control_IDs>]
SessionID string // SessionID
InfoMap map[string]apis.StatusInfo // Map errors of resources to StatusInfo
ResourceToControlsMap map[string][]string // map[<apigroup/apiversion/resource>] = [<control_IDs>]
SessionID string // SessionID
Policies []reporthandling.Framework // list of frameworks to scan
Exceptions []armotypes.PostureExceptionPolicy // list of exceptions to apply on scan results
OmitRawResources bool // omit raw resources from output
}
func NewOPASessionObj(frameworks []reporthandling.Framework, k8sResources *K8SResources, scanInfo *ScanInfo) *OPASessionObj {
@@ -45,6 +49,7 @@ func NewOPASessionObj(frameworks []reporthandling.Framework, k8sResources *K8SRe
ResourceSource: make(map[string]reporthandling.Source),
SessionID: scanInfo.ScanID,
Metadata: scanInfoToScanMetadata(scanInfo),
OmitRawResources: scanInfo.OmitRawResources,
}
}
@@ -94,6 +99,7 @@ type Exception struct {
type RegoInputData struct {
PostureControlInputs map[string][]string `json:"postureControlInputs"`
DataControlInputs map[string]string `json:"dataControlInputs"`
// ClusterName string `json:"clusterName"`
// K8sConfig RegoK8sConfig `json:"k8sconfig"`
}

View File

@@ -1,6 +1,7 @@
package getter
import (
"fmt"
"strings"
"github.com/armosec/armoapi-go/armotypes"
@@ -24,11 +25,11 @@ func NewDownloadReleasedPolicy() *DownloadReleasedPolicy {
}
}
func (drp *DownloadReleasedPolicy) GetControl(policyName string) (*reporthandling.Control, error) {
func (drp *DownloadReleasedPolicy) GetControl(ID string) (*reporthandling.Control, error) {
var control *reporthandling.Control
var err error
control, err = drp.gs.GetOPAControl(policyName)
control, err = drp.gs.GetOPAControlByID(ID)
if err != nil {
return nil, err
}
@@ -55,13 +56,29 @@ func (drp *DownloadReleasedPolicy) ListFrameworks() ([]string, error) {
return drp.gs.GetOPAFrameworksNamesList()
}
func (drp *DownloadReleasedPolicy) ListControls(listType ListType) ([]string, error) {
switch listType {
case ListID:
return drp.gs.GetOPAControlsIDsList()
default:
return drp.gs.GetOPAControlsNamesList()
func (drp *DownloadReleasedPolicy) ListControls() ([]string, error) {
controlsIDsList, err := drp.gs.GetOPAControlsIDsList()
if err != nil {
return []string{}, err
}
controlsNamesList, err := drp.gs.GetOPAControlsNamesList()
if err != nil {
return []string{}, err
}
controls, err := drp.gs.GetOPAControls()
if err != nil {
return []string{}, err
}
var controlsFrameworksList [][]string
for _, control := range controls {
controlsFrameworksList = append(controlsFrameworksList, control.FrameworkNames)
}
controlsNamesWithIDsandFrameworksList := make([]string, len(controlsIDsList))
// by design all slices have the same length
for i := range controlsIDsList {
controlsNamesWithIDsandFrameworksList[i] = fmt.Sprintf("%v|%v|%v", controlsIDsList[i], controlsNamesList[i], strings.Join(controlsFrameworksList[i], ","))
}
return controlsNamesWithIDsandFrameworksList, nil
}
func (drp *DownloadReleasedPolicy) GetControlsInputs(clusterName string) (map[string][]string, error) {

View File
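
Each entry returned by ListControls is now a single pipe-separated string; an illustrative value (the control name and framework list are hypothetical):

    // "<control ID>|<control name>|<framework names>"
    entry := fmt.Sprintf("%v|%v|%v", "C-0006", "Allowed hostPath", strings.Join([]string{"NSA", "MITRE"}, ","))
    // entry == "C-0006|Allowed hostPath|NSA,MITRE"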

@@ -6,19 +6,13 @@ import (
"github.com/kubescape/opa-utils/reporthandling/attacktrack/v1alpha1"
)
// supported listing
type ListType string
const ListID ListType = "id"
const ListName ListType = "name"
type IPolicyGetter interface {
GetFramework(name string) (*reporthandling.Framework, error)
GetFrameworks() ([]reporthandling.Framework, error)
GetControl(name string) (*reporthandling.Control, error)
GetControl(ID string) (*reporthandling.Control, error)
ListFrameworks() ([]string, error)
ListControls(ListType) ([]string, error)
ListControls() ([]string, error)
}
type IExceptionsGetter interface {

View File

@@ -21,18 +21,19 @@ func SaveInFile(policy interface{}, pathStr string) error {
if err != nil {
return err
}
err = os.WriteFile(pathStr, []byte(fmt.Sprintf("%v", string(encodedData))), 0644)
err = os.WriteFile(pathStr, encodedData, 0644) //nolint:gosec
if err != nil {
if os.IsNotExist(err) {
pathDir := path.Dir(pathStr)
if err := os.Mkdir(pathDir, 0744); err != nil {
// pathDir could contain subdirectories
if err := os.MkdirAll(pathDir, 0755); err != nil {
return err
}
} else {
return err
}
err = os.WriteFile(pathStr, []byte(fmt.Sprintf("%v", string(encodedData))), 0644)
err = os.WriteFile(pathStr, encodedData, 0644) //nolint:gosec
if err != nil {
return err
}

View File

@@ -4,7 +4,7 @@ import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"io"
"net/http"
"strings"
"time"
@@ -192,7 +192,7 @@ func (api *KSCloudAPI) GetFrameworks() ([]reporthandling.Framework, error) {
return frameworks, err
}
func (api *KSCloudAPI) GetControl(policyName string) (*reporthandling.Control, error) {
func (api *KSCloudAPI) GetControl(ID string) (*reporthandling.Control, error) {
return nil, fmt.Errorf("control api is not public")
}
@@ -306,7 +306,7 @@ func (api *KSCloudAPI) ListFrameworks() ([]string, error) {
return frameworkList, nil
}
func (api *KSCloudAPI) ListControls(l ListType) ([]string, error) {
func (api *KSCloudAPI) ListControls() ([]string, error) {
return nil, fmt.Errorf("control api is not public")
}
@@ -358,7 +358,7 @@ func (api *KSCloudAPI) Login() error {
return fmt.Errorf("error authenticating: %d", resp.StatusCode)
}
responseBody, err := ioutil.ReadAll(resp.Body)
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return err
}

View File

@@ -9,6 +9,7 @@ import (
"github.com/armosec/armoapi-go/armotypes"
"github.com/kubescape/opa-utils/reporthandling"
"github.com/kubescape/opa-utils/reporthandling/attacktrack/v1alpha1"
)
// =======================================================================================================================
@@ -35,11 +36,11 @@ func NewLoadPolicy(filePaths []string) *LoadPolicy {
}
}
// Return control from file
func (lp *LoadPolicy) GetControl(controlName string) (*reporthandling.Control, error) {
// GetControl returns a control from the policy file.
func (lp *LoadPolicy) GetControl(controlID string) (*reporthandling.Control, error) {
control := &reporthandling.Control{}
filePath := lp.filePath()
f, err := os.ReadFile(filePath)
if err != nil {
return nil, err
@@ -48,20 +49,26 @@ func (lp *LoadPolicy) GetControl(controlName string) (*reporthandling.Control, e
if err = json.Unmarshal(f, control); err != nil {
return control, err
}
if controlName != "" && !strings.EqualFold(controlName, control.Name) && !strings.EqualFold(controlName, control.ControlID) {
framework, err := lp.GetFramework(control.Name)
if err != nil {
return nil, fmt.Errorf("control from file not matching")
} else {
for _, ctrl := range framework.Controls {
if strings.EqualFold(ctrl.Name, controlName) || strings.EqualFold(ctrl.ControlID, controlName) {
control = &ctrl
break
}
}
if controlID == "" || strings.EqualFold(controlID, control.ControlID) {
return control, nil
}
framework, err := lp.GetFramework(control.Name)
if err != nil {
return nil, fmt.Errorf("control from file not matching")
}
for _, toPin := range framework.Controls {
ctrl := toPin
if strings.EqualFold(ctrl.ControlID, controlID) {
control = &ctrl
break
}
}
return control, err
return control, nil
}
func (lp *LoadPolicy) GetFramework(frameworkName string) (*reporthandling.Framework, error) {
@@ -109,7 +116,7 @@ func (lp *LoadPolicy) ListFrameworks() ([]string, error) {
return fwNames, nil
}
func (lp *LoadPolicy) ListControls(listType ListType) ([]string, error) {
func (lp *LoadPolicy) ListControls() ([]string, error) {
// TODO - Support
return []string{}, fmt.Errorf("loading controls list from file is not supported")
}
@@ -152,3 +159,18 @@ func (lp *LoadPolicy) filePath() string {
}
return ""
}
func (lp *LoadPolicy) GetAttackTracks() ([]v1alpha1.AttackTrack, error) {
attackTracks := []v1alpha1.AttackTrack{}
f, err := os.ReadFile(lp.filePath())
if err != nil {
return nil, err
}
if err := json.Unmarshal(f, &attackTracks); err != nil {
return nil, err
}
return attackTracks, nil
}

View File
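
The `ctrl := toPin` copy above is the usual fix for taking the address of a range variable; a minimal illustration with hypothetical values (before Go 1.22 the range variable is reused across iterations):

    items := []int{1, 2, 3}
    var bad, good []*int
    for _, toPin := range items {
        bad = append(bad, &toPin) // may all alias one variable: 3, 3, 3
        v := toPin                // pin a per-iteration copy, as GetControl does
        good = append(good, &v)   // distinct addresses: 1, 2, 3
    }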

@@ -0,0 +1,16 @@
package cautils
import (
"testing"
giturl "github.com/kubescape/go-git-url"
"github.com/stretchr/testify/require"
)
func TestEnsureRemoteParsed(t *testing.T) {
const remote = "git@gitlab.com:foobar/gitlab-tests/sample-project.git"
require.NotPanics(t, func() {
_, _ = giturl.NewGitURL(remote)
})
}

View File

@@ -3,7 +3,6 @@ package cautils
import (
_ "embed"
"encoding/json"
"io/ioutil"
"os"
"path/filepath"
"strings"
@@ -39,7 +38,7 @@ func (s *HelmChartTestSuite) SetupSuite() {
}
var obj interface{}
file, _ := ioutil.ReadFile(filepath.Join("testdata", "helm_expected_default_values.json"))
file, _ := os.ReadFile(filepath.Join("testdata", "helm_expected_default_values.json"))
_ = json.Unmarshal([]byte(file), &obj)
s.expectedDefaultValues = obj.(map[string]interface{})
}

View File

@@ -6,10 +6,10 @@ import (
"strings"
"time"
"github.com/armosec/go-git-url/apis"
gitv5 "github.com/go-git/go-git/v5"
configv5 "github.com/go-git/go-git/v5/config"
plumbingv5 "github.com/go-git/go-git/v5/plumbing"
"github.com/kubescape/go-git-url/apis"
git2go "github.com/libgit2/git2go/v33"
)

View File

@@ -27,7 +27,7 @@ func unzipFile(zipPath, destinationFolder string) (*zip.ReadCloser, error) {
return nil, err
}
for _, f := range archive.File {
filePath := filepath.Join(destinationFolder, f.Name)
filePath := filepath.Join(destinationFolder, f.Name) //nolint:gosec
if !strings.HasPrefix(filePath, filepath.Clean(destinationFolder)+string(os.PathSeparator)) {
return nil, fmt.Errorf("invalid file path")
}
@@ -50,7 +50,7 @@ func unzipFile(zipPath, destinationFolder string) (*zip.ReadCloser, error) {
return nil, err
}
if _, err := io.Copy(dstFile, fileInArchive); err != nil {
if _, err := io.Copy(dstFile, fileInArchive); err != nil { //nolint:gosec
return nil, err
}

View File
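
The prefix check above is the standard guard against "Zip Slip" path traversal; a short illustration with a hypothetical malicious archive entry:

    dest := "/tmp/out"
    filePath := filepath.Join(dest, "../../etc/passwd") // Join cleans this to "/etc/passwd"
    ok := strings.HasPrefix(filePath, filepath.Clean(dest)+string(os.PathSeparator))
    // ok == false, so the entry is rejected with "invalid file path"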

@@ -3,7 +3,6 @@ package cautils
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
@@ -11,7 +10,7 @@ import (
"github.com/armosec/armoapi-go/armotypes"
apisv1 "github.com/kubescape/opa-utils/httpserver/apis/v1"
giturl "github.com/armosec/go-git-url"
giturl "github.com/kubescape/go-git-url"
logger "github.com/kubescape/go-logger"
"github.com/kubescape/go-logger/helpers"
"github.com/kubescape/k8s-interface/k8sinterface"
@@ -40,7 +39,8 @@ const (
// ScanCluster string = "cluster"
// ScanLocalFiles string = "yaml"
localControlInputsFilename string = "controls-inputs.json"
localExceptionsFilename string = "exceptions.json"
LocalExceptionsFilename string = "exceptions.json"
LocalAttackTracksFilename string = "attack-tracks.json"
)
type BoolPtrFlag struct {
@@ -94,7 +94,7 @@ const (
)
type PolicyIdentifier struct {
Name string // policy name e.g. nsa,mitre,c-0012
Identifier string // policy Identifier e.g. c-0012 for control, nsa,mitre for frameworks
Kind apisv1.NotificationPolicyKind // policy kind e.g. Framework,Control,Rule
Designators armotypes.PortalDesignator
}
@@ -120,6 +120,7 @@ type ScanInfo struct {
FailThreshold float32 // Failure score threshold
FailThresholdSeverity string // Severity at and above which the command should fail
Submit bool // Submit results to Kubescape Cloud BE
CreateAccount bool // Create account in Kubescape Cloud BE if no account found in local cache
ScanID string // Report id of the current scan
HostSensorEnabled BoolPtrFlag // Deploy Kubescape K8s host scanner to collect data from certain controls
HostSensorYamlPath string // Path to hostsensor file
@@ -128,6 +129,8 @@ type ScanInfo struct {
KubeContext string // context name
FrameworkScan bool // false if scanning control
ScanAll bool // true if scan all frameworks
OmitRawResources bool // true if omit raw resources from the output
PrintAttackTree bool // true if print attack tree
}
type Getters struct {
@@ -139,7 +142,6 @@ type Getters struct {
func (scanInfo *ScanInfo) Init() {
scanInfo.setUseFrom()
scanInfo.setOutputFile()
scanInfo.setUseArtifactsFrom()
if scanInfo.ScanID == "" {
scanInfo.ScanID = uuid.NewString()
@@ -159,7 +161,7 @@ func (scanInfo *ScanInfo) setUseArtifactsFrom() {
scanInfo.UseArtifactsFrom = dir
}
// set frameworks files
files, err := ioutil.ReadDir(scanInfo.UseArtifactsFrom)
files, err := os.ReadDir(scanInfo.UseArtifactsFrom)
if err != nil {
logger.L().Fatal("failed to read files from directory", helpers.String("dir", scanInfo.UseArtifactsFrom), helpers.Error(err))
}
@@ -176,35 +178,24 @@ func (scanInfo *ScanInfo) setUseArtifactsFrom() {
// set config-inputs file
scanInfo.ControlsInputs = filepath.Join(scanInfo.UseArtifactsFrom, localControlInputsFilename)
// set exceptions
scanInfo.UseExceptions = filepath.Join(scanInfo.UseArtifactsFrom, localExceptionsFilename)
scanInfo.UseExceptions = filepath.Join(scanInfo.UseArtifactsFrom, LocalExceptionsFilename)
}
func (scanInfo *ScanInfo) setUseFrom() {
if scanInfo.UseDefault {
for _, policy := range scanInfo.PolicyIdentifier {
scanInfo.UseFrom = append(scanInfo.UseFrom, getter.GetDefaultPath(policy.Name+".json"))
scanInfo.UseFrom = append(scanInfo.UseFrom, getter.GetDefaultPath(policy.Identifier+".json"))
}
}
}
func (scanInfo *ScanInfo) setOutputFile() {
if scanInfo.Output == "" {
return
}
if scanInfo.Format == "json" {
if filepath.Ext(scanInfo.Output) != ".json" {
scanInfo.Output += ".json"
}
}
if scanInfo.Format == "junit" {
if filepath.Ext(scanInfo.Output) != ".xml" {
scanInfo.Output += ".xml"
}
}
if scanInfo.Format == "pdf" {
if filepath.Ext(scanInfo.Output) != ".pdf" {
scanInfo.Output += ".pdf"
}
// Formats returns a slice of output formats that have been requested for a given scan
func (scanInfo *ScanInfo) Formats() []string {
formatString := scanInfo.Format
if formatString != "" {
return strings.Split(scanInfo.Format, ",")
} else {
return []string{}
}
}
@@ -213,7 +204,7 @@ func (scanInfo *ScanInfo) SetPolicyIdentifiers(policies []string, kind apisv1.No
if !scanInfo.contains(policy) {
newPolicy := PolicyIdentifier{}
newPolicy.Kind = kind
newPolicy.Name = policy
newPolicy.Identifier = policy
scanInfo.PolicyIdentifier = append(scanInfo.PolicyIdentifier, newPolicy)
}
}
@@ -221,7 +212,7 @@ func (scanInfo *ScanInfo) SetPolicyIdentifiers(policies []string, kind apisv1.No
func (scanInfo *ScanInfo) contains(policyName string) bool {
for _, policy := range scanInfo.PolicyIdentifier {
if policy.Name == policyName {
if policy.Identifier == policyName {
return true
}
}
@@ -249,7 +240,7 @@ func scanInfoToScanMetadata(scanInfo *ScanInfo) *reporthandlingv2.Metadata {
}
// append frameworks
for _, policy := range scanInfo.PolicyIdentifier {
metadata.ScanMetadata.TargetNames = append(metadata.ScanMetadata.TargetNames, policy.Name)
metadata.ScanMetadata.TargetNames = append(metadata.ScanMetadata.TargetNames, policy.Identifier)
}
metadata.ScanMetadata.KubescapeVersion = BuildNumber

View File

@@ -43,3 +43,30 @@ func TestGetScanningContext(t *testing.T) {
assert.Equal(t, ContextCluster, GetScanningContext(""))
assert.Equal(t, ContextGitURL, GetScanningContext("https://github.com/kubescape/kubescape"))
}
func TestScanInfoFormats(t *testing.T) {
testCases := []struct {
Input string
Want []string
}{
{"", []string{}},
{"json", []string{"json"}},
{"pdf", []string{"pdf"}},
{"html", []string{"html"}},
{"sarif", []string{"sarif"}},
{"html,pdf,sarif", []string{"html", "pdf", "sarif"}},
{"pretty-printer,pdf,sarif", []string{"pretty-printer", "pdf", "sarif"}},
}
for _, tc := range testCases {
t.Run(tc.Input, func(t *testing.T) {
input := tc.Input
want := tc.Want
scanInfo := &ScanInfo{Format: input}
got := scanInfo.Formats()
assert.Equal(t, want, got)
})
}
}

View File

@@ -14,8 +14,9 @@ import (
"golang.org/x/mod/semver"
)
const SKIP_VERSION_CHECK_DEPRECATED = "KUBESCAPE_SKIP_UPDATE_CHECK"
const SKIP_VERSION_CHECK = "KS_SKIP_UPDATE_CHECK"
const SKIP_VERSION_CHECK_DEPRECATED_ENV = "KUBESCAPE_SKIP_UPDATE_CHECK"
const SKIP_VERSION_CHECK_ENV = "KS_SKIP_UPDATE_CHECK"
const CLIENT_ENV = "KS_CLIENT"
var BuildNumber string
var Client string
@@ -31,9 +32,14 @@ func NewIVersionCheckHandler() IVersionCheckHandler {
if BuildNumber == "" {
logger.L().Warning("unknown build number, this might affect your scan results. Please make sure you are updated to latest version")
}
if v, ok := os.LookupEnv(SKIP_VERSION_CHECK); ok && boolutils.StringToBool(v) {
if v, ok := os.LookupEnv(CLIENT_ENV); ok && v != "" {
Client = v
}
if v, ok := os.LookupEnv(SKIP_VERSION_CHECK_ENV); ok && boolutils.StringToBool(v) {
return NewVersionCheckHandlerMock()
} else if v, ok := os.LookupEnv(SKIP_VERSION_CHECK_DEPRECATED); ok && boolutils.StringToBool(v) {
} else if v, ok := os.LookupEnv(SKIP_VERSION_CHECK_DEPRECATED_ENV); ok && boolutils.StringToBool(v) {
return NewVersionCheckHandlerMock()
}
return NewVersionCheckHandler()

View File

@@ -19,6 +19,7 @@ var (
"KubeletInfo",
"KubeProxyInfo",
"ControlPlaneInfo",
"CloudProviderInfo",
}
CloudResources = []string{
"ClusterDescribe",

View File

@@ -6,19 +6,28 @@ import (
"path/filepath"
"strings"
"github.com/armosec/armoapi-go/armotypes"
logger "github.com/kubescape/go-logger"
"github.com/kubescape/go-logger/helpers"
"github.com/kubescape/kubescape/v2/core/cautils/getter"
metav1 "github.com/kubescape/kubescape/v2/core/meta/datastructures/v1"
)
const (
TargetControlsInputs = "controls-inputs"
TargetExceptions = "exceptions"
TargetControl = "control"
TargetFramework = "framework"
TargetArtifacts = "artifacts"
TargetAttackTracks = "attack-tracks"
)
var downloadFunc = map[string]func(*metav1.DownloadInfo) error{
"controls-inputs": downloadConfigInputs,
"exceptions": downloadExceptions,
"control": downloadControl,
"framework": downloadFramework,
"artifacts": downloadArtifacts,
TargetControlsInputs: downloadConfigInputs,
TargetExceptions: downloadExceptions,
TargetControl: downloadControl,
TargetFramework: downloadFramework,
TargetArtifacts: downloadArtifacts,
TargetAttackTracks: downloadAttackTracks,
}
func DownloadSupportCommands() []string {
@@ -70,6 +79,7 @@ func downloadArtifacts(downloadInfo *metav1.DownloadInfo) error {
"controls-inputs": downloadConfigInputs,
"exceptions": downloadExceptions,
"framework": downloadFramework,
"attack-tracks": downloadAttackTracks,
}
for artifact := range artifacts {
if err := downloadArtifact(&metav1.DownloadInfo{Target: artifact, Path: downloadInfo.Path, FileName: fmt.Sprintf("%s.json", artifact)}, artifacts); err != nil {
@@ -82,7 +92,7 @@ func downloadArtifacts(downloadInfo *metav1.DownloadInfo) error {
func downloadConfigInputs(downloadInfo *metav1.DownloadInfo) error {
tenant := getTenantConfig(&downloadInfo.Credentials, "", "", getKubernetesApi())
controlsInputsGetter := getConfigInputsGetter(downloadInfo.Name, tenant.GetAccountID(), nil)
controlsInputsGetter := getConfigInputsGetter(downloadInfo.Identifier, tenant.GetAccountID(), nil)
controlInputs, err := controlsInputsGetter.GetControlsInputs(tenant.GetContextName())
if err != nil {
return err
@@ -103,17 +113,14 @@ func downloadConfigInputs(downloadInfo *metav1.DownloadInfo) error {
}
func downloadExceptions(downloadInfo *metav1.DownloadInfo) error {
var err error
tenant := getTenantConfig(&downloadInfo.Credentials, "", "", getKubernetesApi())
exceptionsGetter := getExceptionsGetter("", tenant.GetAccountID(), nil)
exceptions := []armotypes.PostureExceptionPolicy{}
if tenant.GetAccountID() != "" {
exceptions, err = exceptionsGetter.GetExceptions(tenant.GetContextName())
if err != nil {
return err
}
exceptions, err := exceptionsGetter.GetExceptions(tenant.GetContextName())
if err != nil {
return err
}
if downloadInfo.FileName == "" {
downloadInfo.FileName = fmt.Sprintf("%s.json", downloadInfo.Target)
}
@@ -126,13 +133,37 @@ func downloadExceptions(downloadInfo *metav1.DownloadInfo) error {
return nil
}
func downloadAttackTracks(downloadInfo *metav1.DownloadInfo) error {
var err error
tenant := getTenantConfig(&downloadInfo.Credentials, "", "", getKubernetesApi())
attackTracksGetter := getAttackTracksGetter(tenant.GetAccountID(), nil)
attackTracks, err := attackTracksGetter.GetAttackTracks()
if err != nil {
return err
}
if downloadInfo.FileName == "" {
downloadInfo.FileName = fmt.Sprintf("%s.json", downloadInfo.Target)
}
// save in file
err = getter.SaveInFile(attackTracks, filepath.Join(downloadInfo.Path, downloadInfo.FileName))
if err != nil {
return err
}
logger.L().Success("Downloaded", helpers.String("attack tracks", downloadInfo.Target), helpers.String("path", filepath.Join(downloadInfo.Path, downloadInfo.FileName)))
return nil
}
func downloadFramework(downloadInfo *metav1.DownloadInfo) error {
tenant := getTenantConfig(&downloadInfo.Credentials, "", "", getKubernetesApi())
g := getPolicyGetter(nil, tenant.GetTenantEmail(), true, nil)
if downloadInfo.Name == "" {
if downloadInfo.Identifier == "" {
// if framework name not specified - download all frameworks
frameworks, err := g.GetFrameworks()
if err != nil {
@@ -149,9 +180,9 @@ func downloadFramework(downloadInfo *metav1.DownloadInfo) error {
// return fmt.Errorf("missing framework name")
} else {
if downloadInfo.FileName == "" {
downloadInfo.FileName = fmt.Sprintf("%s.json", downloadInfo.Name)
downloadInfo.FileName = fmt.Sprintf("%s.json", downloadInfo.Identifier)
}
framework, err := g.GetFramework(downloadInfo.Name)
framework, err := g.GetFramework(downloadInfo.Identifier)
if err != nil {
return err
}
@@ -174,25 +205,25 @@ func downloadControl(downloadInfo *metav1.DownloadInfo) error {
g := getPolicyGetter(nil, tenant.GetTenantEmail(), false, nil)
if downloadInfo.Name == "" {
if downloadInfo.Identifier == "" {
// TODO - support
return fmt.Errorf("missing control name")
return fmt.Errorf("missing control ID")
}
if downloadInfo.FileName == "" {
downloadInfo.FileName = fmt.Sprintf("%s.json", downloadInfo.Name)
downloadInfo.FileName = fmt.Sprintf("%s.json", downloadInfo.Identifier)
}
controls, err := g.GetControl(downloadInfo.Name)
controls, err := g.GetControl(downloadInfo.Identifier)
if err != nil {
return err
return fmt.Errorf("failed to download control id '%s', %s", downloadInfo.Identifier, err.Error())
}
if controls == nil {
return fmt.Errorf("failed to download control - received an empty objects")
return fmt.Errorf("failed to download control id '%s' - received an empty objects", downloadInfo.Identifier)
}
downloadTo := filepath.Join(downloadInfo.Path, downloadInfo.FileName)
err = getter.SaveInFile(controls, downloadTo)
if err != nil {
return err
}
logger.L().Success("Downloaded", helpers.String("artifact", downloadInfo.Target), helpers.String("name", downloadInfo.Name), helpers.String("path", downloadTo))
logger.L().Success("Downloaded", helpers.String("artifact", downloadInfo.Target), helpers.String("ID", downloadInfo.Identifier), helpers.String("path", downloadTo))
return nil
}

View File
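
A dispatch sketch for the new attack-tracks target, assuming the same map-driven flow used by the other targets above (the path is hypothetical):

    info := &metav1.DownloadInfo{Target: TargetAttackTracks, Path: "/tmp/kubescape"}
    if downloadArtifactFunc, ok := downloadFunc[info.Target]; ok {
        if err := downloadArtifactFunc(info); err != nil {
            logger.L().Error(err.Error())
        }
    }
    // FileName defaults to "attack-tracks.json" inside downloadAttackTracks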

@@ -1,72 +0,0 @@
package core
import (
"fmt"
"strings"
logger "github.com/kubescape/go-logger"
metav1 "github.com/kubescape/kubescape/v2/core/meta/datastructures/v1"
"github.com/kubescape/kubescape/v2/core/pkg/fixhandler"
)
const NoChangesApplied = "No changes were applied."
const NoResourcesToFix = "No issues to fix."
const ConfirmationQuestion = "Would you like to apply the changes to the files above? [y|n]: "
func (ks *Kubescape) Fix(fixInfo *metav1.FixInfo) error {
logger.L().Info("Reading report file...")
handler, err := fixhandler.NewFixHandler(fixInfo)
if err != nil {
return err
}
resourcesToFix := handler.PrepareResourcesToFix()
if len(resourcesToFix) == 0 {
logger.L().Info(NoResourcesToFix)
return nil
}
handler.PrintExpectedChanges(resourcesToFix)
if fixInfo.DryRun {
logger.L().Info(NoChangesApplied)
return nil
}
if !fixInfo.NoConfirm && !userConfirmed() {
logger.L().Info(NoChangesApplied)
return nil
}
updatedFilesCount, errors := handler.ApplyChanges(resourcesToFix)
logger.L().Info(fmt.Sprintf("Fixed resources in %d files.", updatedFilesCount))
if len(errors) > 0 {
for _, err := range errors {
logger.L().Error(err.Error())
}
return fmt.Errorf("Failed to fix some resources, check the logs for more details")
}
return nil
}
func userConfirmed() bool {
var input string
for {
fmt.Printf(ConfirmationQuestion)
if _, err := fmt.Scanln(&input); err != nil {
continue
}
input = strings.ToLower(input)
if input == "y" || input == "yes" {
return true
} else if input == "n" || input == "no" {
return false
}
}
}

View File

@@ -2,6 +2,7 @@ package core
import (
"fmt"
"os"
logger "github.com/kubescape/go-logger"
"github.com/kubescape/go-logger/helpers"
@@ -10,6 +11,8 @@ import (
"github.com/kubescape/kubescape/v2/core/cautils/getter"
"github.com/kubescape/kubescape/v2/core/pkg/hostsensorutils"
"github.com/kubescape/kubescape/v2/core/pkg/resourcehandler"
"github.com/kubescape/kubescape/v2/core/pkg/resultshandling/printer"
printerv2 "github.com/kubescape/kubescape/v2/core/pkg/resultshandling/printer/v2"
"github.com/kubescape/kubescape/v2/core/pkg/resultshandling/reporter"
reporterv2 "github.com/kubescape/kubescape/v2/core/pkg/resultshandling/reporter/v2"
@@ -45,8 +48,9 @@ func getExceptionsGetter(useExceptions string, accountID string, downloadRelease
if downloadReleasedPolicy == nil {
downloadReleasedPolicy = getter.NewDownloadReleasedPolicy()
}
if err := downloadReleasedPolicy.SetRegoObjects(); err != nil {
logger.L().Warning("failed to get exceptions from github release, this may affect the scanning results", helpers.Error(err))
if err := downloadReleasedPolicy.SetRegoObjects(); err != nil { // if failed to pull exceptions, fall back to cache
logger.L().Warning("failed to get exceptions from github release, loading exceptions from cache", helpers.Error(err))
return getter.NewLoadPolicy([]string{getter.GetDefaultPath(cautils.LocalExceptionsFilename)})
}
return downloadReleasedPolicy
@@ -98,7 +102,7 @@ func getHostSensorHandler(scanInfo *cautils.ScanInfo, k8s *k8sinterface.Kubernet
// we need to determined which controls needs host scanner
if scanInfo.HostSensorEnabled.Get() == nil && hasHostSensorControls {
scanInfo.HostSensorEnabled.SetBool(false) // default - do not run host scanner
logger.L().Warning("Kubernetes cluster nodes scanning is disabled. This is required to collect valuable data for certain controls. You can enable it using the --enable-host-scan flag")
logger.L().Warning("Kubernetes cluster nodes scanning is disabled. This is required to collect valuable data for certain controls. You can enable it using the --enable-host-scan flag")
}
if hostSensorVal := scanInfo.HostSensorEnabled.Get(); hostSensorVal != nil && *hostSensorVal {
hostSensorHandler, err := hostsensorutils.NewHostSensorHandler(k8s, scanInfo.HostSensorYamlPath)
@@ -121,18 +125,18 @@ func getFieldSelector(scanInfo *cautils.ScanInfo) resourcehandler.IFieldSelector
return &resourcehandler.EmptySelector{}
}
func policyIdentifierNames(pi []cautils.PolicyIdentifier) string {
policiesNames := ""
func policyIdentifierIdentities(pi []cautils.PolicyIdentifier) string {
policiesIdentities := ""
for i := range pi {
policiesNames += pi[i].Name
policiesIdentities += pi[i].Identifier
if i+1 < len(pi) {
policiesNames += ","
policiesIdentities += ","
}
}
if policiesNames == "" {
policiesNames = "all"
if policiesIdentities == "" {
policiesIdentities = "all"
}
return policiesNames
return policiesIdentities
}
// setSubmitBehavior - Setup the desired cluster behavior regarding submitting to the Kubescape Cloud BE
@@ -178,6 +182,10 @@ func setSubmitBehavior(scanInfo *cautils.ScanInfo, tenantConfig cautils.ITenantC
scanInfo.Submit = true
}
if scanInfo.CreateAccount {
scanInfo.Submit = true
}
}
// setPolicyGetter set the policy getter - local file/github release/Kubescape Cloud API
@@ -247,8 +255,19 @@ func getAttackTracksGetter(accountID string, downloadReleasedPolicy *getter.Down
if downloadReleasedPolicy == nil {
downloadReleasedPolicy = getter.NewDownloadReleasedPolicy()
}
if err := downloadReleasedPolicy.SetRegoObjects(); err != nil {
logger.L().Warning("failed to get attack tracks from github release, this may affect the scanning results", helpers.Error(err))
if err := downloadReleasedPolicy.SetRegoObjects(); err != nil { // if failed to pull attack tracks, fallback to cache
logger.L().Warning("failed to get attack tracks from github release, loading attack tracks from cache", helpers.Error(err))
return getter.NewLoadPolicy([]string{getter.GetDefaultPath(cautils.LocalAttackTracksFilename)})
}
return downloadReleasedPolicy
}
// getUIPrinter returns a printer that will be used to print to the program's UI (terminal)
func getUIPrinter(verboseMode bool, formatVersion string, attackTree bool, viewType cautils.ViewTypes) printer.IPrinter {
p := printerv2.NewPrettyPrinter(verboseMode, formatVersion, attackTree, viewType)
// Since the UI of the program is a CLI (Stdout), it means that it should always print to Stdout
p.SetWriter(os.Stdout.Name())
return p
}

View File

@@ -0,0 +1,39 @@
package core
import (
"reflect"
"testing"
"github.com/kubescape/kubescape/v2/core/cautils"
)
func Test_getUIPrinter(t *testing.T) {
scanInfo := &cautils.ScanInfo{
FormatVersion: "v2",
VerboseMode: true,
View: "control",
}
wantFormatVersion := scanInfo.FormatVersion
wantVerboseMode := scanInfo.VerboseMode
wantViewType := cautils.ViewTypes(scanInfo.View)
got := getUIPrinter(scanInfo.VerboseMode, scanInfo.FormatVersion, scanInfo.PrintAttackTree, cautils.ViewTypes(scanInfo.View))
gotValue := reflect.ValueOf(got).Elem()
gotFormatVersion := gotValue.FieldByName("formatVersion").String()
gotVerboseMode := gotValue.FieldByName("verboseMode").Bool()
gotViewType := cautils.ViewTypes(gotValue.FieldByName("viewType").String())
if gotFormatVersion != wantFormatVersion {
t.Errorf("Got: %s, want: %s", gotFormatVersion, wantFormatVersion)
}
if gotVerboseMode != wantVerboseMode {
t.Errorf("Got: %t, want: %t", gotVerboseMode, wantVerboseMode)
}
if gotViewType != wantViewType {
t.Errorf("Got: %v, want: %v", gotViewType, wantViewType)
}
}

View File

@@ -6,8 +6,11 @@ import (
"sort"
"strings"
"github.com/kubescape/kubescape/v2/core/cautils/getter"
"github.com/kubescape/kubescape/v2/core/cautils"
metav1 "github.com/kubescape/kubescape/v2/core/meta/datastructures/v1"
"github.com/kubescape/kubescape/v2/core/pkg/resultshandling/printer"
v2 "github.com/kubescape/kubescape/v2/core/pkg/resultshandling/printer/v2"
"github.com/olekukonko/tablewriter"
)
var listFunc = map[string]func(*metav1.ListPolicies) ([]string, error){
@@ -16,7 +19,7 @@ var listFunc = map[string]func(*metav1.ListPolicies) ([]string, error){
"exceptions": listExceptions,
}
var listFormatFunc = map[string]func(*metav1.ListPolicies, []string){
var listFormatFunc = map[string]func(string, []string){
"pretty-print": prettyPrintListFormat,
"json": jsonListFormat,
}
@@ -29,14 +32,18 @@ func ListSupportActions() []string {
return commands
}
func (ks *Kubescape) List(listPolicies *metav1.ListPolicies) error {
if f, ok := listFunc[listPolicies.Target]; ok {
policies, err := f(listPolicies)
if policyListerFunc, ok := listFunc[listPolicies.Target]; ok {
policies, err := policyListerFunc(listPolicies)
if err != nil {
return err
}
sort.Strings(policies)
listFormatFunc[listPolicies.Format](listPolicies, policies)
if listFormatFunction, ok := listFormatFunc[listPolicies.Format]; ok {
listFormatFunction(listPolicies.Target, policies)
} else {
return fmt.Errorf("Invalid format \"%s\", Supported formats: 'pretty-print'/'json' ", listPolicies.Format)
}
return nil
}
@@ -45,20 +52,16 @@ func (ks *Kubescape) List(listPolicies *metav1.ListPolicies) error {
func listFrameworks(listPolicies *metav1.ListPolicies) ([]string, error) {
tenant := getTenantConfig(&listPolicies.Credentials, "", "", getKubernetesApi()) // change k8sinterface
g := getPolicyGetter(nil, tenant.GetTenantEmail(), true, nil)
policyGetter := getPolicyGetter(nil, tenant.GetTenantEmail(), true, nil)
return listFrameworksNames(g), nil
return listFrameworksNames(policyGetter), nil
}
func listControls(listPolicies *metav1.ListPolicies) ([]string, error) {
tenant := getTenantConfig(&listPolicies.Credentials, "", "", getKubernetesApi()) // change k8sinterface
g := getPolicyGetter(nil, tenant.GetTenantEmail(), false, nil)
l := getter.ListName
if listPolicies.ListIDs {
l = getter.ListID
}
return g.ListControls(l)
policyGetter := getPolicyGetter(nil, tenant.GetTenantEmail(), false, nil)
return policyGetter.ListControls()
}
func listExceptions(listPolicies *metav1.ListPolicies) ([]string, error) {
@@ -77,12 +80,73 @@ func listExceptions(listPolicies *metav1.ListPolicies) ([]string, error) {
return exceptionsNames, nil
}
func prettyPrintListFormat(listPolicies *metav1.ListPolicies, policies []string) {
sep := "\n * "
fmt.Printf("Supported %s:%s%s\n", listPolicies.Target, sep, strings.Join(policies, sep))
func prettyPrintListFormat(targetPolicy string, policies []string) {
if targetPolicy == "controls" {
prettyPrintControls(policies)
return
}
header := fmt.Sprintf("Supported %s", targetPolicy)
policyTable := tablewriter.NewWriter(printer.GetWriter(""))
policyTable.SetAutoWrapText(true)
policyTable.SetHeader([]string{header})
policyTable.SetHeaderLine(true)
policyTable.SetRowLine(true)
data := v2.Matrix{}
controlRows := generatePolicyRows(policies)
data = append(data, controlRows...)
policyTable.SetAlignment(tablewriter.ALIGN_CENTER)
policyTable.AppendBulk(data)
policyTable.Render()
}
func jsonListFormat(listPolicies *metav1.ListPolicies, policies []string) {
func jsonListFormat(targetPolicy string, policies []string) {
j, _ := json.MarshalIndent(policies, "", " ")
fmt.Printf("%s\n", j)
}
func prettyPrintControls(policies []string) {
controlsTable := tablewriter.NewWriter(printer.GetWriter(""))
controlsTable.SetAutoWrapText(true)
controlsTable.SetHeader([]string{"Control ID", "Control Name", "Docs", "Frameworks"})
controlsTable.SetHeaderLine(true)
controlsTable.SetRowLine(true)
data := v2.Matrix{}
controlRows := generateControlRows(policies)
data = append(data, controlRows...)
controlsTable.AppendBulk(data)
controlsTable.Render()
}
func generateControlRows(policies []string) [][]string {
rows := [][]string{}
for _, control := range policies {
idAndControlAndFrameworks := strings.Split(control, "|")
id, control, framework := idAndControlAndFrameworks[0], idAndControlAndFrameworks[1], idAndControlAndFrameworks[2]
docs := cautils.GetControlLink(id)
currentRow := []string{id, control, docs, framework}
rows = append(rows, currentRow)
}
return rows
}
func generatePolicyRows(policies []string) [][]string {
rows := [][]string{}
for _, policy := range policies {
currentRow := []string{policy}
rows = append(rows, currentRow)
}
return rows
}

View File
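
On the rendering side, generateControlRows splits each pipe-separated entry and adds the docs link; an illustrative row (the entry value is hypothetical):

    rows := generateControlRows([]string{"C-0006|Allowed hostPath|NSA,MITRE"})
    // rows[0] == []string{"C-0006", "Allowed hostPath", "https://hub.armosec.io/docs/c-0006", "NSA,MITRE"}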

@@ -27,7 +27,8 @@ type componentInterfaces struct {
tenantConfig cautils.ITenantConfig
resourceHandler resourcehandler.IResourceHandler
report reporter.IReport
printerHandler printer.IPrinter
outputPrinters []printer.IPrinter
uiPrinter printer.IPrinter
hostSensorHandler hostsensorutils.IHostSensor
}
@@ -54,12 +55,16 @@ func getInterfaces(scanInfo *cautils.ScanInfo) componentInterfaces {
if err := tenantConfig.SetTenant(); err != nil {
logger.L().Error(err.Error())
}
if scanInfo.OmitRawResources {
logger.L().Warning("omit-raw-resources flag will be ignored in submit mode")
}
}
// ================== version testing ======================================
v := cautils.NewIVersionCheckHandler()
v.CheckLatestVersion(cautils.NewVersionCheckRequest(cautils.BuildNumber, policyIdentifierNames(scanInfo.PolicyIdentifier), "", cautils.ScanningContextToScanningScope(scanInfo.GetScanningContext())))
v.CheckLatestVersion(cautils.NewVersionCheckRequest(cautils.BuildNumber, policyIdentifierIdentities(scanInfo.PolicyIdentifier), "", cautils.ScanningContextToScanningScope(scanInfo.GetScanningContext())))
// ================== setup host scanner object ======================================
@@ -89,9 +94,17 @@ func getInterfaces(scanInfo *cautils.ScanInfo) componentInterfaces {
// reporting behavior - setup reporter
reportHandler := getReporter(tenantConfig, scanInfo.ScanID, scanInfo.Submit, scanInfo.FrameworkScan, scanInfo.GetScanningContext())
// setup printer
printerHandler := resultshandling.NewPrinter(scanInfo.Format, scanInfo.FormatVersion, scanInfo.VerboseMode, cautils.ViewTypes(scanInfo.View))
printerHandler.SetWriter(scanInfo.Output)
// setup printers
formats := scanInfo.Formats()
outputPrinters := make([]printer.IPrinter, 0)
for _, format := range formats {
printerHandler := resultshandling.NewPrinter(format, scanInfo.FormatVersion, scanInfo.PrintAttackTree, scanInfo.VerboseMode, cautils.ViewTypes(scanInfo.View))
printerHandler.SetWriter(scanInfo.Output)
outputPrinters = append(outputPrinters, printerHandler)
}
uiPrinter := getUIPrinter(scanInfo.VerboseMode, scanInfo.FormatVersion, scanInfo.PrintAttackTree, cautils.ViewTypes(scanInfo.View))
// ================== return interface ======================================
@@ -99,7 +112,8 @@ func getInterfaces(scanInfo *cautils.ScanInfo) componentInterfaces {
tenantConfig: tenantConfig,
resourceHandler: resourceHandler,
report: reportHandler,
printerHandler: printerHandler,
outputPrinters: outputPrinters,
uiPrinter: uiPrinter,
hostSensorHandler: hostSensorHandler,
}
}
@@ -137,7 +151,7 @@ func (ks *Kubescape) Scan(scanInfo *cautils.ScanInfo) (*resultshandling.ResultsH
}
}()
resultsHandling := resultshandling.NewResultsHandler(interfaces.report, interfaces.printerHandler)
resultsHandling := resultshandling.NewResultsHandler(interfaces.report, interfaces.outputPrinters, interfaces.uiPrinter)
// ===================== policies & resources =====================
policyHandler := policyhandler.NewPolicyHandler(interfaces.resourceHandler)
@@ -156,7 +170,7 @@ func (ks *Kubescape) Scan(scanInfo *cautils.ScanInfo) (*resultshandling.ResultsH
// ======================== prioritization ===================
if priotizationHandler, err := resourcesprioritization.NewResourcesPrioritizationHandler(scanInfo.Getters.AttackTracksGetter); err != nil {
if priotizationHandler, err := resourcesprioritization.NewResourcesPrioritizationHandler(scanInfo.Getters.AttackTracksGetter, scanInfo.PrintAttackTree); err != nil {
logger.L().Warning("failed to get attack tracks, this may affect the scanning results", helpers.Error(err))
} else if err := priotizationHandler.PrioritizeResources(scanData); err != nil {
return resultsHandling, fmt.Errorf("%w", err)

View File

@@ -6,6 +6,6 @@ type DownloadInfo struct {
Path string // directory to save artifact. Default is "~/.kubescape/"
FileName string // can be empty
Target string // type of artifact to download
Name string // name of artifact to download
Identifier string // identifier of artifact to download
Credentials cautils.Credentials
}

View File

@@ -1,8 +0,0 @@
package v1
type FixInfo struct {
ReportFile string // path to report file (mandatory)
NoConfirm bool // if true, no confirmation will be given to the user before applying the fix
SkipUserValues bool // if true, user values will not be changed
DryRun bool // if true, no changes will be applied
}

View File

@@ -4,7 +4,6 @@ import "github.com/kubescape/kubescape/v2/core/cautils"
type ListPolicies struct {
Target string
ListIDs bool
Format string
Credentials cautils.Credentials
}

View File

@@ -25,7 +25,4 @@ type IKubescape interface {
// delete
DeleteExceptions(deleteexceptions *metav1.DeleteExceptions) error
// fix
Fix(fixInfo *metav1.FixInfo) error
}

View File

@@ -8,7 +8,7 @@ import (
"github.com/kubescape/opa-utils/reporthandling"
)
var mockControl_0006 = `{"guid":"","name":"Allowed hostPath","attributes":{"armoBuiltin":true},"id":"C-0006","controlID":"C-0006","creationTime":"","description":"Mounting host directory to the container can be abused to get access to sensitive data and gain persistence on the host machine.","remediation":"Refrain from using host path mount.","rules":[{"guid":"","name":"alert-rw-hostpath","attributes":{"armoBuiltin":true,"m$K8sThreatMatrix":"Persistence::Writable hostPath mount, Lateral Movement::Writable volume mounts on the host"},"creationTime":"","rule":"package armo_builtins\n\n# input: pod\n# apiversion: v1\n# does: returns hostPath volumes\n\ndeny[msga] {\n pod := input[_]\n pod.kind == \"Pod\"\n volumes := pod.spec.volumes\n volume := volumes[_]\n volume.hostPath\n\tcontainer := pod.spec.containers[i]\n\tvolumeMount := container.volumeMounts[k]\n\tvolumeMount.name == volume.name\n\tbegginingOfPath := \"spec.\"\n\tresult := isRWMount(volumeMount, begginingOfPath, i, k)\n\n podname := pod.metadata.name\n\n\tmsga := {\n\t\t\"alertMessage\": sprintf(\"pod: %v has: %v as hostPath volume\", [podname, volume.name]),\n\t\t\"packagename\": \"armo_builtins\",\n\t\t\"alertScore\": 7,\n\t\t\"failedPaths\": [result],\n\t\t\"alertObject\": {\n\t\t\t\"k8sApiObjects\": [pod]\n\t\t}\n\t}\n}\n\n#handles majority of workload resources\ndeny[msga] {\n\twl := input[_]\n\tspec_template_spec_patterns := {\"Deployment\",\"ReplicaSet\",\"DaemonSet\",\"StatefulSet\",\"Job\"}\n\tspec_template_spec_patterns[wl.kind]\n volumes := wl.spec.template.spec.volumes\n volume := volumes[_]\n volume.hostPath\n\tcontainer := wl.spec.template.spec.containers[i]\n\tvolumeMount := container.volumeMounts[k]\n\tvolumeMount.name == volume.name\n\tbegginingOfPath := \"spec.template.spec.\"\n\tresult := isRWMount(volumeMount, begginingOfPath, i, k)\n\n\tmsga := {\n\t\t\"alertMessage\": sprintf(\"%v: %v has: %v as hostPath volume\", [wl.kind, wl.metadata.name, volume.name]),\n\t\t\"packagename\": \"armo_builtins\",\n\t\t\"alertScore\": 7,\n\t\t\"failedPaths\": [result],\n\t\t\"alertObject\": {\n\t\t\t\"k8sApiObjects\": [wl]\n\t\t}\n\t\n\t}\n}\n\n#handles CronJobs\ndeny[msga] {\n\twl := input[_]\n\twl.kind == \"CronJob\"\n volumes := wl.spec.jobTemplate.spec.template.spec.volumes\n volume := volumes[_]\n volume.hostPath\n\n\tcontainer = wl.spec.jobTemplate.spec.template.spec.containers[i]\n\tvolumeMount := container.volumeMounts[k]\n\tvolumeMount.name == volume.name\n\tbegginingOfPath := \"spec.jobTemplate.spec.template.spec.\"\n\tresult := isRWMount(volumeMount, begginingOfPath, i, k)\n\n\tmsga := {\n\t\"alertMessage\": sprintf(\"%v: %v has: %v as hostPath volume\", [wl.kind, wl.metadata.name, volume.name]),\n\t\"packagename\": \"armo_builtins\",\n\t\"alertScore\": 7,\n\t\"failedPaths\": [result],\n\t\"alertObject\": {\n\t\t\t\"k8sApiObjects\": [wl]\n\t\t}\n\t}\n}\n\nisRWMount(mount, begginingOfPath, i, k) = path {\n not mount.readOnly == true\n not mount.readOnly == false\n path = \"\"\n}\nisRWMount(mount, begginingOfPath, i, k) = path {\n mount.readOnly == false\n path = sprintf(\"%vcontainers[%v].volumeMounts[%v].readOnly\", [begginingOfPath, format_int(i, 10), format_int(k, 10)])\n} ","resourceEnumerator":"","ruleLanguage":"Rego","match":[{"apiGroups":["*"],"apiVersions":["*"],"resources":["Deployment","ReplicaSet","DaemonSet","StatefulSet","Job","CronJob","Pod"]}],"ruleDependencies":[{"packageName":"cautils"},{"packageName":"kubernetes.api.client"}],"configInputs":null,"controlConfigInputs":null,"description":"determines if 
any workload contains a hostPath volume with rw permissions","remediation":"Set the readOnly field of the mount to true","ruleQuery":""}],"rulesIDs":[""],"baseScore":6}`
var mockControl_0006 = `{"guid":"","name":"HostPath mount","attributes":{"armoBuiltin":true},"id":"C-0048","controlID":"C-0048","creationTime":"","description":"Mounting host directory to the container can be abused to get access to sensitive data and gain persistence on the host machine.","remediation":"Refrain from using host path mount.","rules":[{"guid":"","name":"alert-rw-hostpath","attributes":{"armoBuiltin":true,"m$K8sThreatMatrix":"Persistence::Writable hostPath mount, Lateral Movement::Writable volume mounts on the host"},"creationTime":"","rule":"package armo_builtins\n\n# input: pod\n# apiversion: v1\n# does: returns hostPath volumes\n\ndeny[msga] {\n pod := input[_]\n pod.kind == \"Pod\"\n volumes := pod.spec.volumes\n volume := volumes[_]\n volume.hostPath\n\tcontainer := pod.spec.containers[i]\n\tvolumeMount := container.volumeMounts[k]\n\tvolumeMount.name == volume.name\n\tbegginingOfPath := \"spec.\"\n\tresult := isRWMount(volumeMount, begginingOfPath, i, k)\n\n podname := pod.metadata.name\n\n\tmsga := {\n\t\t\"alertMessage\": sprintf(\"pod: %v has: %v as hostPath volume\", [podname, volume.name]),\n\t\t\"packagename\": \"armo_builtins\",\n\t\t\"alertScore\": 7,\n\t\t\"failedPaths\": [result],\n\t\t\"alertObject\": {\n\t\t\t\"k8sApiObjects\": [pod]\n\t\t}\n\t}\n}\n\n#handles majority of workload resources\ndeny[msga] {\n\twl := input[_]\n\tspec_template_spec_patterns := {\"Deployment\",\"ReplicaSet\",\"DaemonSet\",\"StatefulSet\",\"Job\"}\n\tspec_template_spec_patterns[wl.kind]\n volumes := wl.spec.template.spec.volumes\n volume := volumes[_]\n volume.hostPath\n\tcontainer := wl.spec.template.spec.containers[i]\n\tvolumeMount := container.volumeMounts[k]\n\tvolumeMount.name == volume.name\n\tbegginingOfPath := \"spec.template.spec.\"\n\tresult := isRWMount(volumeMount, begginingOfPath, i, k)\n\n\tmsga := {\n\t\t\"alertMessage\": sprintf(\"%v: %v has: %v as hostPath volume\", [wl.kind, wl.metadata.name, volume.name]),\n\t\t\"packagename\": \"armo_builtins\",\n\t\t\"alertScore\": 7,\n\t\t\"failedPaths\": [result],\n\t\t\"alertObject\": {\n\t\t\t\"k8sApiObjects\": [wl]\n\t\t}\n\t\n\t}\n}\n\n#handles CronJobs\ndeny[msga] {\n\twl := input[_]\n\twl.kind == \"CronJob\"\n volumes := wl.spec.jobTemplate.spec.template.spec.volumes\n volume := volumes[_]\n volume.hostPath\n\n\tcontainer = wl.spec.jobTemplate.spec.template.spec.containers[i]\n\tvolumeMount := container.volumeMounts[k]\n\tvolumeMount.name == volume.name\n\tbegginingOfPath := \"spec.jobTemplate.spec.template.spec.\"\n\tresult := isRWMount(volumeMount, begginingOfPath, i, k)\n\n\tmsga := {\n\t\"alertMessage\": sprintf(\"%v: %v has: %v as hostPath volume\", [wl.kind, wl.metadata.name, volume.name]),\n\t\"packagename\": \"armo_builtins\",\n\t\"alertScore\": 7,\n\t\"failedPaths\": [result],\n\t\"alertObject\": {\n\t\t\t\"k8sApiObjects\": [wl]\n\t\t}\n\t}\n}\n\nisRWMount(mount, begginingOfPath, i, k) = path {\n not mount.readOnly == true\n not mount.readOnly == false\n path = \"\"\n}\nisRWMount(mount, begginingOfPath, i, k) = path {\n mount.readOnly == false\n path = sprintf(\"%vcontainers[%v].volumeMounts[%v].readOnly\", [begginingOfPath, format_int(i, 10), format_int(k, 10)])\n} ","resourceEnumerator":"","ruleLanguage":"Rego","match":[{"apiGroups":["*"],"apiVersions":["*"],"resources":["Deployment","ReplicaSet","DaemonSet","StatefulSet","Job","CronJob","Pod"]}],"ruleDependencies":[{"packageName":"cautils"},{"packageName":"kubernetes.api.client"}],"configInputs":null,"controlConfigInputs":null,"description":"determines if 
any workload contains a hostPath volume with rw permissions","remediation":"Set the readOnly field of the mount to true","ruleQuery":""}],"rulesIDs":[""],"baseScore":6}`
var mockControl_0044 = `{"guid":"","name":"Container hostPort","attributes":{"armoBuiltin":true},"id":"C-0044","controlID":"C-0044","creationTime":"","description":"Configuring hostPort limits you to a particular port, and if any two workloads that specify the same HostPort they cannot be deployed to the same node. Therefore, if the number of replica of such workload is higher than the number of nodes, the deployment will fail.","remediation":"Avoid usage of hostPort unless it is absolutely necessary. Use NodePort / ClusterIP instead.","rules":[{"guid":"","name":"container-hostPort","attributes":{"armoBuiltin":true},"creationTime":"","rule":"package armo_builtins\n\n\n# Fails if pod has container with hostPort\ndeny[msga] {\n pod := input[_]\n pod.kind == \"Pod\"\n container := pod.spec.containers[i]\n\tbegginingOfPath := \"spec.\"\n\tpath := isHostPort(container, i, begginingOfPath)\n\tmsga := {\n\t\t\"alertMessage\": sprintf(\"Container: %v has Host-port\", [ container.name]),\n\t\t\"packagename\": \"armo_builtins\",\n\t\t\"alertScore\": 7,\n\t\t\"failedPaths\": path,\n\t\t\"alertObject\": {\n\t\t\t\"k8sApiObjects\": [pod]\n\t\t}\n\t}\n}\n\n# Fails if workload has container with hostPort\ndeny[msga] {\n wl := input[_]\n\tspec_template_spec_patterns := {\"Deployment\",\"ReplicaSet\",\"DaemonSet\",\"StatefulSet\",\"Job\"}\n\tspec_template_spec_patterns[wl.kind]\n container := wl.spec.template.spec.containers[i]\n\tbegginingOfPath := \"spec.template.spec.\"\n path := isHostPort(container, i, begginingOfPath)\n\tmsga := {\n\t\t\"alertMessage\": sprintf(\"Container: %v in %v: %v has Host-port\", [ container.name, wl.kind, wl.metadata.name]),\n\t\t\"packagename\": \"armo_builtins\",\n\t\t\"alertScore\": 7,\n\t\t\"failedPaths\": path,\n\t\t\"alertObject\": {\n\t\t\t\"k8sApiObjects\": [wl]\n\t\t}\n\t}\n}\n\n# Fails if cronjob has container with hostPort\ndeny[msga] {\n \twl := input[_]\n\twl.kind == \"CronJob\"\n\tcontainer = wl.spec.jobTemplate.spec.template.spec.containers[i]\n\tbegginingOfPath := \"spec.jobTemplate.spec.template.spec.\"\n path := isHostPort(container, i, begginingOfPath)\n msga := {\n\t\t\"alertMessage\": sprintf(\"Container: %v in %v: %v has Host-port\", [ container.name, wl.kind, wl.metadata.name]),\n\t\t\"packagename\": \"armo_builtins\",\n\t\t\"alertScore\": 7,\n\t\t\"failedPaths\": path,\n\t\t\"alertObject\": {\n\t\t\t\"k8sApiObjects\": [wl]\n\t\t}\n\t}\n}\n\n\n\nisHostPort(container, i, begginingOfPath) = path {\n\tpath = [sprintf(\"%vcontainers[%v].ports[%v].hostPort\", [begginingOfPath, format_int(i, 10), format_int(j, 10)]) | port = container.ports[j]; port.hostPort]\n\tcount(path) > 0\n}\n","resourceEnumerator":"","ruleLanguage":"Rego","match":[{"apiGroups":["*"],"apiVersions":["*"],"resources":["Deployment","ReplicaSet","DaemonSet","StatefulSet","Job","Pod","CronJob"]}],"ruleDependencies":[],"configInputs":null,"controlConfigInputs":null,"description":"fails if container has hostPort","remediation":"Make sure you do not configure hostPort for the container, if necessary use NodePort / ClusterIP","ruleQuery":"armo_builtins"}],"rulesIDs":[""],"baseScore":4}`
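These mock strings are plain JSON renderings of controls, so a test can hydrate one into a reporthandling.Control before wiring it into a mock framework. A minimal sketch of that step, assuming the JSON matches the opa-utils reporthandling.Control schema; the package name and the helper mockControlFromJSON are illustrative, not taken from the repository:

package mocks

import (
    "encoding/json"

    "github.com/kubescape/opa-utils/reporthandling"
)

// mockControlFromJSON unmarshals one of the mock JSON strings above into a
// Control value that a test can attach to a mock Framework.
func mockControlFromJSON(mockJSON string) (*reporthandling.Control, error) {
    control := &reporthandling.Control{}
    if err := json.Unmarshal([]byte(mockJSON), control); err != nil {
        return nil, err
    }
    return control, nil
}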
@@ -31,7 +31,7 @@ func MockFramework_0013() *reporthandling.Framework {
return fw
}
// MockFramework_0006_0013 mock control 0013 and control 0006 - "Non-root containers" and "Allowed hostPath"
// MockFramework_0006_0013 mock control 0013 and control 0006 - "Non-root containers" and "HostPath mount"
func MockFramework_0006_0013() *reporthandling.Framework {
fw := &reporthandling.Framework{
PortalBase: armotypes.PortalBase{

View File

@@ -50,7 +50,7 @@ func randSeq(n int, bank []rune) string {
b := make([]rune, n)
for i := range b {
b[i] = bank[rand.Intn(len(bank))]
b[i] = bank[rand.Intn(len(bank))] //nolint:gosec
}
return string(b)
}
@@ -60,7 +60,7 @@ func GenerateContainerScanLayer(layer *ScanResultLayer) {
layer.LayerHash = randSeq(32, hash)
layer.Vulnerabilities = make(VulnerabilitiesList, 0)
layer.Packages = make(LinuxPkgs, 0)
vuls := rand.Intn(10) + 1
vuls := rand.Intn(10) + 1 //nolint:gosec
for i := 0; i < vuls; i++ {
v := Vulnerability{}

View File

@@ -1,63 +0,0 @@
package fixhandler
import (
"github.com/armosec/armoapi-go/armotypes"
metav1 "github.com/kubescape/kubescape/v2/core/meta/datastructures/v1"
"github.com/kubescape/opa-utils/reporthandling"
reporthandlingv2 "github.com/kubescape/opa-utils/reporthandling/v2"
"gopkg.in/yaml.v3"
)
// FixHandler is a struct that holds the information of the report to be fixed
type FixHandler struct {
fixInfo *metav1.FixInfo
reportObj *reporthandlingv2.PostureReport
localBasePath string
}
// ResourceFixInfo is a struct that holds the information about the resource that needs to be fixed
type ResourceFixInfo struct {
YamlExpressions map[string]*armotypes.FixPath
Resource *reporthandling.Resource
FilePath string
DocumentIndex int
}
// NodeInfo holds extra information about the node
type nodeInfo struct {
node *yaml.Node
parent *yaml.Node
// position of the node among siblings
index int
}
// FixInfoMetadata holds the arguments the "getFixInfo" function needs to pass to the
// functions it uses
type fixInfoMetadata struct {
originalList *[]nodeInfo
fixedList *[]nodeInfo
originalListTracker int
fixedListTracker int
contentToAdd *[]contentToAdd
linesToRemove *[]linesToRemove
}
// ContentToAdd holds the information about where to insert the new changes in the existing yaml file
type contentToAdd struct {
// Line where the fix should be applied to
line int
// Content is a string representation of the YAML node that describes a suggested fix
content string
}
// LinesToRemove holds the line numbers to remove from the existing yaml file
type linesToRemove struct {
startLine int
endLine int
}
type fileFixInfo struct {
contentsToAdd *[]contentToAdd
linesToRemove *[]linesToRemove
}
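
For orientation, the removed types above are the hand-off between report parsing and YAML rewriting: each failed resource becomes a ResourceFixInfo whose map keys are ready-to-run yq expressions. A short illustrative value, written as if inside the same fixhandler package; the file path, fix path, and helper name are invented for the example:

package fixhandler

import "github.com/armosec/armoapi-go/armotypes"

// exampleResourceFixInfo shows the shape of a ResourceFixInfo for one failed
// resource; the paths and values below are made up for illustration.
func exampleResourceFixInfo() ResourceFixInfo {
    return ResourceFixInfo{
        FilePath:      "/repo/deploy/web.yaml",
        DocumentIndex: 0,
        YamlExpressions: map[string]*armotypes.FixPath{
            "select(di==0).spec.template.spec.containers[0].securityContext.runAsNonRoot |= true": {
                Path:  "spec.template.spec.containers[0].securityContext.runAsNonRoot",
                Value: "true",
            },
        },
    }
}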

View File

@@ -1,346 +0,0 @@
package fixhandler
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path"
"strconv"
"strings"
"github.com/armosec/armoapi-go/armotypes"
metav1 "github.com/kubescape/kubescape/v2/core/meta/datastructures/v1"
logger "github.com/kubescape/go-logger"
"github.com/kubescape/opa-utils/objectsenvelopes"
"github.com/kubescape/opa-utils/objectsenvelopes/localworkload"
"github.com/kubescape/opa-utils/reporthandling"
"github.com/kubescape/opa-utils/reporthandling/results/v1/resourcesresults"
reporthandlingv2 "github.com/kubescape/opa-utils/reporthandling/v2"
"github.com/mikefarah/yq/v4/pkg/yqlib"
"gopkg.in/op/go-logging.v1"
)
const UserValuePrefix = "YOUR_"
func NewFixHandler(fixInfo *metav1.FixInfo) (*FixHandler, error) {
jsonFile, err := os.Open(fixInfo.ReportFile)
if err != nil {
return nil, err
}
defer jsonFile.Close()
byteValue, _ := ioutil.ReadAll(jsonFile)
var reportObj reporthandlingv2.PostureReport
if err = json.Unmarshal(byteValue, &reportObj); err != nil {
return nil, err
}
if err = isSupportedScanningTarget(&reportObj); err != nil {
return nil, err
}
localPath := getLocalPath(&reportObj)
if _, err = os.Stat(localPath); err != nil {
return nil, err
}
backendLoggerLeveled := logging.AddModuleLevel(logging.NewLogBackend(logger.L().GetWriter(), "", 0))
backendLoggerLeveled.SetLevel(logging.ERROR, "")
yqlib.GetLogger().SetBackend(backendLoggerLeveled)
return &FixHandler{
fixInfo: fixInfo,
reportObj: &reportObj,
localBasePath: localPath,
}, nil
}
func isSupportedScanningTarget(report *reporthandlingv2.PostureReport) error {
if report.Metadata.ScanMetadata.ScanningTarget == reporthandlingv2.GitLocal || report.Metadata.ScanMetadata.ScanningTarget == reporthandlingv2.Directory {
return nil
}
return fmt.Errorf("unsupported scanning target. Only local git and directory scanning targets are supported")
}
func getLocalPath(report *reporthandlingv2.PostureReport) string {
if report.Metadata.ScanMetadata.ScanningTarget == reporthandlingv2.GitLocal {
return report.Metadata.ContextMetadata.RepoContextMetadata.LocalRootPath
}
if report.Metadata.ScanMetadata.ScanningTarget == reporthandlingv2.Directory {
return report.Metadata.ContextMetadata.DirectoryContextMetadata.BasePath
}
return ""
}
func (h *FixHandler) buildResourcesMap() map[string]*reporthandling.Resource {
resourceIdToRawResource := make(map[string]*reporthandling.Resource)
for i := range h.reportObj.Resources {
resourceIdToRawResource[h.reportObj.Resources[i].GetID()] = &h.reportObj.Resources[i]
}
for i := range h.reportObj.Results {
if h.reportObj.Results[i].RawResource == nil {
continue
}
resourceIdToRawResource[h.reportObj.Results[i].RawResource.GetID()] = h.reportObj.Results[i].RawResource
}
return resourceIdToRawResource
}
func (h *FixHandler) getPathFromRawResource(obj map[string]interface{}) string {
if localworkload.IsTypeLocalWorkload(obj) {
localwork := localworkload.NewLocalWorkload(obj)
return localwork.GetPath()
} else if objectsenvelopes.IsTypeRegoResponseVector(obj) {
regoResponseVectorObject := objectsenvelopes.NewRegoResponseVectorObject(obj)
relatedObjects := regoResponseVectorObject.GetRelatedObjects()
for _, relatedObject := range relatedObjects {
if localworkload.IsTypeLocalWorkload(relatedObject.GetObject()) {
return relatedObject.(*localworkload.LocalWorkload).GetPath()
}
}
}
return ""
}
func (h *FixHandler) PrepareResourcesToFix() []ResourceFixInfo {
resourceIdToResource := h.buildResourcesMap()
resourcesToFix := make([]ResourceFixInfo, 0)
for _, result := range h.reportObj.Results {
if !result.GetStatus(nil).IsFailed() {
continue
}
resourceID := result.ResourceID
resourceObj := resourceIdToResource[resourceID]
resourcePath := h.getPathFromRawResource(resourceObj.GetObject())
if resourcePath == "" {
continue
}
if resourceObj.Source == nil || resourceObj.Source.FileType != reporthandling.SourceTypeYaml {
continue
}
relativePath, documentIndex, err := h.getFilePathAndIndex(resourcePath)
if err != nil {
logger.L().Error("Skipping invalid resource path: " + resourcePath)
continue
}
absolutePath := path.Join(h.localBasePath, relativePath)
if _, err := os.Stat(absolutePath); err != nil {
logger.L().Error("Skipping missing file: " + absolutePath)
continue
}
rfi := ResourceFixInfo{
FilePath: absolutePath,
Resource: resourceObj,
YamlExpressions: make(map[string]*armotypes.FixPath, 0),
DocumentIndex: documentIndex,
}
for i := range result.AssociatedControls {
if result.AssociatedControls[i].GetStatus(nil).IsFailed() {
rfi.addYamlExpressionsFromResourceAssociatedControl(documentIndex, &result.AssociatedControls[i], h.fixInfo.SkipUserValues)
}
}
if len(rfi.YamlExpressions) > 0 {
resourcesToFix = append(resourcesToFix, rfi)
}
}
return resourcesToFix
}
func (h *FixHandler) PrintExpectedChanges(resourcesToFix []ResourceFixInfo) {
var sb strings.Builder
sb.WriteString("The following changes will be applied:\n")
for _, resourceFixInfo := range resourcesToFix {
sb.WriteString(fmt.Sprintf("File: %s\n", resourceFixInfo.FilePath))
sb.WriteString(fmt.Sprintf("Resource: %s\n", resourceFixInfo.Resource.GetName()))
sb.WriteString(fmt.Sprintf("Kind: %s\n", resourceFixInfo.Resource.GetKind()))
sb.WriteString("Changes:\n")
i := 1
for _, fixPath := range resourceFixInfo.YamlExpressions {
sb.WriteString(fmt.Sprintf("\t%d) %s = %s\n", i, (*fixPath).Path, (*fixPath).Value))
i++
}
sb.WriteString("\n------\n")
}
logger.L().Info(sb.String())
}
func (h *FixHandler) ApplyChanges(resourcesToFix []ResourceFixInfo) (int, []error) {
updatedFiles := make(map[string]bool)
errors := make([]error, 0)
fileYamlExpressions := h.getFileYamlExpressions(resourcesToFix)
for filepath, yamlExpression := range fileYamlExpressions {
fileAsString, err := getFileString(filepath)
if err != nil {
errors = append(errors, err)
continue
}
fixedYamlString, err := h.ApplyFixToContent(fileAsString, yamlExpression)
if err != nil {
errors = append(errors, fmt.Errorf("Failed to fix file %s: %w ", filepath, err))
continue
} else {
updatedFiles[filepath] = true
}
err = writeFixesToFile(filepath, fixedYamlString)
if err != nil {
logger.L().Error(fmt.Sprintf("Failed to write fixes to file %s, %v", filepath, err.Error()))
errors = append(errors, err)
}
}
return len(updatedFiles), errors
}
func (h *FixHandler) getFilePathAndIndex(filePathWithIndex string) (filePath string, documentIndex int, err error) {
splittedPath := strings.Split(filePathWithIndex, ":")
if len(splittedPath) <= 1 {
return "", 0, fmt.Errorf("expected to find ':' in file path")
}
filePath = splittedPath[0]
if documentIndex, err := strconv.Atoi(splittedPath[1]); err != nil {
return "", 0, err
} else {
return filePath, documentIndex, nil
}
}
func (h *FixHandler) ApplyFixToContent(yamlAsString, yamlExpression string) (fixedString string, err error) {
yamlLines := strings.Split(yamlAsString, "\n")
originalRootNodes, err := decodeDocumentRoots(yamlAsString)
if err != nil {
return "", err
}
fixedRootNodes, err := getFixedNodes(yamlAsString, yamlExpression)
if err != nil {
return "", err
}
fileFixInfo := getFixInfo(originalRootNodes, fixedRootNodes)
fixedYamlLines := getFixedYamlLines(yamlLines, fileFixInfo)
fixedString = getStringFromSlice(fixedYamlLines)
return fixedString, nil
}
func (h *FixHandler) getFileYamlExpressions(resourcesToFix []ResourceFixInfo) map[string]string {
fileYamlExpressions := make(map[string]string, 0)
for _, resourceToFix := range resourcesToFix {
singleExpression := reduceYamlExpressions(&resourceToFix)
resourceFilePath := resourceToFix.FilePath
if _, pathExistsInMap := fileYamlExpressions[resourceFilePath]; !pathExistsInMap {
fileYamlExpressions[resourceFilePath] = singleExpression
} else {
fileYamlExpressions[resourceFilePath] = joinStrings(fileYamlExpressions[resourceFilePath], " | ", singleExpression)
}
}
return fileYamlExpressions
}
func (rfi *ResourceFixInfo) addYamlExpressionsFromResourceAssociatedControl(documentIndex int, ac *resourcesresults.ResourceAssociatedControl, skipUserValues bool) {
for _, rule := range ac.ResourceAssociatedRules {
if !rule.GetStatus(nil).IsFailed() {
continue
}
for _, rulePaths := range rule.Paths {
if rulePaths.FixPath.Path == "" {
continue
}
if strings.HasPrefix(rulePaths.FixPath.Value, UserValuePrefix) && skipUserValues {
continue
}
yamlExpression := fixPathToValidYamlExpression(rulePaths.FixPath.Path, rulePaths.FixPath.Value, documentIndex)
rfi.YamlExpressions[yamlExpression] = &rulePaths.FixPath
}
}
}
// reduceYamlExpressions reduces the number of yaml expressions to a single one
func reduceYamlExpressions(resource *ResourceFixInfo) string {
expressions := make([]string, 0, len(resource.YamlExpressions))
for expr := range resource.YamlExpressions {
expressions = append(expressions, expr)
}
return strings.Join(expressions, " | ")
}
func fixPathToValidYamlExpression(fixPath, value string, documentIndexInYaml int) string {
isStringValue := true
if _, err := strconv.ParseBool(value); err == nil {
isStringValue = false
} else if _, err := strconv.ParseFloat(value, 64); err == nil {
isStringValue = false
} else if _, err := strconv.Atoi(value); err == nil {
isStringValue = false
}
// Strings should be quoted
if isStringValue {
value = fmt.Sprintf("\"%s\"", value)
}
// select document index and add a dot for the root node
return fmt.Sprintf("select(di==%d).%s |= %s", documentIndexInYaml, fixPath, value)
}
func joinStrings(inputStrings ...string) string {
return strings.Join(inputStrings, "")
}
func getFileString(filepath string) (string, error) {
bytes, err := ioutil.ReadFile(filepath)
if err != nil {
return "", fmt.Errorf("Error reading file %s", filepath)
}
return string(bytes), nil
}
func writeFixesToFile(filepath, content string) error {
err := ioutil.WriteFile(filepath, []byte(content), 0644)
if err != nil {
return fmt.Errorf("Error writing fixes to file: %w", err)
}
return nil
}
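
Taken together, the removed fixer.go exposed a small pipeline: build a FixHandler from a posture report, collect the failed resources, preview the planned edits, then apply them. A minimal sketch of that call flow, assuming a report file named results.json produced by a local git or directory scan; the main wrapper and the fixhandler import path are assumptions, not taken from the diff:

package main

import (
    "fmt"

    metav1 "github.com/kubescape/kubescape/v2/core/meta/datastructures/v1"
    "github.com/kubescape/kubescape/v2/core/pkg/fixhandler"
)

func main() {
    // ReportFile and SkipUserValues are the two FixInfo fields read by the handler above.
    fixInfo := &metav1.FixInfo{ReportFile: "results.json", SkipUserValues: true}

    h, err := fixhandler.NewFixHandler(fixInfo)
    if err != nil {
        panic(err)
    }

    resourcesToFix := h.PrepareResourcesToFix() // failed resources with their yq fix expressions
    h.PrintExpectedChanges(resourcesToFix)      // log the planned edits before touching files

    updated, errs := h.ApplyChanges(resourcesToFix)
    fmt.Printf("updated %d files, %d errors\n", updated, len(errs))
}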

View File

@@ -1,248 +0,0 @@
package fixhandler
import (
"os"
"path/filepath"
"testing"
logger "github.com/kubescape/go-logger"
metav1 "github.com/kubescape/kubescape/v2/core/meta/datastructures/v1"
reporthandlingv2 "github.com/kubescape/opa-utils/reporthandling/v2"
"github.com/mikefarah/yq/v4/pkg/yqlib"
"github.com/stretchr/testify/assert"
"gopkg.in/op/go-logging.v1"
)
type indentationTestCase struct {
inputFile string
yamlExpression string
expectedFile string
}
func NewFixHandlerMock() (*FixHandler, error) {
backendLoggerLeveled := logging.AddModuleLevel(logging.NewLogBackend(logger.L().GetWriter(), "", 0))
backendLoggerLeveled.SetLevel(logging.ERROR, "")
yqlib.GetLogger().SetBackend(backendLoggerLeveled)
return &FixHandler{
fixInfo: &metav1.FixInfo{},
reportObj: &reporthandlingv2.PostureReport{},
localBasePath: "",
}, nil
}
func getTestdataPath() string {
currentDir, _ := os.Getwd()
return filepath.Join(currentDir, "testdata")
}
func getTestCases() []indentationTestCase {
indentationTestCases := []indentationTestCase{
// Insertion Scenarios
{
"inserts/tc-01-00-input-mapping-insert-mapping.yaml",
"select(di==0).spec.containers[0].securityContext.allowPrivilegeEscalation |= false",
"inserts/tc-01-01-expected.yaml",
},
{
"inserts/tc-02-00-input-mapping-insert-mapping-with-list.yaml",
"select(di==0).spec.containers[0].securityContext.capabilities.drop += [\"NET_RAW\"]",
"inserts/tc-02-01-expected.yaml",
},
{
"inserts/tc-03-00-input-list-append-scalar.yaml",
"select(di==0).spec.containers[0].securityContext.capabilities.drop += [\"SYS_ADM\"]",
"inserts/tc-03-01-expected.yaml",
},
{
"inserts/tc-04-00-input-multiple-inserts.yaml",
`select(di==0).spec.template.spec.securityContext.allowPrivilegeEscalation |= false |
select(di==0).spec.template.spec.containers[0].securityContext.capabilities.drop += ["NET_RAW"] |
select(di==0).spec.template.spec.containers[0].securityContext.seccompProfile.type |= "RuntimeDefault" |
select(di==0).spec.template.spec.containers[0].securityContext.allowPrivilegeEscalation |= false |
select(di==0).spec.template.spec.containers[0].securityContext.readOnlyRootFilesystem |= true`,
"inserts/tc-04-01-expected.yaml",
},
{
"inserts/tc-05-00-input-comment-blank-line-single-insert.yaml",
"select(di==0).spec.containers[0].securityContext.allowPrivilegeEscalation |= false",
"inserts/tc-05-01-expected.yaml",
},
{
"inserts/tc-06-00-input-list-append-scalar-oneline.yaml",
"select(di==0).spec.containers[0].securityContext.capabilities.drop += [\"SYS_ADM\"]",
"inserts/tc-06-01-expected.yaml",
},
{
"inserts/tc-07-00-input-multiple-documents.yaml",
`select(di==0).spec.containers[0].securityContext.allowPrivilegeEscalation |= false |
select(di==1).spec.containers[0].securityContext.allowPrivilegeEscalation |= false`,
"inserts/tc-07-01-expected.yaml",
},
{
"inserts/tc-08-00-input-mapping-insert-mapping-indented.yaml",
"select(di==0).spec.containers[0].securityContext.capabilities.drop += [\"NET_RAW\"]",
"inserts/tc-08-01-expected.yaml",
},
{
"inserts/tc-09-00-input-list-insert-new-mapping-indented.yaml",
`select(di==0).spec.containers += {"name": "redis", "image": "redis"}`,
"inserts/tc-09-01-expected.yaml",
},
{
"inserts/tc-10-00-input-list-insert-new-mapping.yaml",
`select(di==0).spec.containers += {"name": "redis", "image": "redis"}`,
"inserts/tc-10-01-expected.yaml",
},
// Removal Scenarios
{
"removals/tc-01-00-input.yaml",
"del(select(di==0).spec.containers[0].securityContext)",
"removals/tc-01-01-expected.yaml",
},
{
"removals/tc-02-00-input.yaml",
"del(select(di==0).spec.containers[1])",
"removals/tc-02-01-expected.yaml",
},
{
"removals/tc-03-00-input.yaml",
"del(select(di==0).spec.containers[0].securityContext.capabilities.drop[1])",
"removals/tc-03-01-expected.yaml",
},
{
"removes/tc-04-00-input.yaml",
`del(select(di==0).spec.containers[0].securityContext) |
del(select(di==1).spec.containers[1])`,
"removes/tc-04-01-expected.yaml",
},
// Replace Scenarios
{
"replaces/tc-01-00-input.yaml",
"select(di==0).spec.containers[0].securityContext.runAsRoot |= false",
"replaces/tc-01-01-expected.yaml",
},
{
"replaces/tc-02-00-input.yaml",
`select(di==0).spec.containers[0].securityContext.capabilities.drop[0] |= "SYS_ADM" |
select(di==0).spec.containers[0].securityContext.capabilities.add[0] |= "NET_RAW"`,
"replaces/tc-02-01-expected.yaml",
},
// Hybrid Scenarios
{
"hybrids/tc-01-00-input.yaml",
`del(select(di==0).spec.containers[0].securityContext) |
select(di==0).spec.securityContext.runAsRoot |= false`,
"hybrids/tc-01-01-expected.yaml",
},
{
"hybrids/tc-02-00-input-indented-list.yaml",
`del(select(di==0).spec.containers[0].securityContext) |
select(di==0).spec.securityContext.runAsRoot |= false`,
"hybrids/tc-02-01-expected.yaml",
},
{
"hybrids/tc-03-00-input-comments.yaml",
`del(select(di==0).spec.containers[0].securityContext) |
select(di==0).spec.securityContext.runAsRoot |= false`,
"hybrids/tc-03-01-expected.yaml",
},
{
"hybrids/tc-04-00-input-separated-keys.yaml",
`del(select(di==0).spec.containers[0].securityContext) |
select(di==0).spec.securityContext.runAsRoot |= false`,
"hybrids/tc-04-01-expected.yaml",
},
}
return indentationTestCases
}
func TestApplyFixKeepsFormatting(t *testing.T) {
testCases := getTestCases()
for _, tc := range testCases {
t.Run(tc.inputFile, func(t *testing.T) {
getTestDataPath := func(filename string) string {
currentDir, _ := os.Getwd()
currentFile := "testdata/" + filename
return filepath.Join(currentDir, currentFile)
}
input, _ := os.ReadFile(getTestDataPath(tc.inputFile))
wantRaw, _ := os.ReadFile(getTestDataPath(tc.expectedFile))
want := string(wantRaw)
expression := tc.yamlExpression
h, _ := NewFixHandlerMock()
got, _ := h.ApplyFixToContent(string(input), expression)
assert.Equalf(
t, want, got,
"Contents of the fixed file don't match the expectation.\n"+
"Input file: %s\n\n"+
"Got: <%s>\n\n"+
"Want: <%s>",
tc.inputFile, got, want,
)
},
)
}
}
func Test_fixPathToValidYamlExpression(t *testing.T) {
type args struct {
fixPath string
value string
documentIndexInYaml int
}
tests := []struct {
name string
args args
want string
}{
{
name: "fix path with boolean value",
args: args{
fixPath: "spec.template.spec.containers[0].securityContext.privileged",
value: "true",
documentIndexInYaml: 2,
},
want: "select(di==2).spec.template.spec.containers[0].securityContext.privileged |= true",
},
{
name: "fix path with string value",
args: args{
fixPath: "metadata.namespace",
value: "YOUR_NAMESPACE",
documentIndexInYaml: 0,
},
want: "select(di==0).metadata.namespace |= \"YOUR_NAMESPACE\"",
},
{
name: "fix path with number",
args: args{
fixPath: "xxx.yyy",
value: "123",
documentIndexInYaml: 0,
},
want: "select(di==0).xxx.yyy |= 123",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
if got := fixPathToValidYamlExpression(tt.args.fixPath, tt.args.value, tt.args.documentIndexInYaml); got != tt.want {
t.Errorf("fixPathToValidYamlExpression() = %v, want %v", got, tt.want)
}
})
}
}

View File

@@ -1,19 +0,0 @@
# Fix to Apply:
# REMOVE:
# "del(select(di==0).spec.containers[0].securityContext)"
# INSERT:
# select(di==0).spec.securityContext.runAsRoot: false
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
runAsRoot: true

View File

@@ -1,19 +0,0 @@
# Fix to Apply:
# REMOVE:
# "del(select(di==0).spec.containers[0].securityContext)"
# INSERT:
# select(di==0).spec.securityContext.runAsRoot: false
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
runAsRoot: false

View File

@@ -1,19 +0,0 @@
# Fix to Apply:
# REMOVE:
# "del(select(di==0).spec.containers[0].securityContext)"
# INSERT:
# select(di==0).spec.securityContext.runAsRoot: false
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
runAsRoot: true

View File

@@ -1,19 +0,0 @@
# Fix to Apply:
# REMOVE:
# "del(select(di==0).spec.containers[0].securityContext)"
# INSERT:
# select(di==0).spec.securityContext.runAsRoot: false
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
runAsRoot: false

View File

@@ -1,21 +0,0 @@
# Fix to Apply:
# REMOVE:
# "del(select(di==0).spec.containers[0].securityContext)"
# INSERT:
# select(di==0).spec.securityContext.runAsRoot: false
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
# These are the container comments
containers:
# These are the first containers comments
- name: nginx_container
image: nginx
securityContext:
runAsRoot: true

View File

@@ -1,21 +0,0 @@
# Fix to Apply:
# REMOVE:
# "del(select(di==0).spec.containers[0].securityContext)"
# INSERT:
# select(di==0).spec.securityContext.runAsRoot: false
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
# These are the container comments
containers:
# These are the first containers comments
- name: nginx_container
image: nginx
securityContext:
runAsRoot: false

View File

@@ -1,21 +0,0 @@
# Fix to Apply:
# REMOVE:
# "del(select(di==0).spec.containers[0].securityContext)"
# INSERT:
# select(di==0).spec.securityContext.runAsRoot: false
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
runAsRoot: true

View File

@@ -1,21 +0,0 @@
# Fix to Apply:
# REMOVE:
# "del(select(di==0).spec.containers[0].securityContext)"
# INSERT:
# select(di==0).spec.securityContext.runAsRoot: false
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
runAsRoot: false

View File

@@ -1,12 +0,0 @@
# Fix to Apply:
# "select(di==0).spec.containers[0].securityContext.allowPrivilegeEscalation |= false"
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx

View File

@@ -1,14 +0,0 @@
# Fix to Apply:
# "select(di==0).spec.containers[0].securityContext.allowPrivilegeEscalation |= false"
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
allowPrivilegeEscalation: false

View File

@@ -1,11 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers[0].securityContext.capabilities.drop += ["NET_RAW"]
apiVersion: v1
kind: Pod
metadata:
name: insert_list
spec:
containers:
- name: nginx_container
image: nginx

View File

@@ -1,15 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers[0].securityContext.capabilities.drop += ["NET_RAW"]
apiVersion: v1
kind: Pod
metadata:
name: insert_list
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
capabilities:
drop:
- NET_RAW

View File

@@ -1,15 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers[0].securityContext.capabilities.drop += ["SYS_ADM"]
apiVersion: v1
kind: Pod
metadata:
name: insert_list
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
capabilities:
drop:
- NET_RAW

View File

@@ -1,16 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers[0].securityContext.capabilities.drop += ["SYS_ADM"]
apiVersion: v1
kind: Pod
metadata:
name: insert_list
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
capabilities:
drop:
- NET_RAW
- SYS_ADM

View File

@@ -1,47 +0,0 @@
# Fixes to Apply:
# 1) select(di==0).spec.template.spec.securityContext.allowPrivilegeEscalation = false
# 2) select(di==0).spec.template.spec.containers[0].securityContext.capabilities.drop += ["NET_RAW"]
# 3) select(di==0).spec.template.spec.containers[0].securityContext.seccompProfile.type = RuntimeDefault
# 4) select(di==0).spec.template.spec.containers[0].securityContext.allowPrivilegeEscalation |= false
# 5) select(di==0).spec.template.spec.containers[0].securityContext.readOnlyRootFilesystem |= true
apiVersion: apps/v1
kind: Deployment
metadata:
name: multiple_inserts
spec:
selector:
matchLabels:
app: example_4
template:
metadata:
labels:
app: example_4
spec:
serviceAccountName: default
terminationGracePeriodSeconds: 5
containers:
- name: example_4
image: nginx
ports:
- containerPort: 3000
env:
- name: PORT
value: "3000"
resources:
requests:
cpu: 200m
memory: 180Mi
limits:
cpu: 300m
memory: 300Mi
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 15
exec:
command: ["/bin/grpc_health_probe", "-addr=:3000"]
livenessProbe:
initialDelaySeconds: 20
periodSeconds: 15
exec:
command: ["/bin/grpc_health_probe", "-addr=:3000"]

View File

@@ -1,57 +0,0 @@
# Fixes to Apply:
# 1) select(di==0).spec.template.spec.securityContext.allowPrivilegeEscalation = false
# 2) select(di==0).spec.template.spec.containers[0].securityContext.capabilities.drop += ["NET_RAW"]
# 3) select(di==0).spec.template.spec.containers[0].securityContext.seccompProfile.type = RuntimeDefault
# 4) select(di==0).spec.template.spec.containers[0].securityContext.allowPrivilegeEscalation |= false
# 5) select(di==0).spec.template.spec.containers[0].securityContext.readOnlyRootFilesystem |= true
apiVersion: apps/v1
kind: Deployment
metadata:
name: multiple_inserts
spec:
selector:
matchLabels:
app: example_4
template:
metadata:
labels:
app: example_4
spec:
serviceAccountName: default
terminationGracePeriodSeconds: 5
containers:
- name: example_4
image: nginx
ports:
- containerPort: 3000
env:
- name: PORT
value: "3000"
resources:
requests:
cpu: 200m
memory: 180Mi
limits:
cpu: 300m
memory: 300Mi
readinessProbe:
initialDelaySeconds: 20
periodSeconds: 15
exec:
command: ["/bin/grpc_health_probe", "-addr=:3000"]
livenessProbe:
initialDelaySeconds: 20
periodSeconds: 15
exec:
command: ["/bin/grpc_health_probe", "-addr=:3000"]
securityContext:
capabilities:
drop:
- NET_RAW
seccompProfile:
type: RuntimeDefault
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
securityContext:
allowPrivilegeEscalation: false

View File

@@ -1,16 +0,0 @@
# Fix to Apply:
# "select(di==0).spec.containers[0].securityContext.allowPrivilegeEscalation |= false"
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
# Testing if comments are retained as intended
securityContext:
runAsRoot: false

View File

@@ -1,18 +0,0 @@
# Fix to Apply:
# "select(di==0).spec.containers[0].securityContext.allowPrivilegeEscalation |= false"
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
allowPrivilegeEscalation: false
# Testing if comments are retained as intended
securityContext:
runAsRoot: false

View File

@@ -1,14 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers[0].securityContext.capabilities.drop += ["SYS_ADM"]
apiVersion: v1
kind: Pod
metadata:
name: insert_list
spec:
containers:
- name: nginx1
image: nginx
securityContext:
capabilities:
drop: [NET_RAW]

View File

@@ -1,14 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers[0].securityContext.capabilities.drop += ["SYS_ADM"]
apiVersion: v1
kind: Pod
metadata:
name: insert_list
spec:
containers:
- name: nginx1
image: nginx
securityContext:
capabilities:
drop: [NET_RAW, SYS_ADM]

View File

@@ -1,27 +0,0 @@
# Fix to Apply:
# "select(di==0).spec.containers[0].securityContext.allowPrivilegeEscalation |= false"
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
---
# Fix to Apply:
# "select(di==1).spec.containers[0].securityContext.allowPrivilegeEscalation |= false"
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx

View File

@@ -1,31 +0,0 @@
# Fix to Apply:
# "select(di==0).spec.containers[0].securityContext.allowPrivilegeEscalation |= false"
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
allowPrivilegeEscalation: false
---
# Fix to Apply:
# "select(di==1).spec.containers[0].securityContext.allowPrivilegeEscalation |= false"
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
allowPrivilegeEscalation: false

View File

@@ -1,11 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers[0].securityContext.capabilities.drop += ["NET_RAW"]
apiVersion: v1
kind: Pod
metadata:
name: indented-parent-list-insert-list-value
spec:
containers:
- name: nginx_container
image: nginx

View File

@@ -1,15 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers[0].securityContext.capabilities.drop += ["NET_RAW"]
apiVersion: v1
kind: Pod
metadata:
name: indented-parent-list-insert-list-value
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
capabilities:
drop:
- NET_RAW

View File

@@ -1,11 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers += {"name": "redis", "image": "redis"}
apiVersion: v1
kind: Pod
metadata:
name: indented-parent-list-insert-list-value
spec:
containers:
- name: nginx_container
image: nginx

View File

@@ -1,13 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers += {"name": "redis", "image": "redis"}
apiVersion: v1
kind: Pod
metadata:
name: indented-parent-list-insert-list-value
spec:
containers:
- name: nginx_container
image: nginx
- name: redis
image: redis

View File

@@ -1,11 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers += {"name": "redis", "image": "redis"}
apiVersion: v1
kind: Pod
metadata:
name: indented-list-insert-new-object
spec:
containers:
- name: nginx_container
image: nginx

View File

@@ -1,13 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers += {"name": "redis", "image": "redis"}
apiVersion: v1
kind: Pod
metadata:
name: indented-list-insert-new-object
spec:
containers:
- name: nginx_container
image: nginx
- name: redis
image: redis

View File

@@ -1,14 +0,0 @@
# Fix to Apply:
# del(select(di==0).spec.containers[0].securityContext)
apiVersion: v1
kind: Pod
metadata:
name: remove_example
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
runAsRoot: false

View File

@@ -1,12 +0,0 @@
# Fix to Apply:
# del(select(di==0).spec.containers[0].securityContext)
apiVersion: v1
kind: Pod
metadata:
name: remove_example
spec:
containers:
- name: nginx_container
image: nginx

View File

@@ -1,15 +0,0 @@
# Fix to Apply:
# del(select(di==0).spec.containers[1])
apiVersion: v1
kind: Pod
metadata:
name: remove_example
spec:
containers:
- name: nginx_container
image: nginx
- name: container_with_security_issues
image: image_with_security_issues

View File

@@ -1,12 +0,0 @@
# Fix to Apply:
# del(select(di==0).spec.containers[1])
apiVersion: v1
kind: Pod
metadata:
name: remove_example
spec:
containers:
- name: nginx_container
image: nginx

View File

@@ -1,14 +0,0 @@
# Fix to Apply:
# del(select(di==0).spec.containers[0].securityContext.capabilities.drop[1])
apiVersion: v1
kind: Pod
metadata:
name: insert_list
spec:
containers:
- name: nginx1
image: nginx
securityContext:
capabilities:
drop: ["NET_RAW", "SYS_ADM"]

View File

@@ -1,14 +0,0 @@
# Fix to Apply:
# del(select(di==0).spec.containers[0].securityContext.capabilities.drop[1])
apiVersion: v1
kind: Pod
metadata:
name: insert_list
spec:
containers:
- name: nginx1
image: nginx
securityContext:
capabilities:
drop: ["NET_RAW"]

View File

@@ -1,32 +0,0 @@
# Fix to Apply:
# del(select(di==0).spec.containers[0].securityContext)
apiVersion: v1
kind: Pod
metadata:
name: remove_example
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
runAsRoot: false
---
# Fix to Apply:
# del(select(di==0).spec.containers[1])
apiVersion: v1
kind: Pod
metadata:
name: remove_example
spec:
containers:
- name: nginx_container
image: nginx
- name: container_with_security_issues
image: image_with_security_issues

View File

@@ -1,27 +0,0 @@
# Fix to Apply:
# del(select(di==0).spec.containers[0].securityContext)
apiVersion: v1
kind: Pod
metadata:
name: remove_example
spec:
containers:
- name: nginx_container
image: nginx
---
# Fix to Apply:
# del(select(di==0).spec.containers[1])
apiVersion: v1
kind: Pod
metadata:
name: remove_example
spec:
containers:
- name: nginx_container
image: nginx

View File

@@ -1,14 +0,0 @@
# Fix to Apply:
# "select(di==0).spec.containers[0].securityContext.runAsRoot |= false"
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
runAsRoot: true

View File

@@ -1,14 +0,0 @@
# Fix to Apply:
# "select(di==0).spec.containers[0].securityContext.runAsRoot |= false"
apiVersion: v1
kind: Pod
metadata:
name: insert_to_mapping_node_1
spec:
containers:
- name: nginx_container
image: nginx
securityContext:
runAsRoot: false

View File

@@ -1,18 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers[0].securityContext.capabilities.drop[0] |= "SYS_ADM"
# select(di==0).spec.containers[0].securityContext.capabilities.add[0] |= "NET_RAW"
apiVersion: v1
kind: Pod
metadata:
name: insert_list
spec:
containers:
- name: nginx1
image: nginx
securityContext:
capabilities:
drop:
- "NET_RAW"
add: ["SYS_ADM"]

View File

@@ -1,18 +0,0 @@
# Fix to Apply:
# select(di==0).spec.containers[0].securityContext.capabilities.drop[0] |= "SYS_ADM"
# select(di==0).spec.containers[0].securityContext.capabilities.add[0] |= "NET_RAW"
apiVersion: v1
kind: Pod
metadata:
name: insert_list
spec:
containers:
- name: nginx1
image: nginx
securityContext:
capabilities:
drop:
- "SYS_ADM"
add: ["NET_RAW"]

View File

@@ -1,286 +0,0 @@
package fixhandler
import (
"container/list"
"errors"
"fmt"
"io"
"strings"
"github.com/mikefarah/yq/v4/pkg/yqlib"
"gopkg.in/yaml.v3"
)
// decodeDocumentRoots decodes all YAML documents in the given YAML string and returns a slice of their root nodes
func decodeDocumentRoots(yamlAsString string) ([]yaml.Node, error) {
fileReader := strings.NewReader(yamlAsString)
dec := yaml.NewDecoder(fileReader)
nodes := make([]yaml.Node, 0)
for {
var node yaml.Node
err := dec.Decode(&node)
nodes = append(nodes, node)
if errors.Is(err, io.EOF) {
break
}
if err != nil {
return nil, fmt.Errorf("Cannot Decode File as YAML")
}
}
return nodes, nil
}
func getFixedNodes(yamlAsString, yamlExpression string) ([]yaml.Node, error) {
preferences := yqlib.ConfiguredYamlPreferences
preferences.EvaluateTogether = true
decoder := yqlib.NewYamlDecoder(preferences)
var allDocuments = list.New()
reader := strings.NewReader(yamlAsString)
fileDocuments, err := readDocuments(reader, decoder)
if err != nil {
return nil, err
}
allDocuments.PushBackList(fileDocuments)
allAtOnceEvaluator := yqlib.NewAllAtOnceEvaluator()
fixedCandidateNodes, err := allAtOnceEvaluator.EvaluateCandidateNodes(yamlExpression, allDocuments)
if err != nil {
return nil, fmt.Errorf("Error fixing YAML, %w", err)
}
fixedNodes := make([]yaml.Node, 0)
var fixedNode *yaml.Node
for fixedCandidateNode := fixedCandidateNodes.Front(); fixedCandidateNode != nil; fixedCandidateNode = fixedCandidateNode.Next() {
fixedNode = fixedCandidateNode.Value.(*yqlib.CandidateNode).Node
fixedNodes = append(fixedNodes, *fixedNode)
}
return fixedNodes, nil
}
func flattenWithDFS(node *yaml.Node) *[]nodeInfo {
dfsOrder := make([]nodeInfo, 0)
flattenWithDFSHelper(node, nil, &dfsOrder, 0)
return &dfsOrder
}
func flattenWithDFSHelper(node *yaml.Node, parent *yaml.Node, dfsOrder *[]nodeInfo, index int) {
dfsNode := nodeInfo{
node: node,
parent: parent,
index: index,
}
*dfsOrder = append(*dfsOrder, dfsNode)
for idx, child := range node.Content {
flattenWithDFSHelper(child, node, dfsOrder, idx)
}
}
func getFixInfo(originalRootNodes, fixedRootNodes []yaml.Node) fileFixInfo {
contentToAdd := make([]contentToAdd, 0)
linesToRemove := make([]linesToRemove, 0)
for idx := 0; idx < len(fixedRootNodes); idx++ {
originalList := flattenWithDFS(&originalRootNodes[idx])
fixedList := flattenWithDFS(&fixedRootNodes[idx])
nodeContentToAdd, nodeLinesToRemove := getFixInfoHelper(*originalList, *fixedList)
contentToAdd = append(contentToAdd, nodeContentToAdd...)
linesToRemove = append(linesToRemove, nodeLinesToRemove...)
}
return fileFixInfo{
contentsToAdd: &contentToAdd,
linesToRemove: &linesToRemove,
}
}
func getFixInfoHelper(originalList, fixedList []nodeInfo) ([]contentToAdd, []linesToRemove) {
// While obtaining fixedYamlNode, comments and empty lines at the top are ignored.
// This causes a difference in Line numbers across the tree structure. In order to
// counter this, line numbers are adjusted in fixed list.
adjustFixedListLines(&originalList, &fixedList)
contentToAdd := make([]contentToAdd, 0)
linesToRemove := make([]linesToRemove, 0)
originalListTracker, fixedListTracker := 0, 0
fixInfoMetadata := &fixInfoMetadata{
originalList: &originalList,
fixedList: &fixedList,
originalListTracker: originalListTracker,
fixedListTracker: fixedListTracker,
contentToAdd: &contentToAdd,
linesToRemove: &linesToRemove,
}
for originalListTracker < len(originalList) && fixedListTracker < len(fixedList) {
matchNodeResult := matchNodes(originalList[originalListTracker].node, fixedList[fixedListTracker].node)
fixInfoMetadata.originalListTracker = originalListTracker
fixInfoMetadata.fixedListTracker = fixedListTracker
switch matchNodeResult {
case sameNodes:
originalListTracker += 1
fixedListTracker += 1
case removedNode:
originalListTracker, fixedListTracker = addLinesToRemove(fixInfoMetadata)
case insertedNode:
originalListTracker, fixedListTracker = addLinesToInsert(fixInfoMetadata)
case replacedNode:
originalListTracker, fixedListTracker = updateLinesToReplace(fixInfoMetadata)
}
}
// Some nodes are still not visited if they are removed at the end of the list
for originalListTracker < len(originalList) {
fixInfoMetadata.originalListTracker = originalListTracker
originalListTracker, _ = addLinesToRemove(fixInfoMetadata)
}
// Some nodes are still not visited if they are inserted at the end of the list
for fixedListTracker < len(fixedList) {
// Use negative index of last node in original list as a placeholder to determine the last line number later
fixInfoMetadata.originalListTracker = -(len(originalList) - 1)
fixInfoMetadata.fixedListTracker = fixedListTracker
_, fixedListTracker = addLinesToInsert(fixInfoMetadata)
}
return contentToAdd, linesToRemove
}
// Adds the lines to remove and returns the updated originalListTracker
func addLinesToRemove(fixInfoMetadata *fixInfoMetadata) (int, int) {
isOneLine, line := isOneLineSequenceNode(fixInfoMetadata.originalList, fixInfoMetadata.originalListTracker)
if isOneLine {
// Remove the entire line and replace it with the sequence node in fixed info. This way,
// the original formatting is not lost.
return replaceSingleLineSequence(fixInfoMetadata, line)
}
currentDFSNode := (*fixInfoMetadata.originalList)[fixInfoMetadata.originalListTracker]
newOriginalListTracker := updateTracker(fixInfoMetadata.originalList, fixInfoMetadata.originalListTracker)
*fixInfoMetadata.linesToRemove = append(*fixInfoMetadata.linesToRemove, linesToRemove{
startLine: currentDFSNode.node.Line,
endLine: getNodeLine(fixInfoMetadata.originalList, newOriginalListTracker),
})
return newOriginalListTracker, fixInfoMetadata.fixedListTracker
}
// Adds the lines to insert and returns the updated fixedListTracker
func addLinesToInsert(fixInfoMetadata *fixInfoMetadata) (int, int) {
isOneLine, line := isOneLineSequenceNode(fixInfoMetadata.fixedList, fixInfoMetadata.fixedListTracker)
if isOneLine {
return replaceSingleLineSequence(fixInfoMetadata, line)
}
currentDFSNode := (*fixInfoMetadata.fixedList)[fixInfoMetadata.fixedListTracker]
lineToInsert := getLineToInsert(fixInfoMetadata)
contentToInsert := getContent(currentDFSNode.parent, fixInfoMetadata.fixedList, fixInfoMetadata.fixedListTracker)
newFixedTracker := updateTracker(fixInfoMetadata.fixedList, fixInfoMetadata.fixedListTracker)
*fixInfoMetadata.contentToAdd = append(*fixInfoMetadata.contentToAdd, contentToAdd{
line: lineToInsert,
content: contentToInsert,
})
return fixInfoMetadata.originalListTracker, newFixedTracker
}
// Adds the lines to remove and insert and updates the fixedListTracker and originalListTracker
func updateLinesToReplace(fixInfoMetadata *fixInfoMetadata) (int, int) {
isOneLine, line := isOneLineSequenceNode(fixInfoMetadata.fixedList, fixInfoMetadata.fixedListTracker)
if isOneLine {
return replaceSingleLineSequence(fixInfoMetadata, line)
}
currentDFSNode := (*fixInfoMetadata.fixedList)[fixInfoMetadata.fixedListTracker]
// If only the value node is changed, entire "key-value" pair is replaced
if isValueNodeinMapping(&currentDFSNode) {
fixInfoMetadata.originalListTracker -= 1
fixInfoMetadata.fixedListTracker -= 1
}
addLinesToRemove(fixInfoMetadata)
updatedOriginalTracker, updatedFixedTracker := addLinesToInsert(fixInfoMetadata)
return updatedOriginalTracker, updatedFixedTracker
}
func removeNewLinesAtTheEnd(yamlLines []string) []string {
for idx := 1; idx < len(yamlLines); idx++ {
if yamlLines[len(yamlLines)-idx] != "\n" {
yamlLines = yamlLines[:len(yamlLines)-idx+1]
break
}
}
return yamlLines
}
func getFixedYamlLines(yamlLines []string, fileFixInfo fileFixInfo) (fixedYamlLines []string) {
// Determining last line requires original yaml lines slice. The placeholder for last line is replaced with the real last line
assignLastLine(fileFixInfo.contentsToAdd, fileFixInfo.linesToRemove, &yamlLines)
removeLines(fileFixInfo.linesToRemove, &yamlLines)
fixedYamlLines = make([]string, 0)
lineIdx, lineToAddIdx := 1, 0
// Ideally, a new node is inserted at the line before the next node in DFS order. But when the previous line contains a
// comment or empty line, we need to insert new nodes before them.
adjustContentLines(fileFixInfo.contentsToAdd, &yamlLines)
for lineToAddIdx < len(*fileFixInfo.contentsToAdd) {
for lineIdx <= (*fileFixInfo.contentsToAdd)[lineToAddIdx].line {
// Check if the current line is not removed
if yamlLines[lineIdx-1] != "*" {
fixedYamlLines = append(fixedYamlLines, yamlLines[lineIdx-1])
}
lineIdx += 1
}
content := (*fileFixInfo.contentsToAdd)[lineToAddIdx].content
fixedYamlLines = append(fixedYamlLines, content)
lineToAddIdx += 1
}
for lineIdx <= len(yamlLines) {
if yamlLines[lineIdx-1] != "*" {
fixedYamlLines = append(fixedYamlLines, yamlLines[lineIdx-1])
}
lineIdx += 1
}
fixedYamlLines = removeNewLinesAtTheEnd(fixedYamlLines)
return fixedYamlLines
}
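
The matching logic in this file compares two depth-first flattenings of the same document, one decoded from the original text and one produced by the yq evaluation. A minimal sketch of what such a flattening looks like on a tiny manifest, written as if inside this package so it can call flattenWithDFS; the helper itself is illustrative and not part of the removed file:

package fixhandler

import (
    "fmt"

    "gopkg.in/yaml.v3"
)

// printDFSOrder decodes a small YAML snippet and prints every node visited by
// flattenWithDFS with its line number, mirroring the lists diffed by getFixInfoHelper.
func printDFSOrder() {
    src := "spec:\n  containers:\n  - name: nginx\n    image: nginx\n"

    var root yaml.Node
    if err := yaml.Unmarshal([]byte(src), &root); err != nil {
        panic(err)
    }

    for _, info := range *flattenWithDFS(&root) {
        fmt.Printf("line %d kind %d value %q\n", info.node.Line, info.node.Kind, info.node.Value)
    }
}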

View File

@@ -1,406 +0,0 @@
package fixhandler
import (
"bufio"
"bytes"
"container/list"
"errors"
"fmt"
"io"
"math"
"os"
"strings"
logger "github.com/kubescape/go-logger"
"github.com/mikefarah/yq/v4/pkg/yqlib"
"gopkg.in/yaml.v3"
)
type NodeRelation int
const (
sameNodes NodeRelation = iota
insertedNode
removedNode
replacedNode
)
func matchNodes(nodeOne, nodeTwo *yaml.Node) NodeRelation {
isNewNode := nodeTwo.Line == 0 && nodeTwo.Column == 0
sameLines := nodeOne.Line == nodeTwo.Line
sameColumns := nodeOne.Column == nodeTwo.Column
isSameNode := isSameNode(nodeOne, nodeTwo)
switch {
case isSameNode:
return sameNodes
case isNewNode:
return insertedNode
case sameLines && sameColumns:
return replacedNode
default:
return removedNode
}
}
func adjustContentLines(contentToAdd *[]contentToAdd, linesSlice *[]string) {
for contentIdx, content := range *contentToAdd {
line := content.line
// Adjust line numbers such that there are no "empty lines or comment lines of next nodes" before them
for idx := line - 1; idx >= 0; idx-- {
if isEmptyLineOrComment((*linesSlice)[idx]) {
(*contentToAdd)[contentIdx].line -= 1
} else {
break
}
}
}
}
func adjustFixedListLines(originalList, fixedList *[]nodeInfo) {
differenceAtTop := (*originalList)[0].node.Line - (*fixedList)[0].node.Line
if differenceAtTop <= 0 {
return
}
for _, node := range *fixedList {
// line numbers should not be changed for new nodes.
if node.node.Line != 0 {
node.node.Line += differenceAtTop
}
}
return
}
func enocodeIntoYaml(parentNode *yaml.Node, nodeList *[]nodeInfo, tracker int) (string, error) {
content := make([]*yaml.Node, 0)
currentNode := (*nodeList)[tracker].node
content = append(content, currentNode)
// Add the value in "key-value" pair to construct if the parent is mapping node
if parentNode.Kind == yaml.MappingNode {
valueNode := (*nodeList)[tracker+1].node
content = append(content, valueNode)
}
// The parent is added at the top to encode into YAML
parentForContent := yaml.Node{
Kind: parentNode.Kind,
Content: content,
}
buf := new(bytes.Buffer)
encoder := yaml.NewEncoder(buf)
encoder.SetIndent(2)
errorEncoding := encoder.Encode(parentForContent)
if errorEncoding != nil {
return "", fmt.Errorf("Error debugging node, %v", errorEncoding.Error())
}
errorClosingEncoder := encoder.Close()
if errorClosingEncoder != nil {
return "", fmt.Errorf("Error closing encoder: %v", errorClosingEncoder.Error())
}
return fmt.Sprintf(`%v`, buf.String()), nil
}
func getContent(parentNode *yaml.Node, nodeList *[]nodeInfo, tracker int) string {
content, err := enocodeIntoYaml(parentNode, nodeList, tracker)
if err != nil {
logger.L().Fatal("Cannot Encode into YAML")
}
indentationSpaces := parentNode.Column - 1
content = indentContent(content, indentationSpaces)
return strings.TrimSuffix(content, "\n")
}
func indentContent(content string, indentationSpaces int) string {
indentedContent := ""
indentSpaces := strings.Repeat(" ", indentationSpaces)
scanner := bufio.NewScanner(strings.NewReader(content))
for scanner.Scan() {
line := scanner.Text()
indentedContent += (indentSpaces + line + "\n")
}
return indentedContent
}
func getLineToInsert(fixInfoMetadata *fixInfoMetadata) int {
var lineToInsert int
// Check if lineToInsert is last line
if fixInfoMetadata.originalListTracker < 0 {
originalListTracker := int(math.Abs(float64(fixInfoMetadata.originalListTracker)))
// Storing the negative value of the last node's line as a placeholder to determine the last line later.
lineToInsert = -(*fixInfoMetadata.originalList)[originalListTracker].node.Line
} else {
lineToInsert = (*fixInfoMetadata.originalList)[fixInfoMetadata.originalListTracker].node.Line - 1
}
return lineToInsert
}
func assignLastLine(contentsToAdd *[]contentToAdd, linesToRemove *[]linesToRemove, linesSlice *[]string) {
for idx, contentToAdd := range *contentsToAdd {
if contentToAdd.line < 0 {
currentLine := int(math.Abs(float64(contentToAdd.line)))
(*contentsToAdd)[idx].line, _ = getLastLineOfResource(linesSlice, currentLine)
}
}
for idx, lineToRemove := range *linesToRemove {
if lineToRemove.endLine < 0 {
endLine, _ := getLastLineOfResource(linesSlice, lineToRemove.startLine)
(*linesToRemove)[idx].endLine = endLine
}
}
}
func getLastLineOfResource(linesSlice *[]string, currentLine int) (int, error) {
// Get lastlines of all resources...
lastLinesOfResources := make([]int, 0)
for lineNumber, lineContent := range *linesSlice {
if lineContent == "---" {
for lastLine := lineNumber - 1; lastLine >= 0; lastLine-- {
if !isEmptyLineOrComment((*linesSlice)[lastLine]) {
lastLinesOfResources = append(lastLinesOfResources, lastLine+1)
break
}
}
}
}
lastLine := len(*linesSlice)
for lastLine >= 0 {
if !isEmptyLineOrComment((*linesSlice)[lastLine-1]) {
lastLinesOfResources = append(lastLinesOfResources, lastLine)
break
} else {
lastLine--
}
}
// Get last line of the resource we need
for _, endLine := range lastLinesOfResources {
if currentLine <= endLine {
return endLine, nil
}
}
return 0, fmt.Errorf("Provided line is greater than the length of YAML file")
}
func getNodeLine(nodeList *[]nodeInfo, tracker int) int {
if tracker < len(*nodeList) {
return (*nodeList)[tracker].node.Line
} else {
return -1
}
}
// Checks if the node is a value node in the "key-value" pairs of a mapping node
func isValueNodeinMapping(node *nodeInfo) bool {
if node.parent.Kind == yaml.MappingNode && node.index%2 != 0 {
return true
}
return false
}
// Checks if the node is part of a single-line sequence node and returns the line
func isOneLineSequenceNode(list *[]nodeInfo, currentTracker int) (bool, int) {
parentNode := (*list)[currentTracker].parent
if parentNode.Kind != yaml.SequenceNode {
return false, -1
}
var currentNode, prevNode nodeInfo
currentTracker -= 1
for (*list)[currentTracker].node != parentNode {
currentNode = (*list)[currentTracker]
prevNode = (*list)[currentTracker-1]
if currentNode.node.Line != prevNode.node.Line {
return false, -1
}
currentTracker -= 1
}
parentNodeInfo := (*list)[currentTracker]
if parentNodeInfo.parent.Kind == yaml.MappingNode {
keyNodeInfo := (*list)[currentTracker-1]
if keyNodeInfo.node.Line == parentNode.Line {
return true, parentNode.Line
} else {
return false, -1
}
} else {
if parentNodeInfo.parent.Line == parentNode.Line {
return true, parentNode.Line
} else {
return false, -1
}
}
}
// Checks if nodes are of same kind, value, line and column
func isSameNode(nodeOne, nodeTwo *yaml.Node) bool {
sameLines := nodeOne.Line == nodeTwo.Line
sameColumns := nodeOne.Column == nodeTwo.Column
sameKinds := nodeOne.Kind == nodeTwo.Kind
sameValues := nodeOne.Value == nodeTwo.Value
return sameKinds && sameValues && sameLines && sameColumns
}
// Checks if the line is empty or a comment
func isEmptyLineOrComment(lineContent string) bool {
lineContent = strings.TrimSpace(lineContent)
if lineContent == "" {
return true
} else if lineContent[0:1] == "#" {
return true
}
return false
}
func readDocuments(reader io.Reader, decoder yqlib.Decoder) (*list.List, error) {
err := decoder.Init(reader)
if err != nil {
return nil, fmt.Errorf("Error Initializing the decoder, %w", err)
}
inputList := list.New()
var currentIndex uint
for {
candidateNode, errorReading := decoder.Decode()
if errors.Is(errorReading, io.EOF) {
switch reader := reader.(type) {
case *os.File:
safelyCloseFile(reader)
}
return inputList, nil
} else if errorReading != nil {
return nil, fmt.Errorf("Error Decoding YAML file, %w", errorReading)
}
candidateNode.Document = currentIndex
candidateNode.EvaluateTogether = true
inputList.PushBack(candidateNode)
currentIndex = currentIndex + 1
}
}
func safelyCloseFile(file *os.File) {
err := file.Close()
if err != nil {
logger.L().Error("Error Closing File")
}
}
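readDocuments uses yqlib's decoder; the loop itself is the usual decode-until-EOF pattern. The sketch below shows the same control flow with the plain gopkg.in/yaml.v3 decoder (the file name is hypothetical, and this is not the yqlib API the code above uses):

// Illustrative sketch of the decode-until-EOF loop, using gopkg.in/yaml.v3
// rather than yqlib: decode documents one at a time and stop cleanly on io.EOF.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	file, err := os.Open("deployment.yaml") // hypothetical multi-document file
	if err != nil {
		panic(err)
	}
	defer file.Close()

	dec := yaml.NewDecoder(file)
	for index := 0; ; index++ {
		var doc yaml.Node
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the file
			}
			panic(fmt.Errorf("error decoding YAML file: %w", err))
		}
		fmt.Printf("document %d starts at line %d\n", index, doc.Line)
	}
}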
// Removes the entire line and replaces it with the sequence node from the fixed info. Note that
// the original formatting of the line is lost in the process.
func replaceSingleLineSequence(fixInfoMetadata *fixInfoMetadata, line int) (int, int) {
originalListTracker := getFirstNodeInLine(fixInfoMetadata.originalList, line)
fixedListTracker := getFirstNodeInLine(fixInfoMetadata.fixedList, line)
currentDFSNode := (*fixInfoMetadata.fixedList)[fixedListTracker]
contentToInsert := getContent(currentDFSNode.parent, fixInfoMetadata.fixedList, fixedListTracker)
// Remove the Single line
*fixInfoMetadata.linesToRemove = append(*fixInfoMetadata.linesToRemove, linesToRemove{
startLine: line,
endLine: line,
})
// Encode the entire sequence node and insert it
*fixInfoMetadata.contentToAdd = append(*fixInfoMetadata.contentToAdd, contentToAdd{
line: line,
content: contentToInsert,
})
originalListTracker = updateTracker(fixInfoMetadata.originalList, originalListTracker)
fixedListTracker = updateTracker(fixInfoMetadata.fixedList, fixedListTracker)
return originalListTracker, fixedListTracker
}
// Returns the first node in the given line that is not a mapping node
func getFirstNodeInLine(list *[]nodeInfo, line int) int {
tracker := 0
currentNode := (*list)[tracker].node
for currentNode.Line != line || currentNode.Kind == yaml.MappingNode {
tracker += 1
currentNode = (*list)[tracker].node
}
return tracker
}
// To avoid disturbing line numbers used for later insertions, removed lines are not deleted but replaced with "*"
func removeLines(linesToRemove *[]linesToRemove, linesSlice *[]string) {
var startLine, endLine int
for _, lineToRemove := range *linesToRemove {
startLine = lineToRemove.startLine - 1
endLine = lineToRemove.endLine - 1
for line := startLine; line <= endLine; line++ {
lineContent := (*linesSlice)[line]
// When determining the endLine, empty lines and comments that are not meant to be removed may be included.
// To handle this, stop removing lines as soon as an empty line or comment is reached
if isEmptyLineOrComment(lineContent) {
break
}
(*linesSlice)[line] = "*"
}
}
}
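Because later insertions are addressed by line number, removeLines deliberately keeps the slice the same length: lines slated for removal are overwritten with the "*" placeholder rather than cut out. A minimal standalone sketch of the trick (the final filtering step is an assumption here; it is not shown in this file):

// Illustrative sketch of the "*" placeholder trick: marking lines instead of
// deleting them keeps every recorded line number valid until the output is
// finally assembled, at which point the placeholders are dropped.
package main

import (
	"fmt"
	"strings"
)

func main() {
	lines := []string{
		"securityContext:",
		"  privileged: true", // line 2: slated for removal
		"  readOnlyRootFilesystem: true",
	}

	lines[1] = "*" // mark instead of delete, so line 3 is still line 3

	// Assumed final pass (not shown in the file above): drop the placeholders.
	var out []string
	for _, line := range lines {
		if line != "*" {
			out = append(out, line)
		}
	}
	fmt.Println(strings.Join(out, "\n"))
	// securityContext:
	//   readOnlyRootFilesystem: true
}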
// Skips the current node, including its children, in DFS order and returns the new tracker.
func skipCurrentNode(node *yaml.Node, currentTracker int) int {
updatedTracker := currentTracker + getChildrenCount(node)
return updatedTracker
}
func getChildrenCount(node *yaml.Node) int {
totalChildren := 1
for _, child := range node.Content {
totalChildren += getChildrenCount(child)
}
return totalChildren
}
// The current node, along with its children, is skipped and the tracker is moved to the next sibling
// of the current node. If the parent is a mapping node, the "value" of the "key-value" pair is also skipped.
func updateTracker(nodeList *[]nodeInfo, tracker int) int {
currentNode := (*nodeList)[tracker]
var updatedTracker int
if currentNode.parent.Kind == yaml.MappingNode {
valueNode := (*nodeList)[tracker+1]
updatedTracker = skipCurrentNode(valueNode.node, tracker+1)
} else {
updatedTracker = skipCurrentNode(currentNode.node, tracker)
}
return updatedTracker
}
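getChildrenCount returns the size of a node's subtree (the node itself plus all descendants), which in a DFS-ordered node list is exactly how many entries skipCurrentNode has to jump over; updateTracker additionally skips the value node when the parent is a mapping. A standalone sketch of the subtree-size calculation on a small manifest fragment (illustrative only):

// Illustrative sketch: the number of DFS entries occupied by a node is one for
// the node itself plus the subtree sizes of all of its children.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func subtreeSize(node *yaml.Node) int {
	total := 1
	for _, child := range node.Content {
		total += subtreeSize(child)
	}
	return total
}

func main() {
	src := "spec:\n  containers:\n    - name: host-sensor\n      image: quay.io/kubescape/host-scanner:v1.0.39\n"
	var doc yaml.Node
	if err := yaml.Unmarshal([]byte(src), &doc); err != nil {
		panic(err)
	}
	root := doc.Content[0] // root mapping node
	fmt.Println(subtreeSize(root))
	// 10: the root mapping, "spec" and its mapping, "containers" and its
	// sequence, the item mapping, and the four scalars under it.
}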
func getStringFromSlice(yamlLines []string) (fixedYamlString string) {
return strings.Join(yamlLines, "\n")
}

View File

@@ -36,8 +36,9 @@ spec:
effect: NoSchedule
containers:
- name: host-sensor
image: quay.io/kubescape/host-scanner:v1.0.32
image: quay.io/kubescape/host-scanner:v1.0.39
securityContext:
allowPrivilegeEscalation: true
privileged: true
readOnlyRootFilesystem: true
procMount: Unmasked

View File

@@ -3,6 +3,7 @@ package hostsensorutils
import (
"encoding/json"
"fmt"
"strings"
"sync"
logger "github.com/kubescape/go-logger"
@@ -99,6 +100,30 @@ func (hsh *HostSensorHandler) sendAllPodsHTTPGETRequest(path, requestKind string
return res, nil
}
// return host-scanner version
func (hsh *HostSensorHandler) GetVersion() (string, error) {
// loop over pods and port-forward it to each of them
podList, err := hsh.getPodList()
if err != nil {
return "", fmt.Errorf("failed to sendAllPodsHTTPGETRequest: %v", err)
}
// initialization of the channels
hsh.workerPool.init(len(podList))
hsh.workerPool.hostSensorApplyJobs(podList, "/version", "version")
for job := range hsh.workerPool.jobs {
resBytes, err := hsh.HTTPGetToPod(job.podName, job.path)
if err != nil {
return "", err
} else {
version := strings.ReplaceAll(string(resBytes), "\"", "")
version = strings.ReplaceAll(version, "\n", "")
return version, nil
}
}
return "", nil
}
// return list of LinuxKernelVariables
func (hsh *HostSensorHandler) GetKernelVariables() ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
@@ -135,6 +160,12 @@ func (hsh *HostSensorHandler) GetControlPlaneInfo() ([]hostsensor.HostSensorData
return hsh.sendAllPodsHTTPGETRequest("/controlPlaneInfo", ControlPlaneInfo)
}
// return list of KubeProxyInfo
func (hsh *HostSensorHandler) GetCloudProviderInfo() ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest("/cloudProviderInfo", CloudProviderInfo)
}
// return list of KubeletCommandLine
func (hsh *HostSensorHandler) GetKubeletCommandLine() ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
@@ -192,6 +223,16 @@ func (hsh *HostSensorHandler) CollectResources() ([]hostsensor.HostSensorDataEnv
var kcData []hostsensor.HostSensorDataEnvelope
var err error
logger.L().Debug("Accessing host scanner")
version, err := hsh.GetVersion()
if err != nil {
logger.L().Warning(err.Error())
}
if len(version) > 0 {
logger.L().Info("Host scanner version : " + version)
} else {
logger.L().Info("Unknown host scanner version")
}
//
kcData, err = hsh.GetKubeletConfigurations()
if err != nil {
addInfoToMap(KubeletConfiguration, infoMap, err)
@@ -285,6 +326,16 @@ func (hsh *HostSensorHandler) CollectResources() ([]hostsensor.HostSensorDataEnv
res = append(res, kcData...)
}
// GetCloudProviderInfo
kcData, err = hsh.GetCloudProviderInfo()
if err != nil {
addInfoToMap(CloudProviderInfo, infoMap, err)
logger.L().Warning(err.Error())
}
if len(kcData) > 0 {
res = append(res, kcData...)
}
logger.L().Debug("Done reading information from host scanner")
return res, infoMap, nil
}

View File

@@ -16,6 +16,7 @@ var (
KubeletInfo = "KubeletInfo"
KubeProxyInfo = "KubeProxyInfo"
ControlPlaneInfo = "ControlPlaneInfo"
CloudProviderInfo = "CloudProviderInfo"
MapHostSensorResourceToApiGroup = map[string]string{
KubeletConfiguration: "hostdata.kubescape.cloud/v1beta0",
@@ -28,6 +29,7 @@ var (
KubeletInfo: "hostdata.kubescape.cloud/v1beta0",
KubeProxyInfo: "hostdata.kubescape.cloud/v1beta0",
ControlPlaneInfo: "hostdata.kubescape.cloud/v1beta0",
CloudProviderInfo: "hostdata.kubescape.cloud/v1beta0",
}
)

View File

@@ -35,6 +35,7 @@ type OPAProcessor struct {
func NewOPAProcessor(sessionObj *cautils.OPASessionObj, regoDependenciesData *resources.RegoDependenciesData) *OPAProcessor {
if regoDependenciesData != nil && sessionObj != nil {
regoDependenciesData.PostureControlInputs = sessionObj.RegoInputData.PostureControlInputs
regoDependenciesData.DataControlInputs = sessionObj.RegoInputData.DataControlInputs
}
return &OPAProcessor{
OPASessionObj: sessionObj,
@@ -68,23 +69,26 @@ func (opap *OPAProcessor) Process(policies *cautils.Policies) error {
cautils.StartSpinner()
var errs error
for _, control := range policies.Controls {
for _, toPin := range policies.Controls {
control := toPin
resourcesAssociatedControl, err := opap.processControl(&control)
if err != nil {
logger.L().Error(err.Error())
}
if len(resourcesAssociatedControl) == 0 {
continue
}
// update resources with latest results
if len(resourcesAssociatedControl) != 0 {
for resourceID, controlResult := range resourcesAssociatedControl {
if _, ok := opap.ResourcesResult[resourceID]; !ok {
opap.ResourcesResult[resourceID] = resourcesresults.Result{ResourceID: resourceID}
}
t := opap.ResourcesResult[resourceID]
t.AssociatedControls = append(t.AssociatedControls, controlResult)
opap.ResourcesResult[resourceID] = t
for resourceID, controlResult := range resourcesAssociatedControl {
if _, ok := opap.ResourcesResult[resourceID]; !ok {
opap.ResourcesResult[resourceID] = resourcesresults.Result{ResourceID: resourceID}
}
t := opap.ResourcesResult[resourceID]
t.AssociatedControls = append(t.AssociatedControls, controlResult)
opap.ResourcesResult[resourceID] = t
}
}
@@ -94,7 +98,7 @@ func (opap *OPAProcessor) Process(policies *cautils.Policies) error {
opap.loggerDoneScanning()
return errs
return nil
}
func (opap *OPAProcessor) loggerStartScanning() {
@@ -153,12 +157,16 @@ func (opap *OPAProcessor) processControl(control *reporthandling.Control) (map[s
func (opap *OPAProcessor) processRule(rule *reporthandling.PolicyRule, fixedControlInputs map[string][]string) (map[string]*resourcesresults.ResourceAssociatedRule, error) {
postureControlInputs := opap.regoDependenciesData.GetFilteredPostureControlInputs(rule.ConfigInputs) // get store
dataControlInputs := map[string]string{"cloudProvider": opap.OPASessionObj.Report.ClusterCloudProvider}
// Merge configurable control input and fixed control input
for k, v := range fixedControlInputs {
postureControlInputs[k] = v
}
RuleRegoDependenciesData := resources.RegoDependenciesData{DataControlInputs: dataControlInputs,
PostureControlInputs: postureControlInputs}
inputResources, err := reporthandling.RegoResourcesAggregator(rule, getAllSupportedObjects(opap.K8SResources, opap.ArmoResource, opap.AllResources, rule))
if err != nil {
return nil, fmt.Errorf("error getting aggregated k8sObjects: %s", err.Error())
@@ -185,7 +193,7 @@ func (opap *OPAProcessor) processRule(rule *reporthandling.PolicyRule, fixedCont
opap.AllResources[inputResources[i].GetID()] = inputResources[i]
}
ruleResponses, err := opap.runOPAOnSingleRule(rule, inputRawResources, ruleData, postureControlInputs)
ruleResponses, err := opap.runOPAOnSingleRule(rule, inputRawResources, ruleData, RuleRegoDependenciesData)
if err != nil {
// TODO - Handle error
logger.L().Error(err.Error())
@@ -217,16 +225,16 @@ func (opap *OPAProcessor) processRule(rule *reporthandling.PolicyRule, fixedCont
return resources, err
}
func (opap *OPAProcessor) runOPAOnSingleRule(rule *reporthandling.PolicyRule, k8sObjects []map[string]interface{}, getRuleData func(*reporthandling.PolicyRule) string, postureControlInputs map[string][]string) ([]reporthandling.RuleResponse, error) {
func (opap *OPAProcessor) runOPAOnSingleRule(rule *reporthandling.PolicyRule, k8sObjects []map[string]interface{}, getRuleData func(*reporthandling.PolicyRule) string, ruleRegoDependenciesData resources.RegoDependenciesData) ([]reporthandling.RuleResponse, error) {
switch rule.RuleLanguage {
case reporthandling.RegoLanguage, reporthandling.RegoLanguage2:
return opap.runRegoOnK8s(rule, k8sObjects, getRuleData, postureControlInputs)
return opap.runRegoOnK8s(rule, k8sObjects, getRuleData, ruleRegoDependenciesData)
default:
return nil, fmt.Errorf("rule: '%s', language '%v' not supported", rule.Name, rule.RuleLanguage)
}
}
func (opap *OPAProcessor) runRegoOnK8s(rule *reporthandling.PolicyRule, k8sObjects []map[string]interface{}, getRuleData func(*reporthandling.PolicyRule) string, postureControlInputs map[string][]string) ([]reporthandling.RuleResponse, error) {
func (opap *OPAProcessor) runRegoOnK8s(rule *reporthandling.PolicyRule, k8sObjects []map[string]interface{}, getRuleData func(*reporthandling.PolicyRule) string, ruleRegoDependenciesData resources.RegoDependenciesData) ([]reporthandling.RuleResponse, error) {
// compile modules
modules, err := getRuleDependencies()
@@ -239,7 +247,7 @@ func (opap *OPAProcessor) runRegoOnK8s(rule *reporthandling.PolicyRule, k8sObjec
return nil, fmt.Errorf("in 'runRegoOnSingleRule', failed to compile rule, name: %s, reason: %s", rule.Name, err.Error())
}
store, err := resources.TOStorage(postureControlInputs)
store, err := ruleRegoDependenciesData.TOStorage()
if err != nil {
return nil, err
}
@@ -282,8 +290,12 @@ func (opap *OPAProcessor) enumerateData(rule *reporthandling.PolicyRule, k8sObje
return k8sObjects, nil
}
postureControlInputs := opap.regoDependenciesData.GetFilteredPostureControlInputs(rule.ConfigInputs)
dataControlInputs := map[string]string{"cloudProvider": opap.OPASessionObj.Report.ClusterCloudProvider}
ruleResponse, err := opap.runOPAOnSingleRule(rule, k8sObjects, ruleEnumeratorData, postureControlInputs)
RuleRegoDependenciesData := resources.RegoDependenciesData{DataControlInputs: dataControlInputs,
PostureControlInputs: postureControlInputs}
ruleResponse, err := opap.runOPAOnSingleRule(rule, k8sObjects, ruleEnumeratorData, RuleRegoDependenciesData)
if err != nil {
return nil, err
}

Some files were not shown because too many files have changed in this diff.