Compare commits


132 Commits

Author SHA1 Message Date
David Wertenteil
fc97b0ad19 Merge pull request #1179 from kubescape/change_wf
delete BUILD_AND_TEST_LOCAL_KUBESCAPE_CLI input for b-binary-build-an…
2023-04-03 10:26:56 +03:00
Matan Shkalim
173eac552c delete BUILD_AND_TEST_LOCAL_KUBESCAPE_CLI input for b-binary-build-and-e2e-tests wf
Signed-off-by: Matan Shkalim <shekel8@gmail.com>
2023-04-03 07:40:09 +01:00
Matthias Bertschy
eeda903c76 Merge pull request #1177 from kubescape/update-meeting
Add new meeting location
2023-04-03 07:27:08 +02:00
Craig Box
fd17a87788 Add new meeting location
Changed Zoom URL and added timezone calculator.

Signed-off-by: Craig Box <craigb@armosec.io>
2023-04-03 16:45:26 +12:00
David Wertenteil
1de14ce1e3 Merge pull request #1171 from kubescape/feat/add-progress-bar-during-cloud-resources-download
feat: add progress bar during cloud resources download
2023-04-02 13:53:44 +03:00
David Wertenteil
143d1bb601 Merge pull request #1161 from kubescape/change_wf
change trigger for wf
2023-04-02 13:51:54 +03:00
Alessio Greggi
feb39ed130 test: fix test with new function argument
Signed-off-by: Alessio Greggi <ale_grey_91@hotmail.it>
2023-03-28 16:39:00 +02:00
David Wertenteil
83363d68e6 Merge pull request #1170 from dwertent/fix-get-account-id
fix(config): Load account details
2023-03-28 17:19:56 +03:00
Alessio Greggi
f010364c98 feat: add progress bar during cloud resources download
Signed-off-by: Alessio Greggi <ale_grey_91@hotmail.it>
2023-03-28 16:10:55 +02:00
David Wertenteil
64b8f48469 clean code
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-28 16:54:02 +03:00
David Wertenteil
de8d365919 load account details
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-28 16:03:31 +03:00
David Wertenteil
db2259d3d0 Merge pull request #1167 from dwertent/update-host-scanner-tag
core(host-scanner): Update host scanner image tag
2023-03-26 22:59:12 +03:00
David Wertenteil
7b9ad26e8e update host scanner image tag
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-26 15:07:06 +03:00
Amir Malka
e35029934b updated createTenant path (#1166)
Signed-off-by: Amir Malka <amirm@armosec.io>
2023-03-26 13:21:30 +03:00
Matthias Bertschy
181ebc27e1 Merge pull request #1154 from fredbi/refact/refacf-host-sensor-exports
refact(hostsensorutils): refactors host sensor exports
2023-03-25 09:56:33 +01:00
Frédéric BIDON
a090a296fa refact(hostsensorutils): unexported fields that don't need to be exposed
Also:
* declared scanner resources as an enum type
* replaced stdlib json, added unit tests for skipped resources
* unexported worker pool
* more unexported methods (i.e. everything that is not part of the interface)
* refact(core): clarified mock injection logic and added a few unit tests at the caller's (CLI init utils)

Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2023-03-25 09:37:24 +01:00
Matthias Bertschy
1e1a48bd9a Merge pull request #1064 from fredbi/perf/opaprocessor-process
perf(opaprocessor): run OPA rule compilation and evaluation in parallel
2023-03-24 15:38:14 +01:00
Matthias Bertschy
5923ce5703 Merge pull request #1147 from HollowMan6/install
Change installation path to ~/.kubescape/bin
2023-03-24 12:46:13 +01:00
Hollow Man
d2dcd29089 fix shellcheck warning and info
Signed-off-by: Hollow Man <hollowman@opensuse.org>
2023-03-24 13:39:15 +02:00
Matthias Bertschy
8a40bab43a Merge pull request #1165 from fredbi/refact/test-utils
Refact(test utils): introduce internal/testutils
2023-03-24 12:38:14 +01:00
Frederic BIDON
dee3a10bac test(utils): introduced internal/testutils package to factorize testing utilities
Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
Signed-off-by: Matthias Bertschy <matthias.bertschy@gmail.com>

Conflicts:
	core/pkg/hostsensorutils/hostsensordeploy_test.go
2023-03-24 11:15:25 +01:00
Matthias Bertschy
9e3ac4b0f7 Merge pull request #1118 from fredbi/chore/refact-kscloud-client
refact(getter): refactor the KS Cloud client
2023-03-24 11:01:31 +01:00
Matthias Bertschy
58f29523a8 Merge pull request #1141 from fredbi/refact/factorize-hostsensor-api-calls
refact(hostsensorutils): refactors the host sensor
2023-03-24 10:52:52 +01:00
Frédéric BIDON
5b62b0b749 addressed review from David: reverted on unconditional loop exit
Signed-off-by: Frédéric BIDON <fredbi@yahoo.com>
2023-03-23 16:56:37 +01:00
Frédéric BIDON
e4f34f6173 refact(host-sensor): refactors the host sensor
This PR factorizes the list of calls to the host-scanner API in a loop.

More godoc-friendly doc strings are added.

Signed-off-by: Frédéric BIDON <fredbi@yahoo.com>
2023-03-23 16:56:37 +01:00
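The factorization described in that commit — replacing repeated hand-written calls to the host-scanner API with a loop over a table of endpoints — can be sketched as follows. This is an illustrative sketch only; the endpoint paths, the `hostScannerEndpoint` type, and `querySensor` are hypothetical names, not the actual kubescape code.

```go
package main

import "fmt"

// hostScannerEndpoint is a hypothetical descriptor for one host-scanner API call.
type hostScannerEndpoint struct {
	path     string
	resource string
}

// querySensor stands in for the real HTTP call to the host-scanner pod.
func querySensor(ep hostScannerEndpoint) (string, error) {
	return fmt.Sprintf("data from %s", ep.path), nil
}

// collectResources iterates a table of endpoints instead of duplicating
// one near-identical function per API call.
func collectResources() ([]string, error) {
	endpoints := []hostScannerEndpoint{
		{path: "/osrelease", resource: "OsReleaseFile"},
		{path: "/kernelversion", resource: "KernelVersion"},
	}
	var out []string
	for _, ep := range endpoints {
		data, err := querySensor(ep)
		if err != nil {
			return nil, fmt.Errorf("query %s: %w", ep.path, err)
		}
		out = append(out, data)
	}
	return out, nil
}

func main() {
	res, _ := collectResources()
	fmt.Println(len(res))
}
```

Adding a new scanner resource then becomes a one-line addition to the table rather than a new function.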
Frédéric BIDON
4a9f26b27c perf(opaprocessor): run OPA rule compilation and evaluation in parallel
This parallelizes the Process() portion of the OPA processor.

The main change is that the methods called to evaluate a rule no longer
mutate the internal state of the opaprocessor; instead they allocate
maps (less often, in larger chunks) that are merged at the end of
processing.

Signed-off-by: Frédéric BIDON <fredbi@yahoo.com>
2023-03-23 16:56:21 +01:00
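The pattern that commit describes — each evaluation producing its own result map, merged once at the end, so no shared state is mutated concurrently — can be sketched like this. The types and `evaluateRule` are illustrative stand-ins, not the opaprocessor's actual API.

```go
package main

import (
	"fmt"
	"sync"
)

type ruleResult map[string]int

// evaluateRule stands in for OPA rule compilation + evaluation.
func evaluateRule(rule string) ruleResult {
	return ruleResult{rule: len(rule)}
}

// processParallel evaluates rules concurrently. Each goroutine returns
// its own map over a channel; results are merged once after all workers
// finish, so no shared state is mutated during evaluation.
func processParallel(rules []string) ruleResult {
	results := make(chan ruleResult, len(rules))
	var wg sync.WaitGroup
	for _, r := range rules {
		wg.Add(1)
		go func(rule string) {
			defer wg.Done()
			results <- evaluateRule(rule)
		}(r)
	}
	wg.Wait()
	close(results)
	merged := make(ruleResult, len(rules))
	for m := range results {
		for k, v := range m {
			merged[k] = v
		}
	}
	return merged
}

func main() {
	out := processParallel([]string{"C-0001", "C-0002", "C-0057"})
	fmt.Println(len(out))
}
```

Merging at the end trades a few larger allocations for the per-rule locking and map growth a shared map would need.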
Frederic BIDON
548955fc16 refact(getter): refactor the KS Cloud client
* Interfaces are unchanged

* Deprecated: low-level API funcs marked for deprecation:
  HttpPost, HttpGetter, HttpDelete (an augmented version of the KS Cloud
  client will expose the post report API, which is currently the sole
  use-case of low-level API)

* Doc: the package is now godoc-friendly

* Style & code layout:
  * listed all exposed types via aliases, for clarity/less confusing
    imports
  * unexported private types
  * factorized query param logic
  * factorized type <-> JSON using generic func & io.Reader
  * "utils" are now limited to a few common utility functions
  * centralized hard-coded strings as (unexported) constants
  * concision: use higher-level http definitions such as constants,
    cookie methods, etc
  * included type-safety guards to verify that interfaces are
    actually implemented by the exported types

* Tests: existing test assertions are unchanged
  * tests are beefed-up to assert proper authentication flow (token & cookie).
  * added unit tests for utility methods

* Perf:
  * unmarshalling API responses no longer incurs extraneous memory allocation via an intermediate string representation
  * request headers are now passed without extraneous map allocation
  * JSON operations are now fully supported by jsoniter (no longer use encoding/json)

* Changes in functionality:
  * the client is now fully extensible with KSCloudOption
  * use the option functor idiom to keep constructors short
  * methods that used to swallow errors (i.e. return nil, nil) now bubble them up
  * the cookie is now captured in full, not just its value
  (other returned cookie parameters are stored as well)
  * added a request/response dump option, for debugging
  * added support for SubmitReport and retrieval of UI URLs
  * backported utm changes (reports use case)

Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2023-03-23 16:47:23 +01:00
David Wertenteil
ac2bc6c950 Merge to master - PR number: 1164 2023-03-23 12:49:52 +02:00
MathoAvito
ea27c619d4 Revert "added validation for if ORIGIN_TAG=null" 2023-03-23 12:47:42 +02:00
matanshk
e4150b2bb4 Merge pull request #1163 from kubescape/change-wf
added validation for if ORIGIN_TAG=null
2023-03-23 11:17:06 +02:00
Matan Avital
86c7215a72 added validation for if ORIGIN_TAG=null
Signed-off-by: Matan Avital <matavital13@gmail.com>
2023-03-23 11:16:03 +02:00
Hollow Man
5c24267ee9 check KUBESCAPE_EXEC is not empty before deletion
Signed-off-by: Hollow Man <hollowman@opensuse.org>
2023-03-21 13:15:21 +02:00
Matan Shkalim
bb576610ff change concurrency in 00-pr-scanner
Signed-off-by: Matan Shkalim <shekel8@gmail.com>
2023-03-21 08:05:40 +00:00
Matan Shkalim
085be86197 remove merge action
Signed-off-by: Matan Shkalim <shekel8@gmail.com>
2023-03-21 08:01:59 +00:00
David Wertenteil
b4180b34e7 core(logs): Enhance logs (#1158)
* adding ks version

Signed-off-by: David Wertenteil <dwertent@armosec.io>

* Initialize scanInfo

Signed-off-by: David Wertenteil <dwertent@armosec.io>

* print if logger level is lower than warning

Signed-off-by: David Wertenteil <dwertent@armosec.io>

* wip: scan default frameworks when scanning files

Signed-off-by: David Wertenteil <dwertent@armosec.io>

* change print to log

Signed-off-by: David Wertenteil <dwertent@armosec.io>

* wip: Add end-line after last log

Signed-off-by: David Wertenteil <dwertent@armosec.io>

* wip: silent spinner when logger is warn

Signed-off-by: David Wertenteil <dwertent@armosec.io>

---------

Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-20 17:49:51 +02:00
Matan Shkalim
6a750671c3 change trigger for wf
Signed-off-by: Matan Shkalim <shekel8@gmail.com>
2023-03-20 15:40:52 +00:00
David Wertenteil
bb5fedc661 Merge to master - PR number: 1160 2023-03-20 17:36:17 +02:00
Matan Avital
678ef2b787 changed ks_branch to release
Signed-off-by: Matan Avital <matavital13@gmail.com>
2023-03-20 17:34:48 +02:00
David Wertenteil
8c238232a1 Merge to master - PR number: 1159 2023-03-20 17:25:56 +02:00
Matan Avital
2ea9e1a596 moved the output TEST_NAMES to wf-preparation job (was check-secret job) and added step export_tests..
Signed-off-by: Matan Avital <matavital13@gmail.com>
2023-03-20 17:22:52 +02:00
matanshk
e788d68f2c Merge pull request #1157 from kubescape/change-wf
Change wf
2023-03-20 14:12:28 +02:00
Matan Avital
62e3d3263d fixed syntax error
Signed-off-by: Matan Avital <matavital13@gmail.com>
2023-03-20 14:11:09 +02:00
Matan Avital
650d489c26 fixed syntax error
Signed-off-by: Matan Avital <matavital13@gmail.com>
2023-03-20 14:09:04 +02:00
matanshk
ea4914057e Merge pull request #1156 from kubescape/change-wf
added input to make the binary build and test dynamic
2023-03-20 13:49:50 +02:00
Matan Avital
100822f48d added input to make the binary build and test dynamic
Signed-off-by: Matan Avital <matavital13@gmail.com>
2023-03-20 13:45:31 +02:00
matanshk
a5f254bebd Merge pull request #1155 from kubescape/change-wf
added CHECKOUT_REPO input parameter
2023-03-19 18:22:12 +02:00
Matan Avital
e3d5a8c3c6 added CHECKOUT_REPO input parameter
Signed-off-by: Matan Avital <matavital13@gmail.com>
2023-03-19 18:19:48 +02:00
Matthias Bertschy
63ff0f5dc9 Merge pull request #1151 from docwhat/patch-1
fix references to kubectl in completion help
2023-03-18 21:55:38 +01:00
David Wertenteil
5173016a1e Merge pull request #1152 from dwertent/update-otel-events
fix(otel): Update otel events
2023-03-16 14:09:58 +02:00
David Wertenteil
4a95e29d5d Merge to master - PR number: 1150 2023-03-16 10:28:44 +02:00
David Wertenteil
d0b5c7c2c2 update host scanner image tag
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-16 09:45:12 +02:00
David Wertenteil
6671ac46f4 change failed to submit message
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-16 09:42:39 +02:00
David Wertenteil
28531859f3 Signed-off-by: David Wertenteil <dwertent@armosec.io>
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-15 21:26:10 +02:00
Christian Höltje
4ee209c1ea fix references to kubectl in completion help
Signed-off-by: Christian Höltje <docwhat@gerf.org>
2023-03-15 14:30:38 -04:00
David Wertenteil
4edeec146a Set scanning event
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-15 18:34:02 +02:00
David Wertenteil
ec4a098b1c replace error by warning
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-15 17:17:29 +02:00
David Wertenteil
a29fe367dc Added context to HandleResults
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-15 16:58:02 +02:00
Avraham Shalev
aceb4eb0de add dependencies to httphandler
Signed-off-by: Avraham Shalev <8184528+avrahams@users.noreply.github.com>
2023-03-15 14:49:47 +02:00
David Wertenteil
e7afe45706 Merge to master - PR number: 1149 2023-03-15 14:26:56 +02:00
Avraham Shalev
55ce7086d7 upgrade opa-utils and armo api
Signed-off-by: Avraham Shalev <8184528+avrahams@users.noreply.github.com>
2023-03-15 13:53:30 +02:00
Hollow Man
bb04e98d69 Add prompt for removing old way of installation
Signed-off-by: Hollow Man <hollowman@opensuse.org>
2023-03-15 00:04:21 +02:00
Hollow Man
0ae4ef2244 Clean uninstall of old installation
Signed-off-by: Hollow Man <hollowman@opensuse.org>
2023-03-14 22:23:29 +02:00
Hollow Man
f9e38fd6a2 Change installation path to ~/.kubescape/bin
Signed-off-by: Hollow Man <hollowman@opensuse.org>
2023-03-14 21:16:25 +02:00
Amir Malka
106db84a66 bump go-logger (#1144)
Signed-off-by: Amir Malka <amirm@armosec.io>
2023-03-14 10:00:08 +02:00
David Wertenteil
1930004e60 Merge to master - PR number: 1146 2023-03-14 08:17:49 +02:00
Craig Box
015476bf97 Update CONTRIBUTING.md
Fix the link to correcting the DCO.

Signed-off-by: Craig Box <craigb@armosec.io>
2023-03-14 16:33:26 +13:00
David Wertenteil
1e0b9563a1 Merge to master - PR number: 1129 2023-03-13 13:43:07 +02:00
Alessio Greggi
5aa56b1c0a feat: integrate support to retrieve eks policies
Signed-off-by: Alessio Greggi <ale_grey_91@hotmail.it>
2023-03-13 11:35:07 +01:00
David Wertenteil
fd92411593 Merge pull request #1140 from HollowMan6/master
ci(release): fix publishing krew plugin; add '.exe' extension to Windows binary
2023-03-13 10:42:54 +02:00
David Wertenteil
cb97a424fd Merge pull request #1139 from matthyx/fixcontext
initialize context in Prometheus handler
2023-03-12 16:37:50 +02:00
Hollow Man
2542692f25 Revert add '.exe' to Win release binary
Signed-off-by: Hollow Man <hollowman@opensuse.org>
2023-03-12 11:58:43 +02:00
Hollow Man
640483e991 ci(release): fix publishing krew plugin; add .exe suffix to Win binary
Signed-off-by: Hollow Man <hollowman@opensuse.org>
2023-03-12 00:39:34 +02:00
Matthias Bertschy
1004902f51 initialize context in Prometheus handler
Signed-off-by: Matthias Bertschy <matthias.bertschy@gmail.com>
2023-03-09 14:05:26 +01:00
Matthias Bertschy
3b9ce494f5 Merge pull request #1131 from fredbi/test/more-tests-report-receiver
test(reports): adds unit test to the report receiver
2023-03-08 16:56:51 +01:00
Matthias Bertschy
5a37045d9b Merge pull request #1138 from fredbi/test/unit-tests-hostsensorutils
test(hostsensorutils): added unit tests to the hostsensorutils package
2023-03-08 11:12:26 +01:00
Frederic BIDON
91af277a1c fixup unit test: error handling
Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2023-03-08 08:53:28 +01:00
Frederic BIDON
556962a7e1 test(hostsensorutils): added unit tests to the hostsensorutils package
This PR introduces a (limited) mock for the kubernetes client API.

Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2023-03-07 20:35:29 +01:00
Frederic BIDON
306da021db test(reports): adds unit test to the report receiver
Signed-off-by: Frederic BIDON <fredbi@yahoo.com>

replace mock

Signed-off-by: Daniel-GrunbergerCA@armosec.com
2023-03-07 19:59:31 +01:00
David Wertenteil
03b0147e39 Merge pull request #1130 from dwertent/update-utm-link-v2
docs(links): Update URLs
2023-03-06 14:08:25 +02:00
Matthias Bertschy
ff9652bd77 Merge pull request #1136 from fredbi/chore/linting-again
chore(linting): run another pass of linting with the rules already in place
2023-03-05 21:17:45 +01:00
Frederic BIDON
7174f49f87 chore(linting): run another pass of linting with the rules already in place
Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2023-03-05 20:16:37 +01:00
David Wertenteil
7dfbbe7e39 Merge pull request #1133 from amirmalka/remove-otel-middleware-from-some-endpoints
Removed otel middleware from some APIs
2023-03-05 14:26:40 +02:00
Amir Malka
b3079df8ae removed otel middleware from some APIs
Signed-off-by: Amir Malka <amirm@armosec.io>
2023-03-05 11:49:00 +02:00
David Wertenteil
0698c99241 wip: update UTMs & display UTM only on first scan
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-04 23:05:38 +02:00
David Wertenteil
2cda4864e7 wip: do not add message when account ID is empty
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-04 23:05:02 +02:00
David Wertenteil
c2b0e5c0a2 Do not display URL when message is empty
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-04 23:04:26 +02:00
David Wertenteil
6c54aff451 wip: removed unused code
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-04 22:46:20 +02:00
David Wertenteil
dea5649e01 wip: update link in docs
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-03-04 22:34:08 +02:00
Matthias Bertschy
9e6c9e0f65 Merge pull request #1127 from irLinja/master
refactor: update node scanner daemonset tolerations
2023-03-03 11:49:25 +01:00
Arash Haghighat
3dfd758a82 refactor: update node scanner daemonset tolerations
Signed-off-by: Arash Haghighat <arash@linja.pro>
2023-03-01 16:36:08 +01:00
David Wertenteil
0526f58657 Merge to master - PR number: 1121 2023-02-28 07:40:20 +02:00
Alessio Greggi
e419af6c03 ci: pin workflows versions to fixed commits
Signed-off-by: Alessio Greggi <ale_grey_91@hotmail.it>
2023-02-27 21:43:09 +01:00
Matthias Bertschy
03766ec0cd Merge pull request #1120 from alegrey91/fix/remove-hostnetwork-and-hostport-from-host-scanner-deployment
fix(hostsensorsutils): remove hostNetwork and hostPort from deployment
2023-02-27 19:12:05 +01:00
Alessio Greggi
39e2e34fc0 fix(hostsensorsutils): remove hostNet and hostPort from deployment
Signed-off-by: Alessio Greggi <ale_grey_91@hotmail.it>
2023-02-27 18:20:55 +01:00
David Wertenteil
245331b82a Merge pull request #1119 from amirmalka/added-cluster-name-to-otel-init
added clusterName to otel initialization
2023-02-26 19:27:14 +02:00
Amir Malka
cec4e5ca39 added clusterName to otel initialization
Signed-off-by: Amir Malka <amirm@armosec.io>
2023-02-26 18:07:38 +02:00
David Wertenteil
b772588e96 Merge pull request #1117 from dwertent/v2.2.1-patches
V2.2.1 patches
2023-02-26 16:25:13 +02:00
David Wertenteil
5d6ac80c38 Move GITHUB_REF to the krew step
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-26 14:47:25 +02:00
David Wertenteil
33df0e5462 add unit tests for new behavior
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-24 09:49:32 +02:00
David Wertenteil
26ab049622 Do not print table when logger level is warn
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-24 09:19:43 +02:00
David Wertenteil
ac2aa764a4 marking structs that are implementing IPrinter
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-24 09:18:54 +02:00
David Wertenteil
d02bef62d3 wip: re-arrange struct
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-24 09:17:31 +02:00
David Wertenteil
46682dfe16 Override GITHUB_REF env when releasing krew
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-24 00:34:18 +02:00
David Wertenteil
01c65194a8 removing host scanner otel env
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-24 00:13:22 +02:00
David Wertenteil
25e42ee4b6 Update rbac-utils pkg
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-23 23:33:35 +02:00
David Wertenteil
7e5abbdd73 Merge pull request #1054 from fredbi/fix/1051-gc-pressure
fix(processorhandler): reduce GC pressure
2023-02-23 23:15:22 +02:00
David Wertenteil
56183ba369 Merge to master - PR number: 1115 2023-02-23 17:32:48 +02:00
David Wertenteil
a9c1ecd3b8 Merge pull request #1104 from alegrey91/fix/improve-namespace-removing-in-host-sensor-lifecycle
fix(hostsensorutils): improve namespace deletion in host-scanner lifecycle
2023-02-23 16:54:09 +02:00
Alessio Greggi
d900ce6146 fix(hostsensorutils): improve namespace deletion in host-scanner lifecycle
Signed-off-by: Alessio Greggi <ale_grey_91@hotmail.it>
2023-02-23 14:41:57 +01:00
David Wertenteil
3a80ff00b6 update opa pkg to 238
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-23 14:25:21 +02:00
David Wertenteil
b989c4c21f update opa pkg
Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-23 09:48:52 +02:00
Frédéric BIDON
65c26e22cf fix(processorhandler): reduce GC pressure
* this onboards an optimization from the opa-utils package (caching
exceptions processing)

Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2023-02-22 20:53:02 +01:00
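Caching exceptions processing, as that commit describes, is a memoization: the expensive, allocation-heavy matching runs once per distinct input and repeats hit the cache, which is what relieves GC pressure. A minimal sketch, with hypothetical names (`processException`, `exceptionCache`) standing in for the opa-utils logic:

```go
package main

import (
	"fmt"
	"sync"
)

// processException stands in for the expensive, allocation-heavy
// exception-matching logic in opa-utils.
func processException(policy string) []string {
	return []string{policy + ":exempted"}
}

// exceptionCache memoizes results so each distinct policy is
// processed once; repeats reuse the cached slice instead of
// reallocating, reducing GC pressure.
type exceptionCache struct {
	mu    sync.Mutex
	seen  map[string][]string
	calls int // counts underlying invocations, for illustration
}

func (c *exceptionCache) get(policy string) []string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if r, ok := c.seen[policy]; ok {
		return r
	}
	c.calls++
	r := processException(policy)
	c.seen[policy] = r
	return r
}

func main() {
	cache := &exceptionCache{seen: make(map[string][]string)}
	for _, p := range []string{"C-0001", "C-0001", "C-0002", "C-0001"} {
		cache.get(p)
	}
	fmt.Println(cache.calls)
}
```

In a scan that applies the same exceptions to many resources, the hit rate is high, so the saving compounds.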
David Wertenteil
915fa919b2 Fix HTML output (#1111)
* Fixed HTML template

Signed-off-by: David Wertenteil <dwertent@armosec.io>

* Adding HTML output format example

Signed-off-by: David Wertenteil <dwertent@armosec.io>

---------

Signed-off-by: David Wertenteil <dwertent@armosec.io>
2023-02-21 13:55:12 +02:00
Matthias Bertschy
8102dd93ba bump go-git-url (#1110)
Signed-off-by: Matthias Bertschy <matthias.bertschy@gmail.com>
2023-02-21 11:42:59 +02:00
David Wertenteil
35cafa9eb4 Merge pull request #1113 from amirmalka/fix-macos-build
Fix macos build - add missing pkg-config
2023-02-21 10:19:06 +02:00
Amir Malka
cc823d7559 fix macos build - add missing pkg-config
Signed-off-by: Amir Malka <amirm@armosec.io>
2023-02-21 10:13:08 +02:00
David Wertenteil
eaa74487c2 Merge pull request #1103 from matthyx/enable-krew
enable krew plugin publishing action
2023-02-20 17:55:44 +02:00
David Wertenteil
e8a4c2033f Merge pull request #1084 from fredbi/test/download-release-policy
test(getter): more unit tests
2023-02-20 17:55:08 +02:00
Rotem Refael
8fd9258efa Merge pull request #1101 from alegrey91/fix/improve-cloud-provider-detection 2023-02-16 15:25:38 +02:00
Alessio Greggi
159d3907b5 style(hostsensorutils): simplify code with gofmt
Signed-off-by: Alessio Greggi <ale_grey_91@hotmail.it>
2023-02-16 11:38:55 +01:00
Matthias Bertschy
cde916bec8 Merge pull request #1095 from HollowMan6/master
fix(build): LICENSE file in release tarballs
2023-02-15 15:48:25 +01:00
Matthias Bertschy
8d289bd924 Merge pull request #1105 from HollowMan6/readme
fix(README): broken links
2023-02-15 13:33:59 +01:00
Hollow Man
fda1c83d01 fix(build): LICENSE file
Signed-off-by: Hollow Man <hollowman@opensuse.org>
2023-02-15 14:21:42 +02:00
Hollow Man
31b6a3c571 fix(README): broken links
Signed-off-by: Hollow Man <hollowman@opensuse.org>
2023-02-15 14:15:38 +02:00
Matthias Bertschy
31a693e9b6 enable krew plugin publishing action
Signed-off-by: Matthias Bertschy <matthias.bertschy@gmail.com>
2023-02-15 08:02:24 +01:00
Matthias Bertschy
5de228ce0f Merge pull request #1102 from johnmanjiro13/remove-ds-store
chore: Remove an unwanted file
2023-02-15 07:14:02 +01:00
johnmanjiro13
ed27641f04 chore: Remove an unwanted file
Signed-off-by: johnmanjiro13 <28798279+johnmanjiro13@users.noreply.github.com>
2023-02-15 00:07:12 +09:00
Alessio Greggi
c7d1292c7d fix(hostsensorutils): improve cloud provider detection
Signed-off-by: Alessio Greggi <ale_grey_91@hotmail.it>
2023-02-14 13:46:09 +01:00
Frederic BIDON
d8f1a25ab7 generated rego policy json fixture file, short-circuited call to github when fixture is here
Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2023-02-10 17:29:46 +01:00
Frederic BIDON
56cfb4fcef test(getters): added unit tests for utilities
Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2023-02-10 17:29:46 +01:00
Frederic BIDON
894d436274 test(getters): added unit tests to the kubescape API client
Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2023-02-10 17:29:45 +01:00
Frederic BIDON
39166d40bf tests(cautils): added unit tests for released policy
Signed-off-by: Frederic BIDON <fredbi@yahoo.com>
2023-02-10 17:29:45 +01:00
111 changed files with 105703 additions and 1521 deletions

.DS_Store (vendored): binary file not shown.


@@ -1,9 +1,8 @@
name: 00-pr_scanner
on:
pull_request:
types: [opened, reopened, synchronize, ready_for_review]
branches:
branches:
- 'master'
- 'main'
- 'dev'
@@ -16,15 +15,14 @@ on:
- 'docs/*'
- 'build/*'
- '.github/*'
concurrency:
group: ${{ github.head_ref }}
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
pr-scanner:
permissions:
permissions:
pull-requests: write
uses: ./.github/workflows/a-pr-scanner.yaml
with:


@@ -1,57 +0,0 @@
name: 01-code_review_approved
on:
pull_request_review:
types: [submitted]
branches:
- 'master'
- 'main'
paths-ignore:
- '**.yaml'
- '**.md'
- '**.sh'
- 'website/*'
- 'examples/*'
- 'docs/*'
- 'build/*'
- '.github/*'
concurrency:
group: code-review-approved
cancel-in-progress: true
jobs:
binary-build:
if: ${{ github.event.review.state == 'approved' &&
contains( github.event.pull_request.labels.*.name, 'trigger-integration-test') &&
github.event.pull_request.base.ref == 'master' }} ## run only if labeled as "trigger-integration-test" and base branch is master
uses: ./.github/workflows/b-binary-build-and-e2e-tests.yaml
with:
COMPONENT_NAME: kubescape
CGO_ENABLED: 1
GO111MODULE: ""
GO_VERSION: "1.19"
RELEASE: ""
CLIENT: test
secrets: inherit
merge-to-master:
needs: binary-build
env:
GH_PERSONAL_ACCESS_TOKEN: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
if: ${{ (github.event.review.state == 'approved' && github.event.pull_request.base.ref == 'master') &&
(always() && (contains(needs.*.result, 'success') || contains(needs.*.result, 'skipped')) && !(contains(needs.*.result, 'failure')) && !(contains(needs.*.result, 'cancelled'))) }}
runs-on: ubuntu-latest
steps:
- name: merge-to-master
if: ${{ env.GH_PERSONAL_ACCESS_TOKEN }}
uses: pascalgn/automerge-action@v0.15.5
env:
GITHUB_TOKEN: "${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}"
MERGE_COMMIT_MESSAGE: "Merge to master - PR number: {pullRequest.number}"
MERGE_ERROR_FAIL: "true"
MERGE_METHOD: "merge"
MERGE_LABELS: ""
UPDATE_LABELS: ""

.github/workflows/01-pr-merged.yaml vendored Normal file

@@ -0,0 +1,34 @@
name: 01-pr-merged
on:
pull_request_target:
types: [closed]
branches:
- 'master'
- 'main'
paths-ignore:
- '**.yaml'
- '**.md'
- '**.sh'
- 'website/*'
- 'examples/*'
- 'docs/*'
- 'build/*'
- '.github/*'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
binary-build:
if: ${{ github.event.pull_request.merged == true && contains( github.event.pull_request.labels.*.name, 'trigger-integration-test') && github.event.pull_request.base.ref == 'master' }} ## run only if labeled as "trigger-integration-test" and base branch is master
uses: ./.github/workflows/b-binary-build-and-e2e-tests.yaml
with:
COMPONENT_NAME: kubescape
CGO_ENABLED: 1
GO111MODULE: ""
GO_VERSION: "1.19"
RELEASE: ""
CLIENT: test
secrets: inherit


@@ -1,23 +1,19 @@
name: 02-create_release
on:
push:
tags:
- 'v*.*.*-rc.*'
- 'v*.*.*-rc.*'
jobs:
retag:
outputs:
NEW_TAG: ${{ steps.tag-calculator.outputs.NEW_TAG }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # ratchet:actions/checkout@v3
- id: tag-calculator
uses: ./.github/actions/tag-action
with:
SUB_STRING: "-rc"
binary-build:
needs: [retag]
uses: ./.github/workflows/b-binary-build-and-e2e-tests.yaml
@@ -29,37 +25,23 @@ jobs:
RELEASE: ${{ needs.retag.outputs.NEW_TAG }}
CLIENT: release
secrets: inherit
create-release:
permissions:
contents: write
contents: write
needs: [retag, binary-build]
uses: ./.github/workflows/c-create-release.yaml
with:
RELEASE_NAME: "Release ${{ needs.retag.outputs.NEW_TAG }}"
TAG: ${{ needs.retag.outputs.NEW_TAG }}
DRAFT: false
secrets: inherit
# publish-krew-plugin:
# name: Publish Krew plugin
# runs-on: ubuntu-latest
# if: "${{ github.repository_owner }} == kubescape"
# needs: create-release
# steps:
# - uses: actions/checkout@v3
# with:
# submodules: recursive
# - name: Update new version in krew-index
# uses: rajatjindal/krew-release-bot@v0.0.43
secrets: inherit
publish-image:
permissions:
id-token: write
packages: write
contents: read
contents: read
uses: ./.github/workflows/d-publish-image.yaml
needs: [ create-release, retag ]
needs: [create-release, retag]
with:
client: "image-release"
image_name: "quay.io/${{ github.repository_owner }}/kubescape"


@@ -1,19 +1,17 @@
name: 03-create_release_digests
on:
release:
types: [ published ]
types: [published]
branches:
- 'master'
- 'main'
- 'master'
- 'main'
jobs:
create_release_digests:
name: Creating digests
runs-on: ubuntu-latest
steps:
- name: Digest
uses: MCJack123/ghaction-generate-release-hashes@v1
uses: MCJack123/ghaction-generate-release-hashes@c03f3111b39432dde3edebe401c5a8d1ffbbf917 # ratchet:MCJack123/ghaction-generate-release-hashes@v1
with:
hash-type: sha1
file-name: kubescape-release-digests


@@ -0,0 +1,16 @@
name: 04-publish_krew_plugin
on:
push:
tags:
- 'v[0-9]+.[0-9]+.[0-9]+'
jobs:
publish_krew_plugin:
name: Publish Krew plugin
runs-on: ubuntu-latest
if: github.repository_owner == 'kubescape'
steps:
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # ratchet:actions/checkout@v3
with:
submodules: recursive
- name: Update new version in krew-index
uses: rajatjindal/krew-release-bot@92da038bbf995803124a8e50ebd438b2f37bbbb0 # ratchet:rajatjindal/krew-release-bot@v0.0.43


@@ -1,5 +1,4 @@
name: a-pr-scanner
on:
workflow_call:
inputs:
@@ -11,27 +10,23 @@ on:
description: 'Client name'
required: true
type: string
jobs:
scanners:
env:
GITGUARDIAN_API_KEY: ${{ secrets.GITGUARDIAN_API_KEY }}
SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
GITGUARDIAN_API_KEY: ${{ secrets.GITGUARDIAN_API_KEY }}
SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
name: PR Scanner
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # ratchet:actions/checkout@v3
with:
fetch-depth: 0
submodules: recursive
- uses: actions/setup-go@v3 # Install go because go-licenses use it
- uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # Install go because go-licenses use it ratchet:actions/setup-go@v3
name: Installing go
with:
go-version: '1.19'
cache: true
- name: Scanning - Forbidden Licenses (go-licenses)
id: licenses-scan
continue-on-error: true
@@ -40,43 +35,39 @@ jobs:
go install github.com/google/go-licenses@latest
echo "## Scanning for forbiden licenses ##"
go-licenses check .
- name: Scanning - Credentials (GitGuardian)
if: ${{ env.GITGUARDIAN_API_KEY }}
continue-on-error: true
continue-on-error: true
id: credentials-scan
uses: GitGuardian/ggshield-action@master
uses: GitGuardian/ggshield-action@4ab2994172fadab959240525e6b833d9ae3aca61 # ratchet:GitGuardian/ggshield-action@master
with:
args: -v --all-policies
args: -v --all-policies
env:
GITHUB_PUSH_BEFORE_SHA: ${{ github.event.before }}
GITHUB_PUSH_BASE_SHA: ${{ github.event.base }}
GITHUB_PULL_BASE_SHA: ${{ github.event.pull_request.base.sha }}
GITHUB_DEFAULT_BRANCH: ${{ github.event.repository.default_branch }}
GITGUARDIAN_API_KEY: ${{ secrets.GITGUARDIAN_API_KEY }}
- name: Scanning - Vulnerabilities (Snyk)
if: ${{ env.SNYK_TOKEN }}
id: vulnerabilities-scan
continue-on-error: true
uses: snyk/actions/golang@master
uses: snyk/actions/golang@806182742461562b67788a64410098c9d9b96adb # ratchet:snyk/actions/golang@master
with:
command: test --all-projects
env:
SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
- name: Comment results to PR
continue-on-error: true # Warning: This might break opening PRs from forks
uses: peter-evans/create-or-update-comment@v2.1.0
uses: peter-evans/create-or-update-comment@5adcb0bb0f9fb3f95ef05400558bdb3f329ee808 # ratchet:peter-evans/create-or-update-comment@v2.1.0
with:
issue-number: ${{ github.event.pull_request.number }}
issue-number: ${{ github.event.pull_request.number }}
body: |
Scan results:
- License scan: ${{ steps.licenses-scan.outcome }}
- Credentials scan: ${{ steps.credentials-scan.outcome }}
- Vulnerabilities scan: ${{ steps.vulnerabilities-scan.outcome }}
reactions: 'eyes'
basic-tests:
needs: scanners
name: Create cross-platform build
@@ -89,13 +80,12 @@ jobs:
matrix:
os: [ubuntu-20.04, macos-latest, windows-latest]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # ratchet:actions/checkout@v3
with:
submodules: recursive
- name: Cache Go modules (Linux)
if: matrix.os == 'ubuntu-latest'
uses: actions/cache@v3
uses: actions/cache@69d9d449aced6a2ede0bc19182fadc3a0a42d2b0 # ratchet:actions/cache@v3
with:
path: |
~/.cache/go-build
@@ -103,10 +93,9 @@ jobs:
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Cache Go modules (macOS)
if: matrix.os == 'macos-latest'
uses: actions/cache@69d9d449aced6a2ede0bc19182fadc3a0a42d2b0 # ratchet:actions/cache@v3
with:
path: |
~/Library/Caches/go-build
@@ -114,10 +103,9 @@ jobs:
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Cache Go modules (Windows)
if: matrix.os == 'windows-latest'
uses: actions/cache@69d9d449aced6a2ede0bc19182fadc3a0a42d2b0 # ratchet:actions/cache@v3
with:
path: |
~\AppData\Local\go-build
@@ -125,53 +113,47 @@ jobs:
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Set up Go
uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # ratchet:actions/setup-go@v3
with:
go-version: 1.19
- name: Install MSYS2 & libgit2 (Windows)
shell: cmd
run: .\build.bat all
if: matrix.os == 'windows-latest'
- name: Install pkg-config (macOS)
run: brew install pkg-config
if: matrix.os == 'macos-latest'
- name: Install libgit2 (Linux/macOS)
run: make libgit2
if: matrix.os != 'windows-latest'
- name: Test core pkg
run: go test "-tags=static,gitenabled" -v ./...
- name: Test httphandler pkg
run: cd httphandler && go test "-tags=static,gitenabled" -v ./...
- name: Build
env:
RELEASE: ${{ inputs.RELEASE }}
CLIENT: ${{ inputs.CLIENT }}
CGO_ENABLED: 1
run: python3 --version && python3 build.py
- name: Smoke Testing (Windows / MacOS)
env:
RELEASE: ${{ inputs.RELEASE }}
KUBESCAPE_SKIP_UPDATE_CHECK: "true"
run: python3 smoke_testing/init.py ${PWD}/build/kubescape-${{ matrix.os }}
if: matrix.os != 'ubuntu-20.04'
- name: Smoke Testing (Linux)
env:
RELEASE: ${{ inputs.RELEASE }}
KUBESCAPE_SKIP_UPDATE_CHECK: "true"
run: python3 smoke_testing/init.py ${PWD}/build/kubescape-ubuntu-latest
if: matrix.os == 'ubuntu-20.04'
- name: golangci-lint
if: matrix.os == 'ubuntu-20.04'
continue-on-error: true
uses: golangci/golangci-lint-action@08e2f20817b15149a52b5b3ebe7de50aff2ba8c5 # ratchet:golangci/golangci-lint-action@v3
with:
version: latest
args: --timeout 10m --build-tags=static
only-new-issues: true


@@ -22,57 +22,43 @@ on:
default: 1
BINARY_TESTS:
type: string
default: '[ "scan_nsa", "scan_mitre", "scan_with_exceptions", "scan_repository", "scan_local_file", "scan_local_glob_files", "scan_local_list_of_files", "scan_nsa_and_submit_to_backend", "scan_mitre_and_submit_to_backend", "scan_local_repository_and_submit_to_backend", "scan_repository_from_url_and_submit_to_backend", "scan_with_exception_to_backend", "scan_with_custom_framework", "scan_customer_configuration", "host_scanner" ]'
CHECKOUT_REPO:
required: false
type: string
jobs:
wf-preparation:
name: secret-validator
runs-on: ubuntu-latest
outputs:
TEST_NAMES: ${{ steps.export_tests_to_env.outputs.TEST_NAMES }}
is-secret-set: ${{ steps.check-secret-set.outputs.is-secret-set }}
steps:
- name: check if the necessary secrets are set in github secrets
id: check-secret-set
env:
CUSTOMER: ${{ secrets.CUSTOMER }}
USERNAME: ${{ secrets.USERNAME }}
PASSWORD: ${{ secrets.PASSWORD }}
CLIENT_ID: ${{ secrets.CLIENT_ID_PROD }}
SECRET_KEY: ${{ secrets.SECRET_KEY_PROD }}
REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
run: |
echo "is-secret-set=${{ env.CUSTOMER != '' &&
env.USERNAME != '' &&
env.PASSWORD != '' &&
env.CLIENT_ID != '' &&
env.SECRET_KEY != '' &&
env.REGISTRY_USERNAME != '' &&
env.REGISTRY_PASSWORD != ''
}}" >> $GITHUB_OUTPUT
- id: export_tests_to_env
name: set test name
run: |
echo "TEST_NAMES=$input" >> $GITHUB_OUTPUT
env:
input: ${{ inputs.BINARY_TESTS }}
binary-build:
name: Create cross-platform build
outputs:
TEST_NAMES: ${{ steps.export_tests_to_env.outputs.TEST_NAMES }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
runs-on: ${{ matrix.os }}
@@ -81,14 +67,15 @@ jobs:
os: [ubuntu-20.04, macos-latest, windows-latest]
steps:
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # ratchet:actions/checkout@v3
with:
repository: ${{inputs.CHECKOUT_REPO}}
fetch-depth: 0
submodules: recursive
- name: Cache Go modules (Linux)
if: matrix.os == 'ubuntu-20.04'
uses: actions/cache@69d9d449aced6a2ede0bc19182fadc3a0a42d2b0 # ratchet:actions/cache@v3
with:
path: |
~/.cache/go-build
@@ -98,8 +85,8 @@ jobs:
${{ runner.os }}-go-
- name: Cache Go modules (macOS)
if: matrix.os == 'macos-latest'
uses: actions/cache@69d9d449aced6a2ede0bc19182fadc3a0a42d2b0 # ratchet:actions/cache@v3
with:
path: |
~/Library/Caches/go-build
@@ -110,7 +97,7 @@ jobs:
- name: Cache Go modules (Windows)
if: matrix.os == 'windows-latest'
uses: actions/cache@69d9d449aced6a2ede0bc19182fadc3a0a42d2b0 # ratchet:actions/cache@v3
with:
path: |
~\AppData\Local\go-build
@@ -119,7 +106,7 @@ jobs:
restore-keys: |
${{ runner.os }}-go-
- uses: actions/setup-go@6edd4406fa81c3da01a34fa6f6343087c207a568 # ratchet:actions/setup-go@v3
name: Installing go
with:
go-version: ${{ inputs.GO_VERSION }}
@@ -130,10 +117,14 @@ jobs:
run: .\build.bat all
if: matrix.os == 'windows-latest'
- name: Install pkg-config (macOS)
run: brew install pkg-config
if: matrix.os == 'macos-latest'
- name: Install libgit2 (Linux/macOS)
run: make libgit2
if: matrix.os != 'windows-latest'
- name: Test core pkg
run: go test "-tags=static,gitenabled" -v ./...
@@ -149,35 +140,28 @@ jobs:
- name: Smoke Testing (Windows / MacOS)
env:
RELEASE: ${{ inputs.RELEASE }}
KUBESCAPE_SKIP_UPDATE_CHECK: "true"
run: python3 smoke_testing/init.py ${PWD}/build/kubescape-${{ matrix.os }}
if: matrix.os != 'ubuntu-20.04'
- name: Smoke Testing (Linux)
env:
RELEASE: ${{ inputs.RELEASE }}
KUBESCAPE_SKIP_UPDATE_CHECK: "true"
run: python3 smoke_testing/init.py ${PWD}/build/kubescape-ubuntu-latest
if: matrix.os == 'ubuntu-20.04'
- name: golangci-lint
if: matrix.os == 'ubuntu-20.04'
continue-on-error: true
uses: golangci/golangci-lint-action@08e2f20817b15149a52b5b3ebe7de50aff2ba8c5 # ratchet:golangci/golangci-lint-action@v3
with:
version: latest
args: --timeout 10m --build-tags=static
only-new-issues: true
- id: export_tests_to_env
name: set test name
run: |
echo "TEST_NAMES=$input" >> $GITHUB_OUTPUT
env:
input: ${{ inputs.BINARY_TESTS }}
- uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # ratchet:actions/upload-artifact@v3.1.1
name: Upload artifact (Linux)
if: matrix.os == 'ubuntu-20.04'
with:
@@ -185,7 +169,7 @@ jobs:
path: build/
if-no-files-found: error
- uses: actions/upload-artifact@83fd05a356d7e2593de66fc9913b3002723633cb # ratchet:actions/upload-artifact@v3.1.1
name: Upload artifact (MacOS, Win)
if: matrix.os != 'ubuntu-20.04'
with:
@@ -195,15 +179,14 @@ jobs:
run-tests:
strategy:
fail-fast: false
matrix:
TEST: ${{ fromJson(needs.wf-preparation.outputs.TEST_NAMES) }}
needs: [wf-preparation, binary-build]
if: ${{ (needs.wf-preparation.outputs.is-secret-set == 'true') && (always() && (contains(needs.*.result, 'success') || contains(needs.*.result, 'skipped')) && !(contains(needs.*.result, 'failure')) && !(contains(needs.*.result, 'cancelled'))) }}
runs-on: ubuntu-latest # This cannot change
steps:
- uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # ratchet:actions/download-artifact@v3.0.2
id: download-artifact
with:
name: kubescape-ubuntu-latest
@@ -215,12 +198,12 @@ jobs:
run: chmod +x -R ${{steps.download-artifact.outputs.download-path}}/kubescape-ubuntu-latest
- name: Checkout systests repo
uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # ratchet:actions/checkout@v3
with:
repository: armosec/system-tests
path: .
- uses: actions/setup-python@v4
- uses: actions/setup-python@d27e3f3d7c64b4bbf8e4abfb9b63b83e846e0435 # ratchet:actions/setup-python@v4
with:
python-version: '3.8.13'
cache: 'pip'
@@ -230,16 +213,16 @@ jobs:
- name: Generate uuid
id: uuid
run: |
echo "RANDOM_UUID=$(uuidgen)" >> $GITHUB_OUTPUT
- name: Create k8s Kind Cluster
id: kind-cluster-install
uses: helm/kind-action@d08cf6ff1575077dee99962540d77ce91c62387d # ratchet:helm/kind-action@v1.3.0
with:
cluster_name: ${{ steps.uuid.outputs.RANDOM_UUID }}
- name: run-tests-on-local-built-kubescape
env:
CUSTOMER: ${{ secrets.CUSTOMER }}
USERNAME: ${{ secrets.USERNAME }}
@@ -248,7 +231,6 @@ jobs:
SECRET_KEY: ${{ secrets.SECRET_KEY_PROD }}
REGISTRY_USERNAME: ${{ secrets.REGISTRY_USERNAME }}
REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
run: |
echo "Test history:"
echo " ${{ matrix.TEST }} " >/tmp/testhistory
@@ -262,14 +244,12 @@ jobs:
--duration 3 \
--logger DEBUG \
--kwargs kubescape=${{steps.download-artifact.outputs.download-path}}/kubescape-ubuntu-latest
deactivate
- name: Test Report
uses: mikepenz/action-junit-report@6e9933f4a97f4d2b99acef4d7b97924466037882 # ratchet:mikepenz/action-junit-report@v3.6.1
if: always() # always run even if the previous step fails
with:
report_paths: '**/results_xml_format/**.xml'
commit: ${{github.event.workflow_run.head_sha}}


@@ -15,28 +15,32 @@ on:
required: false
type: boolean
default: false
jobs:
create-release:
name: create-release
runs-on: ubuntu-latest
# permissions:
# contents: write
steps:
- uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # ratchet:actions/download-artifact@v3.0.2
id: download-artifact
with:
path: .
- name: Set release token
run: |
if [ "${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}" != "" ]; then
echo "TOKEN=${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}" >> $GITHUB_ENV;
else
echo "TOKEN=${{ secrets.GITHUB_TOKEN }}" >> $GITHUB_ENV;
fi
- name: Release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # ratchet:softprops/action-gh-release@v1
env:
MAC_OS: macos-latest
UBUNTU_OS: ubuntu-latest
WINDOWS_OS: windows-latest
with:
token: ${{ env.TOKEN }}
name: ${{ inputs.RELEASE_NAME }}
tag_name: ${{ inputs.TAG }}
body: ${{ github.event.pull_request.body }}
@@ -53,5 +57,3 @@ jobs:
./kubescape-${{ env.WINDOWS_OS }}/kubescape-${{ env.WINDOWS_OS }}
./kubescape-${{ env.WINDOWS_OS }}/kubescape-${{ env.WINDOWS_OS }}.sha256
./kubescape-${{ env.WINDOWS_OS }}/kubescape-${{ env.WINDOWS_OS }}.tar.gz


@@ -1,5 +1,4 @@
name: d-publish-image
on:
workflow_call:
inputs:
@@ -25,7 +24,6 @@ on:
default: true
type: boolean
description: 'support amd64/arm64'
jobs:
check-secret:
name: check if QUAYIO_REGISTRY_USERNAME & QUAYIO_REGISTRY_PASSWORD is set in github secrets
@@ -36,44 +34,36 @@ jobs:
- name: check if QUAYIO_REGISTRY_USERNAME & QUAYIO_REGISTRY_PASSWORD is set in github secrets
id: check-secret-set
env:
QUAYIO_REGISTRY_USERNAME: ${{ secrets.QUAYIO_REGISTRY_USERNAME }}
QUAYIO_REGISTRY_PASSWORD: ${{ secrets.QUAYIO_REGISTRY_PASSWORD }}
run: |
echo "is-secret-set=${{ env.QUAYIO_REGISTRY_USERNAME != '' && env.QUAYIO_REGISTRY_PASSWORD != '' }}" >> $GITHUB_OUTPUT
build-image:
needs: [check-secret]
if: needs.check-secret.outputs.is-secret-set == 'true'
name: Build image and upload to registry
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@ac593985615ec2ede58e132d2e21d2b1cbd6127c # ratchet:actions/checkout@v3
with:
submodules: recursive
- name: Set up QEMU
uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 # ratchet:docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@f03ac48505955848960e80bbb68046aa35c7b9e7 # ratchet:docker/setup-buildx-action@v2
- name: Login to Quay.io
env:
QUAY_PASSWORD: ${{ secrets.QUAYIO_REGISTRY_PASSWORD }}
QUAY_USERNAME: ${{ secrets.QUAYIO_REGISTRY_USERNAME }}
run: docker login -u="${QUAY_USERNAME}" -p="${QUAY_PASSWORD}" quay.io
- name: Build and push image
if: ${{ inputs.support_platforms }}
run: docker buildx build . --file build/Dockerfile --tag ${{ inputs.image_name }}:${{ inputs.image_tag }} --tag ${{ inputs.image_name }}:latest --build-arg image_version=${{ inputs.image_tag }} --build-arg client=${{ inputs.client }} --push --platform linux/amd64,linux/arm64
- name: Build and push image without amd64/arm64 support
if: ${{ !inputs.support_platforms }}
run: docker buildx build . --file build/Dockerfile --tag ${{ inputs.image_name }}:${{ inputs.image_tag }} --tag ${{ inputs.image_name }}:latest --build-arg image_version=${{ inputs.image_tag }} --build-arg client=${{ inputs.client }} --push
- name: Install cosign
uses: sigstore/cosign-installer@4079ad3567a89f68395480299c77e40170430341 # ratchet:sigstore/cosign-installer@main
with:
cosign-release: 'v1.12.0'
- name: sign kubescape container image
@@ -81,5 +71,4 @@ jobs:
env:
COSIGN_EXPERIMENTAL: "true"
run: |
cosign sign --force ${{ inputs.image_name }}


@@ -1,23 +1,19 @@
on:
issues:
types: [opened, labeled]
jobs:
open_PR_message:
if: github.event.label.name == 'typo'
runs-on: ubuntu-latest
steps:
- uses: ben-z/actions-comment-on-issue@10be23f9c43ac792663043420fda29dde07e2f0f # ratchet:ben-z/actions-comment-on-issue@1.0.2
with:
message: "Hello! :wave:\n\nThis issue is being automatically closed, Please open a PR with a relevant fix."
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
auto_close_issues:
runs-on: ubuntu-latest
steps:
- uses: lee-dohm/close-matching-issues@e9e43aad2fa6f06a058cedfd8fb975fd93b56d8f # ratchet:lee-dohm/close-matching-issues@v2
with:
query: 'label:typo'
token: ${{ secrets.GITHUB_TOKEN }}


@@ -27,4 +27,4 @@ spec:
os: windows
arch: amd64
{{ addURIAndSha "https://github.com/kubescape/kubescape/releases/download/{{ .TagName }}/kubescape-windows-latest.tar.gz" .TagName }}
bin: kubescape.exe


@@ -47,7 +47,7 @@ Add [`-s`](https://git-scm.com/docs/git-commit#Documentation/git-commit.txt--s)
```git commit -s -m "Fix issue 64738"```
This is tedious, and if you forget, you'll have to [amend your commit](#fixing-a-commit-where-the-dco-failed).
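The amend fix mentioned above can be sketched in a throwaway repository. The temporary directory, file name, and `Dev <dev@example.com>` identity below are illustrative placeholders, not part of the Kubescape docs:

```shell
# Demo: repair the most recent commit that is missing the DCO sign-off.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email dev@example.com
git config user.name Dev
echo hello > file.txt
git add file.txt
git commit -q -m "Fix issue 64738"         # oops: forgot the -s flag
git commit -q --amend --signoff --no-edit  # adds the Signed-off-by trailer, keeps the message
git log -1 --format=%B                     # trailer is now present
```

If the branch was already pushed, the amended commit has a new hash, so it must replace the old one with `git push --force-with-lease`.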
### Configure a repository to always include sign off


@@ -1,5 +1,5 @@
[![Version](https://img.shields.io/github/v/release/kubescape/kubescape)](https://github.com/kubescape/kubescape/releases)
[![build](https://github.com/kubescape/kubescape/actions/workflows/02-release.yaml/badge.svg)](https://github.com/kubescape/kubescape/actions/workflows/02-release.yaml)
[![Go Report Card](https://goreportcard.com/badge/github.com/kubescape/kubescape)](https://goreportcard.com/report/github.com/kubescape/kubescape)
[![Gitpod Ready-to-Code](https://img.shields.io/badge/Gitpod-Ready--to--Code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/kubescape/kubescape)
[![GitHub](https://img.shields.io/github/license/kubescape/kubescape)](https://github.com/kubescape/kubescape/blob/master/LICENSE)
@@ -65,7 +65,7 @@ It retrieves Kubernetes objects from the API server and runs a set of [Rego snip
Kubescape is an open source project, we welcome your feedback and ideas for improvement. We are part of the Kubernetes community and are building more tests and controls as the ecosystem develops.
We hold [community meetings](https://zoom.us/j/95174063585) on Zoom, on the first Tuesday of every month, at 14:00 GMT. ([See that in your local time zone](https://time.is/compare/1400_in_GMT)).
The Kubescape project follows the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).


@@ -6,6 +6,7 @@ import subprocess
import tarfile
BASE_GETTER_CONST = "github.com/kubescape/kubescape/v2/core/cautils/getter"
CURRENT_PLATFORM = platform.system()
platformSuffixes = {
"Windows": "windows-latest",
@@ -24,11 +25,9 @@ def get_build_dir():
def get_package_name():
if CURRENT_PLATFORM not in platformSuffixes: raise OSError("Platform %s is not supported!" % (CURRENT_PLATFORM))
return "kubescape-" + platformSuffixes[CURRENT_PLATFORM]
def main():
@@ -76,8 +75,11 @@ def main():
kube_sha.write(sha256.hexdigest())
with tarfile.open(tar_file, 'w:gz') as archive:
name = "kubescape"
if CURRENT_PLATFORM == "Windows":
name += ".exe"
archive.add(ks_file, name)
archive.add("LICENSE", "LICENSE")
print("Build Done")


@@ -15,8 +15,8 @@ var completionCmdExamples = fmt.Sprintf(`
$ echo 'source <(%[1]s completion bash)' >> ~/.bashrc
# Enable ZSH shell autocompletion
$ source <(%[1]s completion zsh)
$ echo 'source <(%[1]s completion zsh)' >> "${fpath[1]}/_%[1]s"
`, cautils.ExecName())
func GetCompletionCmd() *cobra.Command {


@@ -85,6 +85,11 @@ func initEnvironment() {
if len(urlSlices) >= 4 {
ksAuthURL = urlSlices[3]
}
getter.SetKSCloudAPIConnector(getter.NewKSCloudAPICustomized(ksEventReceiverURL, ksBackendURL, ksFrontendURL, ksAuthURL))
getter.SetKSCloudAPIConnector(getter.NewKSCloudAPICustomized(
ksBackendURL, ksAuthURL,
getter.WithReportURL(ksEventReceiverURL),
getter.WithFrontendURL(ksFrontendURL),
))
}
}


@@ -14,7 +14,6 @@ import (
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/meta"
"github.com/enescakir/emoji"
"github.com/spf13/cobra"
)
@@ -106,7 +105,7 @@ func getControlCmd(ks meta.IKubescape, scanInfo *cautils.ScanInfo) *cobra.Comman
logger.L().Fatal(err.Error())
}
if !scanInfo.VerboseMode {
logger.L().Info("Run with '--verbose'/'-v' flag for detailed resources view\n")
}
if results.GetRiskScore() > float32(scanInfo.FailThreshold) {
logger.L().Fatal("scan risk-score is above permitted threshold", helpers.String("risk-score", fmt.Sprintf("%.2f", results.GetRiskScore())), helpers.String("fail-threshold", fmt.Sprintf("%.2f", scanInfo.FailThreshold)))


@@ -119,7 +119,7 @@ func getFrameworkCmd(ks meta.IKubescape, scanInfo *cautils.ScanInfo) *cobra.Comm
logger.L().Fatal(err.Error())
}
if !scanInfo.VerboseMode {
logger.L().Info("Run with '--verbose'/'-v' flag for detailed resources view\n")
}
if results.GetRiskScore() > float32(scanInfo.FailThreshold) {
logger.L().Fatal("scan risk-score is above permitted threshold", helpers.String("risk-score", fmt.Sprintf("%.2f", results.GetRiskScore())), helpers.String("fail-threshold", fmt.Sprintf("%.2f", scanInfo.FailThreshold)))


@@ -43,7 +43,7 @@ func GetScanCommand(ks meta.IKubescape) *cobra.Command {
Args: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 {
if args[0] != "framework" && args[0] != "control" {
return getFrameworkCmd(ks, &scanInfo).RunE(cmd, append([]string{strings.Join(getter.NativeFrameworks, ",")}, args...))
}
}
return nil


@@ -94,6 +94,9 @@ type ITenantConfig interface {
// ============================ Local Config ============================================
// ======================================================================================
// Config when scanning YAML files or URL but not a Kubernetes cluster
var _ ITenantConfig = &LocalConfig{}
type LocalConfig struct {
backendAPI getter.IBackend
configObj *ConfigObj
@@ -146,6 +149,8 @@ func NewLocalConfig(
}
logger.L().Debug("Kubescape Cloud URLs", helpers.String("api", lc.backendAPI.GetCloudAPIURL()), helpers.String("auth", lc.backendAPI.GetCloudAuthURL()), helpers.String("report", lc.backendAPI.GetCloudReportURL()), helpers.String("UI", lc.backendAPI.GetCloudUIURL()))
initializeCloudAPI(lc)
return lc
}
@@ -220,6 +225,8 @@ KS_SECRET_KEY
TODO - support:
KS_CACHE // path to cached files
*/
var _ ITenantConfig = &ClusterConfig{}
type ClusterConfig struct {
backendAPI getter.IBackend
k8s *k8sinterface.KubernetesApi
@@ -288,6 +295,8 @@ func NewClusterConfig(k8s *k8sinterface.KubernetesApi, backendAPI getter.IBacken
}
logger.L().Debug("Kubescape Cloud URLs", helpers.String("api", c.backendAPI.GetCloudAPIURL()), helpers.String("auth", c.backendAPI.GetCloudAuthURL()), helpers.String("report", c.backendAPI.GetCloudReportURL()), helpers.String("UI", c.backendAPI.GetCloudUIURL()))
initializeCloudAPI(c)
return c
}
@@ -622,3 +631,15 @@ func updateCloudURLs(configObj *ConfigObj) {
}
}
func initializeCloudAPI(c ITenantConfig) {
cloud := getter.GetKSCloudAPIConnector()
cloud.SetAccountID(c.GetAccountID())
cloud.SetClientID(c.GetClientID())
cloud.SetSecretKey(c.GetSecretKey())
cloud.SetCloudAuthURL(c.GetCloudAuthURL())
cloud.SetCloudReportURL(c.GetCloudReportURL())
cloud.SetCloudUIURL(c.GetCloudUIURL())
cloud.SetCloudAPIURL(c.GetCloudAPIURL())
getter.SetKSCloudAPIConnector(cloud)
}


@@ -5,6 +5,7 @@ import (
"os"
"testing"
"github.com/kubescape/kubescape/v2/core/cautils/getter"
"github.com/stretchr/testify/assert"
corev1 "k8s.io/api/core/v1"
)
@@ -268,3 +269,33 @@ func TestUpdateCloudURLs(t *testing.T) {
updateCloudURLs(co)
assert.Equal(t, co.CloudAPIURL, mockCloudAPIURL)
}
func Test_initializeCloudAPI(t *testing.T) {
type args struct {
c ITenantConfig
}
tests := []struct {
name string
args args
}{
{
name: "test",
args: args{
c: mockClusterConfig(),
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
initializeCloudAPI(tt.args.c)
cloud := getter.GetKSCloudAPIConnector()
assert.Equal(t, tt.args.c.GetCloudAPIURL(), cloud.GetCloudAPIURL())
assert.Equal(t, tt.args.c.GetCloudAuthURL(), cloud.GetCloudAuthURL())
assert.Equal(t, tt.args.c.GetCloudUIURL(), cloud.GetCloudUIURL())
assert.Equal(t, tt.args.c.GetCloudReportURL(), cloud.GetCloudReportURL())
assert.Equal(t, tt.args.c.GetAccountID(), cloud.GetAccountID())
assert.Equal(t, tt.args.c.GetClientID(), cloud.GetClientID())
assert.Equal(t, tt.args.c.GetSecretKey(), cloud.GetSecretKey())
})
}
}


@@ -6,6 +6,8 @@ import (
spinnerpkg "github.com/briandowns/spinner"
"github.com/fatih/color"
logger "github.com/kubescape/go-logger"
"github.com/kubescape/go-logger/helpers"
"github.com/mattn/go-isatty"
"github.com/schollz/progressbar/v3"
)
@@ -22,6 +24,10 @@ var DescriptionDisplay = color.New(color.Faint, color.FgWhite).FprintfFunc()
var spinner *spinnerpkg.Spinner
func StartSpinner() {
if helpers.ToLevel(logger.L().GetLevel()) >= helpers.WarningLevel {
return
}
if spinner != nil {
if !spinner.Active() {
spinner.Start()
@@ -42,8 +48,8 @@ func StopSpinner() {
}
type ProgressHandler struct {
pb *progressbar.ProgressBar
title string
}
func NewProgressHandler(title string) *ProgressHandler {
@@ -51,11 +57,11 @@ func NewProgressHandler(title string) *ProgressHandler {
}
func (p *ProgressHandler) Start(allSteps int) {
if !isatty.IsTerminal(os.Stderr.Fd()) || helpers.ToLevel(logger.L().GetLevel()) >= helpers.WarningLevel {
p.pb = progressbar.DefaultSilent(int64(allSteps), p.title)
return
}
p.pb = progressbar.Default(int64(allSteps), p.title)
}
func (p *ProgressHandler) ProgressJob(step int, message string) {


@@ -0,0 +1,32 @@
package cautils
import (
"testing"
"github.com/kubescape/go-logger"
)
func TestStartSpinner(t *testing.T) {
tests := []struct {
name string
loggerLevel string
enabled bool
}{
{
name: "TestStartSpinner - disabled",
loggerLevel: "warning",
enabled: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
logger.L().SetLevel(tt.loggerLevel)
StartSpinner()
if !tt.enabled {
if spinner != nil {
t.Errorf("spinner should be nil")
}
}
})
}
}


@@ -48,7 +48,7 @@ func LoadResourcesFromHelmCharts(ctx context.Context, basePath string) (map[stri
if err == nil {
wls, errs := chart.GetWorkloadsWithDefaultValues()
if len(errs) > 0 {
logger.L().Ctx(ctx).Warning(fmt.Sprintf("Rendering of Helm chart template '%s', failed: %v", chart.GetName(), errs))
continue
}
@@ -88,7 +88,7 @@ func LoadResourcesFromKustomizeDirectory(ctx context.Context, basePath string) (
kustomizeDirectoryName := GetKustomizeDirectoryName(newBasePath)
if len(errs) > 0 {
logger.L().Ctx(ctx).Warning(fmt.Sprintf("Rendering yaml from Kustomize failed: %v", errs))
}
for k, v := range wls {
@@ -100,15 +100,16 @@ func LoadResourcesFromKustomizeDirectory(ctx context.Context, basePath string) (
func LoadResourcesFromFiles(ctx context.Context, input, rootPath string) map[string][]workloadinterface.IMetadata {
files, errs := listFiles(input)
if len(errs) > 0 {
logger.L().Ctx(ctx).Warning(fmt.Sprintf("%v", errs))
}
if len(files) == 0 {
logger.L().Ctx(ctx).Error("no files found to scan", helpers.String("input", input))
return nil
}
workloads, errs := loadFiles(rootPath, files)
if len(errs) > 0 {
logger.L().Ctx(ctx).Warning(fmt.Sprintf("%v", errs))
}
return workloads


@@ -1,24 +1,61 @@
package getter
import (
"github.com/armosec/armoapi-go/armotypes"
"github.com/kubescape/opa-utils/reporthandling"
"github.com/kubescape/opa-utils/reporthandling/attacktrack/v1alpha1"
reporthandlingv2 "github.com/kubescape/opa-utils/reporthandling/v2"
)
// NativeFrameworks identifies all pre-built, native frameworks.
var NativeFrameworks = []string{"allcontrols", "nsa", "mitre"}
type (
// TenantResponse holds the credentials for a tenant.
TenantResponse struct {
TenantID string `json:"tenantId"`
Token string `json:"token"`
Expires string `json:"expires"`
AdminMail string `json:"adminMail,omitempty"`
}
// AttackTrack is an alias to the API type definition for attack tracks.
AttackTrack = v1alpha1.AttackTrack
// Framework is an alias to the API type definition for a framework.
Framework = reporthandling.Framework
// Control is an alias to the API type definition for a control.
Control = reporthandling.Control
// PostureExceptionPolicy is an alias to the API type definition for posture exception policy.
PostureExceptionPolicy = armotypes.PostureExceptionPolicy
// CustomerConfig is an alias to the API type definition for a customer configuration.
CustomerConfig = armotypes.CustomerConfig
// PostureReport is an alias to the API type definition for a posture report.
PostureReport = reporthandlingv2.PostureReport
)
type (
// internal data descriptors
// feLoginData describes the input to a login challenge.
feLoginData struct {
Secret string `json:"secret"`
ClientId string `json:"clientId"`
}
// feLoginResponse describes the response to a login challenge.
feLoginResponse struct {
Token string `json:"accessToken"`
RefreshToken string `json:"refreshToken"`
Expires string `json:"expires"`
ExpiresIn int32 `json:"expiresIn"`
}
ksCloudSelectCustomer struct {
SelectedCustomerGuid string `json:"selectedCustomer"`
}
)


@@ -0,0 +1,8 @@
// Package getter provides functionality to retrieve policy objects.
//
// It comes with 3 implementations:
//
// * KSCloudAPI is a client for the KS Cloud SaaS API
// * LoadPolicy exposes policy objects stored in a local repository
// * DownloadReleasedPolicy downloads policy objects from the policy library released on github: https://github.com/kubescape/regolibrary
package getter


@@ -14,6 +14,12 @@ import (
// =======================================================================================================================
// ======================================== DownloadReleasedPolicy =======================================================
// =======================================================================================================================
var (
_ IPolicyGetter = &DownloadReleasedPolicy{}
_ IExceptionsGetter = &DownloadReleasedPolicy{}
_ IAttackTracksGetter = &DownloadReleasedPolicy{}
_ IControlsInputsGetter = &DownloadReleasedPolicy{}
)
// Use gitregostore to get policies from github release
type DownloadReleasedPolicy struct {
@@ -106,19 +112,6 @@ func (drp *DownloadReleasedPolicy) SetRegoObjects() error {
return drp.gs.SetRegoObjects()
}
func isNativeFramework(framework string) bool {
return contains(NativeFrameworks, framework)
}
func contains(s []string, str string) bool {
for _, v := range s {
if strings.EqualFold(v, str) {
return true
}
}
return false
}
func (drp *DownloadReleasedPolicy) GetExceptions(clusterName string) ([]armotypes.PostureExceptionPolicy, error) {
exceptions, err := drp.gs.GetSystemPostureExceptionPolicies()
if err != nil {

View File

@@ -0,0 +1,164 @@
package getter
import (
"errors"
"fmt"
"io/fs"
"os"
"path/filepath"
"strings"
"testing"
"github.com/kubescape/kubescape/v2/internal/testutils"
jsoniter "github.com/json-iterator/go"
"github.com/stretchr/testify/require"
)
func TestReleasedPolicy(t *testing.T) {
t.Parallel()
p := NewDownloadReleasedPolicy()
t.Run("should initialize objects", func(t *testing.T) {
t.Parallel()
// acquire from github or from local fixture
hydrateReleasedPolicyFromMock(t, p)
require.NoError(t, p.SetRegoObjects())
t.Run("with ListControls", func(t *testing.T) {
t.Parallel()
controlIDs, err := p.ListControls()
require.NoError(t, err)
require.NotEmpty(t, controlIDs)
sampleSize := int(min(int64(len(controlIDs)), 10))
for _, toPin := range controlIDs[:sampleSize] {
// Example of a returned "ID": `C-0154|Ensure_that_the_--client-cert-auth_argument_is_set_to_true|`
controlString := toPin
parts := strings.Split(controlString, "|")
controlID := parts[0]
t.Run(fmt.Sprintf("with GetControl(%q)", controlID), func(t *testing.T) {
t.Parallel()
ctrl, err := p.GetControl(controlID)
require.NoError(t, err)
require.NotEmpty(t, ctrl)
require.Equal(t, controlID, ctrl.ControlID)
})
}
t.Run("with unknown GetControl()", func(t *testing.T) {
t.Parallel()
ctrl, err := p.GetControl("zork")
require.Error(t, err)
require.Nil(t, ctrl)
})
})
t.Run("with GetFrameworks", func(t *testing.T) {
t.Parallel()
frameworks, err := p.GetFrameworks()
require.NoError(t, err)
require.NotEmpty(t, frameworks)
for _, toPin := range frameworks {
framework := toPin
require.NotEmpty(t, framework)
require.NotEmpty(t, framework.Name)
t.Run(fmt.Sprintf("with GetFramework(%q)", framework.Name), func(t *testing.T) {
t.Parallel()
fw, err := p.GetFramework(framework.Name)
require.NoError(t, err)
require.NotNil(t, fw)
require.EqualValues(t, framework, *fw)
})
}
t.Run("with unknown GetFramework()", func(t *testing.T) {
t.Parallel()
ctrl, err := p.GetFramework("zork")
require.Error(t, err)
require.Nil(t, ctrl)
})
t.Run("with ListFrameworks", func(t *testing.T) {
t.Parallel()
frameworkIDs, err := p.ListFrameworks()
require.NoError(t, err)
require.NotEmpty(t, frameworkIDs)
require.Len(t, frameworkIDs, len(frameworks))
})
})
t.Run("with GetControlsInput", func(t *testing.T) {
t.Parallel()
controlInputs, err := p.GetControlsInputs("") // NOTE: cluster name currently unused
require.NoError(t, err)
require.NotEmpty(t, controlInputs)
})
t.Run("with GetAttackTracks", func(t *testing.T) {
t.Parallel()
attackTracks, err := p.GetAttackTracks()
require.NoError(t, err)
require.NotEmpty(t, attackTracks)
})
t.Run("with GetExceptions", func(t *testing.T) {
t.Parallel()
exceptions, err := p.GetExceptions("") // NOTE: cluster name currently unused
require.NoError(t, err)
require.NotEmpty(t, exceptions)
})
})
}
func hydrateReleasedPolicyFromMock(t testing.TB, p *DownloadReleasedPolicy) {
regoFile := testRegoFile("policy")
if _, err := os.Stat(regoFile); errors.Is(err, fs.ErrNotExist) {
// retrieve fixture from latest released policy from github.
//
// NOTE: to update the mock, just delete the testdata/policy.json file and run the tests again.
t.Logf("updating fixture file %q from github", regoFile)
require.NoError(t, p.SetRegoObjects())
require.NotNil(t, p.gs)
require.NoError(t,
SaveInFile(p.gs, regoFile),
)
return
}
// we have a mock fixture: load this rather than calling github
t.Logf("populating rego policy from fixture file %q", regoFile)
buf, err := os.ReadFile(regoFile)
require.NoError(t, err)
require.NoError(t,
jsoniter.Unmarshal(buf, p.gs),
)
}
func testRegoFile(framework string) string {
return filepath.Join(testutils.CurrentDir(), "testdata", fmt.Sprintf("%s.json", framework))
}

View File

@@ -1,47 +0,0 @@
package getter
import (
"github.com/armosec/armoapi-go/armotypes"
"github.com/kubescape/opa-utils/reporthandling"
"github.com/kubescape/opa-utils/reporthandling/attacktrack/v1alpha1"
)
type IPolicyGetter interface {
GetFramework(name string) (*reporthandling.Framework, error)
GetFrameworks() ([]reporthandling.Framework, error)
GetControl(ID string) (*reporthandling.Control, error)
ListFrameworks() ([]string, error)
ListControls() ([]string, error)
}
type IExceptionsGetter interface {
GetExceptions(clusterName string) ([]armotypes.PostureExceptionPolicy, error)
}
type IBackend interface {
GetAccountID() string
GetClientID() string
GetSecretKey() string
GetCloudReportURL() string
GetCloudAPIURL() string
GetCloudUIURL() string
GetCloudAuthURL() string
SetAccountID(accountID string)
SetClientID(clientID string)
SetSecretKey(secretKey string)
SetCloudReportURL(cloudReportURL string)
SetCloudAPIURL(cloudAPIURL string)
SetCloudUIURL(cloudUIURL string)
SetCloudAuthURL(cloudAuthURL string)
GetTenant() (*TenantResponse, error)
}
type IControlsInputsGetter interface {
GetControlsInputs(clusterName string) (map[string][]string, error)
}
type IAttackTracksGetter interface {
GetAttackTracks() ([]v1alpha1.AttackTrack, error)
}

View File

@@ -6,24 +6,27 @@ import (
"io"
"net/http"
"os"
"path"
"path/filepath"
"strings"
)
// GetDefaultPath returns a location under the local dot files for kubescape.
//
// This is typically located under $HOME/.kubescape
func GetDefaultPath(name string) string {
return filepath.Join(DefaultLocalStore, name)
}
func SaveInFile(policy interface{}, pathStr string) error {
encodedData, err := json.MarshalIndent(policy, "", " ")
// SaveInFile serializes any object as a JSON file.
func SaveInFile(object interface{}, targetFile string) error {
encodedData, err := json.MarshalIndent(object, "", " ")
if err != nil {
return err
}
err = os.WriteFile(pathStr, encodedData, 0644) //nolint:gosec
err = os.WriteFile(targetFile, encodedData, 0644) //nolint:gosec
if err != nil {
if os.IsNotExist(err) {
pathDir := path.Dir(pathStr)
pathDir := filepath.Dir(targetFile)
// pathDir could contain subdirectories
if erm := os.MkdirAll(pathDir, 0755); erm != nil {
return erm
@@ -32,7 +35,7 @@ func SaveInFile(policy interface{}, pathStr string) error {
return err
}
err = os.WriteFile(pathStr, encodedData, 0644) //nolint:gosec
err = os.WriteFile(targetFile, encodedData, 0644) //nolint:gosec
if err != nil {
return err
}
@@ -40,6 +43,9 @@ func SaveInFile(policy interface{}, pathStr string) error {
return nil
}
// HttpDelete provides a low-level capability to send an HTTP DELETE request and serialize the response as a string.
//
// Deprecated: use methods of the KSCloudAPI client instead.
func HttpDelete(httpClient *http.Client, fullURL string, headers map[string]string) (string, error) {
req, err := http.NewRequest("DELETE", fullURL, nil)
@@ -59,8 +65,10 @@ func HttpDelete(httpClient *http.Client, fullURL string, headers map[string]stri
return respStr, nil
}
// HttpGetter provides a low-level capability to send an HTTP GET request and serialize the response as a string.
//
// Deprecated: use methods of the KSCloudAPI client instead.
func HttpGetter(httpClient *http.Client, fullURL string, headers map[string]string) (string, error) {
req, err := http.NewRequest("GET", fullURL, nil)
if err != nil {
return "", err
@@ -78,8 +86,10 @@ func HttpGetter(httpClient *http.Client, fullURL string, headers map[string]stri
return respStr, nil
}
// HttpPost provides a low-level capability to send an HTTP POST request and serialize the response as a string.
//
// Deprecated: use methods of the KSCloudAPI client instead.
func HttpPost(httpClient *http.Client, fullURL string, headers map[string]string, body []byte) (string, error) {
req, err := http.NewRequest("POST", fullURL, bytes.NewReader(body))
if err != nil {
return "", err
@@ -104,7 +114,7 @@ func setHeaders(req *http.Request, headers map[string]string) {
}
}
// HTTPRespToString parses the body as string and checks the HTTP status code, it closes the body reader at the end
// httpRespToString parses the body as a string and checks the HTTP status code; it closes the body reader at the end
func httpRespToString(resp *http.Response) (string, error) {
if resp == nil || resp.Body == nil {
return "", nil
@@ -114,6 +124,7 @@ func httpRespToString(resp *http.Response) (string, error) {
if resp.ContentLength > 0 {
strBuilder.Grow(int(resp.ContentLength))
}
_, err := io.Copy(&strBuilder, resp.Body)
respStr := strBuilder.String()
if err != nil {

View File

@@ -0,0 +1,97 @@
package getter
import (
"net/http"
"os"
"path/filepath"
"testing"
"github.com/stretchr/testify/require"
)
func TestGetDefaultPath(t *testing.T) {
t.Parallel()
const name = "mine"
pth := GetDefaultPath(name)
require.Equal(t, name, filepath.Base(pth))
require.Equal(t, ".kubescape", filepath.Base(filepath.Dir(pth)))
}
func TestSaveInFile(t *testing.T) {
t.Parallel()
dir, err := os.MkdirTemp(".", "test")
require.NoError(t, err)
defer func() {
_ = os.RemoveAll(dir)
}()
policy := map[string]interface{}{
"key": "value",
"number": 1.00,
}
t.Run("should save data as JSON (target folder exists)", func(t *testing.T) {
target := filepath.Join(dir, "target.json")
require.NoError(t, SaveInFile(policy, target))
buf, err := os.ReadFile(target)
require.NoError(t, err)
var retrieved interface{}
require.NoError(t, json.Unmarshal(buf, &retrieved))
require.EqualValues(t, policy, retrieved)
})
t.Run("should save data as JSON (new target folder)", func(t *testing.T) {
target := filepath.Join(dir, "subdir", "target.json")
require.NoError(t, SaveInFile(policy, target))
buf, err := os.ReadFile(target)
require.NoError(t, err)
var retrieved interface{}
require.NoError(t, json.Unmarshal(buf, &retrieved))
require.EqualValues(t, policy, retrieved)
})
t.Run("should error", func(t *testing.T) {
badPolicy := map[string]interface{}{
"key": "value",
"number": 1.00,
"err": func() {},
}
target := filepath.Join(dir, "error.json")
require.Error(t, SaveInFile(badPolicy, target))
})
}
func TestHttpMethods(t *testing.T) {
client := http.DefaultClient
hdrs := map[string]string{"key": "value"}
srv := mockAPIServer(t)
t.Cleanup(srv.Close)
t.Run("HttpGetter should GET", func(t *testing.T) {
resp, err := HttpGetter(client, srv.URL(pathTestGet), hdrs)
require.NoError(t, err)
require.EqualValues(t, "body-get", resp)
})
t.Run("HttpPost should POST", func(t *testing.T) {
body := []byte("body-post")
resp, err := HttpPost(client, srv.URL(pathTestPost), hdrs, body)
require.NoError(t, err)
require.EqualValues(t, string(body), resp)
})
t.Run("HttpDelete should DELETE", func(t *testing.T) {
resp, err := HttpDelete(client, srv.URL(pathTestDelete), hdrs)
require.NoError(t, err)
require.EqualValues(t, "body-delete", resp)
})
}

View File

@@ -0,0 +1,55 @@
package getter
import (
"github.com/armosec/armoapi-go/armotypes"
"github.com/kubescape/opa-utils/reporthandling"
"github.com/kubescape/opa-utils/reporthandling/attacktrack/v1alpha1"
)
type (
// IPolicyGetter knows how to retrieve policies, i.e. frameworks and their controls.
IPolicyGetter interface {
GetFramework(name string) (*reporthandling.Framework, error)
GetFrameworks() ([]reporthandling.Framework, error)
GetControl(ID string) (*reporthandling.Control, error)
ListFrameworks() ([]string, error)
ListControls() ([]string, error)
}
// IExceptionsGetter knows how to retrieve exceptions.
IExceptionsGetter interface {
GetExceptions(clusterName string) ([]armotypes.PostureExceptionPolicy, error)
}
// IControlsInputsGetter knows how to retrieve controls inputs.
IControlsInputsGetter interface {
GetControlsInputs(clusterName string) (map[string][]string, error)
}
// IAttackTracksGetter knows how to retrieve attack tracks.
IAttackTracksGetter interface {
GetAttackTracks() ([]v1alpha1.AttackTrack, error)
}
// IBackend knows how to configure a KS Cloud client.
IBackend interface {
GetAccountID() string
GetClientID() string
GetSecretKey() string
GetCloudReportURL() string
GetCloudAPIURL() string
GetCloudUIURL() string
GetCloudAuthURL() string
SetAccountID(accountID string)
SetClientID(clientID string)
SetSecretKey(secretKey string)
SetCloudReportURL(cloudReportURL string)
SetCloudAPIURL(cloudAPIURL string)
SetCloudUIURL(cloudUIURL string)
SetCloudAuthURL(cloudAuthURL string)
GetTenant() (*TenantResponse, error)
}
)

View File

@@ -1,16 +1,13 @@
package getter
import (
"io"
"strings"
stdjson "encoding/json"
jsoniter "github.com/json-iterator/go"
)
var (
json jsoniter.API
)
var json jsoniter.API
func init() {
// NOTE(fredbi): attention, this configuration rounds floats down to 6 digits
@@ -18,9 +15,24 @@ func init() {
json = jsoniter.ConfigFastest
}
// JSONDecoder returns JSON decoder for given string
func JSONDecoder(origin string) *stdjson.Decoder {
dec := stdjson.NewDecoder(strings.NewReader(origin))
// JSONDecoder provides a low-level utility that returns a JSON decoder for given string.
//
// Deprecated: use higher level methods from the KSCloudAPI client instead.
func JSONDecoder(origin string) *jsoniter.Decoder {
dec := jsoniter.NewDecoder(strings.NewReader(origin))
dec.UseNumber()
return dec
}
func decode[T any](rdr io.Reader) (T, error) {
var receiver T
dec := newDecoder(rdr)
err := dec.Decode(&receiver)
return receiver, err
}
func newDecoder(rdr io.Reader) *jsoniter.Decoder {
return json.NewDecoder(rdr)
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,294 @@
package getter
import (
"os"
"path/filepath"
"testing"
"github.com/armosec/armoapi-go/armotypes"
jsoniter "github.com/json-iterator/go"
"github.com/kubescape/kubescape/v2/internal/testutils"
"github.com/kubescape/opa-utils/reporthandling"
"github.com/kubescape/opa-utils/reporthandling/attacktrack/v1alpha1"
"github.com/stretchr/testify/require"
)
func mockAttackTracks() []v1alpha1.AttackTrack {
return []v1alpha1.AttackTrack{
{
ApiVersion: "v1",
Kind: "track",
Metadata: map[string]interface{}{"label": "name"},
Spec: v1alpha1.AttackTrackSpecification{
Version: "v2",
Description: "a mock",
Data: v1alpha1.AttackTrackStep{
Name: "track1",
Description: "mock-step",
SubSteps: []v1alpha1.AttackTrackStep{
{
Name: "track1",
Description: "mock-step",
Controls: []v1alpha1.IAttackTrackControl{
mockControlPtr("control-1"),
},
},
},
Controls: []v1alpha1.IAttackTrackControl{
mockControlPtr("control-2"),
mockControlPtr("control-3"),
},
},
},
},
{
ApiVersion: "v1",
Kind: "track",
Metadata: map[string]interface{}{"label": "stuff"},
Spec: v1alpha1.AttackTrackSpecification{
Version: "v1",
Description: "another mock",
Data: v1alpha1.AttackTrackStep{
Name: "track2",
Description: "mock-step2",
SubSteps: []v1alpha1.AttackTrackStep{
{
Name: "track3",
Description: "mock-step",
Controls: []v1alpha1.IAttackTrackControl{
mockControlPtr("control-4"),
},
},
},
Controls: []v1alpha1.IAttackTrackControl{
mockControlPtr("control-5"),
mockControlPtr("control-6"),
},
},
},
},
}
}
func mockFrameworks() []reporthandling.Framework {
id1s := []string{"control-1", "control-2"}
id2s := []string{"control-3", "control-4"}
id3s := []string{"control-5", "control-6"}
return []reporthandling.Framework{
{
PortalBase: armotypes.PortalBase{
Name: "mock-1",
},
CreationTime: "now",
Description: "mock-1",
Controls: []reporthandling.Control{
mockControl("control-1"),
mockControl("control-2"),
},
ControlsIDs: &id1s,
SubSections: map[string]*reporthandling.FrameworkSubSection{
"section1": {
ID: "section-id",
ControlIDs: id1s,
},
},
},
{
PortalBase: armotypes.PortalBase{
Name: "mock-2",
},
CreationTime: "then",
Description: "mock-2",
Controls: []reporthandling.Control{
mockControl("control-3"),
mockControl("control-4"),
},
ControlsIDs: &id2s,
SubSections: map[string]*reporthandling.FrameworkSubSection{
"section2": {
ID: "section-id",
ControlIDs: id2s,
},
},
},
{
PortalBase: armotypes.PortalBase{
Name: "nsa",
},
CreationTime: "tomorrow",
Description: "nsa mock",
Controls: []reporthandling.Control{
mockControl("control-5"),
mockControl("control-6"),
},
ControlsIDs: &id3s,
SubSections: map[string]*reporthandling.FrameworkSubSection{
"section2": {
ID: "section-id",
ControlIDs: id3s,
},
},
},
}
}
func mockControl(controlID string) reporthandling.Control {
return reporthandling.Control{
ControlID: controlID,
}
}
func mockControlPtr(controlID string) *reporthandling.Control {
val := mockControl(controlID)
return &val
}
func mockExceptions() []armotypes.PostureExceptionPolicy {
return []armotypes.PostureExceptionPolicy{
{
PolicyType: "postureExceptionPolicy",
CreationTime: "now",
Actions: []armotypes.PostureExceptionPolicyActions{
"alertOnly",
},
Resources: []armotypes.PortalDesignator{
{
DesignatorType: "Attributes",
Attributes: map[string]string{
"kind": "Pod",
"name": "coredns-[A-Za-z0-9]+-[A-Za-z0-9]+",
"namespace": "kube-system",
},
},
{
DesignatorType: "Attributes",
Attributes: map[string]string{
"kind": "Pod",
"name": "etcd-.*",
"namespace": "kube-system",
},
},
},
PosturePolicies: []armotypes.PosturePolicy{
{
FrameworkName: "MITRE",
ControlID: "C-.*",
},
{
FrameworkName: "another-framework",
ControlID: "a regexp",
},
},
},
{
PolicyType: "postureExceptionPolicy",
CreationTime: "then",
Actions: []armotypes.PostureExceptionPolicyActions{
"alertOnly",
},
Resources: []armotypes.PortalDesignator{
{
DesignatorType: "Attributes",
Attributes: map[string]string{
"kind": "Deployment",
"name": "my-regexp",
},
},
{
DesignatorType: "Attributes",
Attributes: map[string]string{
"kind": "Secret",
"name": "another-regexp",
},
},
},
PosturePolicies: []armotypes.PosturePolicy{
{
FrameworkName: "yet-another-framework",
ControlID: "a regexp",
},
},
},
}
}
func mockTenantResponse() *TenantResponse {
return &TenantResponse{
TenantID: "id",
Token: "token",
Expires: "expiry-time",
AdminMail: "admin@example.com",
}
}
func mockCustomerConfig(cluster, scope string) func() *armotypes.CustomerConfig {
if cluster == "" {
cluster = "my-cluster"
}
if scope == "" {
scope = "default"
}
return func() *armotypes.CustomerConfig {
return &armotypes.CustomerConfig{
Name: "user",
Attributes: map[string]interface{}{
"label": "value",
},
Scope: armotypes.PortalDesignator{
DesignatorType: "Attributes",
Attributes: map[string]string{
"kind": "Cluster",
"name": cluster,
"scope": scope,
},
},
Settings: armotypes.Settings{
PostureControlInputs: map[string][]string{
"inputs-1": {"x1", "y2"},
"inputs-2": {"x2", "y2"},
},
PostureScanConfig: armotypes.PostureScanConfig{
ScanFrequency: armotypes.ScanFrequency("weekly"),
},
VulnerabilityScanConfig: armotypes.VulnerabilityScanConfig{
ScanFrequency: armotypes.ScanFrequency("daily"),
CriticalPriorityThreshold: 1,
HighPriorityThreshold: 2,
MediumPriorityThreshold: 3,
ScanNewDeployment: true,
AllowlistRegistries: []string{"a", "b"},
BlocklistRegistries: []string{"c", "d"},
},
SlackConfigurations: armotypes.SlackSettings{
Token: "slack-token",
},
},
}
}
}
func mockLoginResponse() *feLoginResponse {
return &feLoginResponse{
Token: "access-token",
RefreshToken: "refresh-token",
Expires: "expiry-time",
ExpiresIn: 123,
}
}
func mockPostureReport(t testing.TB, reportID, cluster string) *PostureReport {
fixture := filepath.Join(testutils.CurrentDir(), "testdata", "mock_posture_report.json")
buf, err := os.ReadFile(fixture)
require.NoError(t, err)
var report PostureReport
require.NoError(t,
jsoniter.Unmarshal(buf, &report),
)
return &report
}

File diff suppressed because it is too large Load Diff

View File

@@ -1,186 +0,0 @@
package getter
import (
"bytes"
"fmt"
"net/http"
"net/url"
"strings"
)
var NativeFrameworks = []string{"allcontrols", "nsa", "mitre"}
func (api *KSCloudAPI) getFrameworkURL(frameworkName string) string {
u := url.URL{}
u.Scheme, u.Host = parseHost(api.GetCloudAPIURL())
u.Path = "api/v1/armoFrameworks"
q := u.Query()
q.Add("customerGUID", api.getCustomerGUIDFallBack())
if isNativeFramework(frameworkName) {
q.Add("frameworkName", strings.ToUpper(frameworkName))
} else {
// For customer framework has to be the way it was added
q.Add("frameworkName", frameworkName)
}
u.RawQuery = q.Encode()
return u.String()
}
func (api *KSCloudAPI) getAttackTracksURL() string {
u := url.URL{}
u.Scheme, u.Host = parseHost(api.GetCloudAPIURL())
u.Path = "api/v1/attackTracks"
q := u.Query()
q.Add("customerGUID", api.getCustomerGUIDFallBack())
u.RawQuery = q.Encode()
return u.String()
}
func (api *KSCloudAPI) getListFrameworkURL() string {
u := url.URL{}
u.Scheme, u.Host = parseHost(api.GetCloudAPIURL())
u.Path = "api/v1/armoFrameworks"
q := u.Query()
q.Add("customerGUID", api.getCustomerGUIDFallBack())
u.RawQuery = q.Encode()
return u.String()
}
func (api *KSCloudAPI) getExceptionsURL(clusterName string) string {
u := url.URL{}
u.Scheme, u.Host = parseHost(api.GetCloudAPIURL())
u.Path = "api/v1/armoPostureExceptions"
q := u.Query()
q.Add("customerGUID", api.getCustomerGUIDFallBack())
// if clusterName != "" { // TODO - fix customer name support in Armo BE
// q.Add("clusterName", clusterName)
// }
u.RawQuery = q.Encode()
return u.String()
}
func (api *KSCloudAPI) exceptionsURL(exceptionsPolicyName string) string {
u := url.URL{}
u.Scheme, u.Host = parseHost(api.GetCloudAPIURL())
u.Path = "api/v1/postureExceptionPolicy"
q := u.Query()
q.Add("customerGUID", api.getCustomerGUIDFallBack())
if exceptionsPolicyName != "" { // for delete
q.Add("policyName", exceptionsPolicyName)
}
u.RawQuery = q.Encode()
return u.String()
}
func (api *KSCloudAPI) getAccountConfigDefault(clusterName string) string {
config := api.getAccountConfig(clusterName)
url := config + "&scope=customer"
return url
}
func (api *KSCloudAPI) getAccountConfig(clusterName string) string {
u := url.URL{}
u.Scheme, u.Host = parseHost(api.GetCloudAPIURL())
u.Path = "api/v1/armoCustomerConfiguration"
q := u.Query()
q.Add("customerGUID", api.getCustomerGUIDFallBack())
if clusterName != "" { // TODO - fix customer name support in Armo BE
q.Add("clusterName", clusterName)
}
u.RawQuery = q.Encode()
return u.String()
}
func (api *KSCloudAPI) getAccountURL() string {
u := url.URL{}
u.Scheme, u.Host = parseHost(api.GetCloudAPIURL())
u.Path = "api/v1/createTenant"
return u.String()
}
func (api *KSCloudAPI) getApiToken() string {
u := url.URL{}
u.Scheme, u.Host = parseHost(api.GetCloudAuthURL())
u.Path = "identity/resources/auth/v1/api-token"
return u.String()
}
func (api *KSCloudAPI) getOpenidCustomers() string {
u := url.URL{}
u.Scheme, u.Host = parseHost(api.GetCloudAPIURL())
u.Path = "api/v1/openid_customers"
return u.String()
}
func (api *KSCloudAPI) getAuthCookie() (string, error) {
selectCustomer := KSCloudSelectCustomer{SelectedCustomerGuid: api.accountID}
requestBody, _ := json.Marshal(selectCustomer)
client := &http.Client{}
httpRequest, err := http.NewRequest(http.MethodPost, api.getOpenidCustomers(), bytes.NewBuffer(requestBody))
if err != nil {
return "", err
}
httpRequest.Header.Set("Content-Type", "application/json")
httpRequest.Header.Set("Authorization", fmt.Sprintf("Bearer %s", api.feToken.Token))
httpResponse, err := client.Do(httpRequest)
if err != nil {
return "", err
}
defer httpResponse.Body.Close()
if httpResponse.StatusCode != http.StatusOK {
return "", fmt.Errorf("failed to get cookie from %s: status %d", api.getOpenidCustomers(), httpResponse.StatusCode)
}
cookies := httpResponse.Header.Get("set-cookie")
if len(cookies) == 0 {
return "", fmt.Errorf("no cookie field in response from %s", api.getOpenidCustomers())
}
authCookie := ""
for _, cookie := range strings.Split(cookies, ";") {
kv := strings.Split(cookie, "=")
if kv[0] == "auth" {
authCookie = kv[1]
}
}
if len(authCookie) == 0 {
return "", fmt.Errorf("no auth cookie field in response from %s", api.getOpenidCustomers())
}
return authCookie, nil
}
func (api *KSCloudAPI) appendAuthHeaders(headers map[string]string) {
if api.feToken.Token != "" {
headers["Authorization"] = fmt.Sprintf("Bearer %s", api.feToken.Token)
}
if api.authCookie != "" {
headers["Cookie"] = fmt.Sprintf("auth=%s", api.authCookie)
}
}
func (api *KSCloudAPI) getCustomerGUIDFallBack() string {
if api.accountID != "" {
return api.accountID
}
return "11111111-1111-1111-1111-111111111111"
}
func parseHost(host string) (string, string) {
if strings.HasPrefix(host, "http://") {
return "http", strings.Replace(host, "http://", "", 1)
}
// default scheme
return "https", strings.Replace(host, "https://", "", 1)
}
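parseHost's scheme detection (https by default, http only on an explicit `http://` prefix) is self-contained enough to exercise standalone. This sketch keeps the same contract as the function above, using strings.TrimPrefix for the same replacement:

```go
package main

import (
	"fmt"
	"strings"
)

// parseHost splits a host or URL into (scheme, bare host); https is the
// default scheme unless an explicit http:// prefix is present.
func parseHost(host string) (string, string) {
	if strings.HasPrefix(host, "http://") {
		return "http", strings.TrimPrefix(host, "http://")
	}
	// default scheme
	return "https", strings.TrimPrefix(host, "https://")
}

func main() {
	s, h := parseHost("http://api.example.com")
	fmt.Println(s, h)
	s, h = parseHost("api.example.com")
	fmt.Println(s, h)
}
```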

View File

@@ -0,0 +1,202 @@
package getter
import (
"context"
"fmt"
"log"
"net/http"
"net/http/httputil"
"time"
)
type (
// KSCloudOption allows to configure the behavior of the KS Cloud client.
KSCloudOption func(*ksCloudOptions)
// ksCloudOptions holds all the configurable parts of the KS Cloud client.
ksCloudOptions struct {
httpClient *http.Client
cloudReportURL string
cloudUIURL string
timeout *time.Duration
withTrace bool
}
// request option instructs post/get/delete to alter the outgoing request
requestOption func(*requestOptions)
// requestOptions knows how to enrich a request with headers
requestOptions struct {
withJSON bool
withToken string
withCookie *http.Cookie
withTrace bool
headers map[string]string
reqContext context.Context
}
)
// KS Cloud client options
// WithHTTPClient overrides the default http.Client used by the KS Cloud client.
func WithHTTPClient(client *http.Client) KSCloudOption {
return func(o *ksCloudOptions) {
o.httpClient = client
}
}
// WithTimeout sets a global timeout on all operations performed by the KS Cloud client.
//
// A value of 0 means no timeout.
//
// The default is 61s.
func WithTimeout(timeout time.Duration) KSCloudOption {
duration := timeout
return func(o *ksCloudOptions) {
o.timeout = &duration
}
}
// WithReportURL specifies the URL to post reports.
func WithReportURL(u string) KSCloudOption {
return func(o *ksCloudOptions) {
o.cloudReportURL = u
}
}
// WithFrontendURL specifies the URL to access the KS Cloud UI.
func WithFrontendURL(u string) KSCloudOption {
return func(o *ksCloudOptions) {
o.cloudUIURL = u
}
}
// WithTrace toggles request dumps for inspection & debugging.
func WithTrace(enabled bool) KSCloudOption {
return func(o *ksCloudOptions) {
o.withTrace = enabled
}
}
var defaultClient = &http.Client{
Timeout: 61 * time.Second,
}
// ksCloudOptionsWithDefaults sets defaults for the KS client and applies overrides.
func ksCloudOptionsWithDefaults(opts []KSCloudOption) *ksCloudOptions {
options := &ksCloudOptions{
httpClient: defaultClient,
}
for _, apply := range opts {
apply(options)
}
if options.timeout != nil {
// non-default timeout (0 means no timeout)
// clone the client and override the timeout
client := *options.httpClient
client.Timeout = *options.timeout
options.httpClient = &client
}
return options
}
// http request options
// withContentJSON sets JSON content type for a request
func withContentJSON(enabled bool) requestOption {
return func(o *requestOptions) {
o.withJSON = enabled
}
}
// withToken sets an Authorization header for a request
func withToken(token string) requestOption {
return func(o *requestOptions) {
o.withToken = token
}
}
// withCookie sets an authentication cookie for a request
func withCookie(cookie *http.Cookie) requestOption {
return func(o *requestOptions) {
o.withCookie = cookie
}
}
// withExtraHeaders adds extra headers to a request
func withExtraHeaders(headers map[string]string) requestOption {
return func(o *requestOptions) {
o.headers = headers
}
}
/* not used yet
// withContext sets the context of a request.
//
// By default, context.Background() is used.
func withContext(ctx context.Context) requestOption {
return func(o *requestOptions) {
o.reqContext = ctx
}
}
*/
// withTrace dumps requests for debugging
func withTrace(enabled bool) requestOption {
return func(o *requestOptions) {
o.withTrace = enabled
}
}
func (o *requestOptions) setHeaders(req *http.Request) {
if o.withJSON {
req.Header.Set("Content-Type", "application/json")
}
if len(o.withToken) > 0 {
req.Header.Set("Authorization", fmt.Sprintf("Bearer %s", o.withToken))
}
if o.withCookie != nil {
req.AddCookie(o.withCookie)
}
for k, v := range o.headers {
req.Header.Set(k, v)
}
}
// traceReq dumps the content of an outgoing request for inspecting or debugging the client.
func (o *requestOptions) traceReq(req *http.Request) {
if !o.withTrace {
return
}
dump, _ := httputil.DumpRequestOut(req, true)
log.Printf("%s\n", dump)
}
// traceResp dumps the content of an API response for inspecting or debugging the client.
func (o *requestOptions) traceResp(resp *http.Response) {
if !o.withTrace {
return
}
dump, _ := httputil.DumpResponse(resp, true)
log.Printf("%s\n", dump)
}
func requestOptionsWithDefaults(opts []requestOption) *requestOptions {
o := &requestOptions{
reqContext: context.Background(),
}
for _, apply := range opts {
apply(o)
}
return o
}

View File

@@ -24,9 +24,13 @@ var (
ErrIDRequired = errors.New("missing required input control ID")
ErrFrameworkNotMatching = errors.New("framework from file not matching")
ErrControlNotMatching = errors.New("control from file not matching")
)
_ IPolicyGetter = &LoadPolicy{}
_ IExceptionsGetter = &LoadPolicy{}
var (
_ IPolicyGetter = &LoadPolicy{}
_ IExceptionsGetter = &LoadPolicy{}
_ IAttackTracksGetter = &LoadPolicy{}
_ IControlsInputsGetter = &LoadPolicy{}
)
func getCacheDir() string {

View File

@@ -6,6 +6,7 @@ import (
"path/filepath"
"testing"
"github.com/kubescape/kubescape/v2/internal/testutils"
"github.com/stretchr/testify/require"
)
@@ -386,7 +387,7 @@ func TestLoadPolicy(t *testing.T) {
}
func testFrameworkFile(framework string) string {
return filepath.Join(".", "testdata", fmt.Sprintf("%s.json", framework))
return filepath.Join(testutils.CurrentDir(), "testdata", fmt.Sprintf("%s.json", framework))
}
func writeTempJSONControlInputs(t testing.TB) (string, map[string][]string) {

File diff suppressed because it is too large

core/cautils/getter/testdata/policy.json (vendored new file, 25821 lines)

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,65 @@
package getter
import (
"net/url"
"path"
)
// buildAPIURL builds a URL pointing to the API backend.
func (api *KSCloudAPI) buildAPIURL(pth string, pairs ...string) string {
return buildQuery(url.URL{
Scheme: api.scheme,
Host: api.host,
Path: pth,
}, pairs...)
}
// buildUIURL builds a URL pointing to the UI frontend.
func (api *KSCloudAPI) buildUIURL(pth string, pairs ...string) string {
return buildQuery(url.URL{
Scheme: api.uischeme,
Host: api.uihost,
Path: pth,
}, pairs...)
}
// buildAuthURL builds a URL pointing to the authentication endpoint.
func (api *KSCloudAPI) buildAuthURL(pth string, pairs ...string) string {
return buildQuery(url.URL{
Scheme: api.authscheme,
Host: api.authhost,
Path: pth,
}, pairs...)
}
// buildReportURL builds a URL pointing to the reporting endpoint.
func (api *KSCloudAPI) buildReportURL(pth string, pairs ...string) string {
return buildQuery(url.URL{
Scheme: api.reportscheme,
Host: api.reporthost,
Path: pth,
}, pairs...)
}
// buildQuery builds a URL with query params.
//
// Params are provided in pairs (param name, value).
func buildQuery(u url.URL, pairs ...string) string {
if len(pairs)%2 != 0 {
panic("dev error: buildQuery accepts query params in (name, value) pairs")
}
q := u.Query()
for i := 0; i < len(pairs)-1; i += 2 {
param := pairs[i]
value := pairs[i+1]
q.Add(param, value)
}
u.RawQuery = q.Encode()
u.Path = path.Clean(u.Path)
return u.String()
}

View File

@@ -0,0 +1,86 @@
package getter
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestBuildURL(t *testing.T) {
t.Parallel()
ks := NewKSCloudAPICustomized(
"api.example.com", "auth.example.com", // required
WithFrontendURL("ui.example.com"), // optional
WithReportURL("report.example.com"), // optional
)
t.Run("should build API URL with query params on https host", func(t *testing.T) {
require.Equal(t,
"https://api.example.com/path?q1=v1&q2=v2",
ks.buildAPIURL("/path", "q1", "v1", "q2", "v2"),
)
})
t.Run("should build API URL with query params on http host", func(t *testing.T) {
ku := NewKSCloudAPICustomized("http://api.example.com", "auth.example.com")
require.Equal(t,
"http://api.example.com/path?q1=v1&q2=v2",
ku.buildAPIURL("/path", "q1", "v1", "q2", "v2"),
)
})
t.Run("should panic when params are not provided in pairs", func(t *testing.T) {
require.Panics(t, func() {
// notice how the linter detects wrong args
_ = ks.buildAPIURL("/path", "q1", "v1", "q2") //nolint:staticcheck
})
})
t.Run("should build UI URL with query params on https host", func(t *testing.T) {
require.Equal(t,
"https://ui.example.com/path?q1=v1&q2=v2",
ks.buildUIURL("/path", "q1", "v1", "q2", "v2"),
)
})
t.Run("should build report URL with query params on https host", func(t *testing.T) {
require.Equal(t,
"https://report.example.com/path?q1=v1&q2=v2",
ks.buildReportURL("/path", "q1", "v1", "q2", "v2"),
)
})
}
func TestViewURL(t *testing.T) {
t.Parallel()
ks := NewKSCloudAPICustomized(
"api.example.com", "auth.example.com", // required
WithFrontendURL("ui.example.com"), // optional
WithReportURL("report.example.com"), // optional
)
ks.SetAccountID("me")
ks.SetInvitationToken("invite")
t.Run("should render UI report URL", func(t *testing.T) {
require.Equal(t, "https://ui.example.com/repository-scanning/xyz", ks.ViewReportURL("xyz"))
})
t.Run("should render UI dashboard URL", func(t *testing.T) {
require.Equal(t, "https://ui.example.com/dashboard", ks.ViewDashboardURL())
})
t.Run("should render UI RBAC URL", func(t *testing.T) {
require.Equal(t, "https://ui.example.com/rbac-visualizer", ks.ViewRBACURL())
})
t.Run("should render UI scan URL", func(t *testing.T) {
require.Equal(t, "https://ui.example.com/compliance/cluster", ks.ViewScanURL("cluster"))
})
t.Run("should render UI sign URL", func(t *testing.T) {
require.Equal(t, "https://ui.example.com/account/sign-up?customerGUID=me&invitationToken=invite&utm_medium=createaccount&utm_source=ARMOgithub", ks.ViewSignURL())
})
}


@@ -0,0 +1,76 @@
package getter
import (
"fmt"
"io"
"net/http"
"strings"
)
// parseHost picks a host from a hostname or a URL and detects the scheme.
//
// The default scheme is https. This may be altered by specifying an explicit http://hostname URL.
func parseHost(host string) (string, string) {
if strings.HasPrefix(host, "http://") {
return "http", strings.Replace(host, "http://", "", 1) // strip the explicit scheme prefix
}
// default scheme
return "https", strings.Replace(host, "https://", "", 1)
}
func isNativeFramework(framework string) bool {
return contains(NativeFrameworks, framework)
}
func contains(s []string, str string) bool {
for _, v := range s {
if strings.EqualFold(v, str) {
return true
}
}
return false
}
func min(a, b int64) int64 {
if a < b {
return a
}
return b
}
// errAPI reports an API error, with a cap on the length of the error message.
func errAPI(resp *http.Response) error {
const maxSize = 1024
reason := new(strings.Builder)
if resp.Body != nil {
size := min(resp.ContentLength, maxSize)
if size > 0 {
reason.Grow(int(size))
}
_, _ = io.CopyN(reason, resp.Body, size)
defer resp.Body.Close()
}
return fmt.Errorf("http-error: '%s', reason: '%s'", resp.Status, reason.String())
}
// errAuth returns an authentication error.
//
// Authentication errors upon login yield a less detailed message.
func errAuth(resp *http.Response) error {
return fmt.Errorf("error authenticating: %d", resp.StatusCode)
}
func readString(rdr io.Reader, sizeHint int64) (string, error) {
var b strings.Builder
b.Grow(int(sizeHint))
_, err := io.Copy(&b, rdr)
return b.String(), err
}


@@ -0,0 +1,45 @@
package getter
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestParseHost(t *testing.T) {
t.Parallel()
t.Run("should recognize http scheme", func(t *testing.T) {
t.Parallel()
const input = "http://localhost:7555"
scheme, host := parseHost(input)
require.Equal(t, "http", scheme)
require.Equal(t, "localhost:7555", host)
})
t.Run("should recognize https scheme", func(t *testing.T) {
t.Parallel()
const input = "https://localhost:7555"
scheme, host := parseHost(input)
require.Equal(t, "https", scheme)
require.Equal(t, "localhost:7555", host)
})
t.Run("should adopt https scheme by default", func(t *testing.T) {
t.Parallel()
const input = "portal-dev.armo.cloud"
scheme, host := parseHost(input)
require.Equal(t, "https", scheme)
require.Equal(t, "portal-dev.armo.cloud", host)
})
}
func TestIsNativeFramework(t *testing.T) {
t.Parallel()
require.Truef(t, isNativeFramework("nSa"), "expected nsa to be native (case insensitive)")
require.Falsef(t, isNativeFramework("foo"), "expected framework to be custom")
}


@@ -226,7 +226,7 @@ func (scanInfo *ScanInfo) contains(policyName string) bool {
func scanInfoToScanMetadata(ctx context.Context, scanInfo *ScanInfo) *reporthandlingv2.Metadata {
metadata := &reporthandlingv2.Metadata{}
metadata.ScanMetadata.Format = scanInfo.Format
metadata.ScanMetadata.Formats = []string{scanInfo.Format}
metadata.ScanMetadata.FormatVersion = scanInfo.FormatVersion
metadata.ScanMetadata.Submit = scanInfo.Submit


@@ -27,6 +27,7 @@ var (
cloudapis.CloudProviderDescribeKind,
cloudapis.CloudProviderDescribeRepositoriesKind,
cloudapis.CloudProviderListEntitiesForPoliciesKind,
cloudapis.CloudProviderPolicyVersionKind,
string(cloudsupport.TypeApiServerInfo),
}
)


@@ -84,7 +84,7 @@ func downloadArtifacts(ctx context.Context, downloadInfo *metav1.DownloadInfo) e
}
for artifact := range artifacts {
if err := downloadArtifact(ctx, &metav1.DownloadInfo{Target: artifact, Path: downloadInfo.Path, FileName: fmt.Sprintf("%s.json", artifact)}, artifacts); err != nil {
logger.L().Ctx(ctx).Error("error downloading", helpers.String("artifact", artifact), helpers.Error(err))
logger.L().Ctx(ctx).Warning("error downloading", helpers.String("artifact", artifact), helpers.Error(err))
}
}
return nil


@@ -11,9 +11,11 @@ import (
"github.com/kubescape/kubescape/v2/core/pkg/fixhandler"
)
const NoChangesApplied = "No changes were applied."
const NoResourcesToFix = "No issues to fix."
const ConfirmationQuestion = "Would you like to apply the changes to the files above? [y|n]: "
const (
noChangesApplied = "No changes were applied."
noResourcesToFix = "No issues to fix."
confirmationQuestion = "Would you like to apply the changes to the files above? [y|n]: "
)
func (ks *Kubescape) Fix(ctx context.Context, fixInfo *metav1.FixInfo) error {
logger.L().Info("Reading report file...")
@@ -25,19 +27,19 @@ func (ks *Kubescape) Fix(ctx context.Context, fixInfo *metav1.FixInfo) error {
resourcesToFix := handler.PrepareResourcesToFix(ctx)
if len(resourcesToFix) == 0 {
logger.L().Info(NoResourcesToFix)
logger.L().Info(noResourcesToFix)
return nil
}
handler.PrintExpectedChanges(resourcesToFix)
if fixInfo.DryRun {
logger.L().Info(NoChangesApplied)
logger.L().Info(noChangesApplied)
return nil
}
if !fixInfo.NoConfirm && !userConfirmed() {
logger.L().Info(NoChangesApplied)
logger.L().Info(noChangesApplied)
return nil
}
@@ -46,7 +48,7 @@ func (ks *Kubescape) Fix(ctx context.Context, fixInfo *metav1.FixInfo) error {
if len(errors) > 0 {
for _, err := range errors {
logger.L().Ctx(ctx).Error(err.Error())
logger.L().Ctx(ctx).Warning(err.Error())
}
return fmt.Errorf("Failed to fix some resources, check the logs for more details")
}
@@ -58,7 +60,7 @@ func userConfirmed() bool {
var input string
for {
fmt.Printf(ConfirmationQuestion)
fmt.Println(confirmationQuestion)
if _, err := fmt.Scanln(&input); err != nil {
continue
}


@@ -66,8 +66,9 @@ func getRBACHandler(tenantConfig cautils.ITenantConfig, k8s *k8sinterface.Kubern
}
func getReporter(ctx context.Context, tenantConfig cautils.ITenantConfig, reportID string, submit, fwScan bool, scanningContext cautils.ScanningContext) reporter.IReport {
ctx, span := otel.Tracer("").Start(ctx, "getReporter")
_, span := otel.Tracer("").Start(ctx, "getReporter")
defer span.End()
if submit {
submitData := reporterv2.SubmitContextScan
if scanningContext != cautils.ContextCluster {
@@ -77,7 +78,7 @@ func getReporter(ctx context.Context, tenantConfig cautils.ITenantConfig, report
}
if tenantConfig.GetAccountID() == "" {
// Add link only when scanning a cluster using a framework
return reporterv2.NewReportMock("https://hub.armosec.io/docs/installing-kubescape", "run kubescape with the '--account' flag")
return reporterv2.NewReportMock("", "")
}
var message string
if !fwScan {
@@ -90,6 +91,7 @@ func getReporter(ctx context.Context, tenantConfig cautils.ITenantConfig, report
func getResourceHandler(ctx context.Context, scanInfo *cautils.ScanInfo, tenantConfig cautils.ITenantConfig, k8s *k8sinterface.KubernetesApi, hostSensorHandler hostsensorutils.IHostSensor, registryAdaptors *resourcehandler.RegistryAdaptors) resourcehandler.IResourceHandler {
ctx, span := otel.Tracer("").Start(ctx, "getResourceHandler")
defer span.End()
if len(scanInfo.InputPatterns) > 0 || k8s == nil {
// scanInfo.HostSensor.SetBool(false)
return resourcehandler.NewFileResourceHandler(ctx, scanInfo.InputPatterns, registryAdaptors)
@@ -99,26 +101,38 @@ func getResourceHandler(ctx context.Context, scanInfo *cautils.ScanInfo, tenantC
return resourcehandler.NewK8sResourceHandler(k8s, getFieldSelector(scanInfo), hostSensorHandler, rbacObjects, registryAdaptors)
}
// getHostSensorHandler yields a IHostSensor that knows how to collect a host's scanned resources.
//
// A noop sensor is returned whenever host scanning is disabled or an error prevented the scanner to properly deploy.
func getHostSensorHandler(ctx context.Context, scanInfo *cautils.ScanInfo, k8s *k8sinterface.KubernetesApi) hostsensorutils.IHostSensor {
if !k8sinterface.IsConnectedToCluster() || k8s == nil {
return &hostsensorutils.HostSensorHandlerMock{}
}
const wantsHostSensorControls = true // defaults to disabling the scanner if not explicitly enabled (TODO(fredbi): should be addressed by injecting ScanInfo defaults)
hostSensorVal := scanInfo.HostSensorEnabled.Get()
hasHostSensorControls := true
// we need to determined which controls needs host scanner
if scanInfo.HostSensorEnabled.Get() == nil && hasHostSensorControls {
scanInfo.HostSensorEnabled.SetBool(false) // default - do not run host scanner
}
if hostSensorVal := scanInfo.HostSensorEnabled.Get(); hostSensorVal != nil && *hostSensorVal {
switch {
case !k8sinterface.IsConnectedToCluster() || k8s == nil: // TODO(fred): fix race condition on global KSConfig there
return hostsensorutils.NewHostSensorHandlerMock()
case hostSensorVal != nil && *hostSensorVal:
hostSensorHandler, err := hostsensorutils.NewHostSensorHandler(k8s, scanInfo.HostSensorYamlPath)
if err != nil {
logger.L().Ctx(ctx).Warning(fmt.Sprintf("failed to create host scanner: %s", err.Error()))
return &hostsensorutils.HostSensorHandlerMock{}
return hostsensorutils.NewHostSensorHandlerMock()
}
return hostSensorHandler
case hostSensorVal == nil && wantsHostSensorControls:
// TODO: we need to determine which controls need the host scanner
scanInfo.HostSensorEnabled.SetBool(false)
fallthrough
default:
return hostsensorutils.NewHostSensorHandlerMock()
}
return &hostsensorutils.HostSensorHandlerMock{}
}
func getFieldSelector(scanInfo *cautils.ScanInfo) resourcehandler.IFieldSelector {
if scanInfo.IncludeNamespaces != "" {
return resourcehandler.NewIncludeSelector(scanInfo.IncludeNamespaces)
@@ -273,10 +287,15 @@ func getAttackTracksGetter(ctx context.Context, attackTracks, accountID string,
// getUIPrinter returns a printer that will be used to print to the program's UI (terminal)
func getUIPrinter(ctx context.Context, verboseMode bool, formatVersion string, attackTree bool, viewType cautils.ViewTypes) printer.IPrinter {
p := printerv2.NewPrettyPrinter(verboseMode, formatVersion, attackTree, viewType)
var p printer.IPrinter
if helpers.ToLevel(logger.L().GetLevel()) >= helpers.WarningLevel {
p = &printerv2.SilentPrinter{}
} else {
p = printerv2.NewPrettyPrinter(verboseMode, formatVersion, attackTree, viewType)
// Since the UI of the program is a CLI (Stdout), it means that it should always print to Stdout
p.SetWriter(ctx, os.Stdout.Name())
// Since the UI of the program is a CLI (Stdout), it means that it should always print to Stdout
p.SetWriter(ctx, os.Stdout.Name())
}
return p
}


@@ -5,7 +5,13 @@ import (
"reflect"
"testing"
"github.com/kubescape/go-logger"
"github.com/kubescape/go-logger/helpers"
"github.com/kubescape/k8s-interface/k8sinterface"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/pkg/hostsensorutils"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func Test_getUIPrinter(t *testing.T) {
@@ -14,27 +20,159 @@ func Test_getUIPrinter(t *testing.T) {
VerboseMode: true,
View: "control",
}
wantFormatVersion := scanInfo.FormatVersion
wantVerboseMode := scanInfo.VerboseMode
wantViewType := cautils.ViewTypes(scanInfo.View)
got := getUIPrinter(context.TODO(), scanInfo.VerboseMode, scanInfo.FormatVersion, scanInfo.PrintAttackTree, cautils.ViewTypes(scanInfo.View))
gotValue := reflect.ValueOf(got).Elem()
gotFormatVersion := gotValue.FieldByName("formatVersion").String()
gotVerboseMode := gotValue.FieldByName("verboseMode").Bool()
gotViewType := cautils.ViewTypes(gotValue.FieldByName("viewType").String())
if gotFormatVersion != wantFormatVersion {
t.Errorf("Got: %s, want: %s", gotFormatVersion, wantFormatVersion)
type args struct {
ctx context.Context
formatVersion string
viewType cautils.ViewTypes
verboseMode bool
printAttack bool
loggerLevel helpers.Level
}
type wantTypes struct {
structType string
formatVersion string
viewType cautils.ViewTypes
verboseMode bool
}
tests := []struct {
name string
args args
want wantTypes
testAllFields bool
}{
{
name: "Test getUIPrinter PrettyPrinter",
args: args{
ctx: context.TODO(),
verboseMode: scanInfo.VerboseMode,
formatVersion: scanInfo.FormatVersion,
printAttack: scanInfo.PrintAttackTree,
viewType: cautils.ViewTypes(scanInfo.View),
loggerLevel: helpers.InfoLevel,
},
want: wantTypes{
structType: "*printer.PrettyPrinter",
formatVersion: scanInfo.FormatVersion,
verboseMode: scanInfo.VerboseMode,
viewType: cautils.ViewTypes(scanInfo.View),
},
testAllFields: true,
},
{
name: "Test getUIPrinter SilentPrinter",
args: args{
ctx: context.TODO(),
verboseMode: scanInfo.VerboseMode,
formatVersion: scanInfo.FormatVersion,
printAttack: scanInfo.PrintAttackTree,
viewType: cautils.ViewTypes(scanInfo.View),
loggerLevel: helpers.WarningLevel,
},
want: wantTypes{
structType: "*printer.SilentPrinter",
formatVersion: "",
verboseMode: false,
viewType: cautils.ViewTypes(""),
},
testAllFields: false,
},
}
if gotVerboseMode != wantVerboseMode {
t.Errorf("Got: %t, want: %t", gotVerboseMode, wantVerboseMode)
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
logger.L().SetLevel(tt.args.loggerLevel.String())
got := getUIPrinter(tt.args.ctx, tt.args.verboseMode, tt.args.formatVersion, tt.args.printAttack, tt.args.viewType)
if gotViewType != wantViewType {
t.Errorf("Got: %v, want: %v", gotViewType, wantViewType)
}
assert.Equal(t, tt.want.structType, reflect.TypeOf(got).String())
if !tt.testAllFields {
return
}
gotValue := reflect.ValueOf(got).Elem()
gotFormatVersion := gotValue.FieldByName("formatVersion").String()
gotVerboseMode := gotValue.FieldByName("verboseMode").Bool()
gotViewType := cautils.ViewTypes(gotValue.FieldByName("viewType").String())
if gotFormatVersion != tt.want.formatVersion {
t.Errorf("Got: %s, want: %s", gotFormatVersion, tt.want.formatVersion)
}
if gotVerboseMode != tt.want.verboseMode {
t.Errorf("Got: %t, want: %t", gotVerboseMode, tt.want.verboseMode)
}
if gotViewType != tt.want.viewType {
t.Errorf("Got: %v, want: %v", gotViewType, tt.want.viewType)
}
})
}
}
func TestGetSensorHandler(t *testing.T) {
t.Parallel()
ctx := context.Background()
t.Run("should return mock sensor if no k8s interface is provided", func(t *testing.T) {
t.Parallel()
scanInfo := &cautils.ScanInfo{}
var k8s *k8sinterface.KubernetesApi
sensor := getHostSensorHandler(ctx, scanInfo, k8s)
require.NotNil(t, sensor)
_, isMock := sensor.(*hostsensorutils.HostSensorHandlerMock)
require.True(t, isMock)
})
t.Run("should return mock sensor if the sensor is not enabled", func(t *testing.T) {
t.Parallel()
scanInfo := &cautils.ScanInfo{}
k8s := &k8sinterface.KubernetesApi{}
sensor := getHostSensorHandler(ctx, scanInfo, k8s)
require.NotNil(t, sensor)
_, isMock := sensor.(*hostsensorutils.HostSensorHandlerMock)
require.True(t, isMock)
})
t.Run("should return mock sensor if the sensor is disabled", func(t *testing.T) {
t.Parallel()
falseFlag := cautils.NewBoolPtr(nil)
falseFlag.SetBool(false)
scanInfo := &cautils.ScanInfo{
HostSensorEnabled: falseFlag,
}
k8s := &k8sinterface.KubernetesApi{}
sensor := getHostSensorHandler(ctx, scanInfo, k8s)
require.NotNil(t, sensor)
_, isMock := sensor.(*hostsensorutils.HostSensorHandlerMock)
require.True(t, isMock)
})
t.Run("should return mock sensor if the sensor is enabled, but can't deploy (nil)", func(t *testing.T) {
t.Parallel()
falseFlag := cautils.NewBoolPtr(nil)
falseFlag.SetBool(true)
scanInfo := &cautils.ScanInfo{
HostSensorEnabled: falseFlag,
}
var k8s *k8sinterface.KubernetesApi
sensor := getHostSensorHandler(ctx, scanInfo, k8s)
require.NotNil(t, sensor)
_, isMock := sensor.(*hostsensorutils.HostSensorHandlerMock)
require.True(t, isMock)
})
// TODO(fredbi): need to share the k8s client mock to test a happy path / deployment failure path
}


@@ -33,7 +33,7 @@ type componentInterfaces struct {
}
func getInterfaces(ctx context.Context, scanInfo *cautils.ScanInfo) componentInterfaces {
ctx, span := otel.Tracer("").Start(ctx, "getInterfaces")
ctx, span := otel.Tracer("").Start(ctx, "setup interfaces")
defer span.End()
// ================== setup k8s interface object ======================================
@@ -74,7 +74,7 @@ func getInterfaces(ctx context.Context, scanInfo *cautils.ScanInfo) componentInt
hostSensorHandler := getHostSensorHandler(ctx, scanInfo, k8s)
if err := hostSensorHandler.Init(ctxHostScanner); err != nil {
logger.L().Ctx(ctxHostScanner).Error("failed to init host scanner", helpers.Error(err))
hostSensorHandler = &hostsensorutils.HostSensorHandlerMock{}
hostSensorHandler = hostsensorutils.NewHostSensorHandlerMock()
}
spanHostScanner.End()
@@ -119,12 +119,11 @@ func getInterfaces(ctx context.Context, scanInfo *cautils.ScanInfo) componentInt
}
func (ks *Kubescape) Scan(ctx context.Context, scanInfo *cautils.ScanInfo) (*resultshandling.ResultsHandler, error) {
ctx, spanScan := otel.Tracer("").Start(ctx, "kubescape.Scan")
defer spanScan.End()
ctxInit, spanInit := otel.Tracer("").Start(ctx, "initialization")
logger.L().Info("Kubescape scanner starting")
// ===================== Initialization =====================
ctxInit, spanInit := otel.Tracer("").Start(ctx, "initialization")
scanInfo.Init(ctxInit) // initialize scan info
interfaces := getInterfaces(ctxInit, scanInfo)
@@ -137,10 +136,10 @@ func (ks *Kubescape) Scan(ctx context.Context, scanInfo *cautils.ScanInfo) (*res
downloadReleasedPolicy := getter.NewDownloadReleasedPolicy() // download config inputs from github release
// set policy getter only after setting the customerGUID
scanInfo.Getters.PolicyGetter = getPolicyGetter(ctx, scanInfo.UseFrom, interfaces.tenantConfig.GetTenantEmail(), scanInfo.FrameworkScan, downloadReleasedPolicy)
scanInfo.Getters.ControlsInputsGetter = getConfigInputsGetter(ctx, scanInfo.ControlsInputs, interfaces.tenantConfig.GetAccountID(), downloadReleasedPolicy)
scanInfo.Getters.ExceptionsGetter = getExceptionsGetter(ctx, scanInfo.UseExceptions, interfaces.tenantConfig.GetAccountID(), downloadReleasedPolicy)
scanInfo.Getters.AttackTracksGetter = getAttackTracksGetter(ctx, scanInfo.AttackTracks, interfaces.tenantConfig.GetAccountID(), downloadReleasedPolicy)
scanInfo.Getters.PolicyGetter = getPolicyGetter(ctxInit, scanInfo.UseFrom, interfaces.tenantConfig.GetTenantEmail(), scanInfo.FrameworkScan, downloadReleasedPolicy)
scanInfo.Getters.ControlsInputsGetter = getConfigInputsGetter(ctxInit, scanInfo.ControlsInputs, interfaces.tenantConfig.GetAccountID(), downloadReleasedPolicy)
scanInfo.Getters.ExceptionsGetter = getExceptionsGetter(ctxInit, scanInfo.UseExceptions, interfaces.tenantConfig.GetAccountID(), downloadReleasedPolicy)
scanInfo.Getters.AttackTracksGetter = getAttackTracksGetter(ctxInit, scanInfo.AttackTracks, interfaces.tenantConfig.GetAccountID(), downloadReleasedPolicy)
// TODO - list supported frameworks/controls
if scanInfo.ScanAll {
@@ -150,35 +149,37 @@ func (ks *Kubescape) Scan(ctx context.Context, scanInfo *cautils.ScanInfo) (*res
// remove host scanner components
defer func() {
if err := interfaces.hostSensorHandler.TearDown(); err != nil {
logger.L().Ctx(ctxInit).Error("failed to tear down host scanner", helpers.Error(err))
logger.L().Ctx(ctx).Error("failed to tear down host scanner", helpers.Error(err))
}
}()
resultsHandling := resultshandling.NewResultsHandler(interfaces.report, interfaces.outputPrinters, interfaces.uiPrinter)
spanInit.End()
// ===================== policies & resources =====================
ctxPolicies, spanPolicies := otel.Tracer("").Start(ctx, "policies & resources")
ctxPolicies, spanPolicies := otel.Tracer("").Start(ctxInit, "policies & resources")
policyHandler := policyhandler.NewPolicyHandler(interfaces.resourceHandler)
scanData, err := policyHandler.CollectResources(ctxPolicies, scanInfo.PolicyIdentifier, scanInfo)
scanData, err := policyHandler.CollectResources(ctxPolicies, scanInfo.PolicyIdentifier, scanInfo, cautils.NewProgressHandler(""))
if err != nil {
spanInit.End()
return resultsHandling, err
}
spanPolicies.End()
spanInit.End()
// ========================= opa testing =====================
ctxOpa, spanOpa := otel.Tracer("").Start(ctx, "opa testing")
defer spanOpa.End()
deps := resources.NewRegoDependenciesData(k8sinterface.GetK8sConfig(), interfaces.tenantConfig.GetContextName())
reportResults := opaprocessor.NewOPAProcessor(scanData, deps)
if err := reportResults.ProcessRulesListenner(ctxOpa, cautils.NewProgressHandler("")); err != nil {
if err := reportResults.ProcessRulesListener(ctxOpa, cautils.NewProgressHandler("")); err != nil {
// TODO - do something
return resultsHandling, fmt.Errorf("%w", err)
}
spanOpa.End()
// ======================== prioritization ===================
_, spanPrioritization := otel.Tracer("").Start(ctx, "prioritization")
if priotizationHandler, err := resourcesprioritization.NewResourcesPrioritizationHandler(ctx, scanInfo.Getters.AttackTracksGetter, scanInfo.PrintAttackTree); err != nil {
_, spanPrioritization := otel.Tracer("").Start(ctxOpa, "prioritization")
if priotizationHandler, err := resourcesprioritization.NewResourcesPrioritizationHandler(ctxOpa, scanInfo.Getters.AttackTracksGetter, scanInfo.PrintAttackTree); err != nil {
logger.L().Ctx(ctx).Warning("failed to get attack tracks, this may affect the scanning results", helpers.Error(err))
} else if err := priotizationHandler.PrioritizeResources(scanData); err != nil {
return resultsHandling, fmt.Errorf("%w", err)


@@ -43,7 +43,7 @@ func (ks *Kubescape) SubmitExceptions(ctx context.Context, credentials *cautils.
// load cached config
tenantConfig := getTenantConfig(credentials, "", "", getKubernetesApi())
if err := tenantConfig.SetTenant(); err != nil {
logger.L().Ctx(ctx).Error("failed setting account ID", helpers.Error(err))
logger.L().Ctx(ctx).Warning("failed setting account ID", helpers.Error(err))
}
// load exceptions from file


@@ -58,7 +58,7 @@ func withNewline(content, targetNewline string) string {
replaceNewlines := map[string]bool{
unixNewline: true,
windowsNewline: true,
oldMacNewline: true,
oldMacNewline: true,
}
replaceNewlines[targetNewline] = false


@@ -4,7 +4,7 @@ import (
"context"
"encoding/json"
"fmt"
"io/ioutil"
"io"
"os"
"path"
"path/filepath"
@@ -36,7 +36,7 @@ func NewFixHandler(fixInfo *metav1.FixInfo) (*FixHandler, error) {
return nil, err
}
defer jsonFile.Close()
byteValue, _ := ioutil.ReadAll(jsonFile)
byteValue, _ := io.ReadAll(jsonFile)
var reportObj reporthandlingv2.PostureReport
if err = json.Unmarshal(byteValue, &reportObj); err != nil {
@@ -142,13 +142,13 @@ func (h *FixHandler) PrepareResourcesToFix(ctx context.Context) []ResourceFixInf
relativePath, documentIndex, err := h.getFilePathAndIndex(resourcePath)
if err != nil {
logger.L().Ctx(ctx).Error("Skipping invalid resource path: " + resourcePath)
logger.L().Ctx(ctx).Warning("Skipping invalid resource path: " + resourcePath)
continue
}
absolutePath := path.Join(h.localBasePath, relativePath)
if _, err := os.Stat(absolutePath); err != nil {
logger.L().Ctx(ctx).Error("Skipping missing file: " + absolutePath)
logger.L().Ctx(ctx).Warning("Skipping missing file: " + absolutePath)
continue
}
@@ -220,7 +220,7 @@ func (h *FixHandler) ApplyChanges(ctx context.Context, resourcesToFix []Resource
err = writeFixesToFile(filepath, fixedYamlString)
if err != nil {
logger.L().Ctx(ctx).Error(fmt.Sprintf("Failed to write fixes to file %s, %v", filepath, err.Error()))
logger.L().Ctx(ctx).Warning(fmt.Sprintf("Failed to write fixes to file %s, %v", filepath, err.Error()))
errors = append(errors, err)
}
}
@@ -259,9 +259,9 @@ func (h *FixHandler) ApplyFixToContent(ctx context.Context, yamlAsString, yamlEx
return "", err
}
fileFixInfo := getFixInfo(ctx, originalRootNodes, fixedRootNodes)
fixInfo := getFixInfo(ctx, originalRootNodes, fixedRootNodes)
fixedYamlLines := getFixedYamlLines(yamlLines, fileFixInfo, newline)
fixedYamlLines := getFixedYamlLines(yamlLines, fixInfo, newline)
fixedString = getStringFromSlice(fixedYamlLines, newline)
@@ -270,7 +270,9 @@ func (h *FixHandler) ApplyFixToContent(ctx context.Context, yamlAsString, yamlEx
func (h *FixHandler) getFileYamlExpressions(resourcesToFix []ResourceFixInfo) map[string]string {
fileYamlExpressions := make(map[string]string, 0)
for _, resourceToFix := range resourcesToFix {
for _, toPin := range resourcesToFix {
resourceToFix := toPin
singleExpression := reduceYamlExpressions(&resourceToFix)
resourceFilePath := resourceToFix.FilePath
@@ -339,7 +341,7 @@ func joinStrings(inputStrings ...string) string {
}
func getFileString(filepath string) (string, error) {
bytes, err := ioutil.ReadFile(filepath)
bytes, err := os.ReadFile(filepath)
if err != nil {
return "", fmt.Errorf("Error reading file %s", filepath)
@@ -349,7 +351,7 @@ func getFileString(filepath string) (string, error) {
}
func writeFixesToFile(filepath, content string) error {
err := ioutil.WriteFile(filepath, []byte(content), 0644)
err := os.WriteFile(filepath, []byte(content), 0644) //nolint:gosec
if err != nil {
return fmt.Errorf("Error writing fixes to file: %w", err)


@@ -8,6 +8,7 @@ import (
logger "github.com/kubescape/go-logger"
metav1 "github.com/kubescape/kubescape/v2/core/meta/datastructures/v1"
"github.com/kubescape/kubescape/v2/internal/testutils"
reporthandlingv2 "github.com/kubescape/opa-utils/reporthandling/v2"
"github.com/mikefarah/yq/v4/pkg/yqlib"
"github.com/stretchr/testify/assert"
@@ -32,11 +33,6 @@ func NewFixHandlerMock() (*FixHandler, error) {
}, nil
}
func getTestdataPath() string {
currentDir, _ := os.Getwd()
return filepath.Join(currentDir, "testdata")
}
func getTestCases() []indentationTestCase {
indentationTestCases := []indentationTestCase{
// Insertion Scenarios
@@ -123,7 +119,7 @@ func getTestCases() []indentationTestCase {
},
{
"removes/tc-04-00-input.yaml",
`del(select(di==0).spec.containers[0].securityContext) |
`del(select(di==0).spec.containers[0].securityContext) |
del(select(di==1).spec.containers[1])`,
"removes/tc-04-01-expected.yaml",
},
@@ -177,9 +173,8 @@ func TestApplyFixKeepsFormatting(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.inputFile, func(t *testing.T) {
getTestDataPath := func(filename string) string {
currentDir, _ := os.Getwd()
currentFile := "testdata/" + filename
return filepath.Join(currentDir, currentFile)
return filepath.Join(testutils.CurrentDir(), currentFile)
}
input, _ := os.ReadFile(getTestDataPath(tc.inputFile))


@@ -74,9 +74,6 @@ func adjustFixedListLines(originalList, fixedList *[]nodeInfo) {
node.node.Line += differenceAtTop
}
}
return
}
func enocodeIntoYaml(parentNode *yaml.Node, nodeList *[]nodeInfo, tracker int) (string, error) {
@@ -309,7 +306,7 @@ func readDocuments(ctx context.Context, reader io.Reader, decoder yqlib.Decoder)
func safelyCloseFile(ctx context.Context, file *os.File) {
err := file.Close()
if err != nil {
logger.L().Ctx(ctx).Error("Error Closing File")
logger.L().Ctx(ctx).Warning("Error Closing File")
}
}


@@ -27,20 +27,12 @@ spec:
name: host-scanner
spec:
tolerations:
# this toleration is to have the DaemonSet runnable on master nodes
# this toleration is to have the DaemonSet runnable on all nodes (including masters)
# remove it if your masters can't run pods
- key: node-role.kubernetes.io/control-plane
operator: Exists
effect: NoSchedule
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- operator: Exists
containers:
- name: host-sensor
image: quay.io/kubescape/host-scanner:v1.0.45
env:
- name: OTEL_COLLECTOR_SVC
value: "otel-collector.kubescape:4317"
image: quay.io/kubescape/host-scanner:v1.0.54
securityContext:
allowPrivilegeEscalation: true
privileged: true
@@ -48,7 +40,6 @@ spec:
procMount: Unmasked
ports:
- name: scanner # Do not change port name
hostPort: 7888
containerPort: 7888
protocol: TCP
resources:
@@ -75,6 +66,5 @@ spec:
path: /
type: Directory
name: host-filesystem
hostNetwork: true
hostPID: true
hostIPC: true


@@ -0,0 +1,23 @@
package hostsensorutils
import (
"context"
"testing"
"github.com/stretchr/testify/require"
)
func TestHostSensorHandlerMock(t *testing.T) {
ctx := context.Background()
h := &HostSensorHandlerMock{}
require.NoError(t, h.Init(ctx))
envelope, status, err := h.CollectResources(ctx)
require.Empty(t, envelope)
require.Nil(t, status)
require.NoError(t, err)
require.Empty(t, h.GetNamespace())
require.NoError(t, h.TearDown())
}


@@ -3,7 +3,6 @@ package hostsensorutils
import (
"context"
_ "embed"
"encoding/json"
"fmt"
"os"
"sync"
@@ -23,43 +22,49 @@ import (
var (
//go:embed hostsensor.yaml
hostSensorYAML string
hostSensorYAML string
namespaceWasPresent bool
)
const PortName string = "scanner"
const portName string = "scanner"
// HostSensorHandler is a client that interacts with a host-scanner component deployed on nodes.
//
// The API exposed by the host sensor is defined here: https://github.com/kubescape/host-scanner
type HostSensorHandler struct {
HostSensorPort int32
HostSensorPodNames map[string]string //map from pod names to node names
HostSensorUnscheduledPodNames map[string]string //map from pod names to node names
IsReady <-chan bool //readonly chan
hostSensorPort int32
hostSensorPodNames map[string]string //map from pod names to node names
hostSensorUnscheduledPodNames map[string]string //map from pod names to node names
k8sObj *k8sinterface.KubernetesApi
DaemonSet *appsv1.DaemonSet
daemonSet *appsv1.DaemonSet
podListLock sync.RWMutex
gracePeriod int64
workerPool workerPool
}
// NewHostSensorHandler builds a new http client to the host-scanner API.
func NewHostSensorHandler(k8sObj *k8sinterface.KubernetesApi, hostSensorYAMLFile string) (*HostSensorHandler, error) {
if k8sObj == nil {
return nil, fmt.Errorf("nil k8s interface received")
}
if hostSensorYAMLFile != "" {
d, err := loadHostSensorFromFile(hostSensorYAMLFile)
if err != nil {
return nil, fmt.Errorf("failed to load host-scan yaml file, reason: %s", err.Error())
return nil, fmt.Errorf("failed to load host-scan yaml file, reason: %w", err)
}
hostSensorYAML = d
}
hsh := &HostSensorHandler{
k8sObj: k8sObj,
HostSensorPodNames: map[string]string{},
HostSensorUnscheduledPodNames: map[string]string{},
hostSensorPodNames: map[string]string{},
hostSensorUnscheduledPodNames: map[string]string{},
gracePeriod: int64(15),
workerPool: NewWorkerPool(),
workerPool: newWorkerPool(),
}
// Don't deploy on cluster with no nodes. Some cloud providers prevents termination of K8s objects for cluster with no nodes!!!
// Don't deploy on a cluster with no nodes. Some cloud providers prevent the termination of K8s objects for clusters with no nodes!!!
if nodeList, err := k8sObj.KubernetesClient.CoreV1().Nodes().List(k8sObj.Context, metav1.ListOptions{}); err != nil || len(nodeList.Items) == 0 {
if err == nil {
err = fmt.Errorf("no nodes to scan")
@@ -70,6 +75,7 @@ func NewHostSensorHandler(k8sObj *k8sinterface.KubernetesApi, hostSensorYAMLFile
return hsh, nil
}
// Init deploys the host-scanner and starts watching the pods on the host.
func (hsh *HostSensorHandler) Init(ctx context.Context) error {
// deploy the YAML
// store namespace + port
@@ -79,19 +85,45 @@ func (hsh *HostSensorHandler) Init(ctx context.Context) error {
logger.L().Debug("The host scanner is a DaemonSet that runs on each node in the cluster. The DaemonSet will be running in its own namespace and will be deleted once the scan is completed. If you do not wish to install the host scanner, please run the scan without the --enable-host-scan flag.")
cautils.StartSpinner()
defer cautils.StopSpinner()
if err := hsh.applyYAML(ctx); err != nil {
cautils.StopSpinner()
return fmt.Errorf("failed to apply host scanner YAML, reason: %v", err)
}
hsh.populatePodNamesToNodeNames(ctx)
if err := hsh.checkPodForEachNode(); err != nil {
logger.L().Ctx(ctx).Error("failed to validate host-sensor pods status", helpers.Error(err))
logger.L().Ctx(ctx).Warning("failed to validate host-sensor pods status", helpers.Error(err))
}
cautils.StopSpinner()
return nil
}
// checkNamespaceWasPresent checks whether the given namespace already exists on Kubernetes and is in the "Active" state.
// It returns true when the namespace is found and active, false otherwise.
// Errors from the Kubernetes API are treated as the namespace not being present.
func (hsh *HostSensorHandler) checkNamespaceWasPresent(namespace string) bool {
ns, err := hsh.k8sObj.KubernetesClient.
CoreV1().
Namespaces().
Get(hsh.k8sObj.Context, namespace, metav1.GetOptions{})
if err != nil {
return false
}
// also check that it is in the "Active" state.
if ns.Status.Phase != corev1.NamespaceActive {
return false
}
return true
}
// namespaceWasPresent returns the value of the namespaceWasPresent package variable.
func (hsh *HostSensorHandler) namespaceWasPresent() bool {
return namespaceWasPresent
}
func (hsh *HostSensorHandler) applyYAML(ctx context.Context) error {
workloads, err := cautils.ReadFile([]byte(hostSensorYAML), cautils.YAML_FILE_FORMAT)
if err != nil {
@@ -106,6 +138,8 @@ func (hsh *HostSensorHandler) applyYAML(ctx context.Context) error {
break
}
}
// check if namespace was already present on kubernetes
namespaceWasPresent = hsh.checkNamespaceWasPresent(namespaceName)
// Update workload data before applying
for i := range workloads {
@@ -128,12 +162,11 @@ func (hsh *HostSensorHandler) applyYAML(ctx context.Context) error {
}
for j := range containers {
for k := range containers[j].Ports {
if containers[j].Ports[k].Name == PortName {
hsh.HostSensorPort = containers[j].Ports[k].ContainerPort
if containers[j].Ports[k].Name == portName {
hsh.hostSensorPort = containers[j].Ports[k].ContainerPort
}
}
}
}
// Apply workload
@@ -168,9 +201,10 @@ func (hsh *HostSensorHandler) applyYAML(ctx context.Context) error {
}
return fmt.Errorf("failed to Unmarshal YAML of DaemonSet, reason: %v", err)
}
hsh.DaemonSet = &ds
hsh.daemonSet = &ds
}
}
return nil
}
@@ -181,41 +215,45 @@ func (hsh *HostSensorHandler) checkPodForEachNode() error {
if err != nil {
return fmt.Errorf("in checkPodsForEveryNode, failed to get nodes list: %v", err)
}
hsh.podListLock.RLock()
podsNum := len(hsh.HostSensorPodNames)
unschedPodNum := len(hsh.HostSensorUnscheduledPodNames)
podsNum := len(hsh.hostSensorPodNames)
unschedPodNum := len(hsh.hostSensorUnscheduledPodNames)
hsh.podListLock.RUnlock()
if len(nodesList.Items) <= podsNum+unschedPodNum {
break
}
if time.Now().After(deadline) {
hsh.podListLock.RLock()
podsMap := hsh.HostSensorPodNames
podsMap := hsh.hostSensorPodNames
hsh.podListLock.RUnlock()
return fmt.Errorf("the number of host-sensor pods (%d) differs from the number of nodes (%d) after the deadline was exceeded. Kubescape will take data only from the pods below: %v",
podsNum, len(nodesList.Items), podsMap)
}
time.Sleep(100 * time.Millisecond)
}
return nil
}
// populatePodNamesToNodeNames starts a routine that keeps the pod list updated.
func (hsh *HostSensorHandler) populatePodNamesToNodeNames(ctx context.Context) {
go func() {
var watchRes watch.Interface
var err error
watchRes, err = hsh.k8sObj.KubernetesClient.CoreV1().Pods(hsh.DaemonSet.Namespace).Watch(hsh.k8sObj.Context, metav1.ListOptions{
watchRes, err = hsh.k8sObj.KubernetesClient.CoreV1().Pods(hsh.daemonSet.Namespace).Watch(hsh.k8sObj.Context, metav1.ListOptions{
Watch: true,
LabelSelector: fmt.Sprintf("name=%s", hsh.DaemonSet.Spec.Template.Labels["name"]),
LabelSelector: fmt.Sprintf("name=%s", hsh.daemonSet.Spec.Template.Labels["name"]),
})
if err != nil {
logger.L().Ctx(ctx).Error("failed to watch over daemonset pods - are we missing watch pods permissions?", helpers.Error(err))
logger.L().Ctx(ctx).Warning("failed to watch over DaemonSet pods - are we missing watch pods permissions?", helpers.Error(err))
}
if watchRes == nil {
logger.L().Ctx(ctx).Error("failed to watch over DaemonSet pods, will not be able to get host-sensor data")
return
}
for eve := range watchRes.ResultChan() {
pod, ok := eve.Object.(*corev1.Pod)
if !ok {
@@ -234,8 +272,8 @@ func (hsh *HostSensorHandler) updatePodInListAtomic(ctx context.Context, eventTy
case watch.Added, watch.Modified:
if podObj.Status.Phase == corev1.PodRunning && len(podObj.Status.ContainerStatuses) > 0 &&
podObj.Status.ContainerStatuses[0].Ready {
hsh.HostSensorPodNames[podObj.ObjectMeta.Name] = podObj.Spec.NodeName
delete(hsh.HostSensorUnscheduledPodNames, podObj.ObjectMeta.Name)
hsh.hostSensorPodNames[podObj.ObjectMeta.Name] = podObj.Spec.NodeName
delete(hsh.hostSensorUnscheduledPodNames, podObj.ObjectMeta.Name)
} else {
if podObj.Status.Phase == corev1.PodPending && len(podObj.Status.Conditions) > 0 &&
podObj.Status.Conditions[0].Reason == corev1.PodReasonUnschedulable {
@@ -252,30 +290,37 @@ func (hsh *HostSensorHandler) updatePodInListAtomic(ctx context.Context, eventTy
helpers.String("nodeName", nodeName),
helpers.String("podName", podObj.ObjectMeta.Name))
if nodeName != "" {
hsh.HostSensorUnscheduledPodNames[podObj.ObjectMeta.Name] = nodeName
hsh.hostSensorUnscheduledPodNames[podObj.ObjectMeta.Name] = nodeName
}
} else {
delete(hsh.HostSensorPodNames, podObj.ObjectMeta.Name)
delete(hsh.hostSensorPodNames, podObj.ObjectMeta.Name)
}
}
default:
delete(hsh.HostSensorPodNames, podObj.ObjectMeta.Name)
delete(hsh.hostSensorPodNames, podObj.ObjectMeta.Name)
}
}
func (hsh *HostSensorHandler) tearDownNamespace(namespace string) error {
// if namespace was already present on kubernetes (before installing host-scanner),
// then we shouldn't delete it.
if hsh.namespaceWasPresent() {
return nil
}
if err := hsh.k8sObj.KubernetesClient.CoreV1().Namespaces().Delete(hsh.k8sObj.Context, namespace, metav1.DeleteOptions{GracePeriodSeconds: &hsh.gracePeriod}); err != nil {
return fmt.Errorf("failed to delete host-sensor namespace: %v", err)
}
return nil
}
func (hsh *HostSensorHandler) TearDown() error {
namespace := hsh.GetNamespace()
if err := hsh.k8sObj.KubernetesClient.AppsV1().DaemonSets(hsh.GetNamespace()).Delete(hsh.k8sObj.Context, hsh.DaemonSet.Name, metav1.DeleteOptions{GracePeriodSeconds: &hsh.gracePeriod}); err != nil {
// delete DaemonSet
if err := hsh.k8sObj.KubernetesClient.AppsV1().DaemonSets(hsh.GetNamespace()).Delete(hsh.k8sObj.Context, hsh.daemonSet.Name, metav1.DeleteOptions{GracePeriodSeconds: &hsh.gracePeriod}); err != nil {
return fmt.Errorf("failed to delete host-sensor daemonset: %v", err)
}
// delete Namespace
if err := hsh.tearDownNamespace(namespace); err != nil {
return fmt.Errorf("failed to delete host-sensor daemonset: %v", err)
}
@@ -285,10 +330,10 @@ func (hsh *HostSensorHandler) TearDown() error {
}
func (hsh *HostSensorHandler) GetNamespace() string {
if hsh.DaemonSet == nil {
if hsh.daemonSet == nil {
return ""
}
return hsh.DaemonSet.Namespace
return hsh.daemonSet.Namespace
}
func loadHostSensorFromFile(hostSensorYAMLFile string) (string, error) {

View File

@@ -0,0 +1,242 @@
package hostsensorutils
import (
"context"
"os"
"path/filepath"
"testing"
"github.com/kubescape/kubescape/v2/internal/testutils"
"github.com/kubescape/opa-utils/objectsenvelopes/hostsensor"
"github.com/stretchr/testify/require"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func TestHostSensorHandler(t *testing.T) {
t.Parallel()
ctx := context.Background()
t.Run("with default manifest", func(t *testing.T) {
t.Run("should build host sensor", func(t *testing.T) {
k8s := NewKubernetesApiMock(WithNode(mockNode1()), WithPod(mockPod1()), WithPod(mockPod2()), WithResponses(mockResponses()))
h, err := NewHostSensorHandler(k8s, "")
require.NoError(t, err)
require.NotNil(t, h)
t.Run("should initialize host sensor", func(t *testing.T) {
require.NoError(t, h.Init(ctx))
w, err := k8s.KubernetesClient.CoreV1().Pods(h.daemonSet.Namespace).Watch(ctx, metav1.ListOptions{})
require.NoError(t, err)
w.Stop()
require.Len(t, h.hostSensorPodNames, 2)
})
t.Run("should return namespace", func(t *testing.T) {
require.Equal(t, "kubescape-host-scanner", h.GetNamespace())
})
t.Run("should collect resources from pods - happy path", func(t *testing.T) {
envelope, status, err := h.CollectResources(ctx)
require.NoError(t, err)
require.Len(t, envelope, 11*2) // has cloud provider, no control plane requested
require.Len(t, status, 0)
foundControl, foundProvider := false, false
for _, sensed := range envelope {
if sensed.Kind == ControlPlaneInfo.String() {
foundControl = true
}
if sensed.Kind == CloudProviderInfo.String() {
foundProvider = hasCloudProviderInfo([]hostsensor.HostSensorDataEnvelope{sensed})
}
}
require.False(t, foundControl)
require.True(t, foundProvider)
})
})
t.Run("should build host sensor without cloud provider", func(t *testing.T) {
k8s := NewKubernetesApiMock(WithNode(mockNode1()), WithPod(mockPod1()), WithPod(mockPod2()), WithResponses(mockResponsesNoCloudProvider()))
h, err := NewHostSensorHandler(k8s, "")
require.NoError(t, err)
require.NotNil(t, h)
t.Run("should initialize host sensor", func(t *testing.T) {
require.NoError(t, h.Init(ctx))
w, err := k8s.KubernetesClient.CoreV1().Pods(h.daemonSet.Namespace).Watch(ctx, metav1.ListOptions{})
require.NoError(t, err)
w.Stop()
require.Len(t, h.hostSensorPodNames, 2)
})
t.Run("should get version", func(t *testing.T) {
version, err := h.getVersion()
require.NoError(t, err)
require.Equal(t, "v1.0.45", version)
})
t.Run("ForwardToPod is a stub, not implemented", func(t *testing.T) {
resp, err := h.forwardToPod("pod1", "/version")
require.Contains(t, err.Error(), "not implemented")
require.Nil(t, resp)
})
t.Run("should collect resources from pods", func(t *testing.T) {
envelope, status, err := h.CollectResources(ctx)
require.NoError(t, err)
require.Len(t, envelope, 12*2) // has empty cloud provider, has control plane info
require.Len(t, status, 0)
foundControl, foundProvider := false, false
for _, sensed := range envelope {
if sensed.Kind == ControlPlaneInfo.String() {
foundControl = true
}
if sensed.Kind == CloudProviderInfo.String() {
foundProvider = hasCloudProviderInfo([]hostsensor.HostSensorDataEnvelope{sensed})
}
}
require.True(t, foundControl)
require.False(t, foundProvider)
})
})
t.Run("should build host sensor with error in response from /version", func(t *testing.T) {
k8s := NewKubernetesApiMock(WithNode(mockNode1()),
WithPod(mockPod1()),
WithPod(mockPod2()),
WithResponses(mockResponsesNoCloudProvider()),
WithErrorResponse(RestURL{"http", "pod1", "7888", "/version"}), // this endpoint will return an error from this pod
WithErrorResponse(RestURL{"http", "pod2", "7888", "/version"}), // this endpoint will return an error from this pod
)
h, err := NewHostSensorHandler(k8s, "")
require.NoError(t, err)
require.NotNil(t, h)
t.Run("should initialize host sensor", func(t *testing.T) {
require.NoError(t, h.Init(ctx))
w, err := k8s.KubernetesClient.CoreV1().Pods(h.daemonSet.Namespace).Watch(ctx, metav1.ListOptions{})
require.NoError(t, err)
w.Stop()
require.Len(t, h.hostSensorPodNames, 2)
})
t.Run("should NOT be able to get version", func(t *testing.T) {
// NOTE: GetVersion might be successful if only one pod responds successfully.
// In order to ensure an error, we need ALL pods to error.
_, err := h.getVersion()
require.Error(t, err)
require.Contains(t, err.Error(), "mock")
})
})
t.Run("should build host sensor with error in response from /kubeletConfigurations", func(t *testing.T) {
k8s := NewKubernetesApiMock(WithNode(mockNode1()),
WithPod(mockPod1()),
WithPod(mockPod2()),
WithResponses(mockResponsesNoCloudProvider()),
WithErrorResponse(RestURL{"http", "pod1", "7888", "/kubeletConfigurations"}), // this endpoint will return an error from this pod
)
h, err := NewHostSensorHandler(k8s, "")
require.NoError(t, err)
require.NotNil(t, h)
t.Run("should initialize host sensor", func(t *testing.T) {
require.NoError(t, h.Init(ctx))
w, err := k8s.KubernetesClient.CoreV1().Pods(h.daemonSet.Namespace).Watch(ctx, metav1.ListOptions{})
require.NoError(t, err)
w.Stop()
require.Len(t, h.hostSensorPodNames, 2)
})
t.Run("should collect resources from pods, with some errors", func(t *testing.T) {
envelope, status, err := h.CollectResources(ctx)
require.NoError(t, err)
require.Len(t, envelope, 12*2-1) // one resource is missing
require.Len(t, status, 0) // error is not reported in status: this is due to the worker pool not bubbling up errors
})
})
t.Run("should FAIL to build host sensor because there are no nodes", func(t *testing.T) {
h, err := NewHostSensorHandler(NewKubernetesApiMock(), "")
require.Error(t, err)
require.NotNil(t, h)
require.Contains(t, err.Error(), "no nodes to scan")
})
})
t.Run("should NOT build host sensor with nil k8s API", func(t *testing.T) {
h, err := NewHostSensorHandler(nil, "")
require.Error(t, err)
require.Nil(t, h)
})
t.Run("with manifest from YAML file", func(t *testing.T) {
t.Run("should build host sensor", func(t *testing.T) {
k8s := NewKubernetesApiMock(WithNode(mockNode1()), WithPod(mockPod1()), WithPod(mockPod2()), WithResponses(mockResponses()))
h, err := NewHostSensorHandler(k8s, filepath.Join(testutils.CurrentDir(), "hostsensor.yaml"))
require.NoError(t, err)
require.NotNil(t, h)
t.Run("should initialize host sensor", func(t *testing.T) {
require.NoError(t, h.Init(ctx))
w, err := k8s.KubernetesClient.CoreV1().Pods(h.daemonSet.Namespace).Watch(ctx, metav1.ListOptions{})
require.NoError(t, err)
w.Stop()
require.Len(t, h.hostSensorPodNames, 2)
})
})
})
t.Run("with manifest from invalid YAML file", func(t *testing.T) {
t.Run("should NOT build host sensor", func(t *testing.T) {
var invalid string
t.Run("should create temp file", func(t *testing.T) {
file, err := os.CreateTemp("", "*.yaml")
require.NoError(t, err)
t.Cleanup(func() {
_ = os.Remove(file.Name())
})
_, err = file.Write([]byte(" x: 1"))
require.NoError(t, err)
invalid = file.Name()
require.NoError(t, file.Close())
})
k8s := NewKubernetesApiMock(WithNode(mockNode1()), WithPod(mockPod1()), WithPod(mockPod2()), WithResponses(mockResponses()))
_, err := NewHostSensorHandler(k8s, filepath.Join(testutils.CurrentDir(), invalid))
require.Error(t, err)
})
})
// TODO(test coverage): the following cases are not covered by tests yet.
//
// * applyYAML fails
// * checkPodForEachNode fails, or times out
// * non-active namespace
// * getPodList fails when GetVersion
// * getPodList fails when CollectResources
// * error cases that trigger a namespace tear-down
// * watch pods with a Delete event
// * explicit TearDown()
//
// Notice that the package doesn't currently pass tests with the race detector enabled.
}

View File

@@ -2,8 +2,10 @@ package hostsensorutils
import (
"context"
"encoding/json"
stdjson "encoding/json"
"errors"
"fmt"
"reflect"
"strings"
"sync"
@@ -16,45 +18,43 @@ import (
"sigs.k8s.io/yaml"
)
func (hsh *HostSensorHandler) getPodList() (res map[string]string, err error) {
// getPodList clones the internal map of watched pods (pod name to node name).
func (hsh *HostSensorHandler) getPodList() map[string]string {
hsh.podListLock.RLock()
jsonBytes, err := json.Marshal(hsh.HostSensorPodNames)
res := make(map[string]string, len(hsh.hostSensorPodNames))
for k, v := range hsh.hostSensorPodNames {
res[k] = v
}
hsh.podListLock.RUnlock()
if err != nil {
return res, fmt.Errorf("failed to marshal pod list: %v", err)
}
err = json.Unmarshal(jsonBytes, &res)
if err != nil {
return res, fmt.Errorf("failed to unmarshal pod list: %v", err)
}
return res, nil
return res
}
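The new getPodList replaces the old JSON marshal/unmarshal round-trip with a plain copy under the read lock, which is cheaper and cannot fail. A standalone sketch of the copy-under-RLock pattern (the registry type here is a simplified stand-in, not the real handler):

```go
package main

import (
	"fmt"
	"sync"
)

// podRegistry guards a pod-name-to-node-name map with a RWMutex,
// like the pod maps held by HostSensorHandler.
type podRegistry struct {
	mu   sync.RWMutex
	pods map[string]string
}

// snapshot copies the map under the read lock so callers can iterate
// it freely, as getPodList now does.
func (r *podRegistry) snapshot() map[string]string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	out := make(map[string]string, len(r.pods))
	for k, v := range r.pods {
		out[k] = v
	}
	return out
}

func main() {
	r := &podRegistry{pods: map[string]string{"pod1": "node1"}}
	s := r.snapshot()
	s["pod2"] = "node2" // mutating the copy leaves the original untouched
	fmt.Println(len(r.pods), len(s)) // 1 2
}
```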
func (hsh *HostSensorHandler) HTTPGetToPod(podName, path string) ([]byte, error) {
// send the request to the port
restProxy := hsh.k8sObj.KubernetesClient.CoreV1().Pods(hsh.DaemonSet.Namespace).ProxyGet("http", podName, fmt.Sprintf("%d", hsh.HostSensorPort), path, map[string]string{})
// httpGetToPod sends the request to a pod on the host sensor port.
func (hsh *HostSensorHandler) httpGetToPod(podName, path string) ([]byte, error) {
restProxy := hsh.k8sObj.KubernetesClient.CoreV1().Pods(hsh.daemonSet.Namespace).ProxyGet("http", podName, fmt.Sprintf("%d", hsh.hostSensorPort), path, map[string]string{})
return restProxy.DoRaw(hsh.k8sObj.Context)
}
func (hsh *HostSensorHandler) getResourcesFromPod(podName, nodeName, resourceKind, path string) (hostsensor.HostSensorDataEnvelope, error) {
func (hsh *HostSensorHandler) getResourcesFromPod(podName, nodeName string, resourceKind scannerResource, path string) (hostsensor.HostSensorDataEnvelope, error) {
// send the request and pack the response as a HostSensorDataEnvelope
resBytes, err := hsh.HTTPGetToPod(podName, path)
resBytes, err := hsh.httpGetToPod(podName, path)
if err != nil {
return hostsensor.HostSensorDataEnvelope{}, err
}
hostSensorDataEnvelope := hostsensor.HostSensorDataEnvelope{}
hostSensorDataEnvelope.SetApiVersion(k8sinterface.JoinGroupVersion(hostsensor.GroupHostSensor, hostsensor.Version))
hostSensorDataEnvelope.SetKind(resourceKind)
hostSensorDataEnvelope.SetKind(resourceKind.String())
hostSensorDataEnvelope.SetName(nodeName)
hostSensorDataEnvelope.SetData(resBytes)
return hostSensorDataEnvelope, nil
}
func (hsh *HostSensorHandler) ForwardToPod(podName, path string) ([]byte, error) {
// forwardToPod is currently not implemented.
func (hsh *HostSensorHandler) forwardToPod(podName, path string) ([]byte, error) {
// NOT IN USE:
// ---
// spawn port forwarding
@@ -73,21 +73,18 @@ func (hsh *HostSensorHandler) ForwardToPod(podName, path string) ([]byte, error)
// }
// hostIP := strings.TrimLeft(req.RestConfig.Host, "htps:/")
// dialer := spdy.NewDialer(upgrader, &http.Client{Transport: transport}, http.MethodPost, &url.URL{Scheme: "http", Path: path, Host: hostIP})
return nil, nil
return nil, errors.New("not implemented")
}
// sendAllPodsHTTPGETRequest fills the raw byte response in the envelope and the node name, but not the GroupVersionKind
// sendAllPodsHTTPGETRequest fills in the envelope the raw response bytes and the node name, but not the GroupVersionKind,
// so the caller is responsible for converting the raw data into structured data and adding the GroupVersionKind details.
//
// The function produces a worker-pool with a fixed number of workers.
//
// For each node, a request is pushed to the jobs channel; a worker sends the request and pushes the result to the results channel.
// When all workers have finished, the function returns the list of results.
func (hsh *HostSensorHandler) sendAllPodsHTTPGETRequest(ctx context.Context, path, requestKind string) ([]hostsensor.HostSensorDataEnvelope, error) {
podList, err := hsh.getPodList()
if err != nil {
return nil, fmt.Errorf("failed to sendAllPodsHTTPGETRequest: %v", err)
}
func (hsh *HostSensorHandler) sendAllPodsHTTPGETRequest(ctx context.Context, path string, requestKind scannerResource) ([]hostsensor.HostSensorDataEnvelope, error) {
podList := hsh.getPodList()
res := make([]hostsensor.HostSensorDataEnvelope, 0, len(podList))
var wg sync.WaitGroup
// initialization of the channels
@@ -101,256 +98,237 @@ func (hsh *HostSensorHandler) sendAllPodsHTTPGETRequest(ctx context.Context, pat
return res, nil
}
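The fixed-size worker pool described in the comment above can be sketched on its own: jobs go into one channel, each worker performs the request and sends the result on a second channel, and a final goroutine closes the results channel once all workers are done. This is an illustrative reduction, with the HTTP call replaced by a caller-supplied function:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// fanOut runs fn over jobs with a fixed number of workers and collects
// the results, in the spirit of sendAllPodsHTTPGETRequest.
func fanOut(jobs []string, workers int, fn func(string) string) []string {
	in := make(chan string)
	out := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range in {
				out <- fn(j)
			}
		}()
	}
	go func() { // feed the jobs, then signal no more work
		for _, j := range jobs {
			in <- j
		}
		close(in)
	}()
	go func() { // close results once every worker has drained its jobs
		wg.Wait()
		close(out)
	}()
	var res []string
	for r := range out {
		res = append(res, r)
	}
	return res
}

func main() {
	res := fanOut([]string{"pod1", "pod2", "pod3"}, 2, func(pod string) string {
		return pod + ":ok" // stand-in for the HTTP GET to the pod
	})
	sort.Strings(res)
	fmt.Println(res) // [pod1:ok pod2:ok pod3:ok]
}
```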
// return host-scanner version
func (hsh *HostSensorHandler) GetVersion() (string, error) {
// getVersion returns the version of the deployed host scanner.
//
// NOTE: we pick the version from the first responding pod.
func (hsh *HostSensorHandler) getVersion() (string, error) {
// loop over pods and port-forward it to each of them
podList, err := hsh.getPodList()
if err != nil {
return "", fmt.Errorf("failed to sendAllPodsHTTPGETRequest: %v", err)
}
podList := hsh.getPodList()
// initialization of the channels
hsh.workerPool.init(len(podList))
hsh.workerPool.hostSensorApplyJobs(podList, "/version", "version")
for job := range hsh.workerPool.jobs {
resBytes, err := hsh.HTTPGetToPod(job.podName, job.path)
resBytes, err := hsh.httpGetToPod(job.podName, job.path)
if err != nil {
return "", err
} else {
version := strings.ReplaceAll(string(resBytes), "\"", "")
version = strings.ReplaceAll(version, "\n", "")
return version, nil
}
}
return "", nil
}
// return list of LinuxKernelVariables
func (hsh *HostSensorHandler) GetKernelVariables(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getKernelVariables returns the list of Linux Kernel variables.
func (hsh *HostSensorHandler) getKernelVariables(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest(ctx, "/LinuxKernelVariables", LinuxKernelVariables)
}
// return list of OpenPortsList
func (hsh *HostSensorHandler) GetOpenPortsList(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getOpenPortsList returns the list of open ports.
func (hsh *HostSensorHandler) getOpenPortsList(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest(ctx, "/openedPorts", OpenPortsList)
}
// return list of LinuxSecurityHardeningStatus
func (hsh *HostSensorHandler) GetLinuxSecurityHardeningStatus(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getLinuxSecurityHardeningStatus returns the list of LinuxSecurityHardeningStatus metadata.
func (hsh *HostSensorHandler) getLinuxSecurityHardeningStatus(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest(ctx, "/linuxSecurityHardening", LinuxSecurityHardeningStatus)
}
// return list of KubeletInfo
func (hsh *HostSensorHandler) GetKubeletInfo(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getKubeletInfo returns the list of kubelet metadata.
func (hsh *HostSensorHandler) getKubeletInfo(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest(ctx, "/kubeletInfo", KubeletInfo)
}
// return list of kubeProxyInfo
func (hsh *HostSensorHandler) GetKubeProxyInfo(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getKubeProxyInfo returns the list of kubeProxy metadata.
func (hsh *HostSensorHandler) getKubeProxyInfo(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest(ctx, "/kubeProxyInfo", KubeProxyInfo)
}
// return list of controlPlaneInfo
func (hsh *HostSensorHandler) GetControlPlaneInfo(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getControlPlaneInfo returns the list of controlPlaneInfo metadata.
func (hsh *HostSensorHandler) getControlPlaneInfo(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest(ctx, "/controlPlaneInfo", ControlPlaneInfo)
}
// return list of cloudProviderInfo
func (hsh *HostSensorHandler) GetCloudProviderInfo(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getCloudProviderInfo returns the list of cloudProviderInfo metadata.
func (hsh *HostSensorHandler) getCloudProviderInfo(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest(ctx, "/cloudProviderInfo", CloudProviderInfo)
}
// return list of KubeletCommandLine
func (hsh *HostSensorHandler) GetKubeletCommandLine(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getKubeletCommandLine returns the list of kubelet command lines.
func (hsh *HostSensorHandler) getKubeletCommandLine(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
resps, err := hsh.sendAllPodsHTTPGETRequest(ctx, "/kubeletCommandLine", KubeletCommandLine)
if err != nil {
return resps, err
}
for resp := range resps {
var data = make(map[string]interface{})
data["fullCommand"] = string(resps[resp].Data)
resBytesMarshal, err := json.Marshal(data)
// TODO catch error
if err == nil {
resps[resp].Data = json.RawMessage(resBytesMarshal)
resps[resp].Data = stdjson.RawMessage(resBytesMarshal)
}
}
return resps, nil
}
// return list of CNIInfo
func (hsh *HostSensorHandler) GetCNIInfo(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getCNIInfo returns the list of CNI metadata
func (hsh *HostSensorHandler) getCNIInfo(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest(ctx, "/CNIInfo", CNIInfo)
}
// return list of kernelVersion
func (hsh *HostSensorHandler) GetKernelVersion(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getKernelVersion returns the list of kernelVersion metadata.
func (hsh *HostSensorHandler) getKernelVersion(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest(ctx, "/kernelVersion", "KernelVersion")
}
// return list of osRelease
func (hsh *HostSensorHandler) GetOsReleaseFile(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getOsReleaseFile returns the list of osRelease metadata.
func (hsh *HostSensorHandler) getOsReleaseFile(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
return hsh.sendAllPodsHTTPGETRequest(ctx, "/osRelease", "OsReleaseFile")
}
// return list of kubeletConfigurations
func (hsh *HostSensorHandler) GetKubeletConfigurations(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// getKubeletConfigurations returns the list of kubelet configurations.
func (hsh *HostSensorHandler) getKubeletConfigurations(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, error) {
// loop over pods and port-forward it to each of them
res, err := hsh.sendAllPodsHTTPGETRequest(ctx, "/kubeletConfigurations", "KubeletConfiguration") // empty kind, will be overridden
for resIdx := range res {
jsonBytes, ery := yaml.YAMLToJSON(res[resIdx].Data)
if ery != nil {
logger.L().Ctx(ctx).Error("failed to convert kubelet configurations from yaml to json", helpers.Error(ery))
logger.L().Ctx(ctx).Warning("failed to convert kubelet configurations from yaml to json", helpers.Error(ery))
continue
}
res[resIdx].SetData(jsonBytes)
}
return res, err
}
// hasCloudProviderInfo iterates over the []hostsensor.HostSensorDataEnvelope list to find info about the cloud provider.
//
// It returns true if such information is found, false otherwise.
func hasCloudProviderInfo(cpi []hostsensor.HostSensorDataEnvelope) bool {
for index := range cpi {
if !reflect.DeepEqual(cpi[index].GetData(), stdjson.RawMessage("{}\n")) {
return true
}
}
return false
}
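hasCloudProviderInfo treats the literal `{}\n` payload returned by the scanner as "no data" and uses reflect.DeepEqual for the comparison. A byte-level comparison with bytes.Equal expresses the same check more directly; a sketch over plain byte slices (the envelope type is simplified away here):

```go
package main

import (
	"bytes"
	"fmt"
)

// hasData reports whether any payload carries more than the empty
// JSON object "{}\n", mirroring the check in hasCloudProviderInfo.
func hasData(payloads [][]byte) bool {
	empty := []byte("{}\n")
	for _, p := range payloads {
		if !bytes.Equal(p, empty) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasData([][]byte{[]byte("{}\n")}))                               // false
	fmt.Println(hasData([][]byte{[]byte("{}\n"), []byte(`{"provider":"gke"}`)})) // true
}
```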
// CollectResources collects all the required information from the host-sensor pods.
func (hsh *HostSensorHandler) CollectResources(ctx context.Context) ([]hostsensor.HostSensorDataEnvelope, map[string]apis.StatusInfo, error) {
res := make([]hostsensor.HostSensorDataEnvelope, 0)
infoMap := make(map[string]apis.StatusInfo)
if hsh.DaemonSet == nil {
if hsh.daemonSet == nil {
return res, nil, nil
}
var kcData []hostsensor.HostSensorDataEnvelope
var err error
logger.L().Debug("Accessing host scanner")
version, err := hsh.GetVersion()
version, err := hsh.getVersion()
if err != nil {
logger.L().Ctx(ctx).Warning(err.Error())
}
if len(version) > 0 {
logger.L().Info("Host scanner version : " + version)
} else {
logger.L().Info("Unknown host scanner version")
}
//
kcData, err = hsh.GetKubeletConfigurations(ctx)
if err != nil {
addInfoToMap(KubeletConfiguration, infoMap, err)
logger.L().Ctx(ctx).Warning(err.Error())
}
if len(kcData) > 0 {
res = append(res, kcData...)
}
//
kcData, err = hsh.GetKubeletCommandLine(ctx)
if err != nil {
addInfoToMap(KubeletCommandLine, infoMap, err)
logger.L().Ctx(ctx).Warning(err.Error())
}
if len(kcData) > 0 {
res = append(res, kcData...)
}
//
kcData, err = hsh.GetOsReleaseFile(ctx)
if err != nil {
addInfoToMap(OsReleaseFile, infoMap, err)
logger.L().Ctx(ctx).Warning(err.Error())
}
if len(kcData) > 0 {
res = append(res, kcData...)
}
//
kcData, err = hsh.GetKernelVersion(ctx)
if err != nil {
addInfoToMap(KernelVersion, infoMap, err)
logger.L().Ctx(ctx).Warning(err.Error())
}
if len(kcData) > 0 {
res = append(res, kcData...)
}
//
kcData, err = hsh.GetLinuxSecurityHardeningStatus(ctx)
if err != nil {
addInfoToMap(LinuxSecurityHardeningStatus, infoMap, err)
logger.L().Ctx(ctx).Warning(err.Error())
}
if len(kcData) > 0 {
res = append(res, kcData...)
}
//
kcData, err = hsh.GetOpenPortsList(ctx)
if err != nil {
addInfoToMap(OpenPortsList, infoMap, err)
logger.L().Ctx(ctx).Warning(err.Error())
}
if len(kcData) > 0 {
res = append(res, kcData...)
}
// GetKernelVariables
kcData, err = hsh.GetKernelVariables(ctx)
if err != nil {
addInfoToMap(LinuxKernelVariables, infoMap, err)
logger.L().Ctx(ctx).Warning(err.Error())
}
if len(kcData) > 0 {
res = append(res, kcData...)
}
// GetKubeletInfo
kcData, err = hsh.GetKubeletInfo(ctx)
if err != nil {
addInfoToMap(KubeletInfo, infoMap, err)
logger.L().Ctx(ctx).Warning(err.Error())
}
if len(kcData) > 0 {
res = append(res, kcData...)
}
var hasCloudProvider bool
for _, toPin := range []struct {
Resource scannerResource
Query func(context.Context) ([]hostsensor.HostSensorDataEnvelope, error)
}{
// queries to the deployed host-scanner
{
Resource: KubeletConfiguration,
Query: hsh.getKubeletConfigurations,
},
{
Resource: KubeletCommandLine,
Query: hsh.getKubeletCommandLine,
},
{
Resource: OsReleaseFile,
Query: hsh.getOsReleaseFile,
},
{
Resource: KernelVersion,
Query: hsh.getKernelVersion,
},
{
Resource: LinuxSecurityHardeningStatus,
Query: hsh.getLinuxSecurityHardeningStatus,
},
{
Resource: OpenPortsList,
Query: hsh.getOpenPortsList,
},
{
Resource: LinuxKernelVariables,
Query: hsh.getKernelVariables,
},
{
Resource: KubeletInfo,
Query: hsh.getKubeletInfo,
},
{
Resource: KubeProxyInfo,
Query: hsh.getKubeProxyInfo,
},
{
Resource: CloudProviderInfo,
Query: hsh.getCloudProviderInfo,
},
{
Resource: CNIInfo,
Query: hsh.getCNIInfo,
},
{
// ControlPlaneInfo is queried _after_ CloudProviderInfo.
Resource: ControlPlaneInfo,
Query: hsh.getControlPlaneInfo,
},
} {
k8sInfo := toPin
if k8sInfo.Resource == ControlPlaneInfo && hasCloudProvider {
// we retrieve control plane info only if we are not using a cloud provider
continue
}
kcData, err := k8sInfo.Query(ctx)
if err != nil {
addInfoToMap(k8sInfo.Resource, infoMap, err)
logger.L().Ctx(ctx).Warning(err.Error())
}
if k8sInfo.Resource == CloudProviderInfo {
hasCloudProvider = hasCloudProviderInfo(kcData)
}
if len(kcData) > 0 {
res = append(res, kcData...)
}
}
logger.L().Debug("Done reading information from host scanner")
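The rewrite above folds a chain of near-identical `Get*` blocks into one table of (resource, query) pairs, so error bookkeeping is written once. A minimal stdlib-only sketch of that table-driven dispatch (all names here are hypothetical, not Kubescape APIs):

```go
package main

import "fmt"

// envelope stands in for hostsensor.HostSensorDataEnvelope.
type envelope struct {
	kind, data string
}

type query struct {
	resource string
	run      func() ([]envelope, error)
}

var errUnreachable = fmt.Errorf("scanner pod unreachable")

// collect runs every query in the table, recording per-resource failures
// instead of aborting, mirroring the loop in the host-scanner collector.
func collect(queries []query) ([]envelope, map[string]error) {
	var res []envelope
	failures := map[string]error{}
	for _, q := range queries {
		data, err := q.run()
		if err != nil {
			failures[q.resource] = err
			continue
		}
		res = append(res, data...)
	}
	return res, failures
}

func main() {
	res, failures := collect([]query{
		{"KernelVersion", func() ([]envelope, error) { return []envelope{{"KernelVersion", "5.15"}}, nil }},
		{"OpenPortsList", func() ([]envelope, error) { return nil, errUnreachable }},
	})
	fmt.Printf("collected %d envelopes, %d failures\n", len(res), len(failures)) // collected 1 envelopes, 1 failures
}
```

Adding a new resource then becomes a one-line table entry rather than a copied error-handling block.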


@@ -7,9 +7,15 @@ import (
"github.com/kubescape/opa-utils/reporthandling/apis"
)
// HostSensorHandlerMock is a noop sensor when the host scanner is disabled.
type HostSensorHandlerMock struct {
}
// NewHostSensorHandlerMock yields a dummy host sensor.
func NewHostSensorHandlerMock() *HostSensorHandlerMock {
return &HostSensorHandlerMock{}
}
func (hshm *HostSensorHandlerMock) Init(_ context.Context) error {
return nil
}


@@ -14,7 +14,7 @@ const noOfWorkers int = 10
type job struct {
podName string
nodeName string
requestKind scannerResource
path string
}
@@ -25,7 +25,7 @@ type workerPool struct {
noOfWorkers int
}
func newWorkerPool() workerPool {
wp := workerPool{}
wp.noOfWorkers = noOfWorkers
wp.init()
@@ -48,10 +48,10 @@ func (wp *workerPool) hostSensorWorker(ctx context.Context, hsh *HostSensorHandl
for job := range wp.jobs {
hostSensorDataEnvelope, err := hsh.getResourcesFromPod(job.podName, job.nodeName, job.requestKind, job.path)
if err != nil {
logger.L().Ctx(ctx).Warning("failed to get data", helpers.String("path", job.path), helpers.String("podName", job.podName), helpers.Error(err))
continue
}
wp.results <- hostSensorDataEnvelope
}
}
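The worker above keeps the jobs→results channel shape and now skips failed jobs with `continue` instead of an `else` branch. A compact, stdlib-only sketch of that bounded worker-pool pattern (names hypothetical):

```go
package main

import (
	"fmt"
	"sync"
)

var errJobFailed = fmt.Errorf("job failed")

// runPool fans jobs out to n workers and fans results back in.
// A failed job is skipped (logged in the real code) rather than sent
// downstream, matching the `continue` in hostSensorWorker.
func runPool(n int, jobs []int, work func(int) (string, error)) []string {
	jobsCh := make(chan int)
	resultsCh := make(chan string)

	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobsCh {
				out, err := work(j)
				if err != nil {
					continue // skip failed jobs, keep the pool running
				}
				resultsCh <- out
			}
		}()
	}

	go func() { // feed jobs, then signal no more work
		for _, j := range jobs {
			jobsCh <- j
		}
		close(jobsCh)
	}()

	go func() { // close results once every worker has drained
		wg.Wait()
		close(resultsCh)
	}()

	var results []string
	for r := range resultsCh {
		results = append(results, r)
	}
	return results
}

func main() {
	out := runPool(3, []int{1, 2, 3, 4}, func(j int) (string, error) {
		if j%2 == 0 {
			return "", errJobFailed
		}
		return fmt.Sprintf("job-%d", j), nil
	})
	fmt.Println(len(out)) // 2
}
```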
@@ -80,7 +80,7 @@ func (wp *workerPool) hostSensorGetResults(result *[]hostsensor.HostSensorDataEn
}()
}
func (wp *workerPool) hostSensorApplyJobs(podList map[string]string, path string, requestKind scannerResource) {
go func() {
for podName, nodeName := range podList {
thisJob := job{


@@ -0,0 +1,15 @@
package hostsensorutils
import (
jsoniter "github.com/json-iterator/go"
)
var (
json jsoniter.API
)
func init() {
// NOTE(fredbi): attention, this configuration rounds floats down to 6 digits
// For finer-grained config, see: https://pkg.go.dev/github.com/json-iterator/go#section-readme
json = jsoniter.ConfigFastest
}
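The NOTE above matters: `jsoniter.ConfigFastest` trades float fidelity for speed, keeping roughly 6 significant digits. The sketch below illustrates that level of precision loss using only `strconv` (it does not call jsoniter itself):

```go
package main

import (
	"fmt"
	"strconv"
)

// sixDigits formats a float the way a 6-significant-digit encoder would,
// illustrating the precision loss the NOTE warns about.
func sixDigits(f float64) string {
	return strconv.FormatFloat(f, 'g', 6, 64)
}

func main() {
	full := strconv.FormatFloat(1.0/3.0, 'g', -1, 64) // shortest exact round-trip
	fmt.Println(full)                                 // 0.3333333333333333
	fmt.Println(sixDigits(1.0 / 3.0))                 // 0.333333
}
```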

File diff suppressed because one or more lines are too long


@@ -5,39 +5,54 @@ import (
"github.com/kubescape/opa-utils/reporthandling/apis"
)
// scannerResource is the enumerated type listing all resources from the host-scanner.
type scannerResource string
const (
// host-scanner resources
KubeletConfiguration scannerResource = "KubeletConfiguration"
OsReleaseFile scannerResource = "OsReleaseFile"
KernelVersion scannerResource = "KernelVersion"
LinuxSecurityHardeningStatus scannerResource = "LinuxSecurityHardeningStatus"
OpenPortsList scannerResource = "OpenPortsList"
LinuxKernelVariables scannerResource = "LinuxKernelVariables"
KubeletCommandLine scannerResource = "KubeletCommandLine"
KubeletInfo scannerResource = "KubeletInfo"
KubeProxyInfo scannerResource = "KubeProxyInfo"
ControlPlaneInfo scannerResource = "ControlPlaneInfo"
CloudProviderInfo scannerResource = "CloudProviderInfo"
CNIInfo scannerResource = "CNIInfo"
)
func mapHostSensorResourceToApiGroup(r scannerResource) string {
switch r {
case
KubeletConfiguration,
OsReleaseFile,
KubeletCommandLine,
KernelVersion,
LinuxSecurityHardeningStatus,
OpenPortsList,
LinuxKernelVariables,
KubeletInfo,
KubeProxyInfo,
ControlPlaneInfo,
CloudProviderInfo,
CNIInfo:
return "hostdata.kubescape.cloud/v1beta0"
default:
return ""
}
}
func (r scannerResource) String() string {
return string(r)
}
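The change from untyped string variables to a `scannerResource` type plus a `switch` is the classic Go typed-string enum: the compiler rejects stray strings, `String()` keeps logging cheap, and unknown values fall through to a safe default. A standalone sketch of the pattern (the type, constants, and group value below are illustrative, not the real ones):

```go
package main

import "fmt"

// resource demonstrates the typed-string "enum" pattern: the compiler now
// rejects an arbitrary string where a resource is expected.
type resource string

const (
	kubeletInfo resource = "KubeletInfo"
	cniInfo     resource = "CNIInfo"
)

func (r resource) String() string { return string(r) }

// apiGroup maps every known resource to its API group; unknown values
// fall through to "", like mapHostSensorResourceToApiGroup above.
func apiGroup(r resource) string {
	switch r {
	case kubeletInfo, cniInfo:
		return "hostdata.example/v1beta0"
	default:
		return ""
	}
}

func main() {
	fmt.Println(apiGroup(kubeletInfo))            // hostdata.example/v1beta0
	fmt.Println(apiGroup(resource("bogus")) == "") // true
}
```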
func addInfoToMap(resource scannerResource, infoMap map[string]apis.StatusInfo, err error) {
group, version := k8sinterface.SplitApiVersion(mapHostSensorResourceToApiGroup(resource))
r := k8sinterface.JoinResourceTriplets(group, version, resource.String())
infoMap[r] = apis.StatusInfo{
InnerStatus: apis.StatusSkipped,
InnerInfo: err.Error(),


@@ -0,0 +1,68 @@
package hostsensorutils
import (
"errors"
"fmt"
"testing"
"github.com/kubescape/opa-utils/reporthandling/apis"
"github.com/stretchr/testify/require"
)
func TestAddInfoToMap(t *testing.T) {
t.Parallel()
// NOTE: the function being tested is hard to test, because
// the worker pool mutes most errors.
//
// Essentially, unless we hit some extreme edge case, we never get an error to be added to the map.
testErr := errors.New("test error")
for _, toPin := range []struct {
Resource scannerResource
Err error
Expected map[string]apis.StatusInfo
}{
{
Resource: KubeletConfiguration,
Err: testErr,
Expected: map[string]apis.StatusInfo{
"hostdata.kubescape.cloud/v1beta0/KubeletConfiguration": {
InnerStatus: apis.StatusSkipped,
InnerInfo: testErr.Error(),
},
},
},
{
Resource: CNIInfo,
Err: testErr,
Expected: map[string]apis.StatusInfo{
"hostdata.kubescape.cloud/v1beta0/CNIInfo": {
InnerStatus: apis.StatusSkipped,
InnerInfo: testErr.Error(),
},
},
},
{
Resource: scannerResource("invalid"),
Err: testErr,
Expected: map[string]apis.StatusInfo{
"//invalid": { // no group, no version
InnerStatus: apis.StatusSkipped,
InnerInfo: testErr.Error(),
},
},
},
} {
tc := toPin
t.Run(fmt.Sprintf("should expect a status for resource %s", tc.Resource), func(t *testing.T) {
t.Parallel()
result := make(map[string]apis.StatusInfo, 1)
addInfoToMap(tc.Resource, result, tc.Err)
require.EqualValues(t, tc.Expected, result)
})
}
}


@@ -7,15 +7,15 @@ import (
)
func Test_has_signature(t *testing.T) {
tests := []struct {
name string
img  string
want bool
}{
{
name: "valid signature",
img:  "quay.io/kubescape/gateway",
want: true,
},
}


@@ -15,7 +15,6 @@ import (
)
// VerifyCommand verifies a signature on a supplied container image
type VerifyCommand struct {
options.RegistryOptions
Annotations sigs.AnnotationsMap


@@ -3,26 +3,27 @@ package opaprocessor
import (
"context"
"fmt"
"runtime"
"sync"
"time"
"github.com/armosec/armoapi-go/armotypes"
logger "github.com/kubescape/go-logger"
"github.com/kubescape/go-logger/helpers"
"github.com/kubescape/k8s-interface/workloadinterface"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/pkg/score"
"github.com/kubescape/opa-utils/objectsenvelopes"
"github.com/kubescape/opa-utils/reporthandling"
"github.com/kubescape/opa-utils/reporthandling/apis"
"github.com/kubescape/opa-utils/reporthandling/results/v1/resourcesresults"
reporthandlingv2 "github.com/kubescape/opa-utils/reporthandling/v2"
"github.com/kubescape/opa-utils/resources"
"github.com/open-policy-agent/opa/ast"
"github.com/open-policy-agent/opa/rego"
"github.com/open-policy-agent/opa/storage"
"go.opentelemetry.io/otel"
"golang.org/x/sync/errgroup"
)
const ScoreConfigPath = "/resources/config"
@@ -33,9 +34,16 @@ type IJobProgressNotificationClient interface {
Stop()
}
const (
heuristicAllocResources = 100
heuristicAllocControls = 100
)
// OPAProcessor processes Open Policy Agent rules.
type OPAProcessor struct {
regoDependenciesData *resources.RegoDependenciesData
*cautils.OPASessionObj
opaRegisterOnce sync.Once
}
func NewOPAProcessor(sessionObj *cautils.OPASessionObj, regoDependenciesData *resources.RegoDependenciesData) *OPAProcessor {
@@ -43,20 +51,21 @@ func NewOPAProcessor(sessionObj *cautils.OPASessionObj, regoDependenciesData *re
regoDependenciesData.PostureControlInputs = sessionObj.RegoInputData.PostureControlInputs
regoDependenciesData.DataControlInputs = sessionObj.RegoInputData.DataControlInputs
}
return &OPAProcessor{
OPASessionObj: sessionObj,
regoDependenciesData: regoDependenciesData,
}
}
func (opap *OPAProcessor) ProcessRulesListener(ctx context.Context, progressListener IJobProgressNotificationClient) error {
opap.OPASessionObj.AllPolicies = ConvertFrameworksToPolicies(opap.Policies, cautils.BuildNumber)
ConvertFrameworksToSummaryDetails(&opap.Report.SummaryDetails, opap.Policies, opap.OPASessionObj.AllPolicies)
// process
if err := opap.Process(ctx, opap.OPASessionObj.AllPolicies, progressListener); err != nil {
logger.L().Ctx(ctx).Warning(err.Error())
// Return error?
}
@@ -65,51 +74,123 @@ func (opap *OPAProcessor) ProcessRulesListenner(ctx context.Context, progressLis
//TODO: review this location
scorewrapper := score.NewScoreWrapper(opap.OPASessionObj)
_ = scorewrapper.Calculate(score.EPostureReportV2)
return nil
}
// Process OPA policies (rules) on all configured controls.
func (opap *OPAProcessor) Process(ctx context.Context, policies *cautils.Policies, progressListener IJobProgressNotificationClient) error {
ctx, span := otel.Tracer("").Start(ctx, "OPAProcessor.Process")
defer span.End()
opap.loggerStartScanning()
defer opap.loggerDoneScanning()
if progressListener != nil {
progressListener.Start(len(policies.Controls))
defer progressListener.Stop()
}
// results to collect from controls being processed in parallel
type results struct {
resourceAssociatedControl map[string]resourcesresults.ResourceAssociatedControl
allResources map[string]workloadinterface.IMetadata
}
resultsChan := make(chan results)
controlsGroup, groupCtx := errgroup.WithContext(ctx)
controlsGroup.SetLimit(2 * runtime.NumCPU())
allResources := make(map[string]workloadinterface.IMetadata, max(len(opap.AllResources), heuristicAllocResources))
for k, v := range opap.AllResources {
allResources[k] = v
}
var resultsCollector sync.WaitGroup
resultsCollector.Add(1)
go func() {
// collects the results from processing all rules for all controls.
//
// NOTE: since policies.Controls is a map, iterating over it doesn't guarantee any
// specific ordering. Therefore, if a conflict is possible on resources, e.g. 2 rules,
// referencing the same resource, the eventual result of the merge is not guaranteed to be
// stable. This behavior is consistent with the previous (unparallelized) processing.
defer resultsCollector.Done()
for result := range resultsChan {
// merge both maps in parallel
var merger sync.WaitGroup
merger.Add(1)
go func() {
// merge all resources
defer merger.Done()
for k, v := range result.allResources {
allResources[k] = v
}
}()
merger.Add(1)
go func() {
defer merger.Done()
// update resources with latest results
for resourceID, controlResult := range result.resourceAssociatedControl {
result, found := opap.ResourcesResult[resourceID]
if !found {
result = resourcesresults.Result{ResourceID: resourceID}
}
result.AssociatedControls = append(result.AssociatedControls, controlResult)
opap.ResourcesResult[resourceID] = result
}
}()
merger.Wait()
}
}()
// processes rules for all controls in parallel
for _, controlToPin := range policies.Controls {
if progressListener != nil {
progressListener.ProgressJob(1, fmt.Sprintf("Control: %s", controlToPin.ControlID))
}
control := controlToPin
controlsGroup.Go(func() error {
resourceAssociatedControl, allResourcesFromControl, err := opap.processControl(groupCtx, &control)
if err != nil {
logger.L().Ctx(groupCtx).Warning(err.Error())
}
select {
case resultsChan <- results{
resourceAssociatedControl: resourceAssociatedControl,
allResources: allResourcesFromControl,
}:
case <-groupCtx.Done(): // interrupted (NOTE: at this moment, this never happens since errors are muted)
return groupCtx.Err()
}
return nil
})
}
// wait for all results from all rules to be collected
err := controlsGroup.Wait()
close(resultsChan)
resultsCollector.Wait()
if err != nil {
return err
}
// merge the final result in resources
for k, v := range allResources {
opap.AllResources[k] = v
}
opap.Report.ReportGenerationTime = time.Now().UTC()
return nil
}
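`Process` now fans controls out to an `errgroup` bounded by `2 * runtime.NumCPU()` and funnels results through a channel into one collector goroutine, so the merged maps never need a mutex. A stdlib-only sketch of that fan-out/fan-in shape, with a semaphore channel standing in for `errgroup.SetLimit` (all names hypothetical):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

type result struct {
	control string
	score   int
}

// processAll runs work for every control concurrently, bounded by a
// semaphore, while one collector goroutine owns the merged map.
func processAll(controls []string, work func(string) int) map[string]int {
	sem := make(chan struct{}, 2*runtime.NumCPU()) // bounds parallelism, like errgroup.SetLimit
	resultsCh := make(chan result)

	merged := make(map[string]int, len(controls))
	var collector sync.WaitGroup
	collector.Add(1)
	go func() { // single collector owns the map: no locking needed
		defer collector.Done()
		for r := range resultsCh {
			merged[r.control] = r.score
		}
	}()

	var workers sync.WaitGroup
	for _, c := range controls {
		c := c
		workers.Add(1)
		sem <- struct{}{}
		go func() {
			defer workers.Done()
			defer func() { <-sem }()
			resultsCh <- result{control: c, score: work(c)}
		}()
	}

	workers.Wait()
	close(resultsCh)
	collector.Wait()
	return merged
}

func main() {
	out := processAll([]string{"C-0001", "C-0002"}, func(c string) int { return len(c) })
	fmt.Println(len(out)) // 2
}
```

Because a single goroutine owns `merged`, writes are serialized by the channel rather than a lock, the same design choice the collector goroutine in the diff makes.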
@@ -132,113 +213,126 @@ func (opap *OPAProcessor) loggerDoneScanning() {
}
}
// processControl processes all the rules for a given control
//
// NOTE: the call to processControl no longer mutates the state of the current OPAProcessor instance,
// but returns a map instead, to be merged by the caller.
func (opap *OPAProcessor) processControl(ctx context.Context, control *reporthandling.Control) (map[string]resourcesresults.ResourceAssociatedControl, map[string]workloadinterface.IMetadata, error) {
resourcesAssociatedControl := make(map[string]resourcesresults.ResourceAssociatedControl, heuristicAllocControls)
allResources := make(map[string]workloadinterface.IMetadata, heuristicAllocResources)
for i := range control.Rules {
resourceAssociatedRule, allResourcesFromRule, err := opap.processRule(ctx, &control.Rules[i], control.FixedInput)
if err != nil {
logger.L().Ctx(ctx).Warning(err.Error())
continue
}
// merge all resources for all processed rules in this control
for k, v := range allResourcesFromRule {
allResources[k] = v
}
// append failed rules to controls
for resourceID, ruleResponse := range resourceAssociatedRule {
controlResult := resourcesresults.ResourceAssociatedControl{}
controlResult.SetID(control.ControlID)
controlResult.SetName(control.Name)
if associatedControl, ok := resourcesAssociatedControl[resourceID]; ok {
controlResult.ResourceAssociatedRules = associatedControl.ResourceAssociatedRules
}
if ruleResponse != nil {
controlResult.ResourceAssociatedRules = append(controlResult.ResourceAssociatedRules, *ruleResponse)
}
if control, ok := opap.AllPolicies.Controls[control.ControlID]; ok {
controlResult.SetStatus(control)
}
resourcesAssociatedControl[resourceID] = controlResult
}
}
return resourcesAssociatedControl, allResources, nil
}
// processRule processes a single policy rule, with some extra fixed control inputs.
//
// NOTE: processRule no longer mutates the state of the current OPAProcessor instance,
// and returns a map instead, to be merged by the caller.
func (opap *OPAProcessor) processRule(ctx context.Context, rule *reporthandling.PolicyRule, fixedControlInputs map[string][]string) (map[string]*resourcesresults.ResourceAssociatedRule, map[string]workloadinterface.IMetadata, error) {
ruleRegoDependenciesData := opap.makeRegoDeps(rule.ConfigInputs, fixedControlInputs)
inputResources, err := reporthandling.RegoResourcesAggregator(
rule,
getAllSupportedObjects(opap.K8SResources, opap.ArmoResource, opap.AllResources, rule), // NOTE: this uses the initial snapshot of AllResources
)
if err != nil {
return nil, nil, fmt.Errorf("error getting aggregated k8sObjects: %w", err)
}
if len(inputResources) == 0 {
return nil, nil, nil // no resources found for testing
}
inputRawResources := workloadinterface.ListMetaToMap(inputResources)
// the failed resources are a subgroup of the enumeratedData, so we store the enumeratedData like it was the input data
enumeratedData, err := opap.enumerateData(ctx, rule, inputRawResources)
if err != nil {
return nil, nil, err
}
inputResources = objectsenvelopes.ListMapToMeta(enumeratedData)
resources := make(map[string]*resourcesresults.ResourceAssociatedRule, len(inputResources))
allResources := make(map[string]workloadinterface.IMetadata, len(inputResources))
for i, inputResource := range inputResources {
resources[inputResource.GetID()] = &resourcesresults.ResourceAssociatedRule{
Name: rule.Name,
ControlConfigurations: ruleRegoDependenciesData.PostureControlInputs,
Status: apis.StatusPassed,
}
allResources[inputResource.GetID()] = inputResources[i]
}
ruleResponses, err := opap.runOPAOnSingleRule(ctx, rule, inputRawResources, ruleData, ruleRegoDependenciesData)
if err != nil {
// TODO - Handle error
logger.L().Ctx(ctx).Error(err.Error())
return resources, allResources, err
}
// ruleResponse to ruleResult
for _, ruleResponse := range ruleResponses {
failedResources := objectsenvelopes.ListMapToMeta(ruleResponse.GetFailedResources())
for _, failedResource := range failedResources {
var ruleResult *resourcesresults.ResourceAssociatedRule
if r, found := resources[failedResource.GetID()]; found {
ruleResult = r
} else {
ruleResult = &resourcesresults.ResourceAssociatedRule{
Paths: make([]armotypes.PosturePaths, 0, len(ruleResponse.FailedPaths)+len(ruleResponse.FixPaths)+1),
}
}
ruleResult.SetStatus(apis.StatusFailed, nil)
for _, failedPath := range ruleResponse.FailedPaths {
ruleResult.Paths = append(ruleResult.Paths, armotypes.PosturePaths{FailedPath: failedPath})
}
for _, fixPath := range ruleResponse.FixPaths {
ruleResult.Paths = append(ruleResult.Paths, armotypes.PosturePaths{FixPath: fixPath})
}
if ruleResponse.FixCommand != "" {
ruleResult.Paths = append(ruleResult.Paths, armotypes.PosturePaths{FixCommand: ruleResponse.FixCommand})
}
resources[failedResource.GetID()] = ruleResult
}
}
return resources, allResources, nil
}
func (opap *OPAProcessor) runOPAOnSingleRule(ctx context.Context, rule *reporthandling.PolicyRule, k8sObjects []map[string]interface{}, getRuleData func(*reporthandling.PolicyRule) string, ruleRegoDependenciesData resources.RegoDependenciesData) ([]reporthandling.RuleResponse, error) {
@@ -250,20 +344,25 @@ func (opap *OPAProcessor) runOPAOnSingleRule(ctx context.Context, rule *reportha
}
}
// runRegoOnK8s compiles an OPA PolicyRule and evaluates it against the supplied k8s objects
func (opap *OPAProcessor) runRegoOnK8s(ctx context.Context, rule *reporthandling.PolicyRule, k8sObjects []map[string]interface{}, getRuleData func(*reporthandling.PolicyRule) string, ruleRegoDependenciesData resources.RegoDependenciesData) ([]reporthandling.RuleResponse, error) {
// compile modules
modules, err := getRuleDependencies(ctx)
if err != nil {
return nil, fmt.Errorf("rule: '%s', %s", rule.Name, err.Error())
}
opap.opaRegisterOnce.Do(func() {
// register signature verification methods for the OPA ast engine (since these are package level symbols, we do it only once)
rego.RegisterBuiltin2(cosignVerifySignatureDeclaration, cosignVerifySignatureDefinition)
rego.RegisterBuiltin1(cosignHasSignatureDeclaration, cosignHasSignatureDefinition)
})
modules[rule.Name] = getRuleData(rule)
// NOTE: OPA module compilation is the most resource-intensive operation.
compiled, err := ast.CompileModules(modules)
if err != nil {
return nil, fmt.Errorf("in 'runRegoOnK8s', failed to compile rule, name: %s, reason: %w", rule.Name, err)
}
store, err := ruleRegoDependenciesData.TOStorage()
@@ -272,17 +371,15 @@ func (opap *OPAProcessor) runRegoOnK8s(ctx context.Context, rule *reporthandling
}
// Eval
results, err := opap.regoEval(ctx, k8sObjects, compiled, &store)
if err != nil {
logger.L().Ctx(ctx).Warning(err.Error())
}
return results, nil
}
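`opaRegisterOnce` guards the cosign builtin registration because `rego.RegisterBuiltin*` mutates package-level state and `Process` now evaluates rules concurrently. The `sync.Once` idiom in isolation (a counter stands in for the registration side effect):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	registerOnce sync.Once
	registered   int // stands in for package-level OPA builtin registration
)

// register is safe to call from many goroutines: the body runs exactly once.
func register() {
	registerOnce.Do(func() {
		registered++
	})
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			register()
		}()
	}
	wg.Wait()
	fmt.Println(registered) // 1
}
```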
func (opap *OPAProcessor) regoEval(ctx context.Context, inputObj []map[string]interface{}, compiledRego *ast.Compiler, store *storage.Store) ([]reporthandling.RuleResponse, error) {
rego := rego.New(
rego.Query("data.armo_builtins"), // get package name from rule
rego.Compiler(compiledRego),
@@ -291,7 +388,7 @@ func (opap *OPAProcessor) regoEval(inputObj []map[string]interface{}, compiledRe
)
// Run evaluation
resultSet, err := rego.Eval(ctx)
if err != nil {
return nil, err
}
@@ -304,23 +401,49 @@ func (opap *OPAProcessor) regoEval(inputObj []map[string]interface{}, compiledRe
}
func (opap *OPAProcessor) enumerateData(ctx context.Context, rule *reporthandling.PolicyRule, k8sObjects []map[string]interface{}) ([]map[string]interface{}, error) {
if ruleEnumeratorData(rule) == "" {
return k8sObjects, nil
}
ruleRegoDependenciesData := opap.makeRegoDeps(rule.ConfigInputs, nil)
ruleResponse, err := opap.runOPAOnSingleRule(ctx, rule, k8sObjects, ruleEnumeratorData, ruleRegoDependenciesData)
if err != nil {
return nil, err
}
failedResources := make([]map[string]interface{}, 0, len(ruleResponse))
for _, ruleResponse := range ruleResponse {
failedResources = append(failedResources, ruleResponse.GetFailedResources()...)
}
return failedResources, nil
}
// makeRegoDeps builds a resources.RegoDependenciesData struct for the current cloud provider.
//
// If some extra fixedControlInputs are provided, they are merged into the "posture" control inputs.
func (opap *OPAProcessor) makeRegoDeps(configInputs []string, fixedControlInputs map[string][]string) resources.RegoDependenciesData {
postureControlInputs := opap.regoDependenciesData.GetFilteredPostureControlInputs(configInputs) // get store
// merge configurable control input and fixed control input
for k, v := range fixedControlInputs {
postureControlInputs[k] = v
}
dataControlInputs := map[string]string{
"cloudProvider": opap.OPASessionObj.Report.ClusterCloudProvider,
}
return resources.RegoDependenciesData{
DataControlInputs: dataControlInputs,
PostureControlInputs: postureControlInputs,
}
}
func max(a, b int) int {
if a > b {
return a
}
return b
}
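The local `max` helper exists because Go only gained a builtin `max` in Go 1.21; here it floors map preallocations at a heuristic size. A small usage sketch (the helper is renamed `maxInt` and the constant is illustrative):

```go
package main

import "fmt"

func maxInt(a, b int) int {
	if a > b {
		return a
	}
	return b
}

func main() {
	const heuristicAlloc = 100 // illustrative floor for the initial capacity
	seen := 7                  // e.g. len(opap.AllResources) on a small scan
	// Preallocate to whichever is larger: big scans avoid map rehashing,
	// small scans still reserve a sensible minimum.
	m := make(map[string]struct{}, maxInt(seen, heuristicAlloc))
	fmt.Println(maxInt(seen, heuristicAlloc), len(m)) // 100 0
}
```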


@@ -7,9 +7,11 @@ import (
"github.com/kubescape/k8s-interface/k8sinterface"
"github.com/kubescape/k8s-interface/workloadinterface"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/opa-utils/exceptions"
"github.com/kubescape/opa-utils/reporthandling"
"github.com/kubescape/opa-utils/reporthandling/apis"
"github.com/kubescape/opa-utils/reporthandling/results/v1/reportsummary"
"github.com/kubescape/opa-utils/reporthandling/results/v1/resourcesresults"
resources "github.com/kubescape/opa-utils/resources"
"go.opentelemetry.io/otel"
)
@@ -21,21 +23,29 @@ import (
// - adds exceptions (and updates controls status)
// - summarizes results
func (opap *OPAProcessor) updateResults(ctx context.Context) {
_, span := otel.Tracer("").Start(ctx, "OPAProcessor.updateResults")
defer span.End()
// remove data from all objects
for i := range opap.AllResources {
removeData(opap.AllResources[i])
}
processor := exceptions.NewProcessor()
// set exceptions
for i := range opap.ResourcesResult {
t := opap.ResourcesResult[i]
// first set exceptions (reuse the same exceptions processor)
if resource, ok := opap.AllResources[i]; ok {
t.SetExceptions(
resource,
opap.Exceptions,
cautils.ClusterName,
opap.AllPolicies.Controls, // update status depending on action required
resourcesresults.WithExceptionsProcessor(processor),
)
}
// summarize the resources
@@ -140,10 +150,9 @@ func filterOutChildResources(objects []workloadinterface.IMetadata, match []repo
response := []workloadinterface.IMetadata{}
owners := []string{}
for m := range match {
owners = append(owners, match[m].Resources...)
}
for i := range objects {
if !k8sinterface.IsTypeWorkload(objects[i].GetObject()) {
response = append(response, objects[i])
@@ -157,8 +166,10 @@ func filterOutChildResources(objects []workloadinterface.IMetadata, match []repo
response = append(response, w)
}
}
return response
}
func getRuleDependencies(ctx context.Context) (map[string]string, error) {
modules := resources.LoadRegoModules()
if len(modules) == 0 {

View File

@@ -12,6 +12,7 @@ import (
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
cloudsupportv1 "github.com/kubescape/k8s-interface/cloudsupport/v1"
"github.com/kubescape/kubescape/v2/core/pkg/opaprocessor"
reportv2 "github.com/kubescape/opa-utils/reporthandling/v2"
"github.com/kubescape/k8s-interface/cloudsupport"
@@ -35,7 +36,7 @@ func NewPolicyHandler(resourceHandler resourcehandler.IResourceHandler) *PolicyH
}
}
func (policyHandler *PolicyHandler) CollectResources(ctx context.Context, policyIdentifier []cautils.PolicyIdentifier, scanInfo *cautils.ScanInfo, progressListener opaprocessor.IJobProgressNotificationClient) (*cautils.OPASessionObj, error) {
opaSessionObj := cautils.NewOPASessionObj(ctx, nil, nil, scanInfo)
// validate notification
@@ -47,7 +48,7 @@ func (policyHandler *PolicyHandler) CollectResources(ctx context.Context, policy
return opaSessionObj, err
}
err := policyHandler.getResources(ctx, policyIdentifier, opaSessionObj, progressListener)
if err != nil {
return opaSessionObj, err
}
@@ -59,7 +60,7 @@ func (policyHandler *PolicyHandler) CollectResources(ctx context.Context, policy
return opaSessionObj, nil
}
func (policyHandler *PolicyHandler) getResources(ctx context.Context, policyIdentifier []cautils.PolicyIdentifier, opaSessionObj *cautils.OPASessionObj, progressListener opaprocessor.IJobProgressNotificationClient) error {
ctx, span := otel.Tracer("").Start(ctx, "policyHandler.getResources")
defer span.End()
opaSessionObj.Report.ClusterAPIServerInfo = policyHandler.resourceHandler.GetClusterAPIServerInfo(ctx)
@@ -69,7 +70,7 @@ func (policyHandler *PolicyHandler) getResources(ctx context.Context, policyIden
setCloudMetadata(opaSessionObj)
}
resourcesMap, allResources, ksResources, err := policyHandler.resourceHandler.GetResources(ctx, opaSessionObj, &policyIdentifier[0].Designators)
resourcesMap, allResources, ksResources, err := policyHandler.resourceHandler.GetResources(ctx, opaSessionObj, &policyIdentifier[0].Designators, progressListener)
if err != nil {
return err
}

View File

@@ -6,9 +6,7 @@ import (
"encoding/json"
"testing"
"github.com/armosec/armoapi-go/armotypes"
"github.com/kubescape/k8s-interface/k8sinterface"
"github.com/kubescape/k8s-interface/workloadinterface"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/pkg/resourcehandler"
"github.com/kubescape/opa-utils/reporthandling/apis"
@@ -210,6 +208,7 @@ func Test_isAKS(t *testing.T) {
}
}
/* unused for now.
type iResourceHandlerMock struct{}
func (*iResourceHandlerMock) GetResources(*cautils.OPASessionObj, *armotypes.PortalDesignator) (*cautils.K8SResources, map[string]workloadinterface.IMetadata, *cautils.KSResources, error) {
@@ -218,6 +217,7 @@ func (*iResourceHandlerMock) GetResources(*cautils.OPASessionObj, *armotypes.Por
func (*iResourceHandlerMock) GetClusterAPIServerInfo() *version.Info {
return nil
}
*/
// https://github.com/kubescape/kubescape/pull/1004
// Cluster named .*eks.* config without a cloudconfig panics whereas we just want to scan a file
@@ -249,12 +249,12 @@ func Test_getResources(t *testing.T) {
policyIdentifier := []cautils.PolicyIdentifier{{}}
assert.NotPanics(t, func() {
policyHandler.getResources(context.TODO(), policyIdentifier, objSession)
policyHandler.getResources(context.TODO(), policyIdentifier, objSession, cautils.NewProgressHandler(""))
}, "Cluster named .*eks.* without a cloud config panics on cluster scan !")
assert.NotPanics(t, func() {
objSession.Metadata.ScanMetadata.ScanningTarget = reportv2.File
policyHandler.getResources(context.TODO(), policyIdentifier, objSession)
policyHandler.getResources(context.TODO(), policyIdentifier, objSession, cautils.NewProgressHandler(""))
}, "Cluster named .*eks.* without a cloud config panics on non-cluster scan !")
}

View File

@@ -37,7 +37,7 @@ func (policyHandler *PolicyHandler) getPolicies(ctx context.Context, policyIdent
if err == nil {
policiesAndResources.Exceptions = exceptionPolicies
} else {
logger.L().Ctx(ctx).Error("failed to load exceptions", helpers.Error(err))
logger.L().Ctx(ctx).Warning("failed to load exceptions", helpers.Error(err))
}
// get account configuration
@@ -45,7 +45,7 @@ func (policyHandler *PolicyHandler) getPolicies(ctx context.Context, policyIdent
if err == nil {
policiesAndResources.RegoInputData.PostureControlInputs = controlsInputs
} else {
logger.L().Ctx(ctx).Error(err.Error())
logger.L().Ctx(ctx).Warning(err.Error())
}
cautils.StopSpinner()

View File

@@ -15,6 +15,7 @@ import (
"github.com/kubescape/go-logger/helpers"
"github.com/kubescape/k8s-interface/k8sinterface"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/pkg/opaprocessor"
)
// FileResourceHandler handle resources from files and URLs
@@ -31,7 +32,7 @@ func NewFileResourceHandler(_ context.Context, inputPatterns []string, registryA
}
}
func (fileHandler *FileResourceHandler) GetResources(ctx context.Context, sessionObj *cautils.OPASessionObj, _ *armotypes.PortalDesignator) (*cautils.K8SResources, map[string]workloadinterface.IMetadata, *cautils.KSResources, error) {
func (fileHandler *FileResourceHandler) GetResources(ctx context.Context, sessionObj *cautils.OPASessionObj, _ *armotypes.PortalDesignator, progressListener opaprocessor.IJobProgressNotificationClient) (*cautils.K8SResources, map[string]workloadinterface.IMetadata, *cautils.KSResources, error) {
//
// build resources map

View File

@@ -9,6 +9,7 @@ import (
"github.com/kubescape/go-logger/helpers"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/pkg/hostsensorutils"
"github.com/kubescape/kubescape/v2/core/pkg/opaprocessor"
"github.com/kubescape/opa-utils/objectsenvelopes"
"github.com/kubescape/opa-utils/reporthandling/apis"
@@ -35,6 +36,7 @@ var cloudResourceGetterMapping = map[string]cloudResourceGetter{
cloudapis.CloudProviderDescribeKind: cloudsupport.GetDescriptiveInfoFromCloudProvider,
cloudapis.CloudProviderDescribeRepositoriesKind: cloudsupport.GetDescribeRepositoriesFromCloudProvider,
cloudapis.CloudProviderListEntitiesForPoliciesKind: cloudsupport.GetListEntitiesForPoliciesFromCloudProvider,
cloudapis.CloudProviderPolicyVersionKind: cloudsupport.GetPolicyVersionFromCloudProvider,
}
type K8sResourceHandler struct {
@@ -55,7 +57,7 @@ func NewK8sResourceHandler(k8s *k8sinterface.KubernetesApi, fieldSelector IField
}
}
func (k8sHandler *K8sResourceHandler) GetResources(ctx context.Context, sessionObj *cautils.OPASessionObj, designator *armotypes.PortalDesignator) (*cautils.K8SResources, map[string]workloadinterface.IMetadata, *cautils.KSResources, error) {
func (k8sHandler *K8sResourceHandler) GetResources(ctx context.Context, sessionObj *cautils.OPASessionObj, designator *armotypes.PortalDesignator, progressListener opaprocessor.IJobProgressNotificationClient) (*cautils.K8SResources, map[string]workloadinterface.IMetadata, *cautils.KSResources, error) {
allResources := map[string]workloadinterface.IMetadata{}
// get k8s resources
@@ -136,14 +138,13 @@ func (k8sHandler *K8sResourceHandler) GetResources(ctx context.Context, sessionO
if err := k8sHandler.collectRbacResources(allResources); err != nil {
logger.L().Ctx(ctx).Warning("failed to collect rbac resources", helpers.Error(err))
}
cloudResources := cautils.MapCloudResources(ksResourceMap)
setMapNamespaceToNumOfResources(ctx, allResources, sessionObj)
// check that controls use cloud resources
if len(cloudResources) > 0 {
err := k8sHandler.collectCloudResources(ctx, sessionObj, allResources, ksResourceMap, cloudResources)
err := k8sHandler.collectCloudResources(ctx, sessionObj, allResources, ksResourceMap, cloudResources, progressListener)
if err != nil {
cautils.SetInfoMapForResources(err.Error(), cloudResources, sessionObj.InfoMap)
logger.L().Debug("failed to collect cloud data", helpers.Error(err))
@@ -153,44 +154,61 @@ func (k8sHandler *K8sResourceHandler) GetResources(ctx context.Context, sessionO
return k8sResourcesMap, allResources, ksResourceMap, nil
}
func (k8sHandler *K8sResourceHandler) collectCloudResources(ctx context.Context, sessionObj *cautils.OPASessionObj, allResources map[string]workloadinterface.IMetadata, ksResourceMap *cautils.KSResources, cloudResources []string) error {
var err error
func (k8sHandler *K8sResourceHandler) collectCloudResources(ctx context.Context, sessionObj *cautils.OPASessionObj, allResources map[string]workloadinterface.IMetadata, ksResourceMap *cautils.KSResources, cloudResources []string, progressListener opaprocessor.IJobProgressNotificationClient) error {
clusterName := cautils.ClusterName
provider := cloudsupport.GetCloudProvider(clusterName)
if provider == "" {
return fmt.Errorf("failed to get cloud provider, cluster: %s", clusterName)
}
if sessionObj.Metadata != nil && sessionObj.Metadata.ContextMetadata.ClusterContextMetadata != nil {
sessionObj.Metadata.ContextMetadata.ClusterContextMetadata.CloudProvider = provider
}
logger.L().Debug("cloud", helpers.String("cluster", clusterName), helpers.String("clusterName", clusterName), helpers.String("provider", provider))
for resourceKind, resourceGetter := range cloudResourceGetterMapping {
if cloudResourceRequired(cloudResources, resourceKind) {
logger.L().Debug("Collecting cloud data ", helpers.String("resourceKind", resourceKind))
wl, err := resourceGetter(clusterName, provider)
if err != nil {
if !strings.Contains(err.Error(), cloudv1.NotSupportedMsg) {
// Return error with useful info on how to configure credentials for getting cloud provider info
logger.L().Debug("failed to get cloud data", helpers.String("resourceKind", resourceKind), helpers.Error(err))
err = fmt.Errorf("failed to get %s descriptive information. Read more: https://hub.armosec.io/docs/kubescape-integration-with-cloud-providers", strings.ToUpper(provider))
cautils.SetInfoMapForResources(err.Error(), cloudResources, sessionObj.InfoMap)
}
} else {
allResources[wl.GetID()] = wl
(*ksResourceMap)[fmt.Sprintf("%s/%s", wl.GetApiVersion(), wl.GetKind())] = []string{wl.GetID()}
}
}
logger.L().Info("Downloading cloud resources")
// start progressbar during pull of cloud resources (this can take a while).
if progressListener != nil {
progressListener.Start(len(cloudResources))
defer progressListener.Stop()
}
for resourceKind, resourceGetter := range cloudResourceGetterMapping {
// advance the progress bar for this resource kind
if progressListener != nil {
progressListener.ProgressJob(1, fmt.Sprintf("Cloud Resource: %s", resourceKind))
}
if !cloudResourceRequired(cloudResources, resourceKind) {
continue
}
logger.L().Debug("Collecting cloud data ", helpers.String("resourceKind", resourceKind))
wl, err := resourceGetter(clusterName, provider)
if err != nil {
if !strings.Contains(err.Error(), cloudv1.NotSupportedMsg) {
// Return error with useful info on how to configure credentials for getting cloud provider info
logger.L().Debug("failed to get cloud data", helpers.String("resourceKind", resourceKind), helpers.Error(err))
err = fmt.Errorf("failed to get %s descriptive information. Read more: https://hub.armosec.io/docs/kubescape-integration-with-cloud-providers", strings.ToUpper(provider))
cautils.SetInfoMapForResources(err.Error(), cloudResources, sessionObj.InfoMap)
}
continue
}
allResources[wl.GetID()] = wl
(*ksResourceMap)[fmt.Sprintf("%s/%s", wl.GetApiVersion(), wl.GetKind())] = []string{wl.GetID()}
}
logger.L().Success("Downloaded cloud resources")
// get api server info resource
if cloudResourceRequired(cloudResources, string(cloudsupport.TypeApiServerInfo)) {
err = k8sHandler.collectAPIServerInfoResource(allResources, ksResourceMap)
if err != nil {
if err := k8sHandler.collectAPIServerInfoResource(allResources, ksResourceMap); err != nil {
logger.L().Ctx(ctx).Warning("failed to collect api server info resource", helpers.Error(err))
return err
}
}
return err
return nil
}
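The progress-listener calls in `collectCloudResources` above imply a small notification interface with `Start`, `ProgressJob`, and `Stop`. A minimal sketch of a conforming implementation — the interface name comes from this diff, everything else (the `consoleProgress` type and its behavior) is illustrative, not the repository's code:

```go
package main

import "fmt"

// IJobProgressNotificationClient mirrors the three methods invoked in
// collectCloudResources above.
type IJobProgressNotificationClient interface {
	Start(allSteps int)
	ProgressJob(step int, message string)
	Stop()
}

// consoleProgress is a hypothetical listener that prints each step.
type consoleProgress struct {
	total, done int
}

func (c *consoleProgress) Start(allSteps int) { c.total = allSteps }

func (c *consoleProgress) ProgressJob(step int, message string) {
	c.done += step
	fmt.Printf("[%d/%d] %s\n", c.done, c.total, message)
}

func (c *consoleProgress) Stop() { c.done = c.total }

func main() {
	var listener IJobProgressNotificationClient = &consoleProgress{}
	listener.Start(3)
	for _, kind := range []string{"DescribeKind", "DescribeRepositoriesKind", "PolicyVersionKind"} {
		listener.ProgressJob(1, "Cloud Resource: "+kind)
	}
	listener.Stop()
}
```

Passing `nil` for the listener disables reporting, which is why every call site in the diff is guarded with `if progressListener != nil`.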
func cloudResourceRequired(cloudResources []string, resource string) bool {
@@ -217,7 +235,7 @@ func (k8sHandler *K8sResourceHandler) collectAPIServerInfoResource(allResources
func (k8sHandler *K8sResourceHandler) GetClusterAPIServerInfo(ctx context.Context) *version.Info {
clusterAPIServerInfo, err := k8sHandler.k8s.DiscoveryClient.ServerVersion()
if err != nil {
logger.L().Ctx(ctx).Error("failed to discover API server information", helpers.Error(err))
logger.L().Ctx(ctx).Warning("failed to discover API server information", helpers.Error(err))
return nil
}
return clusterAPIServerInfo

View File

@@ -6,10 +6,11 @@ import (
"github.com/armosec/armoapi-go/armotypes"
"github.com/kubescape/k8s-interface/workloadinterface"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/pkg/opaprocessor"
"k8s.io/apimachinery/pkg/version"
)
type IResourceHandler interface {
GetResources(context.Context, *cautils.OPASessionObj, *armotypes.PortalDesignator) (*cautils.K8SResources, map[string]workloadinterface.IMetadata, *cautils.KSResources, error)
GetResources(context.Context, *cautils.OPASessionObj, *armotypes.PortalDesignator, opaprocessor.IJobProgressNotificationClient) (*cautils.K8SResources, map[string]workloadinterface.IMetadata, *cautils.KSResources, error)
GetClusterAPIServerInfo(ctx context.Context) *version.Info
}

View File

@@ -32,12 +32,12 @@ type IPrinter interface {
func GetWriter(ctx context.Context, outputFile string) *os.File {
if outputFile != "" {
if err := os.MkdirAll(filepath.Dir(outputFile), os.ModePerm); err != nil {
logger.L().Ctx(ctx).Error(fmt.Sprintf("failed to create directory, reason: %s", err.Error()))
logger.L().Ctx(ctx).Warning(fmt.Sprintf("failed to create directory, reason: %s", err.Error()))
return os.Stdout
}
f, err := os.Create(outputFile)
if err != nil {
logger.L().Ctx(ctx).Error(fmt.Sprintf("failed to open file for writing, reason: %s", err.Error()))
logger.L().Ctx(ctx).Warning(fmt.Sprintf("failed to open file for writing, reason: %s", err.Error()))
return os.Stdout
}
return f

View File

@@ -34,7 +34,6 @@ func (prettyPrinter *PrettyPrinter) printAttackTreeNode(node v1alpha1.IAttackTra
}
*/
func (prettyPrinter *PrettyPrinter) createFailedControlList(node v1alpha1.IAttackTrackStep) string {
var r string
for i, control := range node.GetControls() {

View File

@@ -72,14 +72,13 @@
<img class="logo" src="https://raw.githubusercontent.com/kubescape/kubescape/master/core/pkg/resultshandling/printer/v2/pdf/logo.png">
<h1>Kubescape Scan Report</h1>
{{ with .OPASessionObj.Report.SummaryDetails }}
<h2>By Controls</h2>
<h3>Summary</h3>
</br>
<h2>Summary:</h2>
<table>
<thead>
<tr>
<th>All</th>
<th>Failed</th>
<th>Excluded</th>
<th>Skipped</th>
</tr>
</thead>
@@ -87,19 +86,18 @@
<tr>
<td>{{ .NumberOfControls.All }}</td>
<td>{{ .NumberOfControls.Failed }}</td>
<td>{{ .NumberOfControls.Excluded }}</td>
<td>{{ .NumberOfControls.Skipped }}</td>
</tr>
</tbody>
</table>
<h3>Details</h3>
</br>
<h2>Details</h2>
<table>
<thead>
<tr>
<th class="controlSeverityCell">Severity</th>
<th class="controlNameCell">Control Name</th>
<th class="controlRiskCell">Failed Resources</th>
<th class="controlRiskCell">Excluded Resources</th>
<th class="controlRiskCell">All Resources</th>
<th class="controlRiskCell">Risk Score, %</th>
</tr>
@@ -110,9 +108,8 @@
<tr>
<td class="controlSeverityCell">{{ controlSeverityToString $control.ScoreFactor }}</td>
<td class="controlNameCell">{{ $control.Name }}</td>
<td class="controlRiskCell numericCell">{{ $control.ResourceCounters.FailedResources }}</td>
<td class="controlRiskCell numericCell">{{ $control.ResourceCounters.ExcludedResources }}</td>
<td class="controlRiskCell numericCell">{{ sum $control.ResourceCounters.ExcludedResources $control.ResourceCounters.FailedResources $control.ResourceCounters.PassedResources }}</td>
<td class="controlRiskCell numericCell">{{ $control.StatusCounters.FailedResources }}</td>
<td class="controlRiskCell numericCell">{{ sum $control.StatusCounters.SkippedResources $control.StatusCounters.FailedResources $control.StatusCounters.PassedResources }}</td>
<td class="controlRiskCell numericCell">{{ float32ToInt $control.Score }}</td>
</tr>
</tr>
@@ -120,7 +117,9 @@
<tbody>
</table>
{{ end }}
<h2>By Resource</h2>
</br>
<h2>Failed Resources:</h2>
</br>
{{ $sortedResourceTableView := sortByNamespace .ResourceTableView }}
{{ range $sortedResourceTableView }}
<h3>Name: {{ .Resource.GetName }}</h3>

View File

@@ -26,6 +26,8 @@ const (
//go:embed html/report.gohtml
var reportTemplate string
var _ printer.IPrinter = &HtmlPrinter{}
type HTMLReportingCtx struct {
OPASessionObj *cautils.OPASessionObj
ResourceTableView ResourceTableView
@@ -108,9 +110,9 @@ func (hp *HtmlPrinter) ActionPrint(ctx context.Context, opaSessionObj *cautils.O
err := tpl.Execute(hp.writer, reportingCtx)
if err != nil {
logger.L().Ctx(ctx).Error("failed to render template", helpers.Error(err))
} else {
printer.LogOutputFile(hp.writer.Name())
return
}
printer.LogOutputFile(hp.writer.Name())
}
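The `var _ printer.IPrinter = &HtmlPrinter{}` line added here (and repeated for the JSON, JUnit, PDF, pretty, Prometheus, SARIF, and silent printers below) is Go's compile-time interface assertion: the blank-identifier assignment refuses to compile if the type ever stops satisfying the interface. A standalone illustration of the pattern, using hypothetical names rather than the kubescape types:

```go
package main

import "fmt"

// Speaker stands in for printer.IPrinter in this illustration.
type Speaker interface {
	Speak() string
}

type Dog struct{}

func (Dog) Speak() string { return "woof" }

// Compile-time check: if Dog ever loses Speak(), this line breaks the
// build instead of surfacing the mismatch at runtime.
var _ Speaker = Dog{}

func main() {
	fmt.Println(Dog{}.Speak())
}
```

The assignment is to `_`, so it adds no runtime cost and allocates nothing; it exists purely for the compiler.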

View File

@@ -19,6 +19,8 @@ const (
jsonOutputExt = ".json"
)
var _ printer.IPrinter = &JsonPrinter{}
type JsonPrinter struct {
writer *os.File
}
@@ -49,7 +51,7 @@ func (jp *JsonPrinter) ActionPrint(ctx context.Context, opaSessionObj *cautils.O
if _, err := jp.writer.Write(r); err != nil {
logger.L().Ctx(ctx).Error("failed to write results", helpers.Error(err))
} else {
printer.LogOutputFile(jp.writer.Name())
return
}
printer.LogOutputFile(jp.writer.Name())
}

View File

@@ -23,10 +23,8 @@ const (
junitOutputExt = ".xml"
)
/*
riskScore
status
*/
var _ printer.IPrinter = &JunitPrinter{}
type JunitPrinter struct {
writer *os.File
verbose bool
@@ -122,9 +120,9 @@ func (jp *JunitPrinter) ActionPrint(ctx context.Context, opaSessionObj *cautils.
if _, err := jp.writer.Write(postureReportStr); err != nil {
logger.L().Ctx(ctx).Error("failed to write results", helpers.Error(err))
} else {
printer.LogOutputFile(jp.writer.Name())
return
}
printer.LogOutputFile(jp.writer.Name())
}
func testsSuites(results *cautils.OPASessionObj) *JUnitTestSuites {

View File

@@ -32,6 +32,8 @@ var (
kubescapeLogo []byte
)
var _ printer.IPrinter = &PdfPrinter{}
type PdfPrinter struct {
writer *os.File
}
@@ -96,9 +98,9 @@ func (pp *PdfPrinter) ActionPrint(ctx context.Context, opaSessionObj *cautils.OP
if _, err := pp.writer.Write(outBuff.Bytes()); err != nil {
logger.L().Ctx(ctx).Error("failed to write results", helpers.Error(err))
} else {
printer.LogOutputFile(pp.writer.Name())
return
}
printer.LogOutputFile(pp.writer.Name())
}
// printHeader prints the Kubescape logo and report date

View File

@@ -24,6 +24,8 @@ const (
prettyPrinterOutputExt = ".txt"
)
var _ printer.IPrinter = &PrettyPrinter{}
type PrettyPrinter struct {
writer *os.File
formatVersion string

View File

@@ -14,6 +14,8 @@ import (
"github.com/kubescape/opa-utils/reporthandling/results/v1/resourcesresults"
)
var _ printer.IPrinter = &PrometheusPrinter{}
type PrometheusPrinter struct {
writer *os.File
verboseMode bool
@@ -51,7 +53,7 @@ func (pp *PrometheusPrinter) ActionPrint(ctx context.Context, opaSessionObj *cau
if _, err := pp.writer.Write([]byte(metrics.String())); err != nil {
logger.L().Ctx(ctx).Error("failed to write results", helpers.Error(err))
} else {
printer.LogOutputFile(pp.writer.Name())
return
}
printer.LogOutputFile(pp.writer.Name())
}

View File

@@ -51,6 +51,8 @@ func scoreFactorToSARIFSeverityLevel(score float32) sarifSeverityLevel {
return sarifSeverityLevelNote
}
var _ printer.IPrinter = &SARIFPrinter{}
// SARIFPrinter is a printer that emits the report in the SARIF format
type SARIFPrinter struct {
// outputFile is the name of the output file

View File

@@ -1,11 +1,23 @@
package printer
import (
"context"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/pkg/resultshandling/printer"
)
var _ printer.IPrinter = &SilentPrinter{}
// SilentPrinter is a printer that does not print anything
type SilentPrinter struct {
}
func (silentPrinter *SilentPrinter) ActionPrint(opaSessionObj *cautils.OPASessionObj) {
func (silentPrinter *SilentPrinter) ActionPrint(ctx context.Context, opaSessionObj *cautils.OPASessionObj) {
}
func (silentPrinter *SilentPrinter) SetWriter(ctx context.Context, outputFile string) {
}
func (silentPrinter *SilentPrinter) Score(score float32) {
}

View File

@@ -7,8 +7,11 @@ import (
"os"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/pkg/resultshandling/reporter"
)
var _ reporter.IReport = &ReportMock{}
type ReportMock struct {
query string
message string
@@ -36,17 +39,19 @@ func (reportMock *ReportMock) GetURL() string {
return ""
}
q := u.Query()
q.Add("utm_source", "GitHub")
q.Add("utm_medium", "CLI")
q.Add("utm_campaign", "Submit")
u.RawQuery = q.Encode()
return u.String()
}
func (reportMock *ReportMock) DisplayReportURL() {
if m := reportMock.strToDisplay(); m != "" {
cautils.InfoTextDisplay(os.Stderr, m)
}
}
func (reportMock *ReportMock) strToDisplay() string {
if reportMock.message == "" {
return ""
}
sep := "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
message := sep + "\n"
@@ -55,5 +60,5 @@ func (reportMock *ReportMock) DisplayReportURL() {
message += "For more details: " + link + "\n"
}
message += sep + "\n"
cautils.InfoTextDisplay(os.Stderr, fmt.Sprintf("\n%s\n", message))
return fmt.Sprintf("\n%s\n", message)
}

View File

@@ -1,10 +1,40 @@
package reporter
import "testing"
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
"net/http/httputil"
"net/url"
"os"
"path/filepath"
"strings"
"testing"
"github.com/armosec/armoapi-go/armotypes"
"github.com/kubescape/k8s-interface/workloadinterface"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/pkg/resultshandling/reporter"
"github.com/kubescape/kubescape/v2/internal/testutils"
"github.com/kubescape/opa-utils/reporthandling"
"github.com/kubescape/opa-utils/reporthandling/apis"
"github.com/kubescape/opa-utils/reporthandling/attacktrack/v1alpha1"
"github.com/kubescape/opa-utils/reporthandling/results/v1/prioritization"
"github.com/kubescape/opa-utils/reporthandling/results/v1/resourcesresults"
reporthandlingv2 "github.com/kubescape/opa-utils/reporthandling/v2"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestReportMockGetURL(t *testing.T) {
t.Parallel()
func TestReportMock_GetURL(t *testing.T) {
type fields struct {
query string
query string
message string
}
tests := []struct {
name string
@@ -13,19 +43,86 @@ func TestReportMock_GetURL(t *testing.T) {
}{
{
name: "TestReportMock_GetURL",
fields: struct {
query string
}{
query: "https://kubescape.io",
fields: fields{
query: "https://kubescape.io",
message: "some message",
},
want: "https://kubescape.io?utm_campaign=Submit&utm_medium=CLI&utm_source=GitHub",
want: "https://kubescape.io",
},
{
name: "TestReportMock_GetURL_empty",
fields: struct {
query string
}{
query: "",
fields: fields{
query: "",
message: "",
},
want: "",
},
}
for _, toPin := range tests {
tc := toPin
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
var reportMock reporter.IReport = NewReportMock(tc.fields.query, tc.fields.message)
t.Run("mock reports should support GetURL", func(t *testing.T) {
got := reportMock.GetURL()
require.Equalf(t, tc.want, got,
"ReportMock.GetURL() = %v, want %v", got, tc.want,
)
})
t.Run("mock reports should support DisplayReportURL", func(t *testing.T) {
capture, clean := captureStderr(t)
defer clean()
reportMock.DisplayReportURL()
require.NoError(t, capture.Close())
buf, err := os.ReadFile(capture.Name())
require.NoError(t, err)
if tc.fields.message != "" {
require.NotEmpty(t, buf)
} else {
require.Empty(t, buf)
}
})
t.Run("mock reports should support Submit", func(t *testing.T) {
require.NoError(t,
reportMock.Submit(context.Background(), &cautils.OPASessionObj{}),
)
})
})
}
}
func TestReportMock_strToDisplay(t *testing.T) {
type fields struct {
query string
message string
}
tests := []struct {
name string
fields fields
want string
}{
{
name: "TestReportMock_strToDisplay",
fields: fields{
query: "https://kubescape.io",
message: "some message",
},
want: "\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nScan results have not been submitted: some message\nFor more details: https://kubescape.io\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n",
},
{
name: "TestReportMock_strToDisplay_empty",
fields: fields{
query: "https://kubescape.io",
message: "",
},
want: "",
},
@@ -33,11 +130,222 @@ func TestReportMock_GetURL(t *testing.T) {
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
reportMock := &ReportMock{
query: tt.fields.query,
query: tt.fields.query,
message: tt.fields.message,
}
if got := reportMock.GetURL(); got != tt.want {
t.Errorf("ReportMock.GetURL() = %v, want %v", got, tt.want)
if got := reportMock.strToDisplay(); got != tt.want {
t.Errorf("ReportMock.strToDisplay() = %v, want %v", got, tt.want)
}
})
}
}
const pathTestReport = "/k8s/v2/postureReport"
type (
// mockableOPASessionObj reproduces OPASessionObj with concrete types instead of interfaces.
// It may be unmarshaled from a JSON fixture.
mockableOPASessionObj struct {
K8SResources *cautils.K8SResources
ArmoResource *cautils.KSResources
AllPolicies *cautils.Policies
AllResources map[string]*workloadinterface.Workload
ResourcesResult map[string]resourcesresults.Result
ResourceSource map[string]reporthandling.Source
ResourcesPrioritized map[string]prioritization.PrioritizedResource
ResourceAttackTracks map[string]*v1alpha1.AttackTrack
AttackTracks map[string]*v1alpha1.AttackTrack
Report *reporthandlingv2.PostureReport
RegoInputData cautils.RegoInputData
Metadata *reporthandlingv2.Metadata
InfoMap map[string]apis.StatusInfo
ResourceToControlsMap map[string][]string
SessionID string
Policies []reporthandling.Framework
Exceptions []armotypes.PostureExceptionPolicy
OmitRawResources bool
}
// testServer wraps a mock http server.
//
// It exposes a route to POST reports and asserts the submitted requests.
testServer struct {
*httptest.Server
}
// interceptor is a http.RoundTripper used to re-route the calls to the mock API server.
//
// NOTE(fredbi): ideally, the target URL is configurable so we don't need to resort to this to run tests.
interceptor struct {
original http.RoundTripper
host string
}
)
// mockOPASessionObj builds an OPASessionObj from a JSON fixture.
func mockOPASessionObj(t testing.TB) *cautils.OPASessionObj {
buf, err := os.ReadFile(filepath.Join(testutils.CurrentDir(), "testdata", "mock_opasessionobj.json"))
require.NoError(t, err)
var v mockableOPASessionObj
require.NoError(t,
json.Unmarshal(buf, &v),
)
o := cautils.OPASessionObj{
K8SResources: v.K8SResources,
ArmoResource: v.ArmoResource,
AllPolicies: v.AllPolicies,
//AllResources map[string]*workloadinterface.Workload // all scanned resources, map[<resource ID>]<resource>
ResourcesResult: v.ResourcesResult,
ResourceSource: v.ResourceSource,
ResourcesPrioritized: v.ResourcesPrioritized,
//ResourceAttackTracks map[string]*v1alpha1.AttackTrack // resources attack tracks, map[<resource ID>]<attack track>
//AttackTracks map[string]*v1alpha1.AttackTrack
Report: v.Report,
RegoInputData: v.RegoInputData,
Metadata: v.Metadata,
InfoMap: v.InfoMap,
ResourceToControlsMap: v.ResourceToControlsMap,
SessionID: v.SessionID,
Policies: v.Policies,
Exceptions: v.Exceptions,
OmitRawResources: v.OmitRawResources,
}
o.AllResources = make(map[string]workloadinterface.IMetadata, len(v.AllResources))
for k, val := range v.AllResources {
o.AllResources[k] = val
}
o.ResourceAttackTracks = make(map[string]v1alpha1.IAttackTrack, len(v.ResourceAttackTracks))
for k, val := range v.ResourceAttackTracks {
o.ResourceAttackTracks[k] = val
}
o.AttackTracks = make(map[string]v1alpha1.IAttackTrack, len(v.AttackTracks))
for k, val := range v.AttackTracks {
o.AttackTracks[k] = val
}
return &o
}
func (s *testServer) Root() string {
return s.Server.URL
}
func (s *testServer) URL(pth string) string {
pth = strings.TrimLeft(pth, "/")
return fmt.Sprintf("%s/%s", s.Server.URL, pth)
}
// mockAPIServer builds a mock API running with a TLS endpoint.
//
// Running tests with the DEBUG_TEST=1 environment will result in dumping a trace of
// the incoming requests.
func mockAPIServer(t testing.TB) *testServer {
h := http.NewServeMux()
server := &testServer{
Server: httptest.NewUnstartedServer(h),
}
h.HandleFunc(pathTestReport, func(w http.ResponseWriter, r *http.Request) {
if os.Getenv("DEBUG_TEST") != "" {
dump, _ := httputil.DumpRequest(r, true)
t.Logf("%s\n", dump)
}
if !assert.Equal(t, http.MethodPost, r.Method) {
w.WriteHeader(http.StatusMethodNotAllowed)
return
}
if !assert.NoErrorf(t, r.ParseForm(), "expected params to parse") {
w.WriteHeader(http.StatusBadRequest)
return
}
cluster := r.Form.Get("clusterName")
contextName := r.Form.Get("contextName")
customer := r.Form.Get("customerGUID")
report := r.Form.Get("reportGUID")
if cluster == "" || contextName == "" || customer == "" || report == "" {
t.Error("missing query parameter")
w.WriteHeader(http.StatusBadRequest)
return
}
// NOTE(fredbi): (i) requests should have header Content-Type: "application/json"
// NOTE(fredbi): (ii) shouldn't we require an extra authentication (e.g. secretKey or Token)?
buf, err := io.ReadAll(r.Body)
defer func() {
_ = r.Body.Close()
}()
if !assert.NoError(t, err) {
w.WriteHeader(http.StatusInternalServerError)
return
}
var input reporthandlingv2.PostureReport
if !assert.NoError(t, json.Unmarshal(buf, &input)) {
w.WriteHeader(http.StatusInternalServerError)
return
}
})
h.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
dump, _ := httputil.DumpRequest(r, true)
t.Logf("%s\n", dump)
t.Errorf("unexpected route in input request: %v", r.URL)
w.WriteHeader(http.StatusNotFound)
})
server.StartTLS()
return server
}
// newInterceptor builds a new http.RoundTripper to re-route outgoing requests.
func newInterceptor(transport http.RoundTripper, host string) *interceptor {
return &interceptor{
original: transport,
host: host,
}
}
func (i *interceptor) RoundTrip(r *http.Request) (*http.Response, error) {
defer r.Body.Close()
hijacked := r.Clone(r.Context())
hijacked.URL.Host = i.host
return i.original.RoundTrip(hijacked)
}
// hijackedClient builds an HTTP client suited for working against a mock server.
//
// This client supports mocked TLS and re-routes outgoing calls to the local mock server.
func hijackedClient(t testing.TB, srv *testServer) *http.Client {
tlsClient := srv.Client()
transport, ok := tlsClient.Transport.(*http.Transport)
require.True(t, ok)
mockURL, err := url.Parse(srv.Root())
require.NoError(t, err)
return &http.Client{
Transport: newInterceptor(transport, mockURL.Host),
}
}

View File

@@ -14,6 +14,7 @@ import (
"github.com/kubescape/k8s-interface/workloadinterface"
"github.com/kubescape/kubescape/v2/core/cautils"
"github.com/kubescape/kubescape/v2/core/cautils/getter"
"github.com/kubescape/kubescape/v2/core/pkg/resultshandling/reporter"
"github.com/kubescape/opa-utils/reporthandling"
"github.com/kubescape/opa-utils/reporthandling/results/v1/prioritization"
"github.com/kubescape/opa-utils/reporthandling/results/v1/resourcesresults"
@@ -31,6 +32,8 @@ const (
SubmitContextRepository SubmitContext = "repository"
)
var _ reporter.IReport = &ReportEventReceiver{}
type ReportEventReceiver struct {
httpClient *http.Client
clusterName string
@@ -59,24 +62,22 @@ func (report *ReportEventReceiver) Submit(ctx context.Context, opaSessionObj *ca
ctx, span := otel.Tracer("").Start(ctx, "reportEventReceiver.Submit")
defer span.End()
if report.customerGUID == "" {
logger.L().Ctx(ctx).Warning("failed to publish results. Reason: Unknown accout ID. Run kubescape with the '--account <account ID>' flag. Contact ARMO team for more details")
logger.L().Ctx(ctx).Error("failed to publish results. Reason: Unknown account ID. Run kubescape with the '--account <account ID>' flag. Contact ARMO team for more details")
return nil
}
if opaSessionObj.Metadata.ScanMetadata.ScanningTarget == reporthandlingv2.Cluster && report.clusterName == "" {
logger.L().Ctx(ctx).Warning("failed to publish results because the cluster name is Unknown. If you are scanning YAML files the results are not submitted to the Kubescape SaaS")
logger.L().Ctx(ctx).Error("failed to publish results because the cluster name is Unknown. If you are scanning YAML files the results are not submitted to the Kubescape SaaS")
return nil
}
err := report.prepareReport(opaSessionObj)
if err == nil {
report.generateMessage()
} else {
err = fmt.Errorf("failed to submit scan results. url: '%s', reason: %s", report.GetURL(), err.Error())
if err := report.prepareReport(opaSessionObj); err != nil {
return fmt.Errorf("failed to submit scan results. url: '%s', reason: %s", report.GetURL(), err.Error())
}
report.generateMessage()
logger.L().Debug("", helpers.String("account ID", report.customerGUID))
return err
return nil
}
func (report *ReportEventReceiver) SetCustomerGUID(customerGUID string) {
@@ -121,13 +122,6 @@ func (report *ReportEventReceiver) GetURL() string {
parseHost(&u)
report.addPathURL(&u)
q := u.Query()
q.Add("utm_source", "GitHub")
q.Add("utm_medium", "CLI")
q.Add("utm_campaign", "Submit")
u.RawQuery = q.Encode()
return u.String()
}
@@ -262,7 +256,9 @@ func (report *ReportEventReceiver) generateMessage() {
}
func (report *ReportEventReceiver) DisplayReportURL() {
if report.message != "" {
// print if logger level is lower than warning (debug/info)
if report.message != "" && helpers.ToLevel(logger.L().GetLevel()) < helpers.WarningLevel {
cautils.InfoTextDisplay(os.Stderr, fmt.Sprintf("\n\n%s\n\n", report.message))
}
}
@@ -286,6 +282,10 @@ func (report *ReportEventReceiver) addPathURL(urlObj *url.URL) {
q := urlObj.Query()
q.Add("invitationToken", report.token)
q.Add("customerGUID", report.customerGUID)
// Adding utm parameters
q.Add("utm_source", "ARMOgithub")
q.Add("utm_medium", "createaccount")
urlObj.RawQuery = q.Encode()
}


@@ -1,23 +1,36 @@
package reporter
import (
"context"
"math/rand"
"net/url"
"os"
"strconv"
"sync"
"testing"
logger "github.com/kubescape/go-logger"
"github.com/kubescape/go-logger/prettylogger"
"github.com/kubescape/kubescape/v2/core/cautils"
reporthandlingv2 "github.com/kubescape/opa-utils/reporthandling/v2"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// mxStdio serializes the capture of os.Stderr or os.Stdout
var mxStdio sync.Mutex
func TestReportEventReceiver_addPathURL(t *testing.T) {
t.Parallel()
tests := []struct {
name string
report *ReportEventReceiver
urlObj *url.URL
want *url.URL
name string
}{
{
name: "add scan path",
name: "URL for submitted data",
report: &ReportEventReceiver{
clusterName: "test",
customerGUID: "FFFF",
@@ -30,25 +43,173 @@ func TestReportEventReceiver_addPathURL(t *testing.T) {
Scheme: "https",
Host: "localhost:8080",
},
want: &url.URL{
Scheme: "https",
Host: "localhost:8080",
Path: "compliance/test",
RawQuery: "",
},
},
{
name: "URL for first scan",
report: &ReportEventReceiver{
clusterName: "test",
customerGUID: "FFFF",
token: "XXXX",
reportID: "1234",
submitContext: SubmitContextScan,
},
urlObj: &url.URL{
Scheme: "https",
Host: "localhost:8080",
},
want: &url.URL{
Scheme: "https",
Host: "localhost:8080",
Path: "account/sign-up",
RawQuery: "customerGUID=FFFF&invitationToken=XXXX&utm_medium=createaccount&utm_source=ARMOgithub",
},
},
{
name: "add rbac path",
report: &ReportEventReceiver{
clusterName: "test",
customerGUID: "FFFF",
token: "XXXX",
customerAdminEMail: "test@test",
reportID: "1234",
submitContext: SubmitContextRBAC,
},
urlObj: &url.URL{
Scheme: "https",
Host: "localhost:8080",
},
want: &url.URL{
Scheme: "https",
Host: "localhost:8080",
Path: "rbac-visualizer",
},
},
{
name: "add repository path",
report: &ReportEventReceiver{
clusterName: "test",
customerGUID: "FFFF",
token: "XXXX",
customerAdminEMail: "test@test",
reportID: "1234",
submitContext: SubmitContextRepository,
},
urlObj: &url.URL{
Scheme: "https",
Host: "localhost:8080",
},
want: &url.URL{
Scheme: "https",
Host: "localhost:8080",
Path: "repository-scanning/1234",
},
},
{
name: "add default path",
report: &ReportEventReceiver{
clusterName: "test",
customerGUID: "FFFF",
token: "XXXX",
customerAdminEMail: "test@test",
reportID: "1234",
submitContext: SubmitContext("invalid"),
},
urlObj: &url.URL{
Scheme: "https",
Host: "localhost:8080",
},
want: &url.URL{
Scheme: "https",
Host: "localhost:8080",
Path: "dashboard",
},
},
{
name: "path when no email and no token",
report: &ReportEventReceiver{
clusterName: "test",
customerGUID: "FFFF",
token: "",
customerAdminEMail: "",
reportID: "1234",
submitContext: SubmitContextScan,
},
urlObj: &url.URL{
Scheme: "https",
Host: "localhost:8080",
},
want: &url.URL{
Scheme: "https",
Host: "localhost:8080",
Path: "compliance/test",
},
},
{
name: "path when email and no token",
report: &ReportEventReceiver{
clusterName: "test",
customerGUID: "FFFF",
token: "",
customerAdminEMail: "test@test",
reportID: "1234",
submitContext: SubmitContextScan,
},
urlObj: &url.URL{
Scheme: "https",
Host: "localhost:8080",
},
want: &url.URL{
Scheme: "https",
Host: "localhost:8080",
Path: "compliance/test",
},
},
{
name: "path when no email and token",
report: &ReportEventReceiver{
clusterName: "test",
customerGUID: "FFFF",
token: "XYZ",
customerAdminEMail: "",
reportID: "1234",
submitContext: SubmitContextScan,
},
urlObj: &url.URL{
Scheme: "https",
Host: "localhost:8080",
},
want: &url.URL{
Scheme: "https",
Host: "localhost:8080",
Path: "account/sign-up",
RawQuery: "customerGUID=FFFF&invitationToken=XYZ&utm_medium=createaccount&utm_source=ARMOgithub",
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
tt.report.addPathURL(tt.urlObj)
assert.Equal(t, tt.want.String(), tt.urlObj.String())
for _, toPin := range tests {
tc := toPin
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
tc.report.addPathURL(tc.urlObj)
require.Equal(t, tc.want.String(), tc.urlObj.String())
})
}
}
func TestGetURL(t *testing.T) {
// Test submit and registered url
{
t.Parallel()
t.Run("with scan submit and registered url", func(t *testing.T) {
t.Parallel()
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
AccountID: "1234",
@@ -59,11 +220,12 @@ func TestGetURL(t *testing.T) {
"",
SubmitContextScan,
)
assert.Equal(t, "https://cloud.armosec.io/compliance/test?utm_campaign=Submit&utm_medium=CLI&utm_source=GitHub", reporter.GetURL())
}
assert.Equal(t, "https://cloud.armosec.io/compliance/test", reporter.GetURL())
})
t.Run("with rbac submit and registered url", func(t *testing.T) {
t.Parallel()
// Test rbac submit and registered url
{
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
AccountID: "1234",
@@ -74,11 +236,12 @@ func TestGetURL(t *testing.T) {
"",
SubmitContextRBAC,
)
assert.Equal(t, "https://cloud.armosec.io/rbac-visualizer?utm_campaign=Submit&utm_medium=CLI&utm_source=GitHub", reporter.GetURL())
}
assert.Equal(t, "https://cloud.armosec.io/rbac-visualizer", reporter.GetURL())
})
t.Run("with repository submit and registered url", func(t *testing.T) {
t.Parallel()
// Test repo submit and registered url
{
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
AccountID: "1234",
@@ -89,11 +252,11 @@ func TestGetURL(t *testing.T) {
"XXXX",
SubmitContextRepository,
)
assert.Equal(t, "https://cloud.armosec.io/repository-scanning/XXXX?utm_campaign=Submit&utm_medium=CLI&utm_source=GitHub", reporter.GetURL())
}
assert.Equal(t, "https://cloud.armosec.io/repository-scanning/XXXX", reporter.GetURL())
})
// Test submit and NOT registered url
{
t.Run("with scan submit and NOT registered url", func(t *testing.T) {
t.Parallel()
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
@@ -104,52 +267,287 @@ func TestGetURL(t *testing.T) {
"",
SubmitContextScan,
)
assert.Equal(t, "https://cloud.armosec.io/account/sign-up?customerGUID=1234&invitationToken=token&utm_campaign=Submit&utm_medium=CLI&utm_source=GitHub", reporter.GetURL())
}
assert.Equal(t, "https://cloud.armosec.io/account/sign-up?customerGUID=1234&invitationToken=token&utm_medium=createaccount&utm_source=ARMOgithub", reporter.GetURL())
})
t.Run("with unknown submit and NOT registered url (default route)", func(t *testing.T) {
t.Parallel()
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
AccountID: "1234",
ClusterName: "test",
},
"",
SubmitContext("unknown"),
)
assert.Equal(t, "https://cloud.armosec.io/dashboard", reporter.GetURL())
})
}
func Test_prepareReportKeepsOriginalScanningTarget(t *testing.T) {
func TestDisplayReportURL(t *testing.T) {
t.Parallel()
// prepareReport should keep the original scanning target it received, and not mutate it
testCases := []struct {
Name string
Want reporthandlingv2.ScanningTarget
}{
{"Cluster", reporthandlingv2.Cluster},
{"File", reporthandlingv2.File},
{"Repo", reporthandlingv2.Repo},
{"GitLocal", reporthandlingv2.GitLocal},
{"Directory", reporthandlingv2.Directory},
t.Run("should display an empty message", func(t *testing.T) {
t.Parallel()
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
AccountID: "1234",
Token: "token",
ClusterName: "test",
},
"",
SubmitContextScan,
)
capture, clean := captureStderr(t)
defer clean()
reporter.DisplayReportURL()
require.NoError(t, capture.Close())
buf, err := os.ReadFile(capture.Name())
require.NoError(t, err)
require.Empty(t, buf)
})
t.Run("should display a non-empty message", func(t *testing.T) {
t.Parallel()
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
AccountID: "1234",
Token: "token",
ClusterName: "test",
},
"",
SubmitContextScan,
)
reporter.generateMessage()
capture, clean := captureStderr(t)
defer clean()
reporter.DisplayReportURL()
require.NoError(t, capture.Close())
buf, err := os.ReadFile(capture.Name())
require.NoError(t, err)
require.NotEmpty(t, buf)
assert.Contains(t, string(buf), "WOW!")
assert.Contains(t, string(buf), "https://cloud.armosec.io/account/sign-up")
t.Log(string(buf))
})
}
func TestPrepareReport(t *testing.T) {
t.Parallel()
t.Run("should keep the original scanning target it received and not mutate it", func(t *testing.T) {
testCases := []struct {
Name string
Want reporthandlingv2.ScanningTarget
}{
{"Cluster", reporthandlingv2.Cluster},
{"File", reporthandlingv2.File},
{"Repo", reporthandlingv2.Repo},
{"GitLocal", reporthandlingv2.GitLocal},
{"Directory", reporthandlingv2.Directory},
}
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
AccountID: "1e3ae7c4-a8bb-4d7c-9bdf-eb86bc25e6bb",
Token: "token",
ClusterName: "test",
},
"",
SubmitContextScan,
)
for _, tc := range testCases {
t.Run(tc.Name, func(t *testing.T) {
want := tc.Want
opaSessionObj := &cautils.OPASessionObj{
Report: &reporthandlingv2.PostureReport{},
Metadata: &reporthandlingv2.Metadata{
ScanMetadata: reporthandlingv2.ScanMetadata{ScanningTarget: want},
},
}
reporter.prepareReport(opaSessionObj)
got := opaSessionObj.Metadata.ScanMetadata.ScanningTarget
require.Equalf(t, want, got,
"Scanning targets don't match after preparing report. Got: %v, want %v", got, want,
)
})
}
})
}
func TestSubmit(t *testing.T) {
ctx := context.Background()
srv := mockAPIServer(t)
t.Cleanup(srv.Close)
t.Run("should submit simple report", func(t *testing.T) {
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
AccountID: "1e3ae7c4-a8bb-4d7c-9bdf-eb86bc25e6bb",
Token: "",
ClusterName: "test",
},
"cbabd56f-bac6-416a-836b-b815ef347647",
SubmitContextScan,
)
opaSession := mockOPASessionObj(t)
reporter.httpClient = hijackedClient(t, srv) // re-route the http client to our mock server, as this is not easily configurable in the reporter.
require.NoError(t,
reporter.Submit(ctx, opaSession),
)
})
t.Run("should warn when no customerGUID", func(t *testing.T) {
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
Token: "",
ClusterName: "test",
},
"cbabd56f-bac6-416a-836b-b815ef347647",
SubmitContextScan,
)
opaSession := mockOPASessionObj(t)
reporter.httpClient = hijackedClient(t, srv)
capture, clean := captureStderr(t)
if pretty, ok := logger.L().(*prettylogger.PrettyLogger); ok {
pretty.SetWriter(capture)
}
defer func() {
clean()
if pretty, ok := logger.L().(*prettylogger.PrettyLogger); ok {
pretty.SetWriter(os.Stderr)
}
}()
require.NoError(t,
reporter.Submit(ctx, opaSession),
)
require.NoError(t, capture.Close())
buf, err := os.ReadFile(capture.Name())
require.NoError(t, err)
assert.Contains(t, string(buf), "failed to publish result")
assert.Contains(t, string(buf), "Unknown acc")
})
t.Run("should warn when no cluster name", func(t *testing.T) {
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
AccountID: "1e3ae7c4-a8bb-4d7c-9bdf-eb86bc25e6bb",
Token: "",
},
"cbabd56f-bac6-416a-836b-b815ef347647",
SubmitContextScan,
)
opaSession := mockOPASessionObj(t)
opaSession.Metadata.ScanMetadata.ScanningTarget = reporthandlingv2.Cluster
reporter.httpClient = hijackedClient(t, srv)
capture, clean := captureStderr(t)
if pretty, ok := logger.L().(*prettylogger.PrettyLogger); ok {
pretty.SetWriter(capture)
}
defer func() {
clean()
if pretty, ok := logger.L().(*prettylogger.PrettyLogger); ok {
pretty.SetWriter(os.Stderr)
}
}()
require.NoError(t,
reporter.Submit(ctx, opaSession),
)
require.NoError(t, capture.Close())
buf, err := os.ReadFile(capture.Name())
require.NoError(t, err)
assert.Contains(t, string(buf), "failed to publish result")
assert.Contains(t, string(buf), "cluster name")
})
}
func TestSetters(t *testing.T) {
t.Parallel()
pickString := func() string {
return strconv.Itoa(rand.Intn(10000)) //nolint:gosec
}
reporter := NewReportEventReceiver(
&cautils.ConfigObj{
AccountID: "1e3ae7c4-a8bb-4d7c-9bdf-eb86bc25e6bb",
Token: "token",
ClusterName: "test",
AccountID: "1e3ae7c4-a8bb-4d7c-9bdf-eb86bc25e6bb",
Token: "",
},
"",
"cbabd56f-bac6-416a-836b-b815ef347647",
SubmitContextScan,
)
for _, tc := range testCases {
t.Run(tc.Name, func(t *testing.T) {
want := tc.Want
t.Run("should set customerID", func(t *testing.T) {
guid := pickString()
reporter.SetCustomerGUID(guid)
opaSessionObj := &cautils.OPASessionObj{
Report: &reporthandlingv2.PostureReport{},
Metadata: &reporthandlingv2.Metadata{
ScanMetadata: reporthandlingv2.ScanMetadata{ScanningTarget: want},
},
}
require.Equal(t, guid, reporter.customerGUID)
})
reporter.prepareReport(opaSessionObj)
t.Run("should set cluster name", func(t *testing.T) {
cluster := pickString()
reporter.SetClusterName(cluster)
got := opaSessionObj.Metadata.ScanMetadata.ScanningTarget
if got != want {
t.Errorf("Scanning targets don't match after preparing report. Got: %v, want %v", got, want)
}
},
)
require.Equal(t, cluster, reporter.clusterName)
})
t.Run("should normalize cluster name", func(t *testing.T) {
const cluster = " x y\t\tz"
reporter.SetClusterName(cluster)
require.Equal(t, "-x-y-z", reporter.clusterName)
})
}
func captureStderr(t testing.TB) (*os.File, func()) {
mxStdio.Lock()
saved := os.Stderr
capture, err := os.CreateTemp("", "stderr")
if !assert.NoError(t, err) {
mxStdio.Unlock()
t.FailNow()
return nil, nil
}
os.Stderr = capture
return capture, func() {
_ = capture.Close()
_ = os.Remove(capture.Name())
os.Stderr = saved
mxStdio.Unlock()
}
}

File diff suppressed because one or more lines are too long


@@ -5,26 +5,6 @@ import (
"strings"
)
/* unused for now
func maskID(id string) string {
sep := "-"
splitted := strings.Split(id, sep)
if len(splitted) != 5 {
return ""
}
str := splitted[0][:4]
splitted[0] = splitted[0][4:]
for i := range splitted {
for j := 0; j < len(splitted[i]); j++ {
str += "X"
}
str += sep
}
return strings.TrimSuffix(str, sep)
}
*/
func parseHost(urlObj *url.URL) {
if strings.Contains(urlObj.Host, "http://") {
urlObj.Scheme = "http"


@@ -17,9 +17,9 @@ import (
type ResultsHandler struct {
reporterObj reporter.IReport
printerObjs []printer.IPrinter
uiPrinter printer.IPrinter
scanData *cautils.OPASessionObj
printerObjs []printer.IPrinter
}
func NewResultsHandler(reporterObj reporter.IReport, printerObjs []printer.IPrinter, uiPrinter printer.IPrinter) *ResultsHandler {
@@ -116,7 +116,7 @@ func NewPrinter(ctx context.Context, printFormat, formatVersion string, verboseM
return printerv2.NewSARIFPrinter()
default:
if printFormat != printer.PrettyFormat {
logger.L().Ctx(ctx).Error(fmt.Sprintf("Invalid format \"%s\", default format \"pretty-printer\" is applied", printFormat))
logger.L().Ctx(ctx).Warning(fmt.Sprintf("Invalid format \"%s\", default format \"pretty-printer\" is applied", printFormat))
}
return printerv2.NewPrettyPrinter(verboseMode, formatVersion, attackTree, viewType)
}


@@ -1,6 +1,6 @@
<img src="armo-powered-by-kubescape-logo-grey.svg" width="25%" height="25%" align="right">
[ARMO Platform](https://cloud.armosec.io/account/sign-up) is an enterprise solution based on Kubescape. It's a multi-cloud Kubernetes and CI/CD security platform with a single pane of glass, including risk analysis, security compliance, misconfiguration and image vulnerability scanning, repository and registry scanning, RBAC visualization, and more.
[ARMO Platform](https://cloud.armosec.io/account/sign-up?utm_source=ARMOgithub&utm_medium=ARMOcli) is an enterprise solution based on Kubescape. It's a multi-cloud Kubernetes and CI/CD security platform with a single pane of glass, including risk analysis, security compliance, misconfiguration and image vulnerability scanning, repository and registry scanning, RBAC visualization, and more.
## Connect Kubescape to ARMO Platform
Step #1: Install Kubescape in your CLI

Some files were not shown because too many files have changed in this diff.