Compare commits

61 Commits

Author SHA1 Message Date
lirazyehezkel
4bc83ebcb5 Fix WS error when switching from settings to traffic viewer (#985) 2022-04-11 14:21:31 +03:00
M. Mert Yıldıran
bbb44dae79 Fix the unit tests of protocol extensions (#982) 2022-04-09 06:56:09 -07:00
M. Mert Yıldıran
72a1aba3e5 TRA-4410 Display namespace field in the UI (#974) 2022-04-08 21:16:25 +03:00
RoyUP9
d8fb8ff710 Fix for OAS reset not working (#978) 2022-04-07 18:14:03 +03:00
Nimrod Gilboa Markevich
f344bd2633 Make minor changes to OasGenerator (#977)
* Added log message
* Remove Reset function from OasGenerator interface, use Stop+Start instead
* SetEntriesQuery returns a bool stating whether the query changed
2022-04-07 10:46:30 +03:00
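A minimal Go sketch of the OasGenerator shape described in #977 above (Reset dropped from the interface in favor of Stop+Start, and SetEntriesQuery reporting whether the query changed). The struct fields, locking, and the zero-argument Start are illustrative assumptions only; the agent diff further below shows the real Start taking an argument (`oasGenerator.Start(nil)`).

```go
package oas

import "sync"

// OasGenerator as described in #977: no Reset method; callers that need a
// reset call Stop followed by Start. (The real Start takes an argument; it
// is simplified here.)
type OasGenerator interface {
	Start()
	Stop()
}

// defaultOasGenerator is an illustrative sketch, not the actual implementation.
type defaultOasGenerator struct {
	mu           sync.Mutex
	started      bool
	entriesQuery string
}

func (g *defaultOasGenerator) Start() { g.mu.Lock(); g.started = true; g.mu.Unlock() }
func (g *defaultOasGenerator) Stop()  { g.mu.Lock(); g.started = false; g.mu.Unlock() }

// SetEntriesQuery stores the query and reports whether it actually changed,
// matching the commit note "SetEntriesQuery returns a bool stating whether
// the query changed".
func (g *defaultOasGenerator) SetEntriesQuery(query string) bool {
	g.mu.Lock()
	defer g.mu.Unlock()
	changed := g.entriesQuery != query
	g.entriesQuery = query
	return changed
}
```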
M. Mert Yıldıran
6575495fa5 Remove gRPC related modifications (#958)
* Remove gRPC related modifications

* Remove gRPC status text related modifications as well

* Fixing gRPC vertical image

detect grpc when content type is 'application/grpc' as well  (and not only from the grpc-status)

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2022-04-06 18:50:36 +03:00
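The gRPC detection rule mentioned in #958 above (classify as gRPC when the content type is 'application/grpc', not only when a grpc-status trailer is present) could look roughly like the sketch below. The package name, function name, and string-based parameters are assumptions for illustration, not the actual dissector code.

```go
package httpext // hypothetical package name, for illustration only

import "strings"

// IsGRPC reports whether an HTTP/2 exchange should be classified as gRPC.
// Per #958, a content type of "application/grpc" (including variants such as
// "application/grpc+proto") is sufficient on its own; the grpc-status trailer
// remains a secondary signal.
func IsGRPC(requestContentType, responseGrpcStatus string) bool {
	if strings.HasPrefix(strings.ToLower(requestContentType), "application/grpc") {
		return true
	}
	return responseGrpcStatus != ""
}
```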
RoyUP9
cf5c03d45c Fixed service map returning nil values (#975) 2022-04-06 13:12:38 +03:00
Nimrod Gilboa Markevich
491da24c63 Add ability to set query in OAS Generator (#964) 2022-04-06 11:54:55 +03:00
leon-up9
832162ae0f Ui/Service-map-split-to-ui-common (#966)
* added serviceModal & selectList & noDataMessage
removed leftovers from split

* scroll fix

* sort by name

* search alightment

* space removed

* margin-bottom

* utils class

Co-authored-by: Leon <>
Co-authored-by: Igor Gov <iggvrv@gmail.com>
2022-04-05 13:23:46 +03:00
lirazyehezkel
866378b451 Mizu cant show more than 10000 entries (#973) 2022-04-05 12:25:01 +03:00
lirazyehezkel
0b0b9ce6d1 Performance fixes (#972) 2022-04-04 20:03:57 +03:00
RoyUP9
d99c632102 Fixed golint strings.Title is deprecated error (#971) 2022-04-04 18:06:22 +03:00
RamiBerm
76a6a77a14 Refactor ws (#961)
* Separate socket and basenine logic

* WIP

* Update socket_server_handlers.go

* Update socket_data_streamer.go and socket_server_handlers.go

* Update socket_server_handlers.go

* Merge branch 'develop' into refactor_ws
# Please enter a commit message to explain why this merge is necessary,
# especially if it merges an updated upstream into a topic branch.
#
# Lines starting with '#' will be ignored, and an empty message aborts
# the commit.

* empty commit for actions

* empty commit for actions

* commit for actions

* Revert "commit for actions"

This reverts commit 8ba2ecf7d3.

Co-authored-by: RoyUP9 <87927115+RoyUP9@users.noreply.github.com>
2022-04-04 17:33:53 +03:00
M. Mert Yıldıran
2bfc523bbc Handle reflect.TypeOf returning nil case (#970) 2022-04-04 16:25:18 +03:00
RoyUP9
66ba778384 Fixed golint modified files (#969) 2022-04-04 15:32:22 +03:00
leon-up9
7adbf7bf1b Ui/TRA-4461_service-map-&-OAS---GUI-changes (#962)
* OpenAPI renamed to Service Catalog

* Docs icon change
Hide Navbar on serviceMap modal open

* PR comments

Co-authored-by: Leon <>
Co-authored-by: RoyUP9 <87927115+RoyUP9@users.noreply.github.com>
2022-04-04 14:49:41 +03:00
Igor Gov
a97b5b3b38 Add conditional Go lint validation to CI (#967) 2022-04-04 14:35:47 +03:00
Nimrod Gilboa Markevich
aa8dcc5f5c Format commit message as code to handle multi line messages (#963)
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2022-04-03 22:10:43 +03:00
lirazyehezkel
9d08dbdd5d UI performance fix 2022-04-03 21:35:09 +03:00
lirazyehezkel
b47718e094 TRA-4442 Improve UI performance (#960)
* Move ws entry listener to entriesList component

* unused code
2022-04-03 15:51:20 +03:00
Igor Gov
6a7fad430c Adding resolved prop to service map node (#959)
* Adding resolved prop to service map node

* fixing tests
2022-04-03 15:32:21 +03:00
lirazyehezkel
59ad8d8fad TLS icon position (#956)
* TLS icon position

* cr fix
2022-03-31 11:26:58 +03:00
Nimrod Gilboa Markevich
a49443f101 Set the entry namespace to the source namespace if the destination is not resolved (#950)
* Set the entry namespace to the source namespace if the destination is not resolved
* Overwrite src namespace with dst namespace only if dst non-empty
2022-03-30 15:40:21 +03:00
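The rule in #950 above (default the entry namespace to the source's namespace, and overwrite it with the destination's namespace only when that is non-empty) appears in the agent's resolveIP diff further below. As a standalone sketch with hypothetical placement and names, it amounts to:

```go
package resolver // hypothetical placement, for illustration only

// EntryNamespace applies the rule from #950: start with the source's
// namespace, then overwrite it with the destination's namespace only when
// the destination resolved to a non-empty namespace.
func EntryNamespace(sourceNamespace, destinationNamespace string) string {
	namespace := sourceNamespace
	if destinationNamespace != "" {
		namespace = destinationNamespace
	}
	return namespace
}
```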
lirazyehezkel
2427955aa4 Avoid overlap only for service map including under 10 services 2022-03-30 15:30:09 +03:00
David Levanon
27a73e21fb Read from service mesh network namespaces upon update (#944) 2022-03-30 13:56:37 +03:00
Igor Gov
8eeb0e54c9 Changing unit tests workflow timeout to 30 minutes 2022-03-30 11:52:47 +03:00
Andrey Pokhilko
97db24aeba OAS: rework data feeding + sampleIDs (#917)
* Call OAS feeder

* Don't call old OAS code

* Rework calls

* Work on it

* Put back rules

* Make it compile

* start thinking of test

* Compiles

* Save

* Fixes

* Save

* Fixing

* Trying to fake conn

* add timeout

* Test timeout

* Fix tests

* Only build OAS for HTTP entries

* Remove some dead code

* Adding SampleIDs

* Cosmetics

* lint

* Revert rename

* Sample ID for content

* Cleanuo

* Add more sample IDs

* Checking hypothesis

* Move assignment place a bit

* Cosmetics

* Update test.yml

Co-authored-by: undera <undera@undera-old-desktop.home>
Co-authored-by: Igor Gov <iggvrv@gmail.com>
2022-03-30 11:14:25 +03:00
RamiBerm
63cf7ac34e Refactor entries controller logic (#949)
* wip

* Update entries_controller.go and entries_provider.go

* Update entries_controller.go

* change entries provider into a struct + interface

* Update entries_provider.go

* Update entries_provider.go
2022-03-29 18:30:19 +03:00
Igor Gov
e867b7d0f1 Build ui-common part of CI (#914)
* Build ui-common always locally
2022-03-29 14:14:52 +03:00
lirazyehezkel
dcd8a64f43 Hotfix/remove token from community (#948) 2022-03-29 13:16:50 +03:00
lirazyehezkel
bf8d5ed069 Support multiple workspaces (TRA-4365) (#945)
* support multiple workspaces

* reopen by websocket url dep

* open websocket only when websocketURL is changed

* upgrade common version

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2022-03-29 09:56:59 +03:00
Nimrod Gilboa Markevich
1f6e539590 Add commit message and committer to acceptance tests slack alert (#946)
* Add commit message and committer username to slack alerts
* Use name instead of username
* Use name and email
2022-03-29 09:15:42 +03:00
David Levanon
590fa08c81 EBPF error handling 2022-03-28 14:19:06 +03:00
Igor Gov
0a9be1884a Traffic viewer list item ebpf to lock icon (#939)
* Traffic viewer changing ebpf to lock icon
2022-03-28 11:10:54 +03:00
Igor Gov
40c745068e Traffic viewer changing ebpf to lock icon (#938) 2022-03-28 10:45:52 +03:00
AmitUp9
10dffd9331 TRA-4419 remove headers from OAS select (#936) 2022-03-28 10:01:57 +03:00
leon-up9
0a800e8d8a passed containerId in the object config (#935)
* passed containerId in the object config

* pkg version

* tmp: run acceptance tests, revert before merge

* reverte

Co-authored-by: Leon <>
2022-03-27 19:41:51 +03:00
RamiBerm
068a4ff86e TRA-4422 line numbers fix (#934)
* fix line numbering in body syntax highlighter

* Packages
2022-03-27 16:17:34 +03:00
leon-up9
c45a869b75 Fix/regration-fixes-24.3 (#933)
* link open in a new tab

* fixed double toast

* added const

* javiers new icon

* epbg img fix

* pkg update

* changed import

* formated

Co-authored-by: Leon <>
2022-03-27 15:47:46 +03:00
leon-up9
0a793cd9e0 added props (#924) 2022-03-27 11:31:40 +03:00
Adrian Gąsior
8d19080c11 Updated Dockerfile and fixed typo in Makefile (#929)
* Updated Dockerfile and fixed typo in Makefile

* PR request

* added small hotfix to basenine
2022-03-27 01:39:50 +03:00
Nimrod Gilboa Markevich
319c3c7a8d Initialize tapper before ws (#932) 2022-03-26 21:25:18 +03:00
M. Mert Yıldıran
0e7704eb15 Bump the version to 0.6.6 and fix the index shift bug that occurs after database truncation (#931) 2022-03-26 18:25:38 +03:00
gadotroee
dbcb776139 no message (#930) 2022-03-26 13:19:04 +03:00
Igor Gov
a3de34f544 Turn OAS feature ON by default (#927)
* Turn OAS feature ON by default
2022-03-24 20:40:16 +02:00
Nimrod Gilboa Markevich
99667984d6 Update tap targets without reinitializing packet source manager (#925) 2022-03-24 15:39:20 +02:00
David Levanon
763b0e7362 quick tls update pods solution (#918)
Update TLS tappers when tapped pods are updated via WS.
2022-03-24 15:21:56 +02:00
AmitUp9
e07e04377f mizu and ui-common version update (#923) 2022-03-24 12:31:08 +02:00
AmitUp9
3c8f63ed92 Feature/UI/tra 4390 status bar banner demo env (#921)
* no message

* banner styling condition added and button text transform disabled

* added prps to container

* build fix
2022-03-24 12:14:31 +02:00
leon-up9
11a2246cb9 Fix/entry-title-responsive-comments (#922)
* Make `EntryTitle` component responsive

* useRequetTextByWidth added

* renamed

Co-authored-by: M. Mert Yildiran <mehmet@up9.com>
Co-authored-by: Leon <>
2022-03-24 12:09:28 +02:00
leon-up9
a2595afd5e changed icon (#920)
Co-authored-by: Leon <>
2022-03-24 10:43:35 +02:00
AmitUp9
0f4710918f TRA-4122 oas gui grooming (#915)
* oas styling
* oas button text change
* ui-common version update
* font fix
* version updating
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2022-03-23 22:15:04 +02:00
RoyUP9
4bdda920d5 Fix for check command (#916)
* Fix for check command

* empty commit for checks

Co-authored-by: Roee Gadot <roee.gadot@up9.com>
2022-03-23 22:02:29 +02:00
Adrian Gąsior
59e6268ddd Refactored Dockerfile (#902)
* Refactored Dockerfile

* PR request
2022-03-23 16:16:37 +02:00
leon-up9
2513e9099f UI/tra 4406 add an information icon (#913)
* information icon added

* new icon and position

* icon changed

* new Ver

* space removed

Co-authored-by: Adam Kol <adam@up9.com>
Co-authored-by: Leon <>
2022-03-23 14:54:51 +02:00
Nimrod Gilboa Markevich
a5c35d7d90 Update tap targets over ws (#901)
Update tappers via websocket instead of by env var. This way the DaemonSet doesn't have to be applied just to notify the tappers that the tap targets changed. The number of tapper restarts is reduced. The DaemonSet still gets applied when there is a need to add/remove a tapper from a node.
2022-03-23 13:50:33 +02:00
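A rough sketch of the mechanism #901 describes, matching the handler added in the agent diff further below: the tapper receives a WebSocket message carrying a node-to-tapped-pods map and picks out the targets for its own node. The message and pod shapes here are simplified stand-ins (the real code uses shared.WebSocketTappedPodsMessage and v1.Pod, and reads the node name from an env var), so treat this as an assumption-laden illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Pod is a simplified stand-in for the Kubernetes v1.Pod used by the agent.
type Pod struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace"`
}

// WebSocketTappedPodsMessage mirrors the NodeToTappedPodMap field seen in the
// agent diff below; other fields are omitted.
type WebSocketTappedPodsMessage struct {
	NodeToTappedPodMap map[string][]Pod `json:"nodeToTappedPodMap"`
}

// podsForNode unmarshals the message and returns the tap targets for this
// tapper's node; the real agent then passes them to tap.UpdateTapTargets.
func podsForNode(raw []byte, nodeName string) ([]Pod, error) {
	var msg WebSocketTappedPodsMessage
	if err := json.Unmarshal(raw, &msg); err != nil {
		return nil, fmt.Errorf("could not unmarshal tapped pods message: %w", err)
	}
	return msg.NodeToTappedPodMap[nodeName], nil
}

func main() {
	raw := []byte(`{"nodeToTappedPodMap":{"node-1":[{"name":"httpbin","namespace":"default"}]}}`)
	pods, err := podsForNode(raw, "node-1") // the real tapper reads its node name from an env var
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("tap targets for node-1: %+v\n", pods)
}
```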
Igor Gov
41a7587088 Remove redundant Google auth from test workflow (#911)
* Remove google auth for test workflow
2022-03-23 11:51:42 +02:00
David Levanon
12f46da5c6 Support TLS big buffers (#893) 2022-03-23 11:27:06 +02:00
Adam Kol
17f7879cff Fixing Tabs for Ent (#909)
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2022-03-22 16:04:04 +02:00
lirazyehezkel
bc7776cbd3 Fix ws errors and warnings (#908)
* Fix ws errors and warnings

* versioning
2022-03-22 15:43:05 +02:00
gadotroee
2a31739100 Fix acceptance test (body size and right side pane changes) (#907) 2022-03-22 09:56:34 +02:00
154 changed files with 7058 additions and 3029 deletions

View File

@@ -43,7 +43,7 @@ jobs:
with:
status: ${{ job.status }}
notification_title: 'Mizu {workflow} has {status_message}'
message_format: '{emoji} *{workflow}* {status_message} during <{run_url}|run>, after commit: <{commit_url}|{commit_sha}>'
message_format: '{emoji} *{workflow}* {status_message} during <{run_url}|run>, after commit <{commit_url}|{commit_sha}> by ${{ github.event.head_commit.committer.name }} <${{ github.event.head_commit.committer.email }}> ```${{ github.event.head_commit.message }}```'
footer: 'Linked Repo <{repo_url}|{repo}>'
notify_when: 'failure'
env:

View File

@@ -45,7 +45,7 @@ jobs:
- name: Check modified files
id: modified_files
run: devops/check_modified_files.sh agent/ shared/ tap/ ui/ Dockerfile
run: devops/check_modified_files.sh agent/ shared/ tap/ ui/ ui-common/ Dockerfile
- name: Set up Docker Buildx
if: steps.modified_files.outputs.matched == 'true'

View File

@@ -15,6 +15,9 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 2
- uses: actions/setup-go@v2
with:
go-version: '^1.17'
@@ -24,67 +27,117 @@ jobs:
sudo apt update
sudo apt install -y libpcap-dev
- name: Check Agent modified files
id: agent_modified_files
run: devops/check_modified_files.sh agent/
- name: Go lint - agent
uses: golangci/golangci-lint-action@v2
if: steps.agent_modified_files.outputs.matched == 'true'
with:
version: latest
working-directory: agent
args: --timeout=3m
- name: Check shared modified files
id: shared_modified_files
run: devops/check_modified_files.sh shared/
- name: Go lint - shared
uses: golangci/golangci-lint-action@v2
if: steps.shared_modified_files.outputs.matched == 'true'
with:
version: latest
working-directory: shared
args: --timeout=3m
- name: Check tap modified files
id: tap_modified_files
run: devops/check_modified_files.sh tap/
- name: Go lint - tap
uses: golangci/golangci-lint-action@v2
if: steps.tap_modified_files.outputs.matched == 'true'
with:
version: latest
working-directory: tap
args: --timeout=3m
- name: Check cli modified files
id: cli_modified_files
run: devops/check_modified_files.sh cli/
- name: Go lint - CLI
uses: golangci/golangci-lint-action@v2
if: steps.cli_modified_files.outputs.matched == 'true'
with:
version: latest
working-directory: cli
args: --timeout=3m
- name: Check acceptanceTests modified files
id: acceptanceTests_modified_files
run: devops/check_modified_files.sh acceptanceTests/
- name: Go lint - acceptanceTests
uses: golangci/golangci-lint-action@v2
if: steps.acceptanceTests_modified_files.outputs.matched == 'true'
with:
version: latest
working-directory: acceptanceTests
args: --timeout=3m
- name: Check tap/api modified files
id: tap_api_modified_files
run: devops/check_modified_files.sh tap/api/
- name: Go lint - tap/api
uses: golangci/golangci-lint-action@v2
if: steps.tap_api_modified_files.outputs.matched == 'true'
with:
version: latest
working-directory: tap/api
- name: Check tap/extensions/amqp modified files
id: tap_amqp_modified_files
run: devops/check_modified_files.sh tap/extensions/amqp/
- name: Go lint - tap/extensions/amqp
uses: golangci/golangci-lint-action@v2
if: steps.tap_amqp_modified_files.outputs.matched == 'true'
with:
version: latest
working-directory: tap/extensions/amqp
- name: Check tap/extensions/http modified files
id: tap_http_modified_files
run: devops/check_modified_files.sh tap/extensions/http/
- name: Go lint - tap/extensions/http
uses: golangci/golangci-lint-action@v2
if: steps.tap_http_modified_files.outputs.matched == 'true'
with:
version: latest
working-directory: tap/extensions/http
- name: Check tap/extensions/kafka modified files
id: tap_kafka_modified_files
run: devops/check_modified_files.sh tap/extensions/kafka/
- name: Go lint - tap/extensions/kafka
uses: golangci/golangci-lint-action@v2
if: steps.tap_kafka_modified_files.outputs.matched == 'true'
with:
version: latest
working-directory: tap/extensions/kafka
- name: Check tap/extensions/redis modified files
id: tap_redis_modified_files
run: devops/check_modified_files.sh tap/extensions/redis/
- name: Go lint - tap/extensions/redis
uses: golangci/golangci-lint-action@v2
if: steps.tap_redis_modified_files.outputs.matched == 'true'
with:
version: latest
working-directory: tap/extensions/redis

View File

@@ -18,6 +18,7 @@ jobs:
run-unit-tests:
name: Unit Tests
runs-on: ubuntu-latest
timeout-minutes: 20
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v2
@@ -34,14 +35,6 @@ jobs:
run: |
sudo apt-get install libpcap-dev
- id: 'auth'
uses: 'google-github-actions/auth@v0'
with:
credentials_json: '${{ secrets.GCR_JSON_KEY }}'
- name: 'Set up Cloud SDK'
uses: 'google-github-actions/setup-gcloud@v0'
- name: Check CLI modified files
id: cli_modified_files
run: devops/check_modified_files.sh cli/

View File

@@ -1,13 +1,23 @@
ARG BUILDARCH=amd64
ARG TARGETARCH=amd64
### Front-end common
FROM node:16 AS front-end-common
WORKDIR /app/ui-build
COPY ui-common/package.json .
COPY ui-common/package-lock.json .
RUN npm i
COPY ui-common .
RUN npm pack
### Front-end
FROM node:16 AS front-end
WORKDIR /app/ui-build
COPY ui/package.json .
COPY ui/package-lock.json .
COPY ui/package.json ui/package-lock.json ./
COPY --from=front-end-common ["/app/ui-build/up9-mizu-common-0.0.0.tgz", "."]
RUN npm i
COPY ui .
RUN npm run build
@@ -15,7 +25,7 @@ RUN npm run build
### Base builder image for native builds architecture
FROM golang:1.17-alpine AS builder-native-base
ENV CGO_ENABLED=1 GOOS=linux
RUN apk add libpcap-dev g++ perl-utils
RUN apk add --no-cache libpcap-dev g++ perl-utils
### Intermediate builder image for x86-64 to x86-64 native builds
@@ -77,17 +87,16 @@ RUN go build -ldflags="-extldflags=-static -s -w \
-X 'github.com/up9inc/mizu/agent/pkg/version.Ver=${VER}'" -o mizuagent .
# Download Basenine executable, verify the sha1sum
ADD https://github.com/up9inc/basenine/releases/download/v0.6.5/basenine_linux_${GOARCH} ./basenine_linux_${GOARCH}
ADD https://github.com/up9inc/basenine/releases/download/v0.6.5/basenine_linux_${GOARCH}.sha256 ./basenine_linux_${GOARCH}.sha256
RUN shasum -a 256 -c basenine_linux_${GOARCH}.sha256
RUN chmod +x ./basenine_linux_${GOARCH}
RUN mv ./basenine_linux_${GOARCH} ./basenine
ADD https://github.com/up9inc/basenine/releases/download/v0.6.6/basenine_linux_${GOARCH} ./basenine_linux_${GOARCH}
ADD https://github.com/up9inc/basenine/releases/download/v0.6.6/basenine_linux_${GOARCH}.sha256 ./basenine_linux_${GOARCH}.sha256
RUN shasum -a 256 -c basenine_linux_"${GOARCH}".sha256 && \
chmod +x ./basenine_linux_"${GOARCH}" && \
mv ./basenine_linux_"${GOARCH}" ./basenine
### The shipped image
ARG TARGETARCH=amd64
FROM ${TARGETARCH}/busybox:latest
# gin-gonic runs in debug mode without this
ENV GIN_MODE=release

View File

@@ -73,7 +73,7 @@ clean-agent: ## Clean agent.
clean-cli: ## Clean CLI.
@(cd cli; make clean ; echo "CLI cleanup done" )
clean-docker: ## Run clen docker
clean-docker: ## Run clean docker
@(echo "DOCKER cleanup - NOT IMPLEMENTED YET " )
test-lint: ## Run lint on all modules

View File

@@ -142,7 +142,9 @@ function deepCheck(generalDict, protocolDict, methodDict, entry) {
if (value) {
if (value.tab === valueTabs.response)
cy.contains('Response').click();
// temporary fix, change to some "data-cy" attribute,
// this will fix the issue that happen because we have "response:" in the header of the right side
cy.get('#rightSideContainer > :nth-child(3)').contains('Response').click();
cy.get(Cypress.env('bodyJsonClass')).then(text => {
expect(text.text()).to.match(value.regex)
});

View File

@@ -40,32 +40,23 @@ it('filtering guide check', function () {
});
it('right side sanity test', function () {
cy.get('#entryDetailedTitleBodySize').then(sizeTopLine => {
const sizeOnTopLine = sizeTopLine.text().replace(' B', '');
cy.contains('Response').click();
cy.contains('Body Size (bytes)').parent().next().then(size => {
const bodySizeByDetails = size.text();
expect(sizeOnTopLine).to.equal(bodySizeByDetails, 'The body size in the top line should match the details in the response');
cy.get('#entryDetailedTitleElapsedTime').then(timeInMs => {
const time = timeInMs.text();
if (time < '0ms') {
throw new Error(`The time in the top line cannot be negative ${time}`);
}
});
if (parseInt(bodySizeByDetails) < 0) {
throw new Error(`The body size cannot be negative. got the size: ${bodySizeByDetails}`)
}
// temporary fix, change to some "data-cy" attribute,
// this will fix the issue that happen because we have "response:" in the header of the right side
cy.get('#rightSideContainer > :nth-child(3)').contains('Response').click();
cy.get('#entryDetailedTitleElapsedTime').then(timeInMs => {
const time = timeInMs.text();
if (time < '0ms') {
throw new Error(`The time in the top line cannot be negative ${time}`);
}
cy.get('#rightSideContainer [title="Status Code"]').then(status => {
const statusCode = status.text();
cy.contains('Status').parent().next().then(statusInDetails => {
const statusCodeInDetails = statusInDetails.text();
cy.get('#rightSideContainer [title="Status Code"]').then(status => {
const statusCode = status.text();
cy.contains('Status').parent().next().then(statusInDetails => {
const statusCodeInDetails = statusInDetails.text();
expect(statusCode).to.equal(statusCodeInDetails, 'The status code in the top line should match the status code in details');
});
});
});
expect(statusCode).to.equal(statusCodeInDetails, 'The status code in the top line should match the status code in details');
});
});
});
@@ -252,7 +243,9 @@ function deeperChcek(leftSidePath, rightSidePath, filterName, leftSideExpectedTe
}
function checkRightSideResponseBody() {
cy.contains('Response').click();
// temporary fix, change to some "data-cy" attribute,
// this will fix the issue that happen because we have "response:" in the header of the right side
cy.get('#rightSideContainer > :nth-child(3)').contains('Response').click();
clickCheckbox('Decode Base64');
cy.get(`${Cypress.env('bodyJsonClass')}`).then(value => {

View File

@@ -20,7 +20,7 @@ require (
github.com/orcaman/concurrent-map v1.0.0
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/stretchr/testify v1.7.0
github.com/up9inc/basenine/client/go v0.0.0-20220317230530-8472d80307f6
github.com/up9inc/basenine/client/go v0.0.0-20220326121918-785f3061c8ce
github.com/up9inc/mizu/shared v0.0.0
github.com/up9inc/mizu/tap v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0
@@ -37,53 +37,79 @@ require (
require (
cloud.google.com/go/compute v1.2.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
github.com/Azure/go-autorest v14.2.0+incompatible // indirect
github.com/Azure/go-autorest/autorest v0.11.24 // indirect
github.com/Azure/go-autorest/autorest/adal v0.9.18 // indirect
github.com/Azure/go-autorest/autorest/date v0.3.0 // indirect
github.com/Azure/go-autorest/logger v0.2.1 // indirect
github.com/Azure/go-autorest/tracing v0.6.0 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
github.com/PuerkitoBio/purell v1.1.1 // indirect
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 // indirect
github.com/beevik/etree v1.1.0 // indirect
github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4 // indirect
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5 // indirect
github.com/chanced/dynamic v0.0.0-20211210164248-f8fadb1d735b // indirect
github.com/cilium/ebpf v0.8.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/evanphx/json-patch v5.6.0+incompatible // indirect
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f // indirect
github.com/fatih/camelcase v1.0.0 // indirect
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/fvbommel/sortorder v1.0.2 // indirect
github.com/ghodss/yaml v1.0.0 // indirect
github.com/gin-contrib/sse v0.1.0 // indirect
github.com/go-errors/errors v1.4.2 // indirect
github.com/go-logr/logr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.5 // indirect
github.com/go-openapi/jsonreference v0.19.6 // indirect
github.com/go-openapi/swag v0.21.1 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v4 v4.2.0 // indirect
github.com/golang/protobuf v1.5.2 // indirect
github.com/golang/snappy v0.0.4 // indirect
github.com/google/btree v1.0.1 // indirect
github.com/google/go-cmp v0.5.7 // indirect
github.com/google/gofuzz v1.2.0 // indirect
github.com/google/gopacket v1.1.19 // indirect
github.com/google/martian v2.1.0+incompatible // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/googleapis/gnostic v0.5.5 // indirect
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect
github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/compress v1.14.2 // indirect
github.com/leodido/go-urn v1.2.1 // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-isatty v0.0.14 // indirect
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/ohler55/ojg v1.12.12 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pierrec/lz4 v2.6.1+incompatible // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/russross/blackfriday v1.6.0 // indirect
github.com/santhosh-tekuri/jsonschema/v5 v5.0.0 // indirect
github.com/segmentio/kafka-go v0.4.27 // indirect
github.com/spf13/cobra v1.3.0 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/tidwall/gjson v1.14.0 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.0 // indirect
github.com/tidwall/sjson v1.2.4 // indirect
github.com/ugorji/go/codec v1.2.6 // indirect
github.com/vishvananda/netns v0.0.0-20211101163701-50045581ed74 // indirect
github.com/xlab/treeprint v1.1.0 // indirect
go.starlark.net v0.0.0-20220203230714-bb14e151c28f // indirect
golang.org/x/crypto v0.0.0-20220208050332-20e1d8d225ab // indirect
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd // indirect
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8 // indirect
@@ -96,10 +122,15 @@ require (
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b // indirect
k8s.io/cli-runtime v0.23.3 // indirect
k8s.io/component-base v0.23.3 // indirect
k8s.io/klog/v2 v2.40.1 // indirect
k8s.io/kube-openapi v0.0.0-20220124234850-424119656bbf // indirect
k8s.io/kubectl v0.23.3 // indirect
k8s.io/utils v0.0.0-20220127004650-9b3446523e65 // indirect
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 // indirect
sigs.k8s.io/kustomize/api v0.11.1 // indirect
sigs.k8s.io/kustomize/kyaml v0.13.3 // indirect
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 // indirect
sigs.k8s.io/yaml v1.3.0 // indirect
)

View File

@@ -53,6 +53,7 @@ cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RX
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/go-ansiterm v0.0.0-20210608223527-2377c96fe795/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
@@ -74,10 +75,13 @@ github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/MakeNowJust/heredoc v0.0.0-20170808103936-bb23615498cd/go.mod h1:64YHyfSL2R96J44Nlwm39UHepQbyR5q10x7iYa1ks2E=
github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ=
github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578 h1:d+Bc7a5rLufV/sSk/8dngufqelfh6jnri85riMAaF/M=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
@@ -111,6 +115,7 @@ github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5 h1:7aWHqerlJ41y6FOsEUvknqgXnGmJyJSbjhAWq5pO4F8=
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5/go.mod h1:/iP1qXHoty45bqomnu2LM+VVyAEdWN+vtSHGlQgyxbw=
github.com/chanced/cmpjson v0.0.0-20210415035445-da9262c1f20a h1:zG6t+4krPXcCKtLbjFvAh+fKN1d0qfD+RaCj+680OU8=
github.com/chanced/cmpjson v0.0.0-20210415035445-da9262c1f20a/go.mod h1:yhcmlFk1hxuZ+5XZbupzT/cEm/eE4ZvWbmsW1+Q/aZE=
@@ -147,6 +152,7 @@ github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfc
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11 h1:07n33Z8lZxZ2qwegKbObQohDhXDQxiMMz1NOUGYlesw=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
@@ -163,6 +169,7 @@ github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 h1:YEetp8
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/elastic/go-elasticsearch/v7 v7.17.0 h1:0fcSh4qeC/i1+7QU1KXpmq2iUAdMk4l0/vmbtW1+KJM=
github.com/elastic/go-elasticsearch/v7 v7.17.0/go.mod h1:OJ4wdbtDNk5g503kvlHLyErCgQwwzmDtaFC4XyOxXA4=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153 h1:yUdfgN0XgIJw7foRItutHYUIhlcKzcSf5vDpdhQAKTc=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
@@ -184,6 +191,7 @@ github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLi
github.com/evanphx/json-patch/v5 v5.6.0 h1:b91NhWfaz02IuVxO9faSllyAtNXHMPkC5J8sJCLunww=
github.com/evanphx/json-patch/v5 v5.6.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4=
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4=
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f h1:Wl78ApPPB2Wvf/TIe2xdyJxTlb6obmF18d8QdkxNDu4=
github.com/exponent-io/jsonpath v0.0.0-20210407135951-1de76d718b3f/go.mod h1:OSYXu++VVOHnXeitef/D8n/6y4QV8uLHSFXX4NeXMGc=
github.com/fatih/camelcase v1.0.0 h1:hxNvNX/xYBp0ovncs8WyWZrOrpBNub/JfaMvbURyft8=
github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc=
@@ -201,6 +209,7 @@ github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4
github.com/fsnotify/fsnotify v1.5.1 h1:mZcQUHVQUQWoPXXtuf9yuEXKudkV2sx1E06UadKWpgI=
github.com/fsnotify/fsnotify v1.5.1/go.mod h1:T3375wBYaZdLLcVNkcVbzGHY7f1l/uK5T5Ai1i3InKU=
github.com/fvbommel/sortorder v1.0.1/go.mod h1:uk88iVf1ovNn1iLfgUVU2F9o5eO30ui720w+kxuqRs0=
github.com/fvbommel/sortorder v1.0.2 h1:mV4o8B2hKboCdkJm+a7uX/SIpZob4JzUpc5GGnM45eo=
github.com/fvbommel/sortorder v1.0.2/go.mod h1:uk88iVf1ovNn1iLfgUVU2F9o5eO30ui720w+kxuqRs0=
github.com/getkin/kin-openapi v0.76.0/go.mod h1:660oXbgy5JFMKreazJaQTw7o+X00qeSyhcnluiMv+Xg=
github.com/getkin/kin-openapi v0.89.0 h1:p4nagHchUKGn85z/f+pse4aSh50nIBOYjOhMIku2hiA=
@@ -237,6 +246,7 @@ github.com/go-openapi/jsonpointer v0.19.5 h1:gZr+CIYByUqjcgeLXnQu2gHYQC9o73G2XUe
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/jsonreference v0.19.5/go.mod h1:RdybgQwPxbL4UEjuAruzK1x3nE69AqPYEJeo/TWfEeg=
github.com/go-openapi/jsonreference v0.19.6 h1:UBIxjkht+AWIgYzCDSv2GN+E/togfwXUJFRTWhl2Jjs=
github.com/go-openapi/jsonreference v0.19.6/go.mod h1:diGHMEHg2IqXZGKxqyvWdfWU/aim5Dprw5bqpKkTvns=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
@@ -303,6 +313,7 @@ github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEW
github.com/golangplus/testing v0.0.0-20180327235837-af21d9c3145e/go.mod h1:0AA//k/eakGydO4jKRoRL2j92ZKSzTgj9tclaCrvXHk=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.1 h1:gK4Kx5IaGY9CD5sPJ36FHiBJ6ZXl0kilRiiCj+jdYp4=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
@@ -344,6 +355,7 @@ github.com/google/pprof v0.0.0-20210601050228-01bbb1931b22/go.mod h1:kpwsk12EmLe
github.com/google/pprof v0.0.0-20210609004039-a478d1d731e9/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210720184732-4bb14d4b1be1/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
@@ -363,6 +375,7 @@ github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoA
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 h1:+ngKgrYPPJrOjhax5N+uePQ0Fh1Z7PheYoUI/0nzkPA=
github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
@@ -410,7 +423,9 @@ github.com/iancoleman/strcase v0.2.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
@@ -451,6 +466,7 @@ github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
github.com/leodido/go-urn v1.2.1 h1:BqpAaACuzVSgi/VLzGZIobT2z4v53pjosyNd9Yv6n/w=
github.com/leodido/go-urn v1.2.1/go.mod h1:zt4jvISO2HfUBqxjfIshjdMTYS56ZS/qv49ictyFfxY=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de h1:9TO3cAIGXtEhnIaL+V+BEER86oLrvS+kWobKpbJuye0=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
github.com/lithammer/dedent v1.1.0/go.mod h1:jrXYCQtgg0nJiN+StA2KgR7w6CiQNv9Fd/Z9BP0jIOc=
github.com/lyft/protoc-gen-star v0.5.3/go.mod h1:V0xaHgaf5oCCqmcxYcWiDfTiKsZsRc87/1qhoTACD8w=
@@ -486,6 +502,7 @@ github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrk
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
github.com/mitchellh/gox v0.4.0/go.mod h1:Sd9lOJ0+aimLBi73mGofS1ycjY8lL3uZM3JPS42BGNg=
github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0QubkSMEySY=
@@ -493,8 +510,10 @@ github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:F
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mitchellh/mapstructure v1.4.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
github.com/moby/term v0.0.0-20210610120745-9d4ed1856297/go.mod h1:vgPCkQMyxTZ7IDy8SXRufE172gr8+K/JE/7hHFxHW3A=
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 h1:dcztxKSvZ4Id8iPpHERQBbIJfabdt4wUm5qy3wOL2Zc=
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6/go.mod h1:E2VnQOmVuvZB6UYnnDB0qG5Nq/1tD9acaOpo6xmt0Kw=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
@@ -503,10 +522,12 @@ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lN
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 h1:n6/2gBQ3RWajuToeY6ZtZTIKv2v7ThUy5KKusIT0yc0=
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00/go.mod h1:Pm3mSP3c5uWn86xMLZ5Sa7JB9GsEZySvHYXCTK4E9q4=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
github.com/nav-inc/datetime v0.1.3 h1:PaybPUsScX+Cd3TEa1tYpfwU61deCEhMTlCO2hONm1c=
github.com/nav-inc/datetime v0.1.3/go.mod h1:gKGf5G+cW7qkTo5TC/sieNyz6lYdrA9cf1PNV+pXIOE=
@@ -538,6 +559,7 @@ github.com/patrickmn/go-cache v2.1.0+incompatible/go.mod h1:3Qf8kWWT7OJRJbdiICTK
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.9.3/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/pelletier/go-toml v1.9.4/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pierrec/lz4 v2.6.0+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pierrec/lz4 v2.6.1+incompatible h1:9UY3+iC23yxF0UfGaYrGplQ+79Rg+h/q9FV9ix19jjM=
@@ -583,6 +605,7 @@ github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTE
github.com/rogpeppe/go-internal v1.8.0 h1:FCbCCtXNOY3UtUuHUYaghJg4y7Fd14rXifAYUAtL9R8=
github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday v1.6.0 h1:KqfZb0pUVN2lYqZUYRddxF4OR8ZMURnJIG5Y3VRLtww=
github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
@@ -593,6 +616,7 @@ github.com/santhosh-tekuri/jsonschema/v5 v5.0.0/go.mod h1:FKdcjfQW6rpZSnxxUvEA5H
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/segmentio/kafka-go v0.4.27 h1:sIhEozeL/TLN2mZ5dkG462vcGEWYKS+u31sXPjKhAM4=
github.com/segmentio/kafka-go v0.4.27/go.mod h1:XzMcoMjSzDGHcIwpWUI7GB43iKZ2fTVmryPSGLf/MPg=
github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0=
github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
@@ -611,6 +635,7 @@ github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkU
github.com/spf13/cast v1.4.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.2.1/go.mod h1:ExllRjgxM/piMAM+3tAZvg8fsklGAf3tPfi+i8t68Nk=
github.com/spf13/cobra v1.3.0 h1:R7cSvGu+Vv+qX0gW5R/85dx2kmmJT5z5NM8ifdYjdn0=
github.com/spf13/cobra v1.3.0/go.mod h1:BrRVncBjOJa/eUcVVm9CE+oC6as8k+VYr4NY7WCi9V4=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
@@ -624,6 +649,7 @@ github.com/spf13/viper v1.10.0/go.mod h1:SoyBPwAtKDzypXNDFKN5kzH7ppppbGZtls1UpIy
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0 h1:Hbg2NidpLE8veEBkEZTL3CvlkUIVzuU9jDplZO54c48=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/testify v1.2.1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
@@ -655,8 +681,8 @@ github.com/ugorji/go v1.2.6/go.mod h1:anCg0y61KIhDlPZmnH+so+RQbysYVyDko0IMgJv0Nn
github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=
github.com/ugorji/go/codec v1.2.6 h1:7kbGefxLoDBuYXOms4yD7223OpNMMPNPZxXk5TvFcyQ=
github.com/ugorji/go/codec v1.2.6/go.mod h1:V6TCNZ4PHqoHGFZuSG1W8nrCzzdgA2DozYxWFFpvxTw=
github.com/up9inc/basenine/client/go v0.0.0-20220317230530-8472d80307f6 h1:c0aVbLKYeFDAg246+NDgie2y484bsc20NaKLo8ODV3E=
github.com/up9inc/basenine/client/go v0.0.0-20220317230530-8472d80307f6/go.mod h1:SvJGPoa/6erhUQV7kvHBwM/0x5LyO6XaG2lUaCaKiUI=
github.com/up9inc/basenine/client/go v0.0.0-20220326121918-785f3061c8ce h1:vMTCpKItc9OyTLJXocNaq2NcBU5EnurJgTVOYb8W8dw=
github.com/up9inc/basenine/client/go v0.0.0-20220326121918-785f3061c8ce/go.mod h1:SvJGPoa/6erhUQV7kvHBwM/0x5LyO6XaG2lUaCaKiUI=
github.com/vishvananda/netns v0.0.0-20211101163701-50045581ed74 h1:gga7acRE695APm9hlsSMoOoE65U4/TcqNj90mc69Rlg=
github.com/vishvananda/netns v0.0.0-20211101163701-50045581ed74/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
github.com/wI2L/jsondiff v0.1.1 h1:r2TkoEet7E4JMO5+s1RCY2R0LrNPNHY6hbDeow2hRHw=
@@ -665,6 +691,7 @@ github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c/go.mod h1:lB8K/P019DLNhe
github.com/xdg/stringprep v1.0.0/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
github.com/xlab/treeprint v1.1.0 h1:G/1DjNkPpfZCFt9CSh6b5/nY4VimlbHF3Rh4obvtzDk=
github.com/xlab/treeprint v1.1.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0 h1:6fRhSjgLCkTD3JnJxvaJ4Sj+TYblw757bqYgZaOq5ZY=
@@ -701,6 +728,7 @@ go.opentelemetry.io/otel/sdk/metric v0.20.0/go.mod h1:knxiS8Xd4E/N+ZqKmUPf3gTTZ4
go.opentelemetry.io/otel/trace v0.20.0/go.mod h1:6GjCW8zgDjwGHGa6GkyeB8+/5vjT16gUEi0Nf1iBdgw=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5/go.mod h1:nmDLcffg48OtT/PSW0Hg7FvpRQsQh5OSqIylirxKC7o=
go.starlark.net v0.0.0-20220203230714-bb14e151c28f h1:aW4TkS39/naJa9wPSbIXtZUQOlvuUh8gxCsLRrJoByU=
go.starlark.net v0.0.0-20220203230714-bb14e151c28f/go.mod h1:t3mmBBPzAVvK0L0n1drDmrQsJ8FoIx4INCqVMTr/Zo0=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
@@ -1205,6 +1233,7 @@ gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@@ -1217,10 +1246,12 @@ k8s.io/api v0.23.3 h1:KNrME8KHGr12Ozjf8ytOewKzZh6hl/hHUZeHddT3a38=
k8s.io/api v0.23.3/go.mod h1:w258XdGyvCmnBj/vGzQMj6kzdufJZVUwEM1U2fRJwSQ=
k8s.io/apimachinery v0.23.3 h1:7IW6jxNzrXTsP0c8yXz2E5Yx/WTzVPTsHIx/2Vm0cIk=
k8s.io/apimachinery v0.23.3/go.mod h1:BEuFMMBaIbcOqVIJqNZJXGFTP4W6AycEpb5+m/97hrM=
k8s.io/cli-runtime v0.23.3 h1:aJiediw+uUbxkfO6BNulcAMTUoU9Om43g3R7rIkYqcw=
k8s.io/cli-runtime v0.23.3/go.mod h1:yA00O5pDqnjkBh8fkuugBbfIfjB1nOpz+aYLotbnOfc=
k8s.io/client-go v0.23.3 h1:23QYUmCQ/W6hW78xIwm3XqZrrKZM+LWDqW2zfo+szJs=
k8s.io/client-go v0.23.3/go.mod h1:47oMd+YvAOqZM7pcQ6neJtBiFH7alOyfunYN48VsmwE=
k8s.io/code-generator v0.23.3/go.mod h1:S0Q1JVA+kSzTI1oUvbKAxZY/DYbA/ZUb4Uknog12ETk=
k8s.io/component-base v0.23.3 h1:q+epprVdylgecijVGVdf4MbizEL2feW4ssd7cdo6LVY=
k8s.io/component-base v0.23.3/go.mod h1:1Smc4C60rWG7d3HjSYpIwEbySQ3YWg0uzH5a2AtaTLg=
k8s.io/component-helpers v0.23.3/go.mod h1:SH+W/WPTaTenbWyDEeY7iytAQiMh45aqKxkvlqQ57cg=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
@@ -1234,6 +1265,7 @@ k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e/go.mod h1:vHXdDvt9+2spS2R
k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65/go.mod h1:sX9MT8g7NVZM5lVL/j8QyCCJe8YSMW30QvGZWaCIDIk=
k8s.io/kube-openapi v0.0.0-20220124234850-424119656bbf h1:M9XBsiMslw2lb2ZzglC0TOkBPK5NQi0/noUrdnoFwUg=
k8s.io/kube-openapi v0.0.0-20220124234850-424119656bbf/go.mod h1:sX9MT8g7NVZM5lVL/j8QyCCJe8YSMW30QvGZWaCIDIk=
k8s.io/kubectl v0.23.3 h1:gJsF7cahkWDPYlNvYKK+OrBZLAJUBzCym+Zsi+dfi1E=
k8s.io/kubectl v0.23.3/go.mod h1:VBeeXNgLhSabu4/k0O7Q0YujgnA3+CLTUE0RcmF73yY=
k8s.io/metrics v0.23.3/go.mod h1:Ut8TvkbsO4oMVeUzaTArvPrcw9QRFLs2XNzUlORjdYE=
k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
@@ -1247,10 +1279,12 @@ sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6/go.mod h1:p4QtZmO4uMYipTQNza
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 h1:kDi4JBNAsJWfz1aEXhO8Jg87JJaPNLh5tIzYHgStQ9Y=
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2/go.mod h1:B+TnT182UBxE84DiCz4CVE26eOSDAeYCpfDnC2kdKMY=
sigs.k8s.io/kustomize/api v0.10.1/go.mod h1:2FigT1QN6xKdcnGS2Ppp1uIWrtWN28Ms8A3OZUZhwr8=
sigs.k8s.io/kustomize/api v0.11.1 h1:/Vutu+gAqVo8skw1xCZrsZD39SN4Adg+z7FrSTw9pds=
sigs.k8s.io/kustomize/api v0.11.1/go.mod h1:GZuhith5YcqxIDe0GnRJNx5xxPTjlwaLTt/e+ChUtJA=
sigs.k8s.io/kustomize/cmd/config v0.10.2/go.mod h1:K2aW7nXJ0AaT+VA/eO0/dzFLxmpFcTzudmAgDwPY1HQ=
sigs.k8s.io/kustomize/kustomize/v4 v4.4.1/go.mod h1:qOKJMMz2mBP+vcS7vK+mNz4HBLjaQSWRY22EF6Tb7Io=
sigs.k8s.io/kustomize/kyaml v0.13.0/go.mod h1:FTJxEZ86ScK184NpGSAQcfEqee0nul8oLCK30D47m4E=
sigs.k8s.io/kustomize/kyaml v0.13.3 h1:tNNQIC+8cc+aXFTVg+RtQAOsjwUdYBZRAgYOVI3RBc4=
sigs.k8s.io/kustomize/kyaml v0.13.3/go.mod h1:/ya3Gk4diiQzlE4mBh7wykyLRFZNvqlbh+JnwQ9Vhrc=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 h1:bKCqE9GvQ5tiVHn5rfn1r+yao3aLQEaLzkkmAkf+A6Y=

View File

@@ -18,6 +18,7 @@ import (
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/dependency"
"github.com/up9inc/mizu/agent/pkg/elastic"
"github.com/up9inc/mizu/agent/pkg/entries"
"github.com/up9inc/mizu/agent/pkg/middlewares"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/oas"
@@ -30,8 +31,6 @@ import (
"github.com/up9inc/mizu/agent/pkg/app"
"github.com/up9inc/mizu/agent/pkg/config"
v1 "k8s.io/api/core/v1"
"github.com/gorilla/websocket"
"github.com/op/go-logging"
"github.com/up9inc/mizu/shared"
@@ -155,11 +154,6 @@ func runInTapperMode() {
hostMode := os.Getenv(shared.HostModeEnvVar) == "1"
tapOpts := &tap.TapOpts{HostMode: hostMode}
tapTargets := getTapTargets()
if tapTargets != nil {
tapOpts.FilterAuthorities = tapTargets
logger.Log.Infof("Filtering for the following authorities: %v", tapOpts.FilterAuthorities)
}
filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
@@ -205,7 +199,7 @@ func runInHarReaderMode() {
func enableExpFeatureIfNeeded() {
if config.Config.OAS {
oasGenerator := dependency.GetInstance(dependency.OasGeneratorDependency).(oas.OasGenerator)
oasGenerator.Start()
oasGenerator.Start(nil)
}
if config.Config.ServiceMap {
serviceMapGenerator := dependency.GetInstance(dependency.ServiceMapGeneratorDependency).(servicemap.ServiceMap)
@@ -257,28 +251,6 @@ func setUIFlags(uiIndexPath string) error {
return nil
}
func parseEnvVar(env string) map[string][]v1.Pod {
var mapOfList map[string][]v1.Pod
val, present := os.LookupEnv(env)
if !present {
return mapOfList
}
err := json.Unmarshal([]byte(val), &mapOfList)
if err != nil {
panic(fmt.Sprintf("env var %s's value of %v is invalid! must be map[string][]v1.Pod %v", env, mapOfList, err))
}
return mapOfList
}
func getTapTargets() []v1.Pod {
nodeName := os.Getenv(shared.NodeNameEnvVar)
tappedAddressesPerNodeDict := parseEnvVar(shared.TappedAddressesPerNodeDictEnvVar)
return tappedAddressesPerNodeDict[nodeName]
}
func getTrafficFilteringOptions() *tapApi.TrafficFilteringOptions {
filteringOptionsJson := os.Getenv(shared.MizuFilteringOptionsEnvVar)
if filteringOptionsJson == "" {
@@ -381,6 +353,14 @@ func handleIncomingMessageAsTapper(socketConnection *websocket.Conn) {
} else {
tap.UpdateTapTargets(tapConfigMessage.TapTargets)
}
case shared.WebSocketMessageTypeUpdateTappedPods:
var tappedPodsMessage shared.WebSocketTappedPodsMessage
if err := json.Unmarshal(message, &tappedPodsMessage); err != nil {
logger.Log.Infof("Could not unmarshal message of message type %s %v", socketMessageBase.MessageType, err)
return
}
nodeName := os.Getenv(shared.NodeNameEnvVar)
tap.UpdateTapTargets(tappedPodsMessage.NodeToTappedPodMap[nodeName])
default:
logger.Log.Warningf("Received socket message of type %s for which no handlers are defined", socketMessageBase.MessageType)
}
@@ -392,4 +372,7 @@ func handleIncomingMessageAsTapper(socketConnection *websocket.Conn) {
func initializeDependencies() {
dependency.RegisterGenerator(dependency.ServiceMapGeneratorDependency, func() interface{} { return servicemap.GetDefaultServiceMapInstance() })
dependency.RegisterGenerator(dependency.OasGeneratorDependency, func() interface{} { return oas.GetDefaultOasGeneratorInstance() })
dependency.RegisterGenerator(dependency.EntriesProvider, func() interface{} { return &entries.BasenineEntriesProvider{} })
dependency.RegisterGenerator(dependency.EntriesSocketStreamer, func() interface{} { return &api.BasenineEntryStreamer{} })
dependency.RegisterGenerator(dependency.EntryStreamerSocketConnector, func() interface{} { return &api.DefaultEntryStreamerSocketConnector{} })
}

View File

@@ -0,0 +1,57 @@
package api
import (
"fmt"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/shared/logger"
tapApi "github.com/up9inc/mizu/tap/api"
)
type EntryStreamerSocketConnector interface {
SendEntry(socketId int, entry *tapApi.Entry, params *WebSocketParams)
SendMetadata(socketId int, metadata *basenine.Metadata)
SendToastError(socketId int, err error)
CleanupSocket(socketId int)
}
type DefaultEntryStreamerSocketConnector struct{}
func (e *DefaultEntryStreamerSocketConnector) SendEntry(socketId int, entry *tapApi.Entry, params *WebSocketParams) {
var message []byte
if params.EnableFullEntries {
message, _ = models.CreateFullEntryWebSocketMessage(entry)
} else {
extension := extensionsMap[entry.Protocol.Name]
base := extension.Dissector.Summarize(entry)
message, _ = models.CreateBaseEntryWebSocketMessage(base)
}
if err := SendToSocket(socketId, message); err != nil {
logger.Log.Error(err)
}
}
func (e *DefaultEntryStreamerSocketConnector) SendMetadata(socketId int, metadata *basenine.Metadata) {
metadataBytes, _ := models.CreateWebsocketQueryMetadataMessage(metadata)
if err := SendToSocket(socketId, metadataBytes); err != nil {
logger.Log.Error(err)
}
}
func (e *DefaultEntryStreamerSocketConnector) SendToastError(socketId int, err error) {
toastBytes, _ := models.CreateWebsocketToastMessage(&models.ToastMessage{
Type: "error",
AutoClose: 5000,
Text: fmt.Sprintf("Syntax error: %s", err.Error()),
})
if err := SendToSocket(socketId, toastBytes); err != nil {
logger.Log.Error(err)
}
}
func (e *DefaultEntryStreamerSocketConnector) CleanupSocket(socketId int) {
socketObj := connectedWebsockets[socketId]
socketCleanup(socketId, socketObj)
}

View File

@@ -5,6 +5,7 @@ import (
"context"
"encoding/json"
"fmt"
"github.com/up9inc/mizu/agent/pkg/models"
"os"
"path"
"sort"
@@ -19,8 +20,6 @@ import (
"github.com/up9inc/mizu/agent/pkg/servicemap"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/oas"
"github.com/up9inc/mizu/agent/pkg/resolver"
"github.com/up9inc/mizu/agent/pkg/utils"
@@ -140,20 +139,6 @@ func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extension
rules, _, _ := models.RunValidationRulesState(*harEntry, mizuEntry.Destination.Name)
mizuEntry.Rules = rules
}
entryWSource := oas.EntryWithSource{
Entry: *harEntry,
Source: mizuEntry.Source.Name,
Destination: mizuEntry.Destination.Name,
Id: mizuEntry.Id,
}
if entryWSource.Destination == "" {
entryWSource.Destination = mizuEntry.Destination.IP + ":" + mizuEntry.Destination.Port
}
oasGenerator := dependency.GetInstance(dependency.OasGeneratorDependency).(oas.OasGeneratorSink)
oasGenerator.PushEntry(&entryWSource)
}
data, err := json.Marshal(mizuEntry)
@@ -183,6 +168,7 @@ func resolveIP(connectionInfo *tapApi.ConnectionInfo) (resolvedSource string, re
}
} else {
resolvedSource = resolvedSourceObject.FullAddress
namespace = resolvedSourceObject.Namespace
}
unresolvedDestination := fmt.Sprintf("%s:%s", connectionInfo.ServerIP, connectionInfo.ServerPort)
@@ -194,7 +180,11 @@ func resolveIP(connectionInfo *tapApi.ConnectionInfo) (resolvedSource string, re
}
} else {
resolvedDestination = resolvedDestinationObject.FullAddress
namespace = resolvedDestinationObject.Namespace
// Overwrite namespace (if it was set according to the source)
// Only overwrite if non-empty
if resolvedDestinationObject.Namespace != "" {
namespace = resolvedDestinationObject.Namespace
}
}
}
return resolvedSource, resolvedDestination, namespace

View File

@@ -0,0 +1,92 @@
package api
import (
"context"
"encoding/json"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/agent/pkg/dependency"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
tapApi "github.com/up9inc/mizu/tap/api"
)
type EntryStreamer interface {
Get(ctx context.Context, socketId int, params *WebSocketParams) error
}
type BasenineEntryStreamer struct{}
func (e *BasenineEntryStreamer) Get(ctx context.Context, socketId int, params *WebSocketParams) error {
var connection *basenine.Connection
entryStreamerSocketConnector := dependency.GetInstance(dependency.EntryStreamerSocketConnector).(EntryStreamerSocketConnector)
connection, err := basenine.NewConnection(shared.BasenineHost, shared.BaseninePort)
if err != nil {
logger.Log.Errorf("failed to establish a connection to Basenine: %v", err)
entryStreamerSocketConnector.CleanupSocket(socketId)
return err
}
data := make(chan []byte)
meta := make(chan []byte)
query := params.Query
err = basenine.Validate(shared.BasenineHost, shared.BaseninePort, query)
if err != nil {
entryStreamerSocketConnector.SendToastError(socketId, err)
}
handleDataChannel := func(c *basenine.Connection, data chan []byte) {
for {
bytes := <-data
if string(bytes) == basenine.CloseChannel {
return
}
var entry *tapApi.Entry
err = json.Unmarshal(bytes, &entry)
if err != nil {
logger.Log.Debugf("error unmarshalling entry: %v", err.Error())
continue
}
entryStreamerSocketConnector.SendEntry(socketId, entry, params)
}
}
handleMetaChannel := func(c *basenine.Connection, meta chan []byte) {
for {
bytes := <-meta
if string(bytes) == basenine.CloseChannel {
return
}
var metadata *basenine.Metadata
err = json.Unmarshal(bytes, &metadata)
if err != nil {
logger.Log.Debugf("Error unmarshalling metadata: %v", err.Error())
continue
}
entryStreamerSocketConnector.SendMetadata(socketId, metadata)
}
}
go handleDataChannel(connection, data)
go handleMetaChannel(connection, meta)
connection.Query(query, data, meta)
go func() {
<-ctx.Done()
data <- []byte(basenine.CloseChannel)
meta <- []byte(basenine.CloseChannel)
connection.Close()
}()
return nil
}
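The streamer hands lifetime control to the caller: cancelling the context makes the goroutine above push the close marker into both channels and close the Basenine connection. A caller-side sketch mirroring how the routes handler is expected to use it (the helper name is illustrative, not part of the diff):

package api

import (
	"context"

	"github.com/up9inc/mizu/agent/pkg/dependency"
)

// startBrowserStream resolves the registered EntryStreamer, starts a stream for
// one browser socket and returns the cancel func the caller should invoke on
// disconnect to tear the stream down.
func startBrowserStream(socketId int, params *WebSocketParams) (context.CancelFunc, error) {
	streamer := dependency.GetInstance(dependency.EntriesSocketStreamer).(EntryStreamer)
	ctx, cancel := context.WithCancel(context.Background())
	if err := streamer.Get(ctx, socketId, params); err != nil {
		cancel() // the stream never started; release the context right away
		return nil, err
	}
	return cancel, nil
}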

View File

@@ -1,19 +1,15 @@
package api
import (
"encoding/json"
"fmt"
"net/http"
"sync"
"time"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/utils"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/utils"
"github.com/up9inc/mizu/shared/logger"
tapApi "github.com/up9inc/mizu/tap/api"
)
@@ -25,9 +21,9 @@ func InitExtensionsMap(ref map[string]*tapApi.Extension) {
}
type EventHandlers interface {
WebSocketConnect(socketId int, isTapper bool)
WebSocketConnect(c *gin.Context, socketId int, isTapper bool)
WebSocketDisconnect(socketId int, isTapper bool)
WebSocketMessage(socketId int, message []byte)
WebSocketMessage(socketId int, isTapper bool, message []byte)
}
type SocketConnection struct {
@@ -62,11 +58,11 @@ func init() {
func WebSocketRoutes(app *gin.Engine, eventHandlers EventHandlers) {
SocketGetBrowserHandler = func(c *gin.Context) {
websocketHandler(c.Writer, c.Request, eventHandlers, false)
websocketHandler(c, eventHandlers, false)
}
SocketGetTapperHandler = func(c *gin.Context) {
websocketHandler(c.Writer, c.Request, eventHandlers, true)
websocketHandler(c, eventHandlers, true)
}
app.GET("/ws", func(c *gin.Context) {
@@ -78,10 +74,10 @@ func WebSocketRoutes(app *gin.Engine, eventHandlers EventHandlers) {
})
}
func websocketHandler(w http.ResponseWriter, r *http.Request, eventHandlers EventHandlers, isTapper bool) {
ws, err := websocketUpgrader.Upgrade(w, r, nil)
func websocketHandler(c *gin.Context, eventHandlers EventHandlers, isTapper bool) {
ws, err := websocketUpgrader.Upgrade(c.Writer, c.Request, nil)
if err != nil {
logger.Log.Errorf("Failed to set websocket upgrade: %v", err)
logger.Log.Errorf("failed to set websocket upgrade: %v", err)
return
}
@@ -93,30 +89,11 @@ func websocketHandler(w http.ResponseWriter, r *http.Request, eventHandlers Even
websocketIdsLock.Unlock()
var connection *basenine.Connection
var isQuerySet bool
// `!isTapper` means it's a connection from the web UI
if !isTapper {
connection, err = basenine.NewConnection(shared.BasenineHost, shared.BaseninePort)
if err != nil {
logger.Log.Errorf("Failed to establish a connection to Basenine: %v", err)
socketCleanup(socketId, connectedWebsockets[socketId])
return
}
}
data := make(chan []byte)
meta := make(chan []byte)
defer func() {
socketCleanup(socketId, connectedWebsockets[socketId])
data <- []byte(basenine.CloseChannel)
meta <- []byte(basenine.CloseChannel)
connection.Close()
}()
eventHandlers.WebSocketConnect(socketId, isTapper)
eventHandlers.WebSocketConnect(c, socketId, isTapper)
startTimeBytes, _ := models.CreateWebsocketStartTimeMessage(utils.StartTime)
@@ -124,127 +101,32 @@ func websocketHandler(w http.ResponseWriter, r *http.Request, eventHandlers Even
logger.Log.Error(err)
}
var params WebSocketParams
for {
_, msg, err := ws.ReadMessage()
if err != nil {
if _, ok := err.(*websocket.CloseError); ok {
logger.Log.Debugf("Received websocket close message, socket id: %d", socketId)
logger.Log.Debugf("received websocket close message, socket id: %d", socketId)
} else {
logger.Log.Errorf("Error reading message, socket id: %d, error: %v", socketId, err)
logger.Log.Errorf("error reading message, socket id: %d, error: %v", socketId, err)
}
break
}
if !isTapper && !isQuerySet {
if err := json.Unmarshal(msg, &params); err != nil {
logger.Log.Errorf("Error unmarshalling parameters: %v", socketId, err)
continue
}
query := params.Query
err = basenine.Validate(shared.BasenineHost, shared.BaseninePort, query)
if err != nil {
toastBytes, _ := models.CreateWebsocketToastMessage(&models.ToastMessage{
Type: "error",
AutoClose: 5000,
Text: fmt.Sprintf("Syntax error: %s", err.Error()),
})
if err := SendToSocket(socketId, toastBytes); err != nil {
logger.Log.Error(err)
}
break
}
isQuerySet = true
handleDataChannel := func(c *basenine.Connection, data chan []byte) {
for {
bytes := <-data
if string(bytes) == basenine.CloseChannel {
return
}
var entry *tapApi.Entry
err = json.Unmarshal(bytes, &entry)
if err != nil {
logger.Log.Debugf("Error unmarshalling entry: %v", err.Error())
continue
}
var message []byte
if params.EnableFullEntries {
message, _ = models.CreateFullEntryWebSocketMessage(entry)
} else {
extension := extensionsMap[entry.Protocol.Name]
base := extension.Dissector.Summarize(entry)
message, _ = models.CreateBaseEntryWebSocketMessage(base)
}
if err := SendToSocket(socketId, message); err != nil {
logger.Log.Error(err)
}
}
}
handleMetaChannel := func(c *basenine.Connection, meta chan []byte) {
for {
bytes := <-meta
if string(bytes) == basenine.CloseChannel {
return
}
var metadata *basenine.Metadata
err = json.Unmarshal(bytes, &metadata)
if err != nil {
logger.Log.Debugf("Error unmarshalling metadata: %v", err.Error())
continue
}
metadataBytes, _ := models.CreateWebsocketQueryMetadataMessage(metadata)
if err := SendToSocket(socketId, metadataBytes); err != nil {
logger.Log.Error(err)
}
}
}
go handleDataChannel(connection, data)
go handleMetaChannel(connection, meta)
connection.Query(query, data, meta)
} else {
eventHandlers.WebSocketMessage(socketId, msg)
}
eventHandlers.WebSocketMessage(socketId, isTapper, msg)
}
}
func socketCleanup(socketId int, socketConnection *SocketConnection) {
err := socketConnection.connection.Close()
if err != nil {
logger.Log.Errorf("Error closing socket connection for socket id %d: %v", socketId, err)
}
websocketIdsLock.Lock()
connectedWebsockets[socketId] = nil
websocketIdsLock.Unlock()
socketConnection.eventHandlers.WebSocketDisconnect(socketId, socketConnection.isTapper)
}
func SendToSocket(socketId int, message []byte) error {
socketObj := connectedWebsockets[socketId]
if socketObj == nil {
return fmt.Errorf("Socket %v is disconnected", socketId)
return fmt.Errorf("socket %v is disconnected", socketId)
}
var sent = false
time.AfterFunc(time.Second*5, func() {
if !sent {
logger.Log.Error("Socket timed out")
logger.Log.Error("socket timed out")
socketCleanup(socketId, socketObj)
}
})
@@ -255,7 +137,20 @@ func SendToSocket(socketId int, message []byte) error {
sent = true
if err != nil {
return fmt.Errorf("Failed to write message to socket %v, err: %w", socketId, err)
return fmt.Errorf("failed to write message to socket %v, err: %w", socketId, err)
}
return nil
}
func socketCleanup(socketId int, socketConnection *SocketConnection) {
err := socketConnection.connection.Close()
if err != nil {
logger.Log.Errorf("error closing socket connection for socket id %d: %v", socketId, err)
}
websocketIdsLock.Lock()
connectedWebsockets[socketId] = nil
websocketIdsLock.Unlock()
socketConnection.eventHandlers.WebSocketDisconnect(socketId, socketConnection.isTapper)
}

View File

@@ -1,12 +1,14 @@
package api
import (
"context"
"encoding/json"
"fmt"
"sync"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/dependency"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/providers"
"github.com/up9inc/mizu/agent/pkg/providers/tappedPods"
"github.com/up9inc/mizu/agent/pkg/providers/tappers"
"github.com/up9inc/mizu/agent/pkg/up9"
@@ -16,7 +18,12 @@ import (
"github.com/up9inc/mizu/shared/logger"
)
var browserClientSocketUUIDs = make([]int, 0)
type BrowserClient struct {
dataStreamCancelFunc context.CancelFunc
}
var browserClients = make(map[int]*BrowserClient, 0)
var tapperClientSocketUUIDs = make([]int, 0)
var socketListLock = sync.Mutex{}
type RoutesEventHandlers struct {
@@ -28,15 +35,22 @@ func init() {
go up9.UpdateAnalyzeStatus(BroadcastToBrowserClients)
}
func (h *RoutesEventHandlers) WebSocketConnect(socketId int, isTapper bool) {
func (h *RoutesEventHandlers) WebSocketConnect(_ *gin.Context, socketId int, isTapper bool) {
if isTapper {
logger.Log.Infof("Websocket event - Tapper connected, socket ID: %d", socketId)
tappers.Connected()
socketListLock.Lock()
tapperClientSocketUUIDs = append(tapperClientSocketUUIDs, socketId)
socketListLock.Unlock()
nodeToTappedPodMap := tappedPods.GetNodeToTappedPodMap()
SendTappedPods(socketId, nodeToTappedPodMap)
} else {
logger.Log.Infof("Websocket event - Browser socket connected, socket ID: %d", socketId)
socketListLock.Lock()
browserClientSocketUUIDs = append(browserClientSocketUUIDs, socketId)
browserClients[socketId] = &BrowserClient{}
socketListLock.Unlock()
BroadcastTappedPodsStatus()
@@ -47,16 +61,23 @@ func (h *RoutesEventHandlers) WebSocketDisconnect(socketId int, isTapper bool) {
if isTapper {
logger.Log.Infof("Websocket event - Tapper disconnected, socket ID: %d", socketId)
tappers.Disconnected()
socketListLock.Lock()
removeSocketUUIDFromTapperSlice(socketId)
socketListLock.Unlock()
} else {
logger.Log.Infof("Websocket event - Browser socket disconnected, socket ID: %d", socketId)
socketListLock.Lock()
removeSocketUUIDFromBrowserSlice(socketId)
if browserClients[socketId] != nil && browserClients[socketId].dataStreamCancelFunc != nil {
browserClients[socketId].dataStreamCancelFunc()
}
delete(browserClients, socketId)
socketListLock.Unlock()
}
}
func BroadcastToBrowserClients(message []byte) {
for _, socketId := range browserClientSocketUUIDs {
for socketId := range browserClients {
go func(socketId int) {
if err := SendToSocket(socketId, message); err != nil {
logger.Log.Error(err)
@@ -65,7 +86,43 @@ func BroadcastToBrowserClients(message []byte) {
}
}
func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
func BroadcastToTapperClients(message []byte) {
for _, socketId := range tapperClientSocketUUIDs {
go func(socketId int) {
if err := SendToSocket(socketId, message); err != nil {
logger.Log.Error(err)
}
}(socketId)
}
}
func (h *RoutesEventHandlers) WebSocketMessage(socketId int, isTapper bool, message []byte) {
if isTapper {
HandleTapperIncomingMessage(message, h.SocketOutChannel, BroadcastToBrowserClients)
} else {
// We initiate the Basenine stream only after the first websocket message we receive (it contains the entry query), then store a cancel func so the stream can be torn down later
if browserClients[socketId] != nil && browserClients[socketId].dataStreamCancelFunc == nil {
var params WebSocketParams
if err := json.Unmarshal(message, &params); err != nil {
logger.Log.Errorf("Error: %v", socketId, err)
return
}
entriesStreamer := dependency.GetInstance(dependency.EntriesSocketStreamer).(EntryStreamer)
ctx, cancelFunc := context.WithCancel(context.Background())
err := entriesStreamer.Get(ctx, socketId, &params)
if err != nil {
logger.Log.Errorf("error initializing basenine stream for browser socket %d %+v", socketId, err)
cancelFunc()
} else {
browserClients[socketId].dataStreamCancelFunc = cancelFunc
}
}
}
}
func HandleTapperIncomingMessage(message []byte, socketOutChannel chan<- *tapApi.OutputChannelItem, broadcastMessageFunc func([]byte)) {
var socketMessageBase shared.WebSocketMessageMetadata
err := json.Unmarshal(message, &socketMessageBase)
if err != nil {
@@ -79,7 +136,7 @@ func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
logger.Log.Infof("Could not unmarshal message of message type %s %v", socketMessageBase.MessageType, err)
} else {
// NOTE: This is where the message comes back from the intermediate WebSocket to code.
h.SocketOutChannel <- tappedEntryMessage.Data
socketOutChannel <- tappedEntryMessage.Data
}
case shared.WebSocketMessageTypeUpdateStatus:
var statusMessage shared.WebSocketStatusMessage
@@ -87,15 +144,7 @@ func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
if err != nil {
logger.Log.Infof("Could not unmarshal message of message type %s %v", socketMessageBase.MessageType, err)
} else {
BroadcastToBrowserClients(message)
}
case shared.WebsocketMessageTypeOutboundLink:
var outboundLinkMessage models.WebsocketOutboundLinkMessage
err := json.Unmarshal(message, &outboundLinkMessage)
if err != nil {
logger.Log.Infof("Could not unmarshal message of message type %s %v", socketMessageBase.MessageType, err)
} else {
handleTLSLink(outboundLinkMessage)
broadcastMessageFunc(message)
}
default:
logger.Log.Infof("Received socket message of type %s for which no handlers are defined", socketMessageBase.MessageType)
@@ -103,35 +152,12 @@ func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
}
}
func handleTLSLink(outboundLinkMessage models.WebsocketOutboundLinkMessage) {
resolvedNameObject := k8sResolver.Resolve(outboundLinkMessage.Data.DstIP)
if resolvedNameObject != nil {
outboundLinkMessage.Data.DstIP = resolvedNameObject.FullAddress
} else if outboundLinkMessage.Data.SuggestedResolvedName != "" {
outboundLinkMessage.Data.DstIP = outboundLinkMessage.Data.SuggestedResolvedName
}
cacheKey := fmt.Sprintf("%s -> %s:%d", outboundLinkMessage.Data.Src, outboundLinkMessage.Data.DstIP, outboundLinkMessage.Data.DstPort)
_, isInCache := providers.RecentTLSLinks.Get(cacheKey)
if isInCache {
return
} else {
providers.RecentTLSLinks.SetDefault(cacheKey, outboundLinkMessage.Data)
}
marshaledMessage, err := json.Marshal(outboundLinkMessage)
if err != nil {
logger.Log.Errorf("Error marshaling outbound link message for broadcasting: %v", err)
} else {
logger.Log.Errorf("Broadcasting outboundlink message %s", string(marshaledMessage))
BroadcastToBrowserClients(marshaledMessage)
}
}
func removeSocketUUIDFromBrowserSlice(uuidToRemove int) {
newUUIDSlice := make([]int, 0, len(browserClientSocketUUIDs))
for _, uuid := range browserClientSocketUUIDs {
func removeSocketUUIDFromTapperSlice(uuidToRemove int) {
newUUIDSlice := make([]int, 0, len(tapperClientSocketUUIDs))
for _, uuid := range tapperClientSocketUUIDs {
if uuid != uuidToRemove {
newUUIDSlice = append(newUUIDSlice, uuid)
}
}
browserClientSocketUUIDs = newUUIDSlice
tapperClientSocketUUIDs = newUUIDSlice
}

View File

@@ -18,3 +18,23 @@ func BroadcastTappedPodsStatus() {
BroadcastToBrowserClients(jsonBytes)
}
}
func SendTappedPods(socketId int, nodeToTappedPodMap shared.NodeToPodsMap) {
message := shared.CreateWebSocketTappedPodsMessage(nodeToTappedPodMap)
if jsonBytes, err := json.Marshal(message); err != nil {
logger.Log.Errorf("Could not Marshal message %v", err)
} else {
if err := SendToSocket(socketId, jsonBytes); err != nil {
logger.Log.Error(err)
}
}
}
func BroadcastTappedPodsToTappers(nodeToTappedPodMap shared.NodeToPodsMap) {
message := shared.CreateWebSocketTappedPodsMessage(nodeToTappedPodMap)
if jsonBytes, err := json.Marshal(message); err != nil {
logger.Log.Errorf("Could not Marshal message %v", err)
} else {
BroadcastToTapperClients(jsonBytes)
}
}

View File

@@ -1,25 +1,20 @@
package controllers
import (
"encoding/json"
"net/http"
"strconv"
"time"
"github.com/up9inc/mizu/agent/pkg/app"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/agent/pkg/dependency"
"github.com/up9inc/mizu/agent/pkg/entries"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/validation"
"github.com/gin-gonic/gin"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
tapApi "github.com/up9inc/mizu/tap/api"
)
func Error(c *gin.Context, err error) bool {
func HandleEntriesError(c *gin.Context, err error) bool {
if err != nil {
logger.Log.Errorf("Error getting entry: %v", err)
_ = c.Error(err)
@@ -49,45 +44,18 @@ func GetEntries(c *gin.Context) {
entriesRequest.TimeoutMs = 3000
}
data, meta, err := basenine.Fetch(shared.BasenineHost, shared.BaseninePort,
entriesRequest.LeftOff, entriesRequest.Direction, entriesRequest.Query,
entriesRequest.Limit, time.Duration(entriesRequest.TimeoutMs)*time.Millisecond)
if err != nil {
c.JSON(http.StatusInternalServerError, validationError)
}
response := &models.EntriesResponse{}
var dataSlice []interface{}
for _, row := range data {
var entry *tapApi.Entry
err = json.Unmarshal(row, &entry)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{
"error": true,
"type": "error",
"autoClose": "5000",
"msg": string(row),
})
return // exit
entriesProvider := dependency.GetInstance(dependency.EntriesProvider).(entries.EntriesProvider)
entries, metadata, err := entriesProvider.GetEntries(entriesRequest)
if !HandleEntriesError(c, err) {
baseEntries := make([]interface{}, 0)
for _, entry := range entries {
baseEntries = append(baseEntries, entry.Base)
}
extension := app.ExtensionsMap[entry.Protocol.Name]
base := extension.Dissector.Summarize(entry)
dataSlice = append(dataSlice, base)
c.JSON(http.StatusOK, models.EntriesResponse{
Data: baseEntries,
Meta: metadata,
})
}
var metadata *basenine.Metadata
err = json.Unmarshal(meta, &metadata)
if err != nil {
logger.Log.Debugf("Error recieving metadata: %v", err.Error())
}
response.Data = dataSlice
response.Meta = metadata
c.JSON(http.StatusOK, response)
}
func GetEntry(c *gin.Context) {
@@ -102,54 +70,11 @@ func GetEntry(c *gin.Context) {
}
id, _ := strconv.Atoi(c.Param("id"))
var entry *tapApi.Entry
bytes, err := basenine.Single(shared.BasenineHost, shared.BaseninePort, id, singleEntryRequest.Query)
if Error(c, err) {
return // exit
}
err = json.Unmarshal(bytes, &entry)
if err != nil {
c.JSON(http.StatusNotFound, gin.H{
"error": true,
"type": "error",
"autoClose": "5000",
"msg": string(bytes),
})
return // exit
}
extension := app.ExtensionsMap[entry.Protocol.Name]
base := extension.Dissector.Summarize(entry)
var representation []byte
representation, err = extension.Dissector.Represent(entry.Request, entry.Response)
if err != nil {
c.JSON(http.StatusNotFound, gin.H{
"error": true,
"type": "error",
"autoClose": "5000",
"msg": err.Error(),
})
return // exit
}
entriesProvider := dependency.GetInstance(dependency.EntriesProvider).(entries.EntriesProvider)
entry, err := entriesProvider.GetEntry(singleEntryRequest, id)
var rules []map[string]interface{}
var isRulesEnabled bool
if entry.Protocol.Name == "http" {
harEntry, _ := har.NewEntry(entry.Request, entry.Response, entry.StartTime, entry.ElapsedTime)
_, rulesMatched, _isRulesEnabled := models.RunValidationRulesState(*harEntry, entry.Destination.Name)
isRulesEnabled = _isRulesEnabled
inrec, _ := json.Marshal(rulesMatched)
if err := json.Unmarshal(inrec, &rules); err != nil {
logger.Log.Error(err)
}
if !HandleEntriesError(c, err) {
c.JSON(http.StatusOK, entry)
}
c.JSON(http.StatusOK, tapApi.EntryWrapper{
Protocol: entry.Protocol,
Representation: string(representation),
Data: entry,
Base: base,
Rules: rules,
IsRulesEnabled: isRulesEnabled,
})
}

View File

@@ -1,8 +1,12 @@
package controllers
import (
"bytes"
basenine "github.com/up9inc/basenine/client/go"
"net"
"net/http/httptest"
"testing"
"time"
"github.com/up9inc/mizu/agent/pkg/dependency"
"github.com/up9inc/mizu/agent/pkg/oas"
@@ -11,39 +15,55 @@ import (
)
func TestGetOASServers(t *testing.T) {
dependency.RegisterGenerator(dependency.OasGeneratorDependency, func() interface{} { return oas.GetDefaultOasGeneratorInstance() })
recorder := httptest.NewRecorder()
c, _ := gin.CreateTestContext(recorder)
oas.GetDefaultOasGeneratorInstance().Start()
oas.GetDefaultOasGeneratorInstance().GetServiceSpecs().Store("some", oas.NewGen("some"))
recorder, c := getRecorderAndContext()
GetOASServers(c)
t.Logf("Written body: %s", recorder.Body.String())
}
func TestGetOASAllSpecs(t *testing.T) {
dependency.RegisterGenerator(dependency.OasGeneratorDependency, func() interface{} { return oas.GetDefaultOasGeneratorInstance() })
recorder := httptest.NewRecorder()
c, _ := gin.CreateTestContext(recorder)
oas.GetDefaultOasGeneratorInstance().Start()
oas.GetDefaultOasGeneratorInstance().GetServiceSpecs().Store("some", oas.NewGen("some"))
recorder, c := getRecorderAndContext()
GetOASAllSpecs(c)
t.Logf("Written body: %s", recorder.Body.String())
}
func TestGetOASSpec(t *testing.T) {
dependency.RegisterGenerator(dependency.OasGeneratorDependency, func() interface{} { return oas.GetDefaultOasGeneratorInstance() })
recorder := httptest.NewRecorder()
c, _ := gin.CreateTestContext(recorder)
oas.GetDefaultOasGeneratorInstance().Start()
oas.GetDefaultOasGeneratorInstance().GetServiceSpecs().Store("some", oas.NewGen("some"))
recorder, c := getRecorderAndContext()
c.Params = []gin.Param{{Key: "id", Value: "some"}}
GetOASSpec(c)
t.Logf("Written body: %s", recorder.Body.String())
}
type fakeConn struct {
sendBuffer *bytes.Buffer
receiveBuffer *bytes.Buffer
}
func (f fakeConn) Read(p []byte) (int, error) { return f.sendBuffer.Read(p) }
func (f fakeConn) Write(p []byte) (int, error) { return f.receiveBuffer.Write(p) }
func (fakeConn) Close() error { return nil }
func (fakeConn) LocalAddr() net.Addr { return nil }
func (fakeConn) RemoteAddr() net.Addr { return nil }
func (fakeConn) SetDeadline(t time.Time) error { return nil }
func (fakeConn) SetReadDeadline(t time.Time) error { return nil }
func (fakeConn) SetWriteDeadline(t time.Time) error { return nil }
func getRecorderAndContext() (*httptest.ResponseRecorder, *gin.Context) {
dummyConn := new(basenine.Connection)
dummyConn.Conn = fakeConn{
sendBuffer: bytes.NewBufferString("\n"),
receiveBuffer: bytes.NewBufferString("\n"),
}
dependency.RegisterGenerator(dependency.OasGeneratorDependency, func() interface{} {
return oas.GetDefaultOasGeneratorInstance()
})
recorder := httptest.NewRecorder()
c, _ := gin.CreateTestContext(recorder)
oas.GetDefaultOasGeneratorInstance().Start(dummyConn)
oas.GetDefaultOasGeneratorInstance().GetServiceSpecs().Store("some", oas.NewGen("some"))
return recorder, c
}

View File

@@ -101,16 +101,18 @@ func (s *ServiceMapControllerSuite) TestGet() {
// response nodes
aNode := servicemap.ServiceMapNode{
Id: 1,
Name: TCPEntryA.Name,
Entry: TCPEntryA,
Count: 1,
Id: 1,
Name: TCPEntryA.Name,
Entry: TCPEntryA,
Resolved: true,
Count: 1,
}
bNode := servicemap.ServiceMapNode{
Id: 2,
Name: TCPEntryB.Name,
Entry: TCPEntryB,
Count: 1,
Id: 2,
Name: TCPEntryB.Name,
Entry: TCPEntryB,
Resolved: true,
Count: 1,
}
assert.Contains(response.Nodes, aNode)
assert.Contains(response.Nodes, bNode)

View File

@@ -2,6 +2,7 @@ package controllers
import (
"net/http"
core "k8s.io/api/core/v1"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/api"
@@ -12,6 +13,7 @@ import (
"github.com/up9inc/mizu/agent/pkg/up9"
"github.com/up9inc/mizu/agent/pkg/validation"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/kubernetes"
"github.com/up9inc/mizu/shared/logger"
)
@@ -30,15 +32,21 @@ func HealthCheck(c *gin.Context) {
}
func PostTappedPods(c *gin.Context) {
var requestTappedPods []*shared.PodInfo
var requestTappedPods []core.Pod
if err := c.Bind(&requestTappedPods); err != nil {
c.JSON(http.StatusBadRequest, err)
return
}
podInfos := kubernetes.GetPodInfosForPods(requestTappedPods)
logger.Log.Infof("[Status] POST request: %d tapped pods", len(requestTappedPods))
tappedPods.Set(requestTappedPods)
tappedPods.Set(podInfos)
api.BroadcastTappedPodsStatus()
nodeToTappedPodMap := kubernetes.GetNodeHostToTappedPodsMap(requestTappedPods)
tappedPods.SetNodeToTappedPodMap(nodeToTappedPodMap)
api.BroadcastTappedPodsToTappers(nodeToTappedPodMap)
}
func PostTapperStatus(c *gin.Context) {

View File

@@ -5,4 +5,7 @@ type DependencyContainerType string
const (
ServiceMapGeneratorDependency = "ServiceMapGeneratorDependency"
OasGeneratorDependency = "OasGeneratorDependency"
EntriesProvider = "EntriesProvider"
EntriesSocketStreamer = "EntriesSocketStreamer"
EntryStreamerSocketConnector = "EntryStreamerSocketConnector"
)

View File

@@ -0,0 +1,98 @@
package entries
import (
"encoding/json"
"time"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/agent/pkg/app"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
tapApi "github.com/up9inc/mizu/tap/api"
)
type EntriesProvider interface {
GetEntries(entriesRequest *models.EntriesRequest) ([]*tapApi.EntryWrapper, *basenine.Metadata, error)
GetEntry(singleEntryRequest *models.SingleEntryRequest, entryId int) (*tapApi.EntryWrapper, error)
}
type BasenineEntriesProvider struct{}
func (e *BasenineEntriesProvider) GetEntries(entriesRequest *models.EntriesRequest) ([]*tapApi.EntryWrapper, *basenine.Metadata, error) {
data, meta, err := basenine.Fetch(shared.BasenineHost, shared.BaseninePort,
entriesRequest.LeftOff, entriesRequest.Direction, entriesRequest.Query,
entriesRequest.Limit, time.Duration(entriesRequest.TimeoutMs)*time.Millisecond)
if err != nil {
return nil, nil, err
}
var dataSlice []*tapApi.EntryWrapper
for _, row := range data {
var entry *tapApi.Entry
err = json.Unmarshal(row, &entry)
if err != nil {
return nil, nil, err
}
extension := app.ExtensionsMap[entry.Protocol.Name]
base := extension.Dissector.Summarize(entry)
dataSlice = append(dataSlice, &tapApi.EntryWrapper{
Protocol: entry.Protocol,
Data: entry,
Base: base,
})
}
var metadata *basenine.Metadata
err = json.Unmarshal(meta, &metadata)
if err != nil {
logger.Log.Debugf("Error recieving metadata: %v", err.Error())
}
return dataSlice, metadata, nil
}
func (e *BasenineEntriesProvider) GetEntry(singleEntryRequest *models.SingleEntryRequest, entryId int) (*tapApi.EntryWrapper, error) {
var entry *tapApi.Entry
bytes, err := basenine.Single(shared.BasenineHost, shared.BaseninePort, entryId, singleEntryRequest.Query)
if err != nil {
return nil, err
}
err = json.Unmarshal(bytes, &entry)
if err != nil {
return nil, err
}
extension := app.ExtensionsMap[entry.Protocol.Name]
base := extension.Dissector.Summarize(entry)
var representation []byte
representation, err = extension.Dissector.Represent(entry.Request, entry.Response)
if err != nil {
return nil, err
}
var rules []map[string]interface{}
var isRulesEnabled bool
if entry.Protocol.Name == "http" {
harEntry, _ := har.NewEntry(entry.Request, entry.Response, entry.StartTime, entry.ElapsedTime)
_, rulesMatched, _isRulesEnabled := models.RunValidationRulesState(*harEntry, entry.Destination.Name)
isRulesEnabled = _isRulesEnabled
inrec, _ := json.Marshal(rulesMatched)
if err := json.Unmarshal(inrec, &rules); err != nil {
logger.Log.Error(err)
}
}
return &tapApi.EntryWrapper{
Protocol: entry.Protocol,
Representation: string(representation),
Data: entry,
Base: base,
Rules: rules,
IsRulesEnabled: isRulesEnabled,
}, nil
}

View File

@@ -67,23 +67,23 @@ func fileSize(fname string) int64 {
return fi.Size()
}
func feedEntries(fromFiles []string, isSync bool) (count int, err error) {
func feedEntries(fromFiles []string, isSync bool, gen *defaultOasGenerator) (count uint, err error) {
badFiles := make([]string, 0)
cnt := 0
cnt := uint(0)
for _, file := range fromFiles {
logger.Log.Info("Processing file: " + file)
ext := strings.ToLower(filepath.Ext(file))
eCnt := 0
eCnt := uint(0)
switch ext {
case ".har":
eCnt, err = feedFromHAR(file, isSync)
eCnt, err = feedFromHAR(file, isSync, gen)
if err != nil {
logger.Log.Warning("Failed processing file: " + err.Error())
badFiles = append(badFiles, file)
continue
}
case ".ldjson":
eCnt, err = feedFromLDJSON(file, isSync)
eCnt, err = feedFromLDJSON(file, isSync, gen)
if err != nil {
logger.Log.Warning("Failed processing file: " + err.Error())
badFiles = append(badFiles, file)
@@ -102,7 +102,7 @@ func feedEntries(fromFiles []string, isSync bool) (count int, err error) {
return cnt, nil
}
func feedFromHAR(file string, isSync bool) (int, error) {
func feedFromHAR(file string, isSync bool, gen *defaultOasGenerator) (uint, error) {
fd, err := os.Open(file)
if err != nil {
panic(err)
@@ -121,16 +121,16 @@ func feedFromHAR(file string, isSync bool) (int, error) {
return 0, err
}
cnt := 0
cnt := uint(0)
for _, entry := range harDoc.Log.Entries {
cnt += 1
feedEntry(&entry, "", isSync, file)
feedEntry(&entry, "", file, gen, cnt)
}
return cnt, nil
}
func feedEntry(entry *har.Entry, source string, isSync bool, file string) {
func feedEntry(entry *har.Entry, source string, file string, gen *defaultOasGenerator, cnt uint) {
entry.Comment = file
if entry.Response.Status == 302 {
logger.Log.Debugf("Dropped traffic entry due to permanent redirect status: %s", entry.StartedDateTime)
@@ -145,15 +145,11 @@ func feedEntry(entry *har.Entry, source string, isSync bool, file string) {
logger.Log.Errorf("Failed to parse entry URL: %v, err: %v", entry.Request.URL, err)
}
ews := EntryWithSource{Entry: *entry, Source: source, Destination: u.Host, Id: uint(0)}
if isSync {
GetDefaultOasGeneratorInstance().entriesChan <- ews // blocking variant, right?
} else {
GetDefaultOasGeneratorInstance().PushEntry(&ews)
}
ews := EntryWithSource{Entry: *entry, Source: source, Destination: u.Host, Id: cnt}
gen.handleHARWithSource(&ews)
}
func feedFromLDJSON(file string, isSync bool) (int, error) {
func feedFromLDJSON(file string, isSync bool, gen *defaultOasGenerator) (uint, error) {
fd, err := os.Open(file)
if err != nil {
panic(err)
@@ -165,7 +161,7 @@ func feedFromLDJSON(file string, isSync bool) (int, error) {
var meta map[string]interface{}
buf := strings.Builder{}
cnt := 0
cnt := uint(0)
source := ""
for {
substr, isPrefix, err := reader.ReadLine()
@@ -196,7 +192,7 @@ func feedFromLDJSON(file string, isSync bool) (int, error) {
logger.Log.Warningf("Failed decoding entry: %s", line)
} else {
cnt += 1
feedEntry(&entry, source, isSync, file)
feedEntry(&entry, source, file, gen, cnt)
}
}
}

View File

@@ -3,10 +3,13 @@ package oas
import (
"context"
"encoding/json"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap/api"
"net/url"
"sync"
"github.com/up9inc/mizu/agent/pkg/har"
"github.com/up9inc/mizu/shared/logger"
)
@@ -15,24 +18,21 @@ var (
instance *defaultOasGenerator
)
type OasGeneratorSink interface {
PushEntry(entryWithSource *EntryWithSource)
}
type OasGenerator interface {
Start()
Start(conn *basenine.Connection)
Stop()
IsStarted() bool
Reset()
GetServiceSpecs() *sync.Map
SetEntriesQuery(query string) bool
}
type defaultOasGenerator struct {
started bool
ctx context.Context
cancel context.CancelFunc
serviceSpecs *sync.Map
entriesChan chan EntryWithSource
started bool
ctx context.Context
cancel context.CancelFunc
serviceSpecs *sync.Map
dbConn *basenine.Connection
entriesQuery string
}
func GetDefaultOasGeneratorInstance() *defaultOasGenerator {
@@ -43,16 +43,33 @@ func GetDefaultOasGeneratorInstance() *defaultOasGenerator {
return instance
}
func (g *defaultOasGenerator) Start() {
func (g *defaultOasGenerator) Start(conn *basenine.Connection) {
if g.started {
return
}
if g.dbConn == nil {
if conn == nil {
logger.Log.Infof("Creating new DB connection for OAS generator to address %s:%s", shared.BasenineHost, shared.BaseninePort)
newConn, err := basenine.NewConnection(shared.BasenineHost, shared.BaseninePort)
if err != nil {
logger.Log.Error("Error connecting to DB for OAS generator, err: %v", err)
return
}
conn = newConn
}
g.dbConn = conn
}
ctx, cancel := context.WithCancel(context.Background())
g.cancel = cancel
g.ctx = ctx
g.entriesChan = make(chan EntryWithSource, 100) // buffer up to 100 entries for OAS processing
g.serviceSpecs = &sync.Map{}
g.started = true
go g.runGenerator()
}
@@ -60,8 +77,15 @@ func (g *defaultOasGenerator) Stop() {
if !g.started {
return
}
if g.dbConn != nil {
g.dbConn.Close()
g.dbConn = nil
}
g.cancel()
g.Reset()
g.reset()
g.started = false
}
@@ -70,80 +94,126 @@ func (g *defaultOasGenerator) IsStarted() bool {
}
func (g *defaultOasGenerator) runGenerator() {
// Make []byte channels to receive the data and the meta
dataChan := make(chan []byte)
metaChan := make(chan []byte)
logger.Log.Infof("Querying DB for OAS generator with query '%s'", g.entriesQuery)
g.dbConn.Query(g.entriesQuery, dataChan, metaChan)
for {
select {
case <-g.ctx.Done():
logger.Log.Infof("OAS Generator was canceled")
close(dataChan)
close(metaChan)
return
case entryWithSource, ok := <-g.entriesChan:
case metaBytes, ok := <-metaChan:
if !ok {
logger.Log.Infof("OAS Generator - meta channel closed")
break
}
logger.Log.Debugf("Meta: %s", metaBytes)
case dataBytes, ok := <-dataChan:
if !ok {
logger.Log.Infof("OAS Generator - entries channel closed")
break
}
entry := entryWithSource.Entry
u, err := url.Parse(entry.Request.URL)
logger.Log.Debugf("Data: %s", dataBytes)
e := new(api.Entry)
err := json.Unmarshal(dataBytes, e)
if err != nil {
logger.Log.Errorf("Failed to parse entry URL: %v, err: %v", entry.Request.URL, err)
}
val, found := g.serviceSpecs.Load(entryWithSource.Destination)
var gen *SpecGen
if !found {
gen = NewGen(u.Scheme + "://" + entryWithSource.Destination)
g.serviceSpecs.Store(entryWithSource.Destination, gen)
} else {
gen = val.(*SpecGen)
}
opId, err := gen.feedEntry(entryWithSource)
if err != nil {
txt, suberr := json.Marshal(entry)
if suberr == nil {
logger.Log.Debugf("Problematic entry: %s", txt)
}
logger.Log.Warningf("Failed processing entry: %s", err)
continue
}
logger.Log.Debugf("Handled entry %s as opId: %s", entry.Request.URL, opId) // TODO: set opId back to entry?
g.handleEntry(e)
}
}
}
func (g *defaultOasGenerator) Reset() {
g.serviceSpecs = &sync.Map{}
func (g *defaultOasGenerator) handleEntry(mizuEntry *api.Entry) {
if mizuEntry.Protocol.Name == "http" {
entry, err := har.NewEntry(mizuEntry.Request, mizuEntry.Response, mizuEntry.StartTime, mizuEntry.ElapsedTime)
if err != nil {
logger.Log.Warningf("Failed to turn MizuEntry %d into HAR Entry: %s", mizuEntry.Id, err)
return
}
dest := mizuEntry.Destination.Name
if dest == "" {
dest = mizuEntry.Destination.IP + ":" + mizuEntry.Destination.Port
}
entryWSource := &EntryWithSource{
Entry: *entry,
Source: mizuEntry.Source.Name,
Destination: dest,
Id: mizuEntry.Id,
}
g.handleHARWithSource(entryWSource)
} else {
logger.Log.Debugf("OAS: Unsupported protocol in entry %d: %s", mizuEntry.Id, mizuEntry.Protocol.Name)
}
}
func (g *defaultOasGenerator) PushEntry(entryWithSource *EntryWithSource) {
if !g.started {
func (g *defaultOasGenerator) handleHARWithSource(entryWSource *EntryWithSource) {
entry := entryWSource.Entry
gen := g.getGen(entryWSource.Destination, entry.Request.URL)
opId, err := gen.feedEntry(entryWSource)
if err != nil {
txt, suberr := json.Marshal(entry)
if suberr == nil {
logger.Log.Debugf("Problematic entry: %s", txt)
}
logger.Log.Warningf("Failed processing entry %d: %s", entryWSource.Id, err)
return
}
select {
case g.entriesChan <- *entryWithSource:
default:
logger.Log.Warningf("OAS Generator - entry wasn't sent to channel because the channel has no buffer or there is no receiver")
logger.Log.Debugf("Handled entry %d as opId: %s", entryWSource.Id, opId) // TODO: set opId back to entry?
}
func (g *defaultOasGenerator) getGen(dest string, urlStr string) *SpecGen {
u, err := url.Parse(urlStr)
if err != nil {
logger.Log.Errorf("Failed to parse entry URL: %v, err: %v", urlStr, err)
}
val, found := g.serviceSpecs.Load(dest)
var gen *SpecGen
if !found {
gen = NewGen(u.Scheme + "://" + dest)
g.serviceSpecs.Store(dest, gen)
} else {
gen = val.(*SpecGen)
}
return gen
}
func (g *defaultOasGenerator) reset() {
g.serviceSpecs = &sync.Map{}
}
func (g *defaultOasGenerator) GetServiceSpecs() *sync.Map {
return g.serviceSpecs
}
func (g *defaultOasGenerator) SetEntriesQuery(query string) bool {
changed := g.entriesQuery != query
g.entriesQuery = query
return changed
}
func NewDefaultOasGenerator() *defaultOasGenerator {
return &defaultOasGenerator{
started: false,
ctx: nil,
cancel: nil,
serviceSpecs: nil,
entriesChan: nil,
dbConn: nil,
}
}
type EntryWithSource struct {
Source string
Destination string
Entry har.Entry
Id uint
}
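As this hunk shows, the generator interface now exposes Start(conn), Stop, IsStarted, GetServiceSpecs and SetEntriesQuery. SetEntriesQuery only records the query and reports whether it changed, Stop closes the DB connection and clears the collected specs, and Start(nil) makes the generator open its own Basenine connection. A sketch of the resulting restart-on-query-change flow under those assumptions; the wrapper function is illustrative and not part of this diff:

package controllers

import "github.com/up9inc/mizu/agent/pkg/oas"

// applyOasEntriesQuery restarts the OAS generator only when the query changed,
// so an unchanged query keeps the current stream and accumulated specs.
func applyOasEntriesQuery(query string) {
	gen := oas.GetDefaultOasGeneratorInstance()
	if gen.SetEntriesQuery(query) && gen.IsStarted() {
		gen.Stop()     // closes the DB connection and resets collected specs
		gen.Start(nil) // reconnects and re-runs the query stream
	}
}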

View File

@@ -0,0 +1,42 @@
package oas
import (
"encoding/json"
"github.com/up9inc/mizu/agent/pkg/har"
"testing"
)
func TestOASGen(t *testing.T) {
gen := new(defaultOasGenerator)
e := new(har.Entry)
err := json.Unmarshal([]byte(`{"startedDateTime": "20000101","request": {"url": "https://host/path", "method": "GET"}, "response": {"status": 200}}`), e)
if err != nil {
panic(err)
}
ews := &EntryWithSource{
Destination: "some",
Entry: *e,
}
dummyConn := GetFakeDBConn(`{"startedDateTime": "20000101","request": {"url": "https://host/path", "method": "GET"}, "response": {"status": 200}}`)
gen.Start(dummyConn)
gen.handleHARWithSource(ews)
g, ok := gen.serviceSpecs.Load("some")
if !ok {
panic("Failed")
}
sg := g.(*SpecGen)
spec, err := sg.GetSpec()
if err != nil {
panic(err)
}
specText, _ := json.Marshal(spec)
t.Log(string(specText))
if !gen.IsStarted() {
t.Errorf("Should be started")
}
gen.Stop()
}

View File

@@ -28,6 +28,13 @@ const CountersTotal = "x-counters-total"
const CountersPerSource = "x-counters-per-source"
const SampleId = "x-sample-entry"
type EntryWithSource struct {
Source string
Destination string
Entry har.Entry
Id uint
}
type reqResp struct { // hello, generics in Go
Req *har.Request
Resp *har.Response
@@ -60,7 +67,7 @@ func (g *SpecGen) StartFromSpec(oas *openapi.OpenAPI) {
g.tree = new(Node)
for pathStr, pathObj := range oas.Paths.Items {
pathSplit := strings.Split(string(pathStr), "/")
g.tree.getOrSet(pathSplit, pathObj)
g.tree.getOrSet(pathSplit, pathObj, 0)
// clean "last entry timestamp" markers from the past
for _, pathAndOp := range g.tree.listOps() {
@@ -69,11 +76,11 @@ func (g *SpecGen) StartFromSpec(oas *openapi.OpenAPI) {
}
}
func (g *SpecGen) feedEntry(entryWithSource EntryWithSource) (string, error) {
func (g *SpecGen) feedEntry(entryWithSource *EntryWithSource) (string, error) {
g.lock.Lock()
defer g.lock.Unlock()
opId, err := g.handlePathObj(&entryWithSource)
opId, err := g.handlePathObj(entryWithSource)
if err != nil {
return "", err
}
@@ -219,7 +226,7 @@ func (g *SpecGen) handlePathObj(entryWithSource *EntryWithSource) (string, error
} else {
split = strings.Split(urlParsed.Path, "/")
}
node := g.tree.getOrSet(split, new(openapi.PathObj))
node := g.tree.getOrSet(split, new(openapi.PathObj), entryWithSource.Id)
opObj, err := handleOpObj(entryWithSource, node.pathObj)
if opObj != nil {
@@ -242,12 +249,12 @@ func handleOpObj(entryWithSource *EntryWithSource, pathObj *openapi.PathObj) (*o
return nil, nil
}
err = handleRequest(&entry.Request, opObj, isSuccess)
err = handleRequest(&entry.Request, opObj, isSuccess, entryWithSource.Id)
if err != nil {
return nil, err
}
err = handleResponse(&entry.Response, opObj, isSuccess)
err = handleResponse(&entry.Response, opObj, isSuccess, entryWithSource.Id)
if err != nil {
return nil, err
}
@@ -257,6 +264,8 @@ func handleOpObj(entryWithSource *EntryWithSource, pathObj *openapi.PathObj) (*o
return nil, err
}
setSampleID(&opObj.Extensions, entryWithSource.Id)
return opObj, nil
}
@@ -329,15 +338,10 @@ func handleCounters(opObj *openapi.Operation, success bool, entryWithSource *Ent
return err
}
err = opObj.Extensions.SetExtension(SampleId, entryWithSource.Id)
if err != nil {
return err
}
return nil
}
func handleRequest(req *har.Request, opObj *openapi.Operation, isSuccess bool) error {
func handleRequest(req *har.Request, opObj *openapi.Operation, isSuccess bool, sampleId uint) error {
// TODO: we don't handle the situation when header/qstr param can be defined on pathObj level. Also the path param defined on opObj
urlParsed, err := url.Parse(req.URL)
if err != nil {
@@ -361,7 +365,7 @@ func handleRequest(req *har.Request, opObj *openapi.Operation, isSuccess bool) e
IsIgnored: func(name string) bool { return false },
GeneralizeName: func(name string) string { return name },
}
handleNameVals(qstrGW, &opObj.Parameters, false)
handleNameVals(qstrGW, &opObj.Parameters, false, sampleId)
hdrGW := nvParams{
In: openapi.InHeader,
@@ -369,7 +373,7 @@ func handleRequest(req *har.Request, opObj *openapi.Operation, isSuccess bool) e
IsIgnored: isHeaderIgnored,
GeneralizeName: strings.ToLower,
}
handleNameVals(hdrGW, &opObj.Parameters, true)
handleNameVals(hdrGW, &opObj.Parameters, true, sampleId)
if isSuccess {
reqBody, err := getRequestBody(req, opObj)
@@ -378,12 +382,14 @@ func handleRequest(req *har.Request, opObj *openapi.Operation, isSuccess bool) e
}
if reqBody != nil {
setSampleID(&reqBody.Extensions, sampleId)
if req.PostData.Text == "" {
reqBody.Required = false
} else {
reqCtype, _ := getReqCtype(req)
reqMedia, err := fillContent(reqResp{Req: req}, reqBody.Content, reqCtype)
reqMedia, err := fillContent(reqResp{Req: req}, reqBody.Content, reqCtype, sampleId)
if err != nil {
return err
}
@@ -395,18 +401,20 @@ func handleRequest(req *har.Request, opObj *openapi.Operation, isSuccess bool) e
return nil
}
func handleResponse(resp *har.Response, opObj *openapi.Operation, isSuccess bool) error {
func handleResponse(resp *har.Response, opObj *openapi.Operation, isSuccess bool, sampleId uint) error {
// TODO: we don't support "default" response
respObj, err := getResponseObj(resp, opObj, isSuccess)
if err != nil {
return err
}
handleRespHeaders(resp.Headers, respObj)
setSampleID(&respObj.Extensions, sampleId)
handleRespHeaders(resp.Headers, respObj, sampleId)
respCtype := getRespCtype(resp)
respContent := respObj.Content
respMedia, err := fillContent(reqResp{Resp: resp}, respContent, respCtype)
respMedia, err := fillContent(reqResp{Resp: resp}, respContent, respCtype, sampleId)
if err != nil {
return err
}
@@ -414,7 +422,7 @@ func handleResponse(resp *har.Response, opObj *openapi.Operation, isSuccess bool
return nil
}
func handleRespHeaders(reqHeaders []har.Header, respObj *openapi.ResponseObj) {
func handleRespHeaders(reqHeaders []har.Header, respObj *openapi.ResponseObj, sampleId uint) {
visited := map[string]*openapi.HeaderObj{}
for _, pair := range reqHeaders {
if isHeaderIgnored(pair.Name) {
@@ -436,6 +444,8 @@ func handleRespHeaders(reqHeaders []har.Header, respObj *openapi.ResponseObj) {
logger.Log.Warningf("Failed to add example to a parameter: %s", err)
}
visited[nameGeneral] = param
setSampleID(&param.Extensions, sampleId)
}
// maintain "required" flag
@@ -456,13 +466,15 @@ func handleRespHeaders(reqHeaders []har.Header, respObj *openapi.ResponseObj) {
}
}
func fillContent(reqResp reqResp, respContent openapi.Content, ctype string) (*openapi.MediaType, error) {
func fillContent(reqResp reqResp, respContent openapi.Content, ctype string, sampleId uint) (*openapi.MediaType, error) {
content, found := respContent[ctype]
if !found {
respContent[ctype] = &openapi.MediaType{}
content = respContent[ctype]
}
setSampleID(&content.Extensions, sampleId)
var text string
var isBinary bool
if reqResp.Req != nil {
@@ -474,10 +486,10 @@ func fillContent(reqResp reqResp, respContent openapi.Content, ctype string) (*o
if !isBinary && text != "" {
var exampleMsg []byte
// try treating it as json
any, isJSON := anyJSON(text)
anyVal, isJSON := anyJSON(text)
if isJSON {
// re-marshal with forced indent
if msg, err := json.MarshalIndent(any, "", "\t"); err != nil {
if msg, err := json.MarshalIndent(anyVal, "", "\t"); err != nil {
panic("Failed to re-marshal value, super-strange")
} else {
exampleMsg = msg

View File

@@ -1,25 +1,37 @@
package oas
import (
"bytes"
"encoding/json"
"io/ioutil"
"net"
"os"
"regexp"
"strings"
"sync"
"testing"
"time"
"github.com/chanced/openapi"
"github.com/op/go-logging"
"github.com/up9inc/mizu/shared/logger"
"github.com/wI2L/jsondiff"
basenine "github.com/up9inc/basenine/client/go"
"github.com/up9inc/mizu/agent/pkg/har"
)
func GetFakeDBConn(send string) *basenine.Connection {
dummyConn := new(basenine.Connection)
dummyConn.Conn = FakeConn{
sendBuffer: bytes.NewBufferString(send),
receiveBuffer: bytes.NewBufferString(""),
}
return dummyConn
}
// if started via env, write file into subdir
func outputSpec(label string, spec *openapi.OpenAPI, t *testing.T) string {
content, err := json.MarshalIndent(spec, "", "\t")
content, err := json.MarshalIndent(spec, "", " ")
if err != nil {
panic(err)
}
@@ -42,20 +54,22 @@ func outputSpec(label string, spec *openapi.OpenAPI, t *testing.T) string {
}
func TestEntries(t *testing.T) {
logger.InitLoggerStd(logging.INFO)
//logger.InitLoggerStd(logging.INFO) causes race condition
files, err := getFiles("./test_artifacts/")
if err != nil {
t.Log(err)
t.FailNow()
}
GetDefaultOasGeneratorInstance().Start()
loadStartingOAS("test_artifacts/catalogue.json", "catalogue")
loadStartingOAS("test_artifacts/trcc.json", "trcc-api-service")
gen := NewDefaultOasGenerator()
gen.serviceSpecs = new(sync.Map)
loadStartingOAS("test_artifacts/catalogue.json", "catalogue", gen.serviceSpecs)
loadStartingOAS("test_artifacts/trcc.json", "trcc-api-service", gen.serviceSpecs)
go func() {
for {
time.Sleep(1 * time.Second)
GetDefaultOasGeneratorInstance().GetServiceSpecs().Range(func(key, val interface{}) bool {
gen.serviceSpecs.Range(func(key, val interface{}) bool {
svc := key.(string)
t.Logf("Getting spec for %s", svc)
gen := val.(*SpecGen)
@@ -68,16 +82,14 @@ func TestEntries(t *testing.T) {
}
}()
cnt, err := feedEntries(files, true)
cnt, err := feedEntries(files, true, gen)
if err != nil {
t.Log(err)
t.Fail()
}
waitQueueProcessed()
svcs := strings.Builder{}
GetDefaultOasGeneratorInstance().GetServiceSpecs().Range(func(key, val interface{}) bool {
gen.serviceSpecs.Range(func(key, val interface{}) bool {
gen := val.(*SpecGen)
svc := key.(string)
svcs.WriteString(svc + ",")
@@ -99,7 +111,7 @@ func TestEntries(t *testing.T) {
return true
})
GetDefaultOasGeneratorInstance().GetServiceSpecs().Range(func(key, val interface{}) bool {
gen.serviceSpecs.Range(func(key, val interface{}) bool {
svc := key.(string)
gen := val.(*SpecGen)
spec, err := gen.GetSpec()
@@ -123,20 +135,18 @@ func TestEntries(t *testing.T) {
}
func TestFileSingle(t *testing.T) {
GetDefaultOasGeneratorInstance().Start()
GetDefaultOasGeneratorInstance().Reset()
gen := NewDefaultOasGenerator()
gen.serviceSpecs = new(sync.Map)
// loadStartingOAS()
file := "test_artifacts/params.har"
files := []string{file}
cnt, err := feedEntries(files, true)
cnt, err := feedEntries(files, true, gen)
if err != nil {
logger.Log.Warning("Failed processing file: " + err.Error())
t.Fail()
}
waitQueueProcessed()
GetDefaultOasGeneratorInstance().GetServiceSpecs().Range(func(key, val interface{}) bool {
gen.serviceSpecs.Range(func(key, val interface{}) bool {
svc := key.(string)
gen := val.(*SpecGen)
spec, err := gen.GetSpec()
@@ -189,18 +199,7 @@ func TestFileSingle(t *testing.T) {
logger.Log.Infof("Processed entries: %d", cnt)
}
func waitQueueProcessed() {
for {
time.Sleep(100 * time.Millisecond)
queue := len(GetDefaultOasGeneratorInstance().entriesChan)
logger.Log.Infof("Queue: %d", queue)
if queue < 1 {
break
}
}
}
func loadStartingOAS(file string, label string) {
func loadStartingOAS(file string, label string, specs *sync.Map) {
fd, err := os.Open(file)
if err != nil {
panic(err)
@@ -222,12 +221,14 @@ func loadStartingOAS(file string, label string) {
gen := NewGen(label)
gen.StartFromSpec(doc)
GetDefaultOasGeneratorInstance().GetServiceSpecs().Store(label, gen)
specs.Store(label, gen)
}
func TestEntriesNegative(t *testing.T) {
gen := NewDefaultOasGenerator()
gen.serviceSpecs = new(sync.Map)
files := []string{"invalid"}
_, err := feedEntries(files, false)
_, err := feedEntries(files, false, gen)
if err == nil {
t.Logf("Should have failed")
t.Fail()
@@ -235,8 +236,10 @@ func TestEntriesNegative(t *testing.T) {
}
func TestEntriesPositive(t *testing.T) {
gen := NewDefaultOasGenerator()
gen.serviceSpecs = new(sync.Map)
files := []string{"test_artifacts/params.har"}
_, err := feedEntries(files, false)
_, err := feedEntries(files, false, gen)
if err != nil {
t.Logf("Failed")
t.Fail()
@@ -275,3 +278,17 @@ func TestLoadValid3_1(t *testing.T) {
t.FailNow()
}
}
type FakeConn struct {
sendBuffer *bytes.Buffer
receiveBuffer *bytes.Buffer
}
func (f FakeConn) Read(p []byte) (int, error) { return f.sendBuffer.Read(p) }
func (f FakeConn) Write(p []byte) (int, error) { return f.receiveBuffer.Write(p) }
func (FakeConn) Close() error { return nil }
func (FakeConn) LocalAddr() net.Addr { return nil }
func (FakeConn) RemoteAddr() net.Addr { return nil }
func (FakeConn) SetDeadline(t time.Time) error { return nil }
func (FakeConn) SetReadDeadline(t time.Time) error { return nil }
func (FakeConn) SetWriteDeadline(t time.Time) error { return nil }

View File

@@ -21,9 +21,11 @@
"description": "Successful call with status 200",
"content": {
"application/json": {
"example": null
"example": null,
"x-sample-entry": 4
}
}
},
"x-sample-entry": 4
}
},
"x-counters-per-source": {
@@ -45,7 +47,7 @@
"sumDuration": 0
},
"x-last-seen-ts": 1567750580.04,
"x-sample-entry": 0
"x-sample-entry": 4
}
},
"/appears-twice": {
@@ -58,9 +60,11 @@
"description": "Successful call with status 200",
"content": {
"application/json": {
"example": null
"example": null,
"x-sample-entry": 6
}
}
},
"x-sample-entry": 6
}
},
"x-counters-per-source": {
@@ -82,7 +86,7 @@
"sumDuration": 1
},
"x-last-seen-ts": 1567750581.74,
"x-sample-entry": 0
"x-sample-entry": 6
}
},
"/body-optional": {
@@ -94,8 +98,11 @@
"200": {
"description": "Successful call with status 200",
"content": {
"": {}
}
"": {
"x-sample-entry": 12
}
},
"x-sample-entry": 12
}
},
"x-counters-per-source": {
@@ -117,14 +124,16 @@
"sumDuration": 0.01
},
"x-last-seen-ts": 1567750581.75,
"x-sample-entry": 0,
"x-sample-entry": 12,
"requestBody": {
"description": "Generic request body",
"content": {
"application/json": {
"example": "{\"key\", \"val\"}"
"example": "{\"key\", \"val\"}",
"x-sample-entry": 11
}
}
},
"x-sample-entry": 12
}
}
},
@@ -137,8 +146,11 @@
"200": {
"description": "Successful call with status 200",
"content": {
"": {}
}
"": {
"x-sample-entry": 13
}
},
"x-sample-entry": 13
}
},
"x-counters-per-source": {
@@ -160,15 +172,17 @@
"sumDuration": 0
},
"x-last-seen-ts": 1567750581.75,
"x-sample-entry": 0,
"x-sample-entry": 13,
"requestBody": {
"description": "Generic request body",
"content": {
"": {
"example": "body exists"
"example": "body exists",
"x-sample-entry": 13
}
},
"required": true
"required": true,
"x-sample-entry": 13
}
}
},
@@ -182,9 +196,11 @@
"description": "Successful call with status 200",
"content": {
"": {
"example": {}
"example": {},
"x-sample-entry": 9
}
}
},
"x-sample-entry": 9
}
},
"x-counters-per-source": {
@@ -206,7 +222,7 @@
"sumDuration": 0
},
"x-last-seen-ts": 1567750582.74,
"x-sample-entry": 0,
"x-sample-entry": 9,
"requestBody": {
"description": "Generic request body",
"content": {
@@ -233,10 +249,12 @@
}
}
},
"example": "--BOUNDARY\r\nContent-Disposition: form-data; name=\"file\"; filename=\"metadata.json\"\r\nContent-Type: application/json\r\n\r\n{\"functions\": 123}\r\n--BOUNDARY\r\nContent-Disposition: form-data; name=\"path\"\r\n\r\n/content/components\r\n--BOUNDARY--\r\n"
"example": "--BOUNDARY\r\nContent-Disposition: form-data; name=\"file\"; filename=\"metadata.json\"\r\nContent-Type: application/json\r\n\r\n{\"functions\": 123}\r\n--BOUNDARY\r\nContent-Disposition: form-data; name=\"path\"\r\n\r\n/content/components\r\n--BOUNDARY--\r\n",
"x-sample-entry": 9
}
},
"required": true
"required": true,
"x-sample-entry": 9
}
}
},
@@ -249,8 +267,11 @@
"200": {
"description": "Successful call with status 200",
"content": {
"": {}
}
"": {
"x-sample-entry": 8
}
},
"x-sample-entry": 8
}
},
"x-counters-per-source": {
@@ -272,7 +293,7 @@
"sumDuration": 1
},
"x-last-seen-ts": 1567750581.74,
"x-sample-entry": 0,
"x-sample-entry": 8,
"requestBody": {
"description": "Generic request body",
"content": {
@@ -312,10 +333,12 @@
}
}
},
"example": "agent-id=ade\u0026callback-url=\u0026token=sometoken"
"example": "agent-id=ade\u0026callback-url=\u0026token=sometoken",
"x-sample-entry": 8
}
},
"required": true
"required": true,
"x-sample-entry": 8
}
}
},
@@ -331,8 +354,11 @@
"200": {
"description": "Successful call with status 200",
"content": {
"": {}
}
"": {
"x-sample-entry": 14
}
},
"x-sample-entry": 14
}
},
"x-counters-per-source": {
@@ -354,7 +380,7 @@
"sumDuration": 0
},
"x-last-seen-ts": 1567750582,
"x-sample-entry": 0
"x-sample-entry": 14
},
"parameters": [
{
@@ -369,7 +395,8 @@
"example #0": {
"value": "234324"
}
}
},
"x-sample-entry": 14
}
]
},
@@ -385,8 +412,11 @@
"200": {
"description": "Successful call with status 200",
"content": {
"": {}
}
"": {
"x-sample-entry": 18
}
},
"x-sample-entry": 18
}
},
"x-counters-per-source": {
@@ -408,7 +438,7 @@
"sumDuration": 9.53e-7
},
"x-last-seen-ts": 1567750582.00,
"x-sample-entry": 0
"x-sample-entry": 18
},
"parameters": [
{
@@ -436,7 +466,8 @@
"example #4": {
"value": "prefix-gibberish-afterwards"
}
}
},
"x-sample-entry": 19
}
]
},
@@ -452,8 +483,11 @@
"200": {
"description": "Successful call with status 200",
"content": {
"": {}
}
"": {
"x-sample-entry": 15
}
},
"x-sample-entry": 15
}
},
"x-counters-per-source": {
@@ -475,7 +509,7 @@
"sumDuration": 0
},
"x-last-seen-ts": 1567750582.00,
"x-sample-entry": 0
"x-sample-entry": 15
},
"parameters": [
{
@@ -503,7 +537,8 @@
"example #4": {
"value": "prefix-gibberish-afterwards"
}
}
},
"x-sample-entry": 19
}
]
},
@@ -519,8 +554,11 @@
"200": {
"description": "Successful call with status 200",
"content": {
"": {}
}
"": {
"x-sample-entry": 16
}
},
"x-sample-entry": 16
}
},
"x-counters-per-source": {
@@ -542,7 +580,7 @@
"sumDuration": 0
},
"x-last-seen-ts": 1567750582.00,
"x-sample-entry": 0
"x-sample-entry": 16
},
"parameters": [
{
@@ -570,7 +608,8 @@
"example #4": {
"value": "prefix-gibberish-afterwards"
}
}
},
"x-sample-entry": 19
}
]
},
@@ -586,8 +625,11 @@
"200": {
"description": "Successful call with status 200",
"content": {
"": {}
}
"": {
"x-sample-entry": 19
}
},
"x-sample-entry": 19
}
},
"x-counters-per-source": {
@@ -609,7 +651,7 @@
"sumDuration": 0
},
"x-last-seen-ts": 1567750582.00,
"x-sample-entry": 0
"x-sample-entry": 19
},
"parameters": [
{
@@ -624,7 +666,8 @@
"example #0": {
"value": "23421"
}
}
},
"x-sample-entry": 19
},
{
"name": "parampatternId",
@@ -651,7 +694,8 @@
"example #4": {
"value": "prefix-gibberish-afterwards"
}
}
},
"x-sample-entry": 19
}
]
},
@@ -665,9 +709,11 @@
"description": "Successful call with status 200",
"content": {
"application/json": {
"example": null
"example": null,
"x-sample-entry": 3
}
}
},
"x-sample-entry": 3
}
},
"x-counters-per-source": {
@@ -689,7 +735,7 @@
"sumDuration": 0
},
"x-last-seen-ts": 1567750579.74,
"x-sample-entry": 0
"x-sample-entry": 3
},
"parameters": [
{
@@ -707,7 +753,8 @@
"example #1": {
"value": "<UUID4>"
}
}
},
"x-sample-entry": 3
}
]
},
@@ -720,8 +767,11 @@
"200": {
"description": "Successful call with status 200",
"content": {
"text/html": {}
}
"text/html": {
"x-sample-entry": 1
}
},
"x-sample-entry": 1
}
},
"x-counters-per-source": {
@@ -743,7 +793,7 @@
"sumDuration": 0
},
"x-last-seen-ts": 1567750483.86,
"x-sample-entry": 0
"x-sample-entry": 1
},
"parameters": [
{
@@ -761,7 +811,8 @@
"example #1": {
"value": "<UUID4>"
}
}
},
"x-sample-entry": 3
}
]
},
@@ -775,9 +826,11 @@
"description": "Successful call with status 200",
"content": {
"application/json": {
"example": null
"example": null,
"x-sample-entry": 2
}
}
},
"x-sample-entry": 2
}
},
"x-counters-per-source": {
@@ -799,7 +852,7 @@
"sumDuration": 0
},
"x-last-seen-ts": 1567750578.74,
"x-sample-entry": 0
"x-sample-entry": 2
},
"parameters": [
{
@@ -817,7 +870,8 @@
"example #1": {
"value": "<UUID4>"
}
}
},
"x-sample-entry": 3
}
]
}

View File

@@ -20,7 +20,7 @@ type Node struct {
children []*Node
}
func (n *Node) getOrSet(path NodePath, existingPathObj *openapi.PathObj) (node *Node) {
func (n *Node) getOrSet(path NodePath, existingPathObj *openapi.PathObj, sampleId uint) (node *Node) {
if existingPathObj == nil {
panic("Invalid function call")
}
@@ -70,6 +70,10 @@ func (n *Node) getOrSet(path NodePath, existingPathObj *openapi.PathObj) (node *
}
}
if node.pathParam != nil {
setSampleID(&node.pathParam.Extensions, sampleId)
}
// add example if it's a gibberish chunk
if node.pathParam != nil && !chunkIsParam {
exmp := &node.pathParam.Examples
@@ -85,7 +89,7 @@ func (n *Node) getOrSet(path NodePath, existingPathObj *openapi.PathObj) (node *
// TODO: eat up trailing slash, in a smart way: node.pathObj!=nil && path[1]==""
if len(path) > 1 {
return node.getOrSet(path[1:], existingPathObj)
return node.getOrSet(path[1:], existingPathObj, sampleId)
} else if node.pathObj == nil {
node.pathObj = existingPathObj
}

View File

@@ -20,10 +20,10 @@ func TestTree(t *testing.T) {
}
tree := new(Node)
for _, tc := range testCases {
for i, tc := range testCases {
split := strings.Split(tc.inp, "/")
pathObj := new(openapi.PathObj)
node := tree.getOrSet(split, pathObj)
node := tree.getOrSet(split, pathObj, uint(i))
fillPathParams(node, pathObj)

View File

@@ -115,7 +115,7 @@ type nvParams struct {
GeneralizeName func(name string) string
}
func handleNameVals(gw nvParams, params **openapi.ParameterList, checkIgnore bool) {
func handleNameVals(gw nvParams, params **openapi.ParameterList, checkIgnore bool, sampleId uint) {
visited := map[string]*openapi.ParameterObj{}
for _, pair := range gw.Pairs {
if (checkIgnore && gw.IsIgnored(pair.Name)) || pair.Name == "" {
@@ -137,6 +137,8 @@ func handleNameVals(gw nvParams, params **openapi.ParameterList, checkIgnore boo
logger.Log.Warningf("Failed to add example to a parameter: %s", err)
}
visited[nameGeneral] = param
setSampleID(&param.Extensions, sampleId)
}
// maintain "required" flag
@@ -474,3 +476,15 @@ func intersectSliceWithMap(required []string, names map[string]struct{}) []strin
}
return required
}
func setSampleID(extensions *openapi.Extensions, id uint) {
if id > 0 {
if *extensions == nil {
*extensions = openapi.Extensions{}
}
err := (extensions).SetExtension(SampleId, id)
if err != nil {
logger.Log.Warningf("Failed to set sample ID: %s", err)
}
}
}
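
The nil check inside setSampleID is the usual Go pattern of lazily allocating a map before its first write. A minimal, self-contained sketch of the same idea follows; it uses a plain map type in place of the real openapi.Extensions/SetExtension API shown above, so the names here are illustrative only.

package main

import "fmt"

// Extensions stands in for the openapi.Extensions type used in the diff above.
type Extensions map[string]interface{}

// setSampleID mirrors the helper above: it skips the zero ID and allocates
// the map on first use before recording the sample number.
func setSampleID(ext *Extensions, id uint) {
	if id > 0 {
		if *ext == nil {
			*ext = Extensions{}
		}
		(*ext)["x-sample-entry"] = id
	}
}

func main() {
	var ext Extensions // nil until the first sample is recorded
	setSampleID(&ext, 9)
	fmt.Println(ext) // map[x-sample-entry:9]
}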

View File

@@ -14,9 +14,10 @@ import (
const FilePath = shared.DataDirPath + "tapped-pods.json"
var (
lock = &sync.Mutex{}
syncOnce sync.Once
tappedPods []*shared.PodInfo
lock = &sync.Mutex{}
syncOnce sync.Once
tappedPods []*shared.PodInfo
nodeHostToTappedPodsMap shared.NodeToPodsMap
)
func Get() []*shared.PodInfo {
@@ -55,3 +56,14 @@ func GetTappedPodsStatus() []shared.TappedPodStatus {
return tappedPodsStatus
}
func SetNodeToTappedPodMap(nodeToTappedPodsMap shared.NodeToPodsMap) {
summary := nodeToTappedPodsMap.Summary()
logger.Log.Infof("Setting node to tapped pods map to %v", summary)
nodeHostToTappedPodsMap = nodeToTappedPodsMap
}
func GetNodeToTappedPodMap() shared.NodeToPodsMap {
return nodeHostToTappedPodsMap
}

View File

@@ -18,10 +18,11 @@ type ServiceMapResponse struct {
}
type ServiceMapNode struct {
Id int `json:"id"`
Name string `json:"name"`
Entry *tapApi.TCP `json:"entry"`
Count int `json:"count"`
Id int `json:"id"`
Name string `json:"name"`
Entry *tapApi.TCP `json:"entry"`
Count int `json:"count"`
Resolved bool `json:"resolved"`
}
type ServiceMapEdge struct {

View File

@@ -224,35 +224,41 @@ func (s *defaultServiceMap) GetStatus() ServiceMapStatus {
}
func (s *defaultServiceMap) GetNodes() []ServiceMapNode {
var nodes []ServiceMapNode
nodes := []ServiceMapNode{}
for i, n := range s.graph.Nodes {
nodes = append(nodes, ServiceMapNode{
Id: n.id,
Name: string(i),
Entry: n.entry,
Count: n.count,
Id: n.id,
Name: string(i),
Resolved: n.entry.Name != UnresolvedNodeName,
Entry: n.entry,
Count: n.count,
})
}
return nodes
}
func (s *defaultServiceMap) GetEdges() []ServiceMapEdge {
var edges []ServiceMapEdge
edges := []ServiceMapEdge{}
for u, m := range s.graph.Edges {
for v := range m {
for _, p := range s.graph.Edges[u][v].data {
edges = append(edges, ServiceMapEdge{
Source: ServiceMapNode{
Id: s.graph.Nodes[u].id,
Name: string(u),
Entry: s.graph.Nodes[u].entry,
Count: s.graph.Nodes[u].count,
Id: s.graph.Nodes[u].id,
Name: string(u),
Entry: s.graph.Nodes[u].entry,
Resolved: s.graph.Nodes[u].entry.Name != UnresolvedNodeName,
Count: s.graph.Nodes[u].count,
},
Destination: ServiceMapNode{
Id: s.graph.Nodes[v].id,
Name: string(v),
Entry: s.graph.Nodes[v].entry,
Count: s.graph.Nodes[v].count,
Id: s.graph.Nodes[v].id,
Name: string(v),
Entry: s.graph.Nodes[v].entry,
Resolved: s.graph.Nodes[v].entry.Name != UnresolvedNodeName,
Count: s.graph.Nodes[v].count,
},
Count: p.count,
Protocol: p.protocol,
@@ -260,6 +266,7 @@ func (s *defaultServiceMap) GetEdges() []ServiceMapEdge {
}
}
}
return edges
}
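
The switch from declaring nil slices to explicitly initialized empty slices in GetNodes and GetEdges matters for the JSON response: encoding/json renders a nil slice as null and an empty slice as [], so an empty service map now serializes to [] instead of null. A quick sketch of the difference:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilNodes []string    // declared but never appended to
	emptyNodes := []string{} // explicitly initialized, as in GetNodes/GetEdges above

	a, _ := json.Marshal(nilNodes)
	b, _ := json.Marshal(emptyNodes)

	fmt.Println(string(a)) // null
	fmt.Println(string(b)) // []
}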

View File

@@ -403,10 +403,10 @@ func (s *ServiceMapEnabledSuite) TestServiceMap() {
assert.Equal(0, status.EdgeCount)
// Nodes after reset
assert.Equal([]ServiceMapNode(nil), nodes)
assert.Equal([]ServiceMapNode{}, nodes)
// Edges after reset
assert.Equal([]ServiceMapEdge(nil), edges)
assert.Equal([]ServiceMapEdge{}, edges)
}
func TestServiceMapSuite(t *testing.T) {

View File

@@ -10,8 +10,6 @@ import (
"net/url"
"time"
"github.com/up9inc/mizu/shared/kubernetes"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
@@ -83,15 +81,13 @@ func (provider *Provider) ReportTapperStatus(tapperStatus shared.TapperStatus) e
func (provider *Provider) ReportTappedPods(pods []core.Pod) error {
tappedPodsUrl := fmt.Sprintf("%s/status/tappedPods", provider.url)
podInfos := kubernetes.GetPodInfosForPods(pods)
if jsonValue, err := json.Marshal(podInfos); err != nil {
if jsonValue, err := json.Marshal(pods); err != nil {
return fmt.Errorf("failed Marshal the tapped pods %w", err)
} else {
if _, err := utils.Post(tappedPodsUrl, "application/json", bytes.NewBuffer(jsonValue), provider.client); err != nil {
return fmt.Errorf("failed sending to API server the tapped pods %w", err)
} else {
logger.Log.Debugf("Reported to server API about %d taped pods successfully", len(podInfos))
logger.Log.Debugf("Reported to server API about %d taped pods successfully", len(pods))
return nil
}
}

View File

@@ -111,9 +111,9 @@ func checkRulesPermissions(ctx context.Context, kubernetesProvider *kubernetes.P
func checkPermissionExist(group string, resource string, verb string, namespace string, exist bool, err error) bool {
var groupAndNamespace string
if group != "" && namespace != "" {
groupAndNamespace = fmt.Sprintf("in group '%v' and namespace '%v'", group, namespace)
groupAndNamespace = fmt.Sprintf("in api group '%v' and namespace '%v'", group, namespace)
} else if group != "" {
groupAndNamespace = fmt.Sprintf("in group '%v'", group)
groupAndNamespace = fmt.Sprintf("in api group '%v'", group)
} else if namespace != "" {
groupAndNamespace = fmt.Sprintf("in namespace '%v'", namespace)
}

View File

@@ -27,13 +27,21 @@ func runMizuCheck() {
checkPassed = check.KubernetesVersion(kubernetesVersion)
}
if config.Config.Check.PreTap {
if checkPassed {
checkPassed = check.TapKubernetesPermissions(ctx, embedFS, kubernetesProvider)
if config.Config.Check.PreTap || config.Config.Check.PreInstall || config.Config.Check.ImagePull {
if config.Config.Check.PreTap {
if checkPassed {
checkPassed = check.TapKubernetesPermissions(ctx, embedFS, kubernetesProvider)
}
} else if config.Config.Check.PreInstall {
if checkPassed {
checkPassed = check.InstallKubernetesPermissions(ctx, kubernetesProvider)
}
}
} else if config.Config.Check.PreInstall {
if checkPassed {
checkPassed = check.InstallKubernetesPermissions(ctx, kubernetesProvider)
if config.Config.Check.ImagePull {
if checkPassed {
checkPassed = check.ImagePullInCluster(ctx, kubernetesProvider)
}
}
} else {
if checkPassed {
@@ -45,12 +53,6 @@ func runMizuCheck() {
}
}
if config.Config.Check.ImagePull {
if checkPassed {
checkPassed = check.ImagePullInCluster(ctx, kubernetesProvider)
}
}
if checkPassed {
logger.Log.Infof("\nStatus check results are %v", fmt.Sprintf(uiUtils.Green, "√"))
} else {

View File

@@ -40,7 +40,7 @@ type ConfigStruct struct {
HeadlessMode bool `yaml:"headless" default:"false"`
LogLevelStr string `yaml:"log-level,omitempty" default:"INFO" readonly:""`
ServiceMap bool `yaml:"service-map" default:"true"`
OAS bool `yaml:"oas,omitempty" default:"false" readonly:""`
OAS bool `yaml:"oas" default:"true"`
Elastic shared.ElasticConfig `yaml:"elastic"`
}

View File

@@ -11,7 +11,7 @@ require (
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7
github.com/spf13/cobra v1.3.0
github.com/spf13/pflag v1.0.5
github.com/up9inc/basenine/server/lib v0.0.0-20220317230530-8472d80307f6
github.com/up9inc/basenine/server/lib v0.0.0-20220326121918-785f3061c8ce
github.com/up9inc/mizu/shared v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0
golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8

View File

@@ -600,8 +600,8 @@ github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/up9inc/basenine/server/lib v0.0.0-20220317230530-8472d80307f6 h1:+RZTD+HdfIW2SMbc65yWkruTY+g5/1Av074m62A74ls=
github.com/up9inc/basenine/server/lib v0.0.0-20220317230530-8472d80307f6/go.mod h1:ZIkxWiJm65jYQIso9k+OZKhR7gQ1we2jNyE2kQX9IQI=
github.com/up9inc/basenine/server/lib v0.0.0-20220326121918-785f3061c8ce h1:PypqybjmuxftGkX4NmP4JAUyEykZj2r6W4r9lnRZ/kE=
github.com/up9inc/basenine/server/lib v0.0.0-20220326121918-785f3061c8ce/go.mod h1:ZIkxWiJm65jYQIso9k+OZKhR7gQ1we2jNyE2kQX9IQI=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
github.com/xlab/treeprint v1.1.0 h1:G/1DjNkPpfZCFt9CSh6b5/nY4VimlbHF3Rh4obvtzDk=

View File

@@ -1,18 +1,20 @@
FROM dockcross/linux-arm64-musl:latest AS builder-from-amd64-to-arm64v8
# Install Go
RUN curl https://go.dev/dl/go1.17.6.linux-amd64.tar.gz -Lo ./go.linux-amd64.tar.gz
RUN curl https://go.dev/dl/go1.17.6.linux-amd64.tar.gz.asc -Lo ./go.linux-amd64.tar.gz.asc
RUN curl https://dl.google.com/dl/linux/linux_signing_key.pub -Lo linux_signing_key.pub
RUN gpg --import linux_signing_key.pub && gpg --verify ./go.linux-amd64.tar.gz.asc ./go.linux-amd64.tar.gz
RUN rm -rf /usr/local/go && tar -C /usr/local -xzf go.linux-amd64.tar.gz
RUN curl https://go.dev/dl/go1.17.6.linux-amd64.tar.gz -Lo ./go.linux-amd64.tar.gz \
&& curl https://go.dev/dl/go1.17.6.linux-amd64.tar.gz.asc -Lo ./go.linux-amd64.tar.gz.asc \
&& curl https://dl.google.com/dl/linux/linux_signing_key.pub -Lo linux_signing_key.pub \
&& gpg --import linux_signing_key.pub && gpg --verify ./go.linux-amd64.tar.gz.asc ./go.linux-amd64.tar.gz \
&& rm -rf /usr/local/go && tar -C /usr/local -xzf go.linux-amd64.tar.gz
ENV PATH "$PATH:/usr/local/go/bin"
# Compile libpcap from source
RUN curl https://www.tcpdump.org/release/libpcap-1.10.1.tar.gz -Lo ./libpcap.tar.gz
RUN curl https://www.tcpdump.org/release/libpcap-1.10.1.tar.gz.sig -Lo ./libpcap.tar.gz.sig
RUN curl https://www.tcpdump.org/release/signing-key.asc -Lo ./signing-key.asc
RUN gpg --import signing-key.asc && gpg --verify libpcap.tar.gz.sig libpcap.tar.gz
RUN tar -xzf libpcap.tar.gz && mv ./libpcap-* ./libpcap
RUN cd ./libpcap && ./configure --host=arm && make
RUN cp /work/libpcap/libpcap.a /usr/xcc/aarch64-linux-musl-cross/lib/gcc/aarch64-linux-musl/*/
RUN curl https://www.tcpdump.org/release/libpcap-1.10.1.tar.gz -Lo ./libpcap.tar.gz \
&& curl https://www.tcpdump.org/release/libpcap-1.10.1.tar.gz.sig -Lo ./libpcap.tar.gz.sig \
&& curl https://www.tcpdump.org/release/signing-key.asc -Lo ./signing-key.asc \
&& gpg --import signing-key.asc && gpg --verify libpcap.tar.gz.sig libpcap.tar.gz \
&& tar -xzf libpcap.tar.gz && mv ./libpcap-* ./libpcap
WORKDIR /work/libpcap
RUN ./configure --host=arm && make \
&& cp /work/libpcap/libpcap.a /usr/xcc/aarch64-linux-musl-cross/lib/gcc/aarch64-linux-musl/*/

View File

@@ -5,7 +5,6 @@ const (
SyncEntriesConfigEnvVar = "SYNC_ENTRIES_CONFIG"
HostModeEnvVar = "HOST_MODE"
NodeNameEnvVar = "NODE_NAME"
TappedAddressesPerNodeDictEnvVar = "TAPPED_ADDRESSES_PER_HOST"
ConfigDirPath = "/app/config/"
DataDirPath = "/app/data/"
ValidationRulesFileName = "validation-rules.yaml"

View File

@@ -31,7 +31,8 @@ type MizuTapperSyncer struct {
TapPodChangesOut chan TappedPodChangeEvent
TapperStatusChangedOut chan shared.TapperStatus
ErrorOut chan K8sTapManagerError
nodeToTappedPodMap map[string][]core.Pod
nodeToTappedPodMap shared.NodeToPodsMap
tappedNodes []string
}
type TapperSyncerConfig struct {
@@ -177,7 +178,7 @@ func (tapperSyncer *MizuTapperSyncer) watchPodsForTapping() {
podWatchHelper := NewPodWatchHelper(tapperSyncer.kubernetesProvider, &tapperSyncer.config.PodFilterRegex)
eventChan, errorChan := FilteredWatch(tapperSyncer.context, podWatchHelper, tapperSyncer.config.TargetNamespaces, podWatchHelper)
restartTappers := func() {
handleChangeInPods := func() {
err, changeFound := tapperSyncer.updateCurrentlyTappedPods()
if err != nil {
tapperSyncer.ErrorOut <- K8sTapManagerError{
@@ -197,7 +198,7 @@ func (tapperSyncer *MizuTapperSyncer) watchPodsForTapping() {
}
}
}
restartTappersDebouncer := debounce.NewDebouncer(updateTappersDelay, restartTappers)
restartTappersDebouncer := debounce.NewDebouncer(updateTappersDelay, handleChangeInPods)
for {
select {
@@ -295,6 +296,20 @@ func (tapperSyncer *MizuTapperSyncer) updateCurrentlyTappedPods() (err error, ch
}
func (tapperSyncer *MizuTapperSyncer) updateMizuTappers() error {
nodesToTap := make([]string, len(tapperSyncer.nodeToTappedPodMap))
i := 0
for node := range tapperSyncer.nodeToTappedPodMap {
nodesToTap[i] = node
i++
}
if shared.EqualStringSlices(nodesToTap, tapperSyncer.tappedNodes) {
logger.Log.Debug("Skipping apply, DaemonSet is up to date")
return nil
}
logger.Log.Debugf("Updating DaemonSet to run on nodes: %v", nodesToTap)
if len(tapperSyncer.nodeToTappedPodMap) > 0 {
var serviceAccountName string
if tapperSyncer.config.MizuServiceAccountExists {
@@ -303,6 +318,11 @@ func (tapperSyncer *MizuTapperSyncer) updateMizuTappers() error {
serviceAccountName = ""
}
nodeNames := make([]string, 0, len(tapperSyncer.nodeToTappedPodMap))
for nodeName := range tapperSyncer.nodeToTappedPodMap {
nodeNames = append(nodeNames, nodeName)
}
if err := tapperSyncer.kubernetesProvider.ApplyMizuTapperDaemonSet(
tapperSyncer.context,
tapperSyncer.config.MizuResourcesNamespace,
@@ -310,7 +330,7 @@ func (tapperSyncer *MizuTapperSyncer) updateMizuTappers() error {
tapperSyncer.config.AgentImage,
TapperPodName,
fmt.Sprintf("%s.%s.svc.cluster.local", ApiServerPodName, tapperSyncer.config.MizuResourcesNamespace),
tapperSyncer.nodeToTappedPodMap,
nodeNames,
serviceAccountName,
tapperSyncer.config.TapperResources,
tapperSyncer.config.ImagePullPolicy,
@@ -335,5 +355,7 @@ func (tapperSyncer *MizuTapperSyncer) updateMizuTappers() error {
logger.Log.Debugf("Successfully reset tapper daemon set")
}
tapperSyncer.tappedNodes = nodesToTap
return nil
}

View File

@@ -708,18 +708,13 @@ func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string,
return nil
}
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeToTappedPodMap map[string][]core.Pod, serviceAccountName string, resources shared.Resources, imagePullPolicy core.PullPolicy, mizuApiFilteringOptions api.TrafficFilteringOptions, logLevel logging.Level, serviceMesh bool, tls bool) error {
logger.Log.Debugf("Applying %d tapper daemon sets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeToTappedPodMap), namespace, daemonSetName, podImage, tapperPodName)
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeNames []string, serviceAccountName string, resources shared.Resources, imagePullPolicy core.PullPolicy, mizuApiFilteringOptions api.TrafficFilteringOptions, logLevel logging.Level, serviceMesh bool, tls bool) error {
logger.Log.Debugf("Applying %d tapper daemon sets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeNames), namespace, daemonSetName, podImage, tapperPodName)
if len(nodeToTappedPodMap) == 0 {
if len(nodeNames) == 0 {
return fmt.Errorf("daemon set %s must tap at least 1 pod", daemonSetName)
}
nodeToTappedPodMapJsonStr, err := json.Marshal(nodeToTappedPodMap)
if err != nil {
return err
}
mizuApiFilteringOptionsJsonStr, err := json.Marshal(mizuApiFilteringOptions)
if err != nil {
return err
@@ -773,7 +768,6 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
agentContainer.WithEnv(
applyconfcore.EnvVar().WithName(shared.LogLevelEnvVar).WithValue(logLevel.String()),
applyconfcore.EnvVar().WithName(shared.HostModeEnvVar).WithValue("1"),
applyconfcore.EnvVar().WithName(shared.TappedAddressesPerNodeDictEnvVar).WithValue(string(nodeToTappedPodMapJsonStr)),
applyconfcore.EnvVar().WithName(shared.GoGCEnvVar).WithValue("12800"),
applyconfcore.EnvVar().WithName(shared.MizuFilteringOptionsEnvVar).WithValue(string(mizuApiFilteringOptionsJsonStr)),
)
@@ -811,10 +805,6 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
agentResources := applyconfcore.ResourceRequirements().WithRequests(agentResourceRequests).WithLimits(agentResourceLimits)
agentContainer.WithResources(agentResources)
nodeNames := make([]string, 0, len(nodeToTappedPodMap))
for nodeName := range nodeToTappedPodMap {
nodeNames = append(nodeNames, nodeName)
}
nodeSelectorRequirement := applyconfcore.NodeSelectorRequirement()
nodeSelectorRequirement.WithKey("kubernetes.io/hostname")
nodeSelectorRequirement.WithOperator(core.NodeSelectorOpIn)

View File

@@ -5,12 +5,11 @@ import (
"github.com/up9inc/mizu/shared"
core "k8s.io/api/core/v1"
v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func GetNodeHostToTappedPodsMap(tappedPods []core.Pod) map[string][]core.Pod {
nodeToTappedPodMap := make(map[string][]core.Pod)
func GetNodeHostToTappedPodsMap(tappedPods []core.Pod) shared.NodeToPodsMap {
nodeToTappedPodMap := make(shared.NodeToPodsMap)
for _, pod := range tappedPods {
minimizedPod := getMinimizedPod(pod)
@@ -29,18 +28,18 @@ func getMinimizedPod(fullPod core.Pod) core.Pod {
ObjectMeta: metav1.ObjectMeta{
Name: fullPod.Name,
},
Status: v1.PodStatus{
Status: core.PodStatus{
PodIP: fullPod.Status.PodIP,
ContainerStatuses: getMinimizedContainerStatuses(fullPod),
},
}
}
func getMinimizedContainerStatuses(fullPod core.Pod) []v1.ContainerStatus {
result := make([]v1.ContainerStatus, len(fullPod.Status.ContainerStatuses))
func getMinimizedContainerStatuses(fullPod core.Pod) []core.ContainerStatus {
result := make([]core.ContainerStatus, len(fullPod.Status.ContainerStatuses))
for i, container := range fullPod.Status.ContainerStatuses {
result[i] = v1.ContainerStatus{
result[i] = core.ContainerStatus{
ContainerID: container.ContainerID,
}
}

View File

@@ -6,24 +6,25 @@ import (
"github.com/op/go-logging"
"github.com/up9inc/mizu/shared/logger"
v1 "k8s.io/api/core/v1"
"gopkg.in/yaml.v3"
v1 "k8s.io/api/core/v1"
)
type WebSocketMessageType string
const (
WebSocketMessageTypeEntry WebSocketMessageType = "entry"
WebSocketMessageTypeFullEntry WebSocketMessageType = "fullEntry"
WebSocketMessageTypeTappedEntry WebSocketMessageType = "tappedEntry"
WebSocketMessageTypeUpdateStatus WebSocketMessageType = "status"
WebSocketMessageTypeAnalyzeStatus WebSocketMessageType = "analyzeStatus"
WebsocketMessageTypeOutboundLink WebSocketMessageType = "outboundLink"
WebSocketMessageTypeToast WebSocketMessageType = "toast"
WebSocketMessageTypeQueryMetadata WebSocketMessageType = "queryMetadata"
WebSocketMessageTypeStartTime WebSocketMessageType = "startTime"
WebSocketMessageTypeTapConfig WebSocketMessageType = "tapConfig"
WebSocketMessageTypeEntry WebSocketMessageType = "entry"
WebSocketMessageTypeFullEntry WebSocketMessageType = "fullEntry"
WebSocketMessageTypeTappedEntry WebSocketMessageType = "tappedEntry"
WebSocketMessageTypeUpdateStatus WebSocketMessageType = "status"
WebSocketMessageTypeUpdateTappedPods WebSocketMessageType = "tappedPods"
WebSocketMessageTypeAnalyzeStatus WebSocketMessageType = "analyzeStatus"
WebsocketMessageTypeOutboundLink WebSocketMessageType = "outboundLink"
WebSocketMessageTypeToast WebSocketMessageType = "toast"
WebSocketMessageTypeQueryMetadata WebSocketMessageType = "queryMetadata"
WebSocketMessageTypeStartTime WebSocketMessageType = "startTime"
WebSocketMessageTypeTapConfig WebSocketMessageType = "tapConfig"
)
type Resources struct {
@@ -75,11 +76,29 @@ type WebSocketStatusMessage struct {
TappingStatus []TappedPodStatus `json:"tappingStatus"`
}
type WebSocketTappedPodsMessage struct {
*WebSocketMessageMetadata
NodeToTappedPodMap NodeToPodsMap `json:"nodeToTappedPodMap"`
}
type WebSocketTapConfigMessage struct {
*WebSocketMessageMetadata
TapTargets []v1.Pod `json:"pods"`
}
type NodeToPodsMap map[string][]v1.Pod
func (np NodeToPodsMap) Summary() map[string][]string {
summary := make(map[string][]string)
for node, pods := range np {
for _, pod := range pods {
summary[node] = append(summary[node], pod.Namespace + "/" + pod.Name)
}
}
return summary
}
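
A rough usage sketch of the Summary helper, assuming the same method body as above and the standard Kubernetes core/v1 types; the node and pod names are made up for illustration.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// NodeToPodsMap mirrors the type introduced in the diff above.
type NodeToPodsMap map[string][]v1.Pod

// Summary condenses the map to namespace/name strings, as in the real method.
func (np NodeToPodsMap) Summary() map[string][]string {
	summary := make(map[string][]string)
	for node, pods := range np {
		for _, pod := range pods {
			summary[node] = append(summary[node], pod.Namespace+"/"+pod.Name)
		}
	}
	return summary
}

func main() {
	pods := NodeToPodsMap{
		"node-1": {
			{ObjectMeta: metav1.ObjectMeta{Namespace: "sock-shop", Name: "carts-abc"}},
		},
	}
	fmt.Println(pods.Summary()) // map[node-1:[sock-shop/carts-abc]]
}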
type TapperStatus struct {
TapperName string `json:"tapperName"`
NodeName string `json:"nodeName"`
@@ -121,6 +140,15 @@ func CreateWebSocketStatusMessage(tappedPodsStatus []TappedPodStatus) WebSocketS
}
}
func CreateWebSocketTappedPodsMessage(nodeToTappedPodMap NodeToPodsMap) WebSocketTappedPodsMessage {
return WebSocketTappedPodsMessage{
WebSocketMessageMetadata: &WebSocketMessageMetadata{
MessageType: WebSocketMessageTypeUpdateTappedPods,
},
NodeToTappedPodMap: nodeToTappedPodMap,
}
}
func CreateWebSocketMessageTypeAnalyzeStatus(analyzeStatus AnalyzeStatus) WebSocketAnalyzeStatusMessage {
return WebSocketAnalyzeStatusMessage{
WebSocketMessageMetadata: &WebSocketMessageMetadata{

View File

@@ -32,3 +32,17 @@ func Unique(slice []string) []string {
return list
}
func EqualStringSlices(slice1 []string, slice2 []string) bool {
if len(slice1) != len(slice2) {
return false
}
for _, v := range slice1 {
if !Contains(slice2, v) {
return false
}
}
return true
}
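
EqualStringSlices is used by the tapper syncer above to decide whether the set of tapped node names changed; the comparison is length plus membership, so ordering does not matter. A self-contained sketch, bundling a contains helper as a stand-in for the Contains function the shared package already provides:

package main

import "fmt"

// contains reports whether slice holds value; it stands in for shared.Contains.
func contains(slice []string, value string) bool {
	for _, v := range slice {
		if v == value {
			return true
		}
	}
	return false
}

// equalStringSlices mirrors shared.EqualStringSlices: same length and every
// element of slice1 present in slice2, regardless of order.
func equalStringSlices(slice1, slice2 []string) bool {
	if len(slice1) != len(slice2) {
		return false
	}
	for _, v := range slice1 {
		if !contains(slice2, v) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(equalStringSlices([]string{"node-a", "node-b"}, []string{"node-b", "node-a"})) // true
	fmt.Println(equalStringSlices([]string{"node-a"}, []string{"node-b"}))                     // false
}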

View File

@@ -161,7 +161,7 @@ type Entry struct {
Capture Capture `json:"capture"`
Source *TCP `json:"src"`
Destination *TCP `json:"dst"`
Namespace string `json:"namespace,omitempty"`
Namespace string `json:"namespace"`
Outgoing bool `json:"outgoing"`
Timestamp int64 `json:"timestamp"`
StartTime time.Time `json:"startTime"`

View File

@@ -13,4 +13,4 @@ test-pull-bin:
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect5/amqp/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect6/amqp/\* expect

View File

@@ -13,4 +13,4 @@ test-pull-bin:
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect5/http/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect6/http/\* expect

View File

@@ -53,7 +53,16 @@ func representMapSliceAsTable(mapSlice []interface{}, selectorPrefix string) (re
h := item.(map[string]interface{})
key := h["name"].(string)
value := h["value"]
switch reflect.TypeOf(value).Kind() {
var reflectKind reflect.Kind
reflectType := reflect.TypeOf(value)
if reflectType == nil {
reflectKind = reflect.Interface
} else {
reflectKind = reflectType.Kind()
}
switch reflectKind {
case reflect.Slice:
fallthrough
case reflect.Array:
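
The guard above exists because reflect.TypeOf returns nil for a nil interface value (for example a JSON null after unmarshalling), and calling Kind on that nil Type panics. A small demonstration:

package main

import (
	"fmt"
	"reflect"
)

func main() {
	var value interface{} // nil interface, e.g. a JSON null after unmarshalling

	t := reflect.TypeOf(value)
	fmt.Println(t == nil) // true

	// t.Kind() would panic here, so fall back to reflect.Interface,
	// mirroring the guard in the diff above.
	kind := reflect.Interface
	if t != nil {
		kind = t.Kind()
	}
	fmt.Println(kind) // interface
}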

View File

@@ -28,26 +28,6 @@ const protoMinorHTTP2 = 0
var maxHTTP2DataLen = 1 * 1024 * 1024 // 1MB
var grpcStatusCodes = []string{
"OK",
"CANCELLED",
"UNKNOWN",
"INVALID_ARGUMENT",
"DEADLINE_EXCEEDED",
"NOT_FOUND",
"ALREADY_EXISTS",
"PERMISSION_DENIED",
"RESOURCE_EXHAUSTED",
"FAILED_PRECONDITION",
"ABORTED",
"OUT_OF_RANGE",
"UNIMPLEMENTED",
"INTERNAL",
"UNAVAILABLE",
"DATA_LOSS",
"UNAUTHENTICATED",
}
type messageFragment struct {
headers []hpack.HeaderField
data []byte
@@ -142,18 +122,8 @@ func (ga *Http2Assembler) readMessage() (streamID uint32, messageHTTP1 interface
// gRPC detection
grpcStatus := headersHTTP1.Get("Grpc-Status")
if grpcStatus != "" {
if grpcStatus != "" || strings.Contains(headersHTTP1.Get("Content-Type"), "application/grpc") {
isGrpc = true
status = grpcStatus
}
if strings.Contains(headersHTTP1.Get("Content-Type"), "application/grpc") {
isGrpc = true
grpcPath := headersHTTP1.Get(":path")
pathSegments := strings.Split(grpcPath, "/")
if len(pathSegments) > 0 {
method = pathSegments[len(pathSegments)-1]
}
}
if method != "" {

View File

@@ -248,11 +248,6 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
reqDetails["_queryStringMerged"] = mapSliceMergeRepeatedKeys(reqDetails["_queryString"].([]interface{}))
reqDetails["queryString"] = mapSliceRebuildAsMap(reqDetails["_queryStringMerged"].([]interface{}))
statusCode := int(resDetails["status"].(float64))
if item.Protocol.Abbreviation == "gRPC" {
resDetails["statusText"] = grpcStatusCodes[statusCode]
}
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
if elapsedTime < 0 {
elapsedTime = 0

View File

@@ -13,4 +13,4 @@ test-pull-bin:
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect5/kafka/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect6/kafka/\* expect

View File

@@ -8,6 +8,7 @@ require (
github.com/segmentio/kafka-go v0.4.27
github.com/stretchr/testify v1.6.1
github.com/up9inc/mizu/tap/api v0.0.0
golang.org/x/text v0.3.0
)
require (

View File

@@ -40,6 +40,7 @@ golang.org/x/crypto v0.0.0-20190506204251-e1dfcc566284/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=

View File

@@ -3,6 +3,8 @@ package kafka
import (
"encoding/json"
"fmt"
"golang.org/x/text/cases"
"golang.org/x/text/language"
"reflect"
"sort"
"strconv"
@@ -891,8 +893,9 @@ func representMapAsTable(mapData map[string]interface{}, selectorPrefix string,
}
}
selector := fmt.Sprintf("%s[\"%s\"]", selectorPrefix, key)
caser := cases.Title(language.Und, cases.NoLower)
table = append(table, api.TableData{
Name: strings.Join(camelcase.Split(strings.Title(key)), " "),
Name: strings.Join(camelcase.Split(caser.String(key)), " "),
Value: value,
Selector: selector,
})
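
strings.Title has been deprecated since Go 1.18 in favour of golang.org/x/text/cases, which is the replacement used above. A minimal example of the new caser; the sample key is made up:

package main

import (
	"fmt"

	"golang.org/x/text/cases"
	"golang.org/x/text/language"
)

func main() {
	caser := cases.Title(language.Und, cases.NoLower)

	// cases.NoLower leaves the rest of each word untouched, matching the old
	// strings.Title behaviour of only upper-casing the first letter.
	fmt.Println(caser.String("clientId")) // ClientId
}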

View File

@@ -13,4 +13,4 @@ test-pull-bin:
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect5/redis/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect6/redis/\* expect

View File

@@ -59,7 +59,6 @@ var memprofile = flag.String("memprofile", "", "Write memory profile")
type TapOpts struct {
HostMode bool
FilterAuthorities []v1.Pod
}
var extensions []*api.Extension // global
@@ -67,6 +66,7 @@ var filteringOptions *api.TrafficFilteringOptions // global
var tapTargets []v1.Pod // global
var packetSourceManager *source.PacketSourceManager // global
var mainPacketInputChan chan source.TcpPacketInfo // global
var tlsTapperInstance *tlstapper.TlsTapper // global
func inArrayInt(arr []int, valueToCheck int) bool {
for _, value := range arr {
@@ -90,16 +90,10 @@ func StartPassiveTapper(opts *TapOpts, outputItems chan *api.OutputChannelItem,
extensions = extensionsRef
filteringOptions = options
if opts.FilterAuthorities == nil {
tapTargets = []v1.Pod{}
} else {
tapTargets = opts.FilterAuthorities
}
if *tls {
for _, e := range extensions {
if e.Protocol.Name == "http" {
startTlsTapper(e, outputItems, options)
tlsTapperInstance = startTlsTapper(e, outputItems, options)
break
}
}
@@ -109,24 +103,39 @@ func StartPassiveTapper(opts *TapOpts, outputItems chan *api.OutputChannelItem,
diagnose.StartMemoryProfiler(os.Getenv(MemoryProfilingDumpPath), os.Getenv(MemoryProfilingTimeIntervalSeconds))
}
go startPassiveTapper(opts, outputItems)
streamsMap, assembler := initializePassiveTapper(opts, outputItems)
go startPassiveTapper(streamsMap, assembler)
}
func UpdateTapTargets(newTapTargets []v1.Pod) {
success := true
tapTargets = newTapTargets
if err := initializePacketSources(); err != nil {
logger.Log.Fatal(err)
packetSourceManager.UpdatePods(tapTargets, !*nodefrag, mainPacketInputChan)
if tlsTapperInstance != nil {
if err := tlstapper.UpdateTapTargets(tlsTapperInstance, &tapTargets, *procfs); err != nil {
tlstapper.LogError(err)
success = false
}
}
printNewTapTargets()
printNewTapTargets(success)
}
func printNewTapTargets() {
func printNewTapTargets(success bool) {
printStr := ""
for _, tapTarget := range tapTargets {
printStr += fmt.Sprintf("%s (%s), ", tapTarget.Status.PodIP, tapTarget.Name)
}
printStr = strings.TrimRight(printStr, ", ")
logger.Log.Infof("Now tapping: %s", printStr)
if success {
logger.Log.Infof("Now tapping: %s", printStr)
} else {
logger.Log.Errorf("Failed to start tapping: %s", printStr)
}
}
func printPeriodicStats(cleaner *Cleaner) {
@@ -189,17 +198,12 @@ func initializePacketSources() error {
}
var err error
if packetSourceManager, err = source.NewPacketSourceManager(*procfs, *fname, *iface, *servicemesh, tapTargets, behaviour); err != nil {
return err
} else {
packetSourceManager.ReadPackets(!*nodefrag, mainPacketInputChan)
return nil
}
packetSourceManager, err = source.NewPacketSourceManager(*procfs, *fname, *iface, *servicemesh, tapTargets, behaviour, !*nodefrag, mainPacketInputChan)
return err
}
func startPassiveTapper(opts *TapOpts, outputItems chan *api.OutputChannelItem) {
func initializePassiveTapper(opts *TapOpts, outputItems chan *api.OutputChannelItem) (*tcpStreamMap, *tcpAssembler) {
streamsMap := NewTcpStreamMap()
go streamsMap.closeTimedoutTcpStreamChannels()
diagnose.InitializeErrorsMap(*debug, *verbose, *quiet)
diagnose.InitializeTapperInternalStats()
@@ -212,6 +216,12 @@ func startPassiveTapper(opts *TapOpts, outputItems chan *api.OutputChannelItem)
assembler := NewTcpAssembler(outputItems, streamsMap, opts)
return streamsMap, assembler
}
func startPassiveTapper(streamsMap *tcpStreamMap, assembler *tcpAssembler) {
go streamsMap.closeTimedoutTcpStreamChannels()
diagnose.AppStats.SetStartTime(time.Now())
staleConnectionTimeout := time.Second * time.Duration(*staleTimeoutSeconds)
@@ -243,13 +253,19 @@ func startPassiveTapper(opts *TapOpts, outputItems chan *api.OutputChannelItem)
logger.Log.Infof("AppStats: %v", diagnose.AppStats)
}
func startTlsTapper(extension *api.Extension, outputItems chan *api.OutputChannelItem, options *api.TrafficFilteringOptions) {
func startTlsTapper(extension *api.Extension, outputItems chan *api.OutputChannelItem, options *api.TrafficFilteringOptions) *tlstapper.TlsTapper {
tls := tlstapper.TlsTapper{}
tlsPerfBufferSize := os.Getpagesize() * 100
chunksBufferSize := os.Getpagesize() * 100
logBufferSize := os.Getpagesize()
if err := tls.Init(tlsPerfBufferSize, *procfs, extension); err != nil {
if err := tls.Init(chunksBufferSize, logBufferSize, *procfs, extension); err != nil {
tlstapper.LogError(err)
return
return nil
}
if err := tlstapper.UpdateTapTargets(&tls, &tapTargets, *procfs); err != nil {
tlstapper.LogError(err)
return nil
}
// A quick way to instrument libssl.so without PID filtering - used for debugging and troubleshooting
@@ -257,19 +273,17 @@ func startTlsTapper(extension *api.Extension, outputItems chan *api.OutputChanne
if os.Getenv("MIZU_GLOBAL_SSL_LIBRARY") != "" {
if err := tls.GlobalTap(os.Getenv("MIZU_GLOBAL_SSL_LIBRARY")); err != nil {
tlstapper.LogError(err)
return
return nil
}
}
if err := tlstapper.UpdateTapTargets(&tls, &tapTargets, *procfs); err != nil {
tlstapper.LogError(err)
return
}
var emitter api.Emitter = &api.Emitting{
AppStats: &diagnose.AppStats,
OutputChannel: outputItems,
}
go tls.PollForLogging()
go tls.Poll(emitter, options)
return &tls
}

View File

@@ -11,12 +11,20 @@ import (
const bpfFilterMaxPods = 150
const hostSourcePid = "0"
type PacketSourceManagerConfig struct {
mtls bool
procfs string
interfaceName string
behaviour TcpPacketSourceBehaviour
}
type PacketSourceManager struct {
sources map[string]*tcpPacketSource
config PacketSourceManagerConfig
}
func NewPacketSourceManager(procfs string, filename string, interfaceName string,
mtls bool, pods []v1.Pod, behaviour TcpPacketSourceBehaviour) (*PacketSourceManager, error) {
mtls bool, pods []v1.Pod, behaviour TcpPacketSourceBehaviour, ipdefrag bool, packets chan<- TcpPacketInfo) (*PacketSourceManager, error) {
hostSource, err := newHostPacketSource(filename, interfaceName, behaviour)
if err != nil {
return nil, err
@@ -28,7 +36,14 @@ func NewPacketSourceManager(procfs string, filename string, interfaceName string
},
}
sourceManager.UpdatePods(mtls, procfs, pods, interfaceName, behaviour)
sourceManager.config = PacketSourceManagerConfig{
mtls: mtls,
procfs: procfs,
interfaceName: interfaceName,
behaviour: behaviour,
}
go hostSource.readPackets(ipdefrag, packets)
return sourceManager, nil
}
@@ -49,17 +64,16 @@ func newHostPacketSource(filename string, interfaceName string,
return source, nil
}
func (m *PacketSourceManager) UpdatePods(mtls bool, procfs string, pods []v1.Pod,
interfaceName string, behaviour TcpPacketSourceBehaviour) {
if mtls {
m.updateMtlsPods(procfs, pods, interfaceName, behaviour)
func (m *PacketSourceManager) UpdatePods(pods []v1.Pod, ipdefrag bool, packets chan<- TcpPacketInfo) {
if m.config.mtls {
m.updateMtlsPods(m.config.procfs, pods, m.config.interfaceName, m.config.behaviour, ipdefrag, packets)
}
m.setBPFFilter(pods)
}
func (m *PacketSourceManager) updateMtlsPods(procfs string, pods []v1.Pod,
interfaceName string, behaviour TcpPacketSourceBehaviour) {
interfaceName string, behaviour TcpPacketSourceBehaviour, ipdefrag bool, packets chan<- TcpPacketInfo) {
relevantPids := m.getRelevantPids(procfs, pods)
logger.Log.Infof("Updating mtls pods (new: %v) (current: %v)", relevantPids, m.sources)
@@ -76,6 +90,7 @@ func (m *PacketSourceManager) updateMtlsPods(procfs string, pods []v1.Pod,
source, err := newNetnsPacketSource(procfs, pid, interfaceName, behaviour)
if err == nil {
go source.readPackets(ipdefrag, packets)
m.sources[pid] = source
}
}
@@ -139,12 +154,6 @@ func (m *PacketSourceManager) setBPFFilter(pods []v1.Pod) {
}
}
func (m *PacketSourceManager) ReadPackets(ipdefrag bool, packets chan<- TcpPacketInfo) {
for _, src := range m.sources {
go src.readPackets(ipdefrag, packets)
}
}
func (m *PacketSourceManager) Close() {
for _, src := range m.sources {
src.close()

View File

@@ -1,5 +1,7 @@
#!/bin/bash
pushd "$(dirname "$0")" || exit 1
MIZU_HOME=$(realpath ../../../)
docker build -t mizu-ebpf-builder . || exit 1
@@ -7,6 +9,7 @@ docker build -t mizu-ebpf-builder . || exit 1
docker run --rm \
--name mizu-ebpf-builder \
-v $MIZU_HOME:/mizu \
-v $(go env GOPATH):/root/go \
-it mizu-ebpf-builder \
sh -c "
go generate tap/tlstapper/tls_tapper.go
@@ -15,3 +18,5 @@ docker run --rm \
chown $(id -u):$(id -g) tap/tlstapper/tlstapper_bpfel.go
chown $(id -u):$(id -g) tap/tlstapper/tlstapper_bpfel.o
" || exit 1
popd

View File

@@ -7,8 +7,12 @@ Copyright (C) UP9 Inc.
#include "include/headers.h"
#include "include/util.h"
#include "include/maps.h"
#include "include/log.h"
#include "include/logger_messages.h"
#include "include/pids.h"
#define IPV4_ADDR_LEN (16)
struct accept_info {
__u64* sockaddr;
__u32* addrlen;
@@ -41,9 +45,7 @@ void sys_enter_accept4(struct sys_enter_accept4_ctx *ctx) {
long err = bpf_map_update_elem(&accept_syscall_context, &id, &info, BPF_ANY);
if (err != 0) {
char msg[] = "Error putting accept info (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
return;
log_error(ctx, LOG_ERROR_PUTTING_ACCEPT_INFO, id, err, 0l);
}
}
@@ -69,7 +71,8 @@ void sys_exit_accept4(struct sys_exit_accept4_ctx *ctx) {
struct accept_info *infoPtr = bpf_map_lookup_elem(&accept_syscall_context, &id);
if (infoPtr == 0) {
if (infoPtr == NULL) {
log_error(ctx, LOG_ERROR_GETTING_ACCEPT_INFO, id, 0l, 0l);
return;
}
@@ -79,15 +82,14 @@ void sys_exit_accept4(struct sys_exit_accept4_ctx *ctx) {
bpf_map_delete_elem(&accept_syscall_context, &id);
if (err != 0) {
char msg[] = "Error reading accept info from accept syscall (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
log_error(ctx, LOG_ERROR_READING_ACCEPT_INFO, id, err, 0l);
return;
}
__u32 addrlen;
bpf_probe_read(&addrlen, sizeof(__u32), info.addrlen);
if (addrlen != 16) {
if (addrlen != IPV4_ADDR_LEN) {
// Currently only ipv4 is supported linux-src/include/linux/inet.h
return;
}
@@ -105,9 +107,7 @@ void sys_exit_accept4(struct sys_exit_accept4_ctx *ctx) {
err = bpf_map_update_elem(&file_descriptor_to_ipv4, &key, &fdinfo, BPF_ANY);
if (err != 0) {
char msg[] = "Error putting fd to address mapping from accept (key: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), key, err);
return;
log_error(ctx, LOG_ERROR_PUTTING_FD_MAPPING, id, err, ORIGIN_SYS_EXIT_ACCEPT4_CODE);
}
}
@@ -145,9 +145,7 @@ void sys_enter_connect(struct sys_enter_connect_ctx *ctx) {
long err = bpf_map_update_elem(&connect_syscall_info, &id, &info, BPF_ANY);
if (err != 0) {
char msg[] = "Error putting connect info (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
return;
log_error(ctx, LOG_ERROR_PUTTING_CONNECT_INFO, id, err, 0l);
}
}
@@ -175,7 +173,8 @@ void sys_exit_connect(struct sys_exit_connect_ctx *ctx) {
struct connect_info *infoPtr = bpf_map_lookup_elem(&connect_syscall_info, &id);
if (infoPtr == 0) {
if (infoPtr == NULL) {
log_error(ctx, LOG_ERROR_GETTING_CONNECT_INFO, id, 0l, 0l);
return;
}
@@ -185,12 +184,11 @@ void sys_exit_connect(struct sys_exit_connect_ctx *ctx) {
bpf_map_delete_elem(&connect_syscall_info, &id);
if (err != 0) {
char msg[] = "Error reading connect info from connect syscall (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
log_error(ctx, LOG_ERROR_READING_CONNECT_INFO, id, err, 0l);
return;
}
if (info.addrlen != 16) {
if (info.addrlen != IPV4_ADDR_LEN) {
// Currently only ipv4 is supported linux-src/include/linux/inet.h
return;
}
@@ -208,8 +206,6 @@ void sys_exit_connect(struct sys_exit_connect_ctx *ctx) {
err = bpf_map_update_elem(&file_descriptor_to_ipv4, &key, &fdinfo, BPF_ANY);
if (err != 0) {
char msg[] = "Error putting fd to address mapping from connect (key: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), key, err);
return;
log_error(ctx, LOG_ERROR_PUTTING_FD_MAPPING, id, err, ORIGIN_SYS_EXIT_CONNECT_CODE);
}
}

View File

@@ -7,6 +7,8 @@ Copyright (C) UP9 Inc.
#include "include/headers.h"
#include "include/util.h"
#include "include/maps.h"
#include "include/log.h"
#include "include/logger_messages.h"
#include "include/pids.h"
struct sys_enter_read_ctx {
@@ -28,7 +30,7 @@ void sys_enter_read(struct sys_enter_read_ctx *ctx) {
struct ssl_info *infoPtr = bpf_map_lookup_elem(&ssl_read_context, &id);
if (infoPtr == 0) {
if (infoPtr == NULL) {
return;
}
@@ -36,8 +38,7 @@ void sys_enter_read(struct sys_enter_read_ctx *ctx) {
long err = bpf_probe_read(&info, sizeof(struct ssl_info), infoPtr);
if (err != 0) {
char msg[] = "Error reading read info from read syscall (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, id, err, ORIGIN_SYS_ENTER_READ_CODE);
return;
}
@@ -46,9 +47,7 @@ void sys_enter_read(struct sys_enter_read_ctx *ctx) {
err = bpf_map_update_elem(&ssl_read_context, &id, &info, BPF_ANY);
if (err != 0) {
char msg[] = "Error putting file descriptor from read syscall (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
return;
log_error(ctx, LOG_ERROR_PUTTING_FILE_DESCRIPTOR, id, err, ORIGIN_SYS_ENTER_READ_CODE);
}
}
@@ -71,7 +70,7 @@ void sys_enter_write(struct sys_enter_write_ctx *ctx) {
struct ssl_info *infoPtr = bpf_map_lookup_elem(&ssl_write_context, &id);
if (infoPtr == 0) {
if (infoPtr == NULL) {
return;
}
@@ -79,8 +78,7 @@ void sys_enter_write(struct sys_enter_write_ctx *ctx) {
long err = bpf_probe_read(&info, sizeof(struct ssl_info), infoPtr);
if (err != 0) {
char msg[] = "Error reading write context from write syscall (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, id, err, ORIGIN_SYS_ENTER_WRITE_CODE);
return;
}
@@ -89,8 +87,6 @@ void sys_enter_write(struct sys_enter_write_ctx *ctx) {
err = bpf_map_update_elem(&ssl_write_context, &id, &info, BPF_ANY);
if (err != 0) {
char msg[] = "Error putting file descriptor from write syscall (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
return;
log_error(ctx, LOG_ERROR_PUTTING_FILE_DESCRIPTOR, id, err, ORIGIN_SYS_ENTER_WRITE_CODE);
}
}

View File

@@ -0,0 +1,79 @@
/*
Note: This file is licenced differently from the rest of the project
SPDX-License-Identifier: GPL-2.0
Copyright (C) UP9 Inc.
*/
#ifndef __LOG__
#define __LOG__
// The same consts defined in bpf_logger.go
//
#define LOG_LEVEL_ERROR (0)
#define LOG_LEVEL_INFO (1)
#define LOG_LEVEL_DEBUG (2)
// The same struct can be found in bpf_logger.go
//
// Be careful when editing, alignment and padding should be exactly the same in go/c.
//
struct log_message {
__u32 level;
__u32 message_code;
__u64 arg1;
__u64 arg2;
__u64 arg3;
};
static __always_inline void log_error(void* ctx, __u16 message_code, __u64 arg1, __u64 arg2, __u64 arg3) {
struct log_message entry = {};
entry.level = LOG_LEVEL_ERROR;
entry.message_code = message_code;
entry.arg1 = arg1;
entry.arg2 = arg2;
entry.arg3 = arg3;
long err = bpf_perf_event_output(ctx, &log_buffer, BPF_F_CURRENT_CPU, &entry, sizeof(struct log_message));
if (err != 0) {
char msg[] = "Error writing log error to perf buffer - %ld";
bpf_trace_printk(msg, sizeof(msg), err);
}
}
static __always_inline void log_info(void* ctx, __u16 message_code, __u64 arg1, __u64 arg2, __u64 arg3) {
struct log_message entry = {};
entry.level = LOG_LEVEL_INFO;
entry.message_code = message_code;
entry.arg1 = arg1;
entry.arg2 = arg2;
entry.arg3 = arg3;
long err = bpf_perf_event_output(ctx, &log_buffer, BPF_F_CURRENT_CPU, &entry, sizeof(struct log_message));
if (err != 0) {
char msg[] = "Error writing log info to perf buffer - %ld";
bpf_trace_printk(msg, sizeof(msg), err);
}
}
static __always_inline void log_debug(void* ctx, __u16 message_code, __u64 arg1, __u64 arg2, __u64 arg3) {
struct log_message entry = {};
entry.level = LOG_LEVEL_DEBUG;
entry.message_code = message_code;
entry.arg1 = arg1;
entry.arg2 = arg2;
entry.arg3 = arg3;
long err = bpf_perf_event_output(ctx, &log_buffer, BPF_F_CURRENT_CPU, &entry, sizeof(struct log_message));
if (err != 0) {
char msg[] = "Error writing log debug to perf buffer - %ld";
bpf_trace_printk(msg, sizeof(msg), err);
}
}
#endif /* __LOG__ */

View File

@@ -0,0 +1,42 @@
/*
Note: This file is licenced differently from the rest of the project
SPDX-License-Identifier: GPL-2.0
Copyright (C) UP9 Inc.
*/
#ifndef __LOG_MESSAGES__
#define __LOG_MESSAGES__
// Must be synced with bpf_logger_messages.go
//
#define LOG_ERROR_READING_BYTES_COUNT (0)
#define LOG_ERROR_READING_FD_ADDRESS (1)
#define LOG_ERROR_READING_FROM_SSL_BUFFER (2)
#define LOG_ERROR_BUFFER_TOO_BIG (3)
#define LOG_ERROR_ALLOCATING_CHUNK (4)
#define LOG_ERROR_READING_SSL_CONTEXT (5)
#define LOG_ERROR_PUTTING_SSL_CONTEXT (6)
#define LOG_ERROR_GETTING_SSL_CONTEXT (7)
#define LOG_ERROR_MISSING_FILE_DESCRIPTOR (8)
#define LOG_ERROR_PUTTING_FILE_DESCRIPTOR (9)
#define LOG_ERROR_PUTTING_ACCEPT_INFO (10)
#define LOG_ERROR_GETTING_ACCEPT_INFO (11)
#define LOG_ERROR_READING_ACCEPT_INFO (12)
#define LOG_ERROR_PUTTING_FD_MAPPING (13)
#define LOG_ERROR_PUTTING_CONNECT_INFO (14)
#define LOG_ERROR_GETTING_CONNECT_INFO (15)
#define LOG_ERROR_READING_CONNECT_INFO (16)
// Sometimes the same error can happen from different locations.
// In order to be able to distinguish between them in the log, we add an
// extra number that identifies the location. The number can be anything,
// but do not give the same number to different origins.
//
#define ORIGIN_SSL_UPROBE_CODE (0l)
#define ORIGIN_SSL_URETPROBE_CODE (1l)
#define ORIGIN_SYS_ENTER_READ_CODE (2l)
#define ORIGIN_SYS_ENTER_WRITE_CODE (3l)
#define ORIGIN_SYS_EXIT_ACCEPT4_CODE (4l)
#define ORIGIN_SYS_EXIT_CONNECT_CODE (5l)
#endif /* __LOG_MESSAGES__ */

View File

@@ -10,6 +10,12 @@ Copyright (C) UP9 Inc.
#define FLAGS_IS_CLIENT_BIT (1 << 0)
#define FLAGS_IS_READ_BIT (1 << 1)
#define CHUNK_SIZE (1 << 12)
#define MAX_CHUNKS_PER_OPERATION (8)
// One minute in nanoseconds. Chosen by gut feeling.
#define SSL_INFO_MAX_TTL_NANO (1000000000l * 60l)
// The same struct can be found in chunk.go
//
// Be careful when editing, alignment and padding should be exactly the same in go/c.
@@ -18,16 +24,18 @@ struct tlsChunk {
__u32 pid;
__u32 tgid;
__u32 len;
__u32 start;
__u32 recorded;
__u32 fd;
__u32 flags;
__u8 address[16];
__u8 data[4096]; // Must be N^2
__u8 data[CHUNK_SIZE]; // Size must be a power of 2
};
struct ssl_info {
void* buffer;
__u32 fd;
__u64 created_at_nano;
// for ssl_write and ssl_read must be zero
// for ssl_write_ex and ssl_read_ex save the *written/*readbytes pointer.
@@ -53,11 +61,15 @@ struct fd_info {
#define BPF_PERF_OUTPUT(_name) \
BPF_MAP(_name, BPF_MAP_TYPE_PERF_EVENT_ARRAY, int, __u32, 1024)
#define BPF_LRU_HASH(_name, _key_type, _value_type) \
BPF_MAP(_name, BPF_MAP_TYPE_LRU_HASH, _key_type, _value_type, 16384)
BPF_HASH(pids_map, __u32, __u32);
BPF_HASH(ssl_write_context, __u64, struct ssl_info);
BPF_HASH(ssl_read_context, __u64, struct ssl_info);
BPF_LRU_HASH(ssl_write_context, __u64, struct ssl_info);
BPF_LRU_HASH(ssl_read_context, __u64, struct ssl_info);
BPF_HASH(file_descriptor_to_ipv4, __u64, struct fd_info);
BPF_PERF_OUTPUT(chunks_buffer);
BPF_PERF_OUTPUT(log_buffer);
#endif /* __MAPS__ */

View File

@@ -7,6 +7,8 @@ Copyright (C) UP9 Inc.
#include "include/headers.h"
#include "include/util.h"
#include "include/maps.h"
#include "include/log.h"
#include "include/logger_messages.h"
#include "include/pids.h"
// Heap-like area for eBPF programs - stack size limited to 512 bytes, we must use maps for bigger (chunk) objects.
@@ -18,181 +20,249 @@ struct {
__type(value, struct tlsChunk);
} heap SEC(".maps");
static __always_inline int ssl_uprobe(void* ssl, void* buffer, int num, struct bpf_map_def* map_fd, size_t *count_ptr) {
__u64 id = bpf_get_current_pid_tgid();
static __always_inline int get_count_bytes(struct pt_regs *ctx, struct ssl_info* info, __u64 id) {
int returnValue = PT_REGS_RC(ctx);
if (!should_tap(id >> 32)) {
if (info->count_ptr == NULL) {
// ssl_read and ssl_write return the number of bytes written/read
//
return returnValue;
}
// ssl_read_ex and ssl_write_ex return 1 for success
//
if (returnValue != 1) {
return 0;
}
// ssl_read_ex and ssl_write_ex write the number of bytes to an arg named *count
//
size_t countBytes;
long err = bpf_probe_read(&countBytes, sizeof(size_t), (void*) info->count_ptr);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_BYTES_COUNT, id, err, 0l);
return 0;
}
return countBytes;
}
static __always_inline void add_address_to_chunk(struct pt_regs *ctx, struct tlsChunk* chunk, __u64 id, __u32 fd) {
__u32 pid = id >> 32;
__u64 key = (__u64) pid << 32 | fd;
struct fd_info *fdinfo = bpf_map_lookup_elem(&file_descriptor_to_ipv4, &key);
if (fdinfo == NULL) {
return;
}
int err = bpf_probe_read(chunk->address, sizeof(chunk->address), fdinfo->ipv4_addr);
chunk->flags |= (fdinfo->flags & FLAGS_IS_CLIENT_BIT);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_FD_ADDRESS, id, err, 0l);
}
}
static __always_inline void send_chunk_part(struct pt_regs *ctx, __u8* buffer, __u64 id,
struct tlsChunk* chunk, int start, int end) {
size_t recorded = MIN(end - start, sizeof(chunk->data));
if (recorded <= 0) {
return;
}
chunk->recorded = recorded;
chunk->start = start;
// This ugly trick keeps the eBPF verifier happy
//
long err = 0;
if (chunk->recorded == sizeof(chunk->data)) {
err = bpf_probe_read(chunk->data, sizeof(chunk->data), buffer + start);
} else {
recorded &= (sizeof(chunk->data) - 1); // Buffer must be N^2
err = bpf_probe_read(chunk->data, recorded, buffer + start);
}
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_FROM_SSL_BUFFER, id, err, 0l);
return;
}
bpf_perf_event_output(ctx, &chunks_buffer, BPF_F_CURRENT_CPU, chunk, sizeof(struct tlsChunk));
}
static __always_inline void send_chunk(struct pt_regs *ctx, __u8* buffer, __u64 id, struct tlsChunk* chunk) {
// ebpf loops must be bounded at compile time, we can't use (i < chunk->len / CHUNK_SIZE)
//
// https://lwn.net/Articles/794934/
//
// However we want to run on kernels older than 5.3, hence we use "#pragma unroll" anyway
//
#pragma unroll
for (int i = 0; i < MAX_CHUNKS_PER_OPERATION; i++) {
if (chunk->len <= (CHUNK_SIZE * i)) {
break;
}
send_chunk_part(ctx, buffer, id, chunk, CHUNK_SIZE * i, chunk->len);
}
}
static __always_inline void output_ssl_chunk(struct pt_regs *ctx, struct ssl_info* info, __u64 id, __u32 flags) {
int countBytes = get_count_bytes(ctx, info, id);
if (countBytes <= 0) {
return;
}
if (countBytes > (CHUNK_SIZE * MAX_CHUNKS_PER_OPERATION)) {
log_error(ctx, LOG_ERROR_BUFFER_TOO_BIG, id, countBytes, 0l);
return;
}
struct tlsChunk* chunk;
int zero = 0;
// If another thread running on the same CPU gets to this point at the same time as us (context switch)
// the data will be corrupted - protection may be added in the future
//
chunk = bpf_map_lookup_elem(&heap, &zero);
if (!chunk) {
log_error(ctx, LOG_ERROR_ALLOCATING_CHUNK, id, 0l, 0l);
return;
}
chunk->flags = flags;
chunk->pid = id >> 32;
chunk->tgid = id;
chunk->len = countBytes;
chunk->fd = info->fd;
add_address_to_chunk(ctx, chunk, id, chunk->fd);
send_chunk(ctx, info->buffer, id, chunk);
}
static __always_inline void ssl_uprobe(struct pt_regs *ctx, void* ssl, void* buffer, int num, struct bpf_map_def* map_fd, size_t *count_ptr) {
__u64 id = bpf_get_current_pid_tgid();
if (!should_tap(id >> 32)) {
return;
}
struct ssl_info *infoPtr = bpf_map_lookup_elem(map_fd, &id);
struct ssl_info info = {};
info.fd = -1;
if (infoPtr == NULL) {
info.fd = -1;
info.created_at_nano = bpf_ktime_get_ns();
} else {
long err = bpf_probe_read(&info, sizeof(struct ssl_info), infoPtr);
if (err != 0) {
log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, id, err, ORIGIN_SSL_UPROBE_CODE);
}
if ((bpf_ktime_get_ns() - info.created_at_nano) > SSL_INFO_MAX_TTL_NANO) {
// If the ssl info is too old, we don't want to use its info because it may be incorrect.
//
info.fd = -1;
info.created_at_nano = bpf_ktime_get_ns();
}
}
info.count_ptr = count_ptr;
info.buffer = buffer;
long err = bpf_map_update_elem(map_fd, &id, &info, BPF_ANY);
if (err != 0) {
char msg[] = "Error putting ssl context (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
return 0;
log_error(ctx, LOG_ERROR_PUTTING_SSL_CONTEXT, id, err, 0l);
}
return 0;
}
static __always_inline int ssl_uretprobe(struct pt_regs *ctx, struct bpf_map_def* map_fd, __u32 flags) {
static __always_inline void ssl_uretprobe(struct pt_regs *ctx, struct bpf_map_def* map_fd, __u32 flags) {
__u64 id = bpf_get_current_pid_tgid();
if (!should_tap(id >> 32)) {
return 0;
return;
}
struct ssl_info *infoPtr = bpf_map_lookup_elem(map_fd, &id);
if (infoPtr == 0) {
char msg[] = "Error getting ssl context info (id: %ld)";
bpf_trace_printk(msg, sizeof(msg), id);
return 0;
if (infoPtr == NULL) {
log_error(ctx, LOG_ERROR_GETTING_SSL_CONTEXT, id, 0l, 0l);
return;
}
struct ssl_info info;
long err = bpf_probe_read(&info, sizeof(struct ssl_info), infoPtr);
bpf_map_delete_elem(map_fd, &id);
// Do not clean the map on purpose: sometimes there are two calls to ssl_read in a row,
// where the first call actually reads from the socket and gives us the chance
// to find the fd, while the other call already has all the information but offers
// no chance to get the fd.
//
// There are two risks in keeping the map items:
// 1. It gets full - we solve it by using BPF_MAP_TYPE_LRU_HASH with a hard limit
// 2. We get wrong info from an old call - we solve it by comparing the timestamp
//    info before using it
//
// bpf_map_delete_elem(map_fd, &id);
if (err != 0) {
char msg[] = "Error reading ssl context (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
return 0;
log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, id, err, ORIGIN_SSL_URETPROBE_CODE);
return;
}
if (info.fd == -1) {
char msg[] = "File descriptor is missing from ssl info (id: %ld)";
bpf_trace_printk(msg, sizeof(msg), id);
return 0;
log_error(ctx, LOG_ERROR_MISSING_FILE_DESCRIPTOR, id, 0l, 0l);
return;
}
int countBytes = PT_REGS_RC(ctx);
if (info.count_ptr != 0) {
// ssl_read_ex and ssl_write_ex return 1 for success
//
if (countBytes != 1) {
return 0;
}
size_t tempCount;
long err = bpf_probe_read(&tempCount, sizeof(size_t), (void*) info.count_ptr);
if (err != 0) {
char msg[] = "Error reading bytes count of _ex (id: %ld) (err: %ld)";
bpf_trace_printk(msg, sizeof(msg), id, err);
return 0;
}
countBytes = tempCount;
}
if (countBytes <= 0) {
return 0;
}
struct tlsChunk* c;
int zero = 0;
// If other thread, running on the same CPU get to this point at the same time like us
// the data will be corrupted - protection may be added in the future
//
c = bpf_map_lookup_elem(&heap, &zero);
if (!c) {
char msg[] = "Unable to allocate chunk (id: %ld)";
bpf_trace_printk(msg, sizeof(msg), id);
return 0;
}
size_t recorded = MIN(countBytes, sizeof(c->data));
c->flags = flags;
c->pid = id >> 32;
c->tgid = id;
c->len = countBytes;
c->recorded = recorded;
c->fd = info.fd;
// This ugly trick is for the ebpf verifier happiness
//
if (recorded == sizeof(c->data)) {
err = bpf_probe_read(c->data, sizeof(c->data), info.buffer);
} else {
recorded &= sizeof(c->data) - 1; // Buffer size must be a power of 2
err = bpf_probe_read(c->data, recorded, info.buffer);
}
if (err != 0) {
char msg[] = "Error reading from ssl buffer %ld - %ld";
bpf_trace_printk(msg, sizeof(msg), id, err);
return 0;
}
__u32 pid = id >> 32;
__u32 fd = info.fd;
__u64 key = (__u64) pid << 32 | fd;
struct fd_info *fdinfo = bpf_map_lookup_elem(&file_descriptor_to_ipv4, &key);
if (fdinfo != 0) {
err = bpf_probe_read(c->address, sizeof(c->address), fdinfo->ipv4_addr);
c->flags |= (fdinfo->flags & FLAGS_IS_CLIENT_BIT);
if (err != 0) {
char msg[] = "Error reading from fd address %ld - %ld";
bpf_trace_printk(msg, sizeof(msg), id, err);
}
}
bpf_perf_event_output(ctx, &chunks_buffer, BPF_F_CURRENT_CPU, c, sizeof(struct tlsChunk));
return 0;
output_ssl_chunk(ctx, &info, id, flags);
}
SEC("uprobe/ssl_write")
int BPF_KPROBE(ssl_write, void* ssl, void* buffer, int num) {
return ssl_uprobe(ssl, buffer, num, &ssl_write_context, 0);
void BPF_KPROBE(ssl_write, void* ssl, void* buffer, int num) {
ssl_uprobe(ctx, ssl, buffer, num, &ssl_write_context, 0);
}
SEC("uretprobe/ssl_write")
int BPF_KPROBE(ssl_ret_write) {
return ssl_uretprobe(ctx, &ssl_write_context, 0);
void BPF_KPROBE(ssl_ret_write) {
ssl_uretprobe(ctx, &ssl_write_context, 0);
}
SEC("uprobe/ssl_read")
int BPF_KPROBE(ssl_read, void* ssl, void* buffer, int num) {
return ssl_uprobe(ssl, buffer, num, &ssl_read_context, 0);
void BPF_KPROBE(ssl_read, void* ssl, void* buffer, int num) {
ssl_uprobe(ctx, ssl, buffer, num, &ssl_read_context, 0);
}
SEC("uretprobe/ssl_read")
int BPF_KPROBE(ssl_ret_read) {
return ssl_uretprobe(ctx, &ssl_read_context, FLAGS_IS_READ_BIT);
void BPF_KPROBE(ssl_ret_read) {
ssl_uretprobe(ctx, &ssl_read_context, FLAGS_IS_READ_BIT);
}
SEC("uprobe/ssl_write_ex")
int BPF_KPROBE(ssl_write_ex, void* ssl, void* buffer, size_t num, size_t *written) {
return ssl_uprobe(ssl, buffer, num, &ssl_write_context, written);
void BPF_KPROBE(ssl_write_ex, void* ssl, void* buffer, size_t num, size_t *written) {
ssl_uprobe(ctx, ssl, buffer, num, &ssl_write_context, written);
}
SEC("uretprobe/ssl_write_ex")
int BPF_KPROBE(ssl_ret_write_ex) {
return ssl_uretprobe(ctx, &ssl_write_context, 0);
void BPF_KPROBE(ssl_ret_write_ex) {
ssl_uretprobe(ctx, &ssl_write_context, 0);
}
SEC("uprobe/ssl_read_ex")
int BPF_KPROBE(ssl_read_ex, void* ssl, void* buffer, size_t num, size_t *readbytes) {
return ssl_uprobe(ssl, buffer, num, &ssl_read_context, readbytes);
void BPF_KPROBE(ssl_read_ex, void* ssl, void* buffer, size_t num, size_t *readbytes) {
ssl_uprobe(ctx, ssl, buffer, num, &ssl_read_context, readbytes);
}
SEC("uretprobe/ssl_read_ex")
int BPF_KPROBE(ssl_ret_read_ex) {
return ssl_uretprobe(ctx, &ssl_read_context, FLAGS_IS_READ_BIT);
void BPF_KPROBE(ssl_ret_read_ex) {
ssl_uretprobe(ctx, &ssl_read_context, FLAGS_IS_READ_BIT);
}


@@ -7,6 +7,8 @@ Copyright (C) UP9 Inc.
#include "include/headers.h"
#include "include/util.h"
#include "include/maps.h"
#include "include/log.h"
#include "include/logger_messages.h"
#include "include/pids.h"
// To avoid multiple .o files

tap/tlstapper/bpf_logger.go Normal file

@@ -0,0 +1,116 @@
package tlstapper
import (
"bytes"
"encoding/binary"
"strings"
"github.com/cilium/ebpf/perf"
"github.com/go-errors/errors"
"github.com/up9inc/mizu/shared/logger"
)
const logPrefix = "[bpf] "
// The same consts defined in log.h
//
const logLevelError = 0
const logLevelInfo = 1
const logLevelDebug = 2
type logMessage struct {
Level uint32
MessageCode uint32
Arg1 uint64
Arg2 uint64
Arg3 uint64
}
type bpfLogger struct {
logReader *perf.Reader
}
func newBpfLogger() *bpfLogger {
return &bpfLogger{
logReader: nil,
}
}
func (p *bpfLogger) init(bpfObjects *tlsTapperObjects, bufferSize int) error {
var err error
p.logReader, err = perf.NewReader(bpfObjects.LogBuffer, bufferSize)
if err != nil {
return errors.Wrap(err, 0)
}
return nil
}
func (p *bpfLogger) close() error {
return p.logReader.Close()
}
func (p *bpfLogger) poll() {
logger.Log.Infof("Start polling for bpf logs")
for {
record, err := p.logReader.Read()
if err != nil {
if errors.Is(err, perf.ErrClosed) {
return
}
LogError(errors.Errorf("Error reading from bpf logger perf buffer, aborting logger! %w", err))
return
}
if record.LostSamples != 0 {
logger.Log.Infof("Log buffer is full, dropped %d logs", record.LostSamples)
continue
}
buffer := bytes.NewReader(record.RawSample)
var log logMessage
if err := binary.Read(buffer, binary.LittleEndian, &log); err != nil {
LogError(errors.Errorf("Error parsing log %v", err))
continue
}
p.log(&log)
}
}
func (p *bpfLogger) log(log *logMessage) {
if int(log.MessageCode) >= len(bpfLogMessages) {
logger.Log.Errorf("Unknown message code from bpf logger %d", log.MessageCode)
return
}
format := bpfLogMessages[log.MessageCode]
tokensCount := strings.Count(format, "%")
if tokensCount == 0 {
p.logLevel(log.Level, format)
} else if tokensCount == 1 {
p.logLevel(log.Level, format, log.Arg1)
} else if tokensCount == 2 {
p.logLevel(log.Level, format, log.Arg1, log.Arg2)
} else if tokensCount == 3 {
p.logLevel(log.Level, format, log.Arg1, log.Arg2, log.Arg3)
}
}
func (p *bpfLogger) logLevel(level uint32, format string, args ...interface{}) {
if level == logLevelError {
logger.Log.Errorf(logPrefix+format, args...)
} else if level == logLevelInfo {
logger.Log.Infof(logPrefix+format, args...)
} else if level == logLevelDebug {
logger.Log.Debugf(logPrefix+format, args...)
}
}
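The logger above is passive: a caller has to start the polling loop and close the perf reader on shutdown. A minimal wiring sketch, assuming the generated tlsTapperObjects value is already loaded; the function name runBpfLogger and the buffer size are illustrative, not part of this change:

    // Illustrative only - shows how a caller could drive bpfLogger.
    // Uses only the newBpfLogger/init/poll/close functions defined above.
    func runBpfLogger(objs *tlsTapperObjects) (func() error, error) {
        bpfLog := newBpfLogger()
        if err := bpfLog.init(objs, 4096); err != nil { // 4096 is a placeholder buffer size
            return nil, err
        }
        // poll() blocks until the reader is closed, so it runs in its own goroutine.
        go bpfLog.poll()
        // Closing the perf reader makes poll() return via perf.ErrClosed.
        return bpfLog.close, nil
    }

Closing the reader is the only shutdown signal poll() reacts to, which is why the sketch hands the close function back to the caller.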


@@ -0,0 +1,25 @@
package tlstapper
// Must be synced with logger_messages.h
//
var bpfLogMessages = []string {
/*0000*/ "[%d] Unable to read bytes count from _ex methods [err: %d]",
/*0001*/ "[%d] Unable to read ipv4 address [err: %d]",
/*0002*/ "[%d] Unable to read ssl buffer [err: %d]",
/*0003*/ "[%d] Buffer is too big [size: %d]",
/*0004*/ "[%d] Unable to allocate chunk in bpf heap",
/*0005*/ "[%d] Unable to read ssl context [err: %d] [origin: %d]",
/*0006*/ "[%d] Unable to put ssl context [err: %d]",
/*0007*/ "[%d] Unable to get ssl context",
/*0008*/ "[%d] File descriptor is missing for tls chunk",
/*0009*/ "[%d] Unable to put file descriptor [err: %d] [origin: %d]",
/*0010*/ "[%d] Unable to put accept info [err: %d]",
/*0011*/ "[%d] Unable to get accept info",
/*0012*/ "[%d] Unable to read accept info [err: %d]",
/*0013*/ "[%d] Unable to put file descriptor to address mapping [err: %d] [origin: %d]",
/*0014*/ "[%d] Unable to put connect info [err: %d]",
/*0015*/ "[%d] Unable to get connect info",
/*0016*/ "[%d] Unable to read connect info [err: %d]",
}
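The slice index doubles as the MessageCode the C side sends, so the order here has to match the constants in logger_messages.h (not shown in this diff). For example, the log_error(ctx, LOG_ERROR_READING_SSL_CONTEXT, id, err, ORIGIN_SSL_URETPROBE_CODE) call in the uretprobe presumably maps to index 5 above. A sketch of how such a record would be rendered, with the constant values assumed:

    // Illustrative only: assumes LOG_ERROR_READING_SSL_CONTEXT == 5 and
    // ORIGIN_SSL_URETPROBE_CODE == 2; the real values live in logger_messages.h.
    func exampleLogRendering(l *bpfLogger) {
        rec := logMessage{
            Level:       logLevelError,
            MessageCode: 5,
            Arg1:        1234, // the pid_tgid passed as id
            Arg2:        2,    // the err returned by bpf_probe_read
            Arg3:        2,
        }
        // Emits via logger.Log.Errorf:
        // [bpf] [1234] Unable to read ssl context [err: 2] [origin: 2]
        l.log(&rec)
    }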


@@ -16,14 +16,15 @@ const FLAGS_IS_READ_BIT uint32 = (1 << 1)
// Be careful when editing, alignment and padding should be exactly the same in go/c.
//
type tlsChunk struct {
Pid uint32
Tgid uint32
Len uint32
Recorded uint32
Fd uint32
Flags uint32
Address [16]byte
Data [4096]byte
Pid uint32 // process id
Tgid uint32 // thread id inside the process
Len uint32 // the size of the native buffer used to read/write the tls data (may be bigger than tlsChunk.Data[])
Start uint32 // the start offset within the native buffer
Recorded uint32 // number of bytes copied from the native buffer to tlsChunk.Data[]
Fd uint32 // the file descriptor used to read/write the tls data (probably socket file descriptor)
Flags uint32 // bitwise flags
Address [16]byte // ipv4 address and port
Data [4096]byte // actual tls data
}
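Because the Go struct must match the C struct byte for byte, the poller can decode each perf sample straight into it. The tlsPoller decoding code itself is not part of this diff; a plausible sketch, following the same binary.Read pattern bpf_logger.go uses (the function name decodeChunk is illustrative):

    // Sketch only - mirrors the logMessage decoding in bpf_logger.go.
    // Needs the "bytes" and "encoding/binary" imports.
    func decodeChunk(rawSample []byte) (*tlsChunk, error) {
        var chunk tlsChunk
        if err := binary.Read(bytes.NewReader(rawSample), binary.LittleEndian, &chunk); err != nil {
            return nil, err
        }
        return &chunk, nil
    }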
func (c *tlsChunk) getAddress() (net.IP, uint16, error) {


@@ -225,8 +225,8 @@ func (p *tlsPoller) logTls(chunk *tlsChunk, ip net.IP, port uint16) {
str := strings.ReplaceAll(strings.ReplaceAll(string(chunk.Data[0:chunk.Recorded]), "\n", " "), "\r", "")
logger.Log.Infof("PID: %v (tid: %v) (fd: %v) (client: %v) (addr: %v:%v) (fdaddr %v:%v>%v:%v) (recorded %v out of %v) - %v - %v",
logger.Log.Infof("PID: %v (tid: %v) (fd: %v) (client: %v) (addr: %v:%v) (fdaddr %v:%v>%v:%v) (recorded %v out of %v starting at %v) - %v - %v",
chunk.Pid, chunk.Tgid, chunk.Fd, flagsStr, ip, port,
srcIp, srcPort, dstIp, dstPort,
chunk.Recorded, chunk.Len, str, hex.EncodeToString(chunk.Data[0:chunk.Recorded]))
chunk.Recorded, chunk.Len, chunk.Start, str, hex.EncodeToString(chunk.Data[0:chunk.Recorded]))
}


@@ -24,6 +24,8 @@ func UpdateTapTargets(tls *TlsTapper, pods *[]v1.Pod, procfs string) error {
if err != nil {
return err
}
tls.ClearPids()
for _, pid := range containerPids {
if err := tls.AddPid(procfs, pid); err != nil {


@@ -5,8 +5,11 @@ import (
"github.com/go-errors/errors"
"github.com/up9inc/mizu/shared/logger"
"github.com/up9inc/mizu/tap/api"
"sync"
)
const GLOABL_TAP_PID = 0
//go:generate go run github.com/cilium/ebpf/cmd/bpf2go tlsTapper bpf/tls_tapper.c -- -O2 -g -D__TARGET_ARCH_x86
type TlsTapper struct {
@@ -14,10 +17,12 @@ type TlsTapper struct {
syscallHooks syscallHooks
sslHooksStructs []sslHooks
poller *tlsPoller
bpfLogger *bpfLogger
registeredPids sync.Map
}
func (t *TlsTapper) Init(bufferSize int, procfs string, extension *api.Extension) error {
logger.Log.Infof("Initializing tls tapper (bufferSize: %v)", bufferSize)
func (t *TlsTapper) Init(chunksBufferSize int, logBufferSize int, procfs string, extension *api.Extension) error {
logger.Log.Infof("Initializing tls tapper (chunksSize: %d) (logSize: %d)", chunksBufferSize, logBufferSize)
if err := setupRLimit(); err != nil {
return err
@@ -35,16 +40,25 @@ func (t *TlsTapper) Init(bufferSize int, procfs string, extension *api.Extension
t.sslHooksStructs = make([]sslHooks, 0)
t.bpfLogger = newBpfLogger()
if err := t.bpfLogger.init(&t.bpfObjects, logBufferSize); err != nil {
return err
}
t.poller = newTlsPoller(t, extension, procfs)
return t.poller.init(&t.bpfObjects, bufferSize)
return t.poller.init(&t.bpfObjects, chunksBufferSize)
}
func (t *TlsTapper) Poll(emitter api.Emitter, options *api.TrafficFilteringOptions) {
t.poller.poll(emitter, options)
}
func (t *TlsTapper) PollForLogging() {
t.bpfLogger.poll()
}
func (t *TlsTapper) GlobalTap(sslLibrary string) error {
return t.tapPid(0, sslLibrary)
return t.tapPid(GLOABL_TAP_PID, sslLibrary)
}
func (t *TlsTapper) AddPid(procfs string, pid uint32) error {
@@ -70,6 +84,21 @@ func (t *TlsTapper) RemovePid(pid uint32) error {
return nil
}
func (t *TlsTapper) ClearPids() {
t.registeredPids.Range(func(key, v interface{}) bool {
pid := key.(uint32)
if pid == GLOABL_TAP_PID {
return true
}
if err := t.RemovePid(pid); err != nil {
LogError(err)
}
t.registeredPids.Delete(key)
return true
})
}
func (t *TlsTapper) Close() []error {
errors := make([]error, 0)
@@ -83,6 +112,10 @@ func (t *TlsTapper) Close() []error {
errors = append(errors, sslHooks.close()...)
}
if err := t.bpfLogger.close(); err != nil {
errors = append(errors, err)
}
if err := t.poller.close(); err != nil {
errors = append(errors, err)
}
@@ -116,6 +149,8 @@ func (t *TlsTapper) tapPid(pid uint32, sslLibrary string) error {
if err := pids.Put(pid, uint32(1)); err != nil {
return errors.Wrap(err, 0)
}
t.registeredPids.Store(pid, true)
return nil
}
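With these changes a caller now sizes two perf buffers and runs two polling loops, one for tls chunks and one for bpf logs. A minimal sketch of that flow, assuming the caller already has the extension, emitter, filtering options and a stop channel; the buffer sizes and the function name runTlsTapper are placeholders, not values from this change:

    // Illustrative only: drives the tapper through the new Init signature.
    func runTlsTapper(ext *api.Extension, emitter api.Emitter, opts *api.TrafficFilteringOptions, stop <-chan struct{}) error {
        var tapper TlsTapper
        if err := tapper.Init(4096, 1024, "/proc", ext); err != nil {
            return err
        }
        go tapper.PollForLogging()    // drains the log_buffer perf map
        go tapper.Poll(emitter, opts) // drains the chunks_buffer perf map
        <-stop
        for _, err := range tapper.Close() {
            LogError(err)
        }
        return nil
    }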


@@ -78,6 +78,7 @@ type tlsTapperMapSpecs struct {
ConnectSyscallInfo *ebpf.MapSpec `ebpf:"connect_syscall_info"`
FileDescriptorToIpv4 *ebpf.MapSpec `ebpf:"file_descriptor_to_ipv4"`
Heap *ebpf.MapSpec `ebpf:"heap"`
LogBuffer *ebpf.MapSpec `ebpf:"log_buffer"`
PidsMap *ebpf.MapSpec `ebpf:"pids_map"`
SslReadContext *ebpf.MapSpec `ebpf:"ssl_read_context"`
SslWriteContext *ebpf.MapSpec `ebpf:"ssl_write_context"`
@@ -107,6 +108,7 @@ type tlsTapperMaps struct {
ConnectSyscallInfo *ebpf.Map `ebpf:"connect_syscall_info"`
FileDescriptorToIpv4 *ebpf.Map `ebpf:"file_descriptor_to_ipv4"`
Heap *ebpf.Map `ebpf:"heap"`
LogBuffer *ebpf.Map `ebpf:"log_buffer"`
PidsMap *ebpf.Map `ebpf:"pids_map"`
SslReadContext *ebpf.Map `ebpf:"ssl_read_context"`
SslWriteContext *ebpf.Map `ebpf:"ssl_write_context"`
@@ -119,6 +121,7 @@ func (m *tlsTapperMaps) Close() error {
m.ConnectSyscallInfo,
m.FileDescriptorToIpv4,
m.Heap,
m.LogBuffer,
m.PidsMap,
m.SslReadContext,
m.SslWriteContext,


@@ -78,6 +78,7 @@ type tlsTapperMapSpecs struct {
ConnectSyscallInfo *ebpf.MapSpec `ebpf:"connect_syscall_info"`
FileDescriptorToIpv4 *ebpf.MapSpec `ebpf:"file_descriptor_to_ipv4"`
Heap *ebpf.MapSpec `ebpf:"heap"`
LogBuffer *ebpf.MapSpec `ebpf:"log_buffer"`
PidsMap *ebpf.MapSpec `ebpf:"pids_map"`
SslReadContext *ebpf.MapSpec `ebpf:"ssl_read_context"`
SslWriteContext *ebpf.MapSpec `ebpf:"ssl_write_context"`
@@ -107,6 +108,7 @@ type tlsTapperMaps struct {
ConnectSyscallInfo *ebpf.Map `ebpf:"connect_syscall_info"`
FileDescriptorToIpv4 *ebpf.Map `ebpf:"file_descriptor_to_ipv4"`
Heap *ebpf.Map `ebpf:"heap"`
LogBuffer *ebpf.Map `ebpf:"log_buffer"`
PidsMap *ebpf.Map `ebpf:"pids_map"`
SslReadContext *ebpf.Map `ebpf:"ssl_read_context"`
SslWriteContext *ebpf.Map `ebpf:"ssl_write_context"`
@@ -119,6 +121,7 @@ func (m *tlsTapperMaps) Close() error {
m.ConnectSyscallInfo,
m.FileDescriptorToIpv4,
m.Heap,
m.LogBuffer,
m.PidsMap,
m.SslReadContext,
m.SslWriteContext,


@@ -1,4 +1,4 @@
import TrafficViewer,{useWS, DEFAULT_QUERY} from '@up9/mizu-common';
import TrafficViewer,{useWS, DEFAULT_QUERY, OasModal} from '@up9/mizu-common';
import "@up9/mizu-common/dist/index.css"
import {useEffect} from 'react';
import Api, {getWebsocketUrl} from "./api";
@@ -17,8 +17,7 @@ const App = () => {
},[])
return <>
<TrafficViewer message={message} error={error} isWebSocketOpen={isOpen}
trafficViewerApiProp={trafficViewerApi} ></TrafficViewer>
</>
}


@@ -1,12 +1,12 @@
{
"name": "@up9/mizu-common",
"version": "1.0.10",
"version": "0.0.0",
"lockfileVersion": 2,
"requires": true,
"packages": {
"": {
"name": "@up9/mizu-common",
"version": "1.0.10",
"version": "0.0.0",
"license": "MIT",
"dependencies": {
"@craco/craco": "^6.4.3",


@@ -1,6 +1,6 @@
{
"name": "@up9/mizu-common",
"version": "1.0.130",
"version": "0.0.0",
"description": "Made with create-react-library",
"author": "",
"license": "MIT",
@@ -27,7 +27,7 @@
"@material-ui/icons": "^4.11.2",
"@material-ui/lab": "^4.0.0-alpha.60",
"node-sass": "^6.0.0",
"react":"^17.0.2",
"react": "^17.0.2",
"react-dom": "^17.0.2",
"recoil": "^0.5.2",
"react-copy-to-clipboard": "^5.0.3",
@@ -49,6 +49,7 @@
"node-fetch": "^3.1.1",
"numeral": "^2.0.6",
"protobuf-decoder": "^0.1.0",
"react-resizable": "^3.0.4",
"react-graph-vis": "^1.0.7",
"react-lowlight": "^3.0.0",
"react-router-dom": "^6.2.1",
@@ -77,7 +78,7 @@
"rollup-plugin-postcss": "^4.0.2",
"rollup-plugin-sass": "^1.2.10",
"rollup-plugin-scss": "^3.0.0",
"react":"^17.0.2",
"react": "^17.0.2",
"react-dom": "^17.0.2",
"typescript": "^4.2.4"
},
@@ -90,4 +91,4 @@
"files": [
"dist"
]
}
}


@@ -6,23 +6,32 @@
padding: 10px
.selectHeader
font-family: Source Sans Pro,Lucida Grande,Tahoma,sans-serif
display: flex
align-items: center
font-weight: 900
width: 100%
margin-top: -1%
.openApilogo
width: 36px
width: 43px
.title
color:#494677
font-family: Lato
font-size: 20px
font-family: Source Sans Pro,Lucida Grande,Tahoma,sans-serif
font-size: 28px
font-weight: 600
.selectContainer
margin-left: 1%
width: 14%
margin-bottom: 1%
margin-top: 1%
.redoc
height: 98%
overflow-y: scroll
height: 85%
overflow-y: scroll
.borderLine
border-top: 1px solid #dee6fe
margin-bottom: 1%


@@ -1,4 +1,4 @@
import { Box, Fade, FormControl, MenuItem, Modal, Backdrop, ListSubheader } from "@material-ui/core";
import { Box, Fade, FormControl, MenuItem, Modal, Backdrop } from "@material-ui/core";
import { useCallback, useEffect, useState } from "react";
import { RedocStandalone } from "redoc";
import closeIcon from "assets/closeIcon.svg";
@@ -9,6 +9,7 @@ import { redocThemeOptions } from "./redocThemeOptions";
import React from "react";
import { Select } from "../UI/Select";
const modalStyle = {
position: 'absolute',
top: '6%',
@@ -23,63 +24,44 @@ const modalStyle = {
color: '#000',
};
const ipAddressWithPortRegex = new RegExp('([0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}):([0-9]{1,5})');
const OasModal = ({ openModal, handleCloseModal, getOasServices, getOasByService}) => {
const OasModal = ({ openModal, handleCloseModal, getOasServices, getOasByService }) => {
const [oasServices, setOasServices] = useState([] as string[])
const [selectedServiceName, setSelectedServiceName] = useState("");
const [selectedServiceSpec, setSelectedServiceSpec] = useState(null);
const [resolvedServices, setResolvedServices] = useState([]);
const [unResolvedServices, setUnResolvedServices] = useState([]);
const onSelectedOASService = useCallback( async (selectedService) => {
if (!!selectedService){
setSelectedServiceName(selectedService);
if(oasServices.length === 0){
return
}
try {
const data = await getOasByService(selectedService);
setSelectedServiceSpec(data);
} catch (e) {
toast.error("Error occurred while fetching service OAS spec");
console.error(e);
}
const onSelectedOASService = async (selectedService) => {
if (oasServices.length === 0) {
setSelectedServiceSpec(null);
setSelectedServiceName("");
return
}
},[oasServices.length])
const resolvedArrayBuilder = useCallback(async(services) => {
const resServices = [];
const unResServices = [];
services.forEach(s => {
if(ipAddressWithPortRegex.test(s)){
unResServices.push(s);
}
else {
resServices.push(s);
}
});
resServices.sort();
unResServices.sort();
onSelectedOASService(resServices[0]);
setResolvedServices(resServices);
setUnResolvedServices(unResServices);
},[onSelectedOASService])
else {
setSelectedServiceName(selectedService ? selectedService : oasServices[0]);
}
try {
const data = await getOasByService(selectedService ? selectedService : oasServices[0]);
setSelectedServiceSpec(data);
} catch (e) {
toast.error("Error occurred while fetching service OAS spec");
console.error(e);
}
};
useEffect(() => {
(async () => {
try {
const services = await getOasServices();
resolvedArrayBuilder(services);
setOasServices(services);
} catch (e) {
console.error(e);
}
})();
}, [openModal,resolvedArrayBuilder]);
}, [openModal]);
useEffect(() => {
onSelectedOASService(null);
}, [oasServices])
return (
<Modal
@@ -90,48 +72,41 @@ const OasModal = ({ openModal, handleCloseModal, getOasServices, getOasByService
closeAfterTransition
BackdropComponent={Backdrop}
BackdropProps={{
timeout: 500,
timeout: 500,
}}
>
<Fade in={openModal}>
<Box sx={modalStyle}>
<div className={style.boxContainer}>
<div className={style.selectHeader}>
<div><img src={openApiLogo} alt="openApi" className={style.openApilogo}/></div>
<div className={style.title}>OpenAPI selected service: </div>
<div className={style.selectContainer} >
<FormControl>
<Select
labelId="service-select-label"
id="service-select"
placeholder="Show OAS"
value={selectedServiceName}
onChangeCb={onSelectedOASService}
>
<ListSubheader disableSticky={true}>Resolved</ListSubheader>
{resolvedServices.map((service) => (
<MenuItem key={service} value={service}>
{service}
</MenuItem>
))}
<ListSubheader disableSticky={true}>UnResolved</ListSubheader>
{unResolvedServices.map((service) => (
<MenuItem key={service} value={service}>
{service}
</MenuItem>
))}
</Select>
</FormControl>
</div>
<div><img src={openApiLogo} alt="openAPI" className={style.openApilogo} /></div>
<div className={style.title}>Service Catalog</div>
</div>
<div style={{ cursor: "pointer" }}>
<img src={closeIcon} alt="close" onClick={handleCloseModal} />
</div>
</div>
<div className={style.selectContainer} >
<FormControl>
<Select
labelId="service-select-label"
id="service-select"
value={selectedServiceName}
onChangeCb={onSelectedOASService}
>
{oasServices.map((service) => (
<MenuItem key={service} value={service}>
{service}
</MenuItem>
))}
</Select>
</FormControl>
</div>
<div className={style.borderLine}></div>
<div className={style.redoc}>
{selectedServiceSpec && <RedocStandalone
spec={selectedServiceSpec}
options={redocThemeOptions}/>}
{selectedServiceSpec && <RedocStandalone
spec={selectedServiceSpec}
options={redocThemeOptions} />}
</div>
</Box>
</Fade>


@@ -1,35 +1,52 @@
export const redocThemeOptions = {
theme:{
codeBlock:{
backgroundColor:"#11171a",
},
colors:{
responses:{
error:{
tabTextColor:"#1b1b29"
},
info:{
tabTextColor:"#1b1b29",
},
success:{
tabTextColor:"#0c0b1a"
},
},
text:{
primary:"#1b1b29",
secondary:"#4d4d4d"
}
},
rightPanel:{
backgroundColor:"#253237",
},
sidebar:{
backgroundColor:"#ffffff"
},
typography:{
code:{
color:"#0c0b1a"
}
}
}
}
const fontFamilyVar = "Source Sans Pro, Lucida Grande, Tahoma, sans-serif"
export const redocThemeOptions = {
theme: {
codeBlock: {
backgroundColor: "#14161c",
},
components: {
buttons: {
fontFamily: fontFamilyVar,
},
httpBadges: {
fontFamily: fontFamilyVar,
}
},
colors: {
responses: {
error: {
tabTextColor: "#1b1b29"
},
info: {
tabTextColor: "#1b1b29",
},
success: {
tabTextColor: "#0c0b1a"
},
},
text: {
primary: "#1b1b29",
secondary: "#4d4d4d"
}
},
rightPanel: {
backgroundColor: "#0D0B1D",
},
sidebar: {
backgroundColor: "#ffffff"
},
typography: {
code: {
color: "#0c0b1a",
fontFamily: fontFamilyVar
},
fontFamily: fontFamilyVar,
fontSize: "90%",
fontWeight: "normal",
headings: {
fontFamily: fontFamilyVar
}
}
}
}


@@ -0,0 +1,60 @@
@import "../../variables.module"
.modalContainer
display: flex
flex-wrap: nowrap
width: 100%
height: 100%
.graphSection
flex: 85%
.filterSection
flex: 15%
border-right: 1px solid $blue-color
margin-right: 2%
height: 100%
.filters table
margin-top: 0px
tr
border-style: none
td
color: #8f9bb2
font-size: 11px
font-weight: 600
padding-top: 2px
padding-bottom: 2px
.colorBlock
display: inline-block
height: 15px
width: 50px
.filterWrapper
height: 100%
display: flex
flex-direction: column
margin-right: 10px
.servicesFilterSearch
width: calc(100% - 10px)
max-width: 300px
box-shadow: 0px 1px 5px #979797
margin-left: 10px
margin-bottom: 5px
.servicesFilter
margin-top: clamp(25px,15%,35px)
height: 100%
overflow: hidden
& .servicesFilterList
overflow-y: auto
height: 92%
.separtorLine
margin-top: 10px
border: 1px solid #E9EBF8


@@ -0,0 +1,227 @@
import React, { useState, useEffect, useCallback, useMemo } from "react";
import { Box, Fade, Modal, Backdrop, Button } from "@material-ui/core";
import { toast } from "react-toastify";
import spinnerStyle from '../UI/style/Spinner.module.sass';
import spinnerImg from 'assets/spinner.svg';
import Graph from "react-graph-vis";
import debounce from 'lodash/debounce';
import ServiceMapOptions from './ServiceMapOptions'
import { useCommonStyles } from "../../helpers/commonStyle";
import refreshIcon from "assets/refresh.svg";
import closeIcon from "assets/close.svg"
import styles from './ServiceMapModal.module.sass'
import SelectList from "../UI/SelectList";
import { GraphData, ServiceMapGraph } from "./ServiceMapModalTypes"
import { ResizableBox } from "react-resizable"
import "react-resizable/css/styles.css"
import { Utils } from "../../helpers/Utils";
const modalStyle = {
position: 'absolute',
top: '6%',
left: '50%',
transform: 'translate(-50%, 0%)',
width: '89vw',
height: '82vh',
bgcolor: 'background.paper',
borderRadius: '5px',
boxShadow: 24,
p: 4,
color: '#000',
padding: "25px 15px"
};
interface LegentLabelProps {
color: string,
name: string
}
const LegentLabel: React.FC<LegentLabelProps> = ({ color, name }) => {
return <React.Fragment>
<div style={{ display: "flex", justifyContent: "space-between", alignItems: "center" }}>
<span title={name}>{name}</span>
<span style={{ background: color }} className={styles.colorBlock}></span>
</div>
</React.Fragment>
}
const protocols = [
{ key: "http", value: "HTTP", component: <LegentLabel color="#205cf5" name="HTTP" /> },
{ key: "http/2", value: "HTTP/2", component: <LegentLabel color='#244c5a' name="HTTP/2" /> },
{ key: "grpc", value: "gRPC", component: <LegentLabel color='#244c5a' name="gRPC" /> },
{ key: "amqp", value: "AMQP", component: <LegentLabel color='#ff6600' name="AMQP" /> },
{ key: "kafka", value: "KAFKA", component: <LegentLabel color='#000000' name="KAFKA" /> },
{ key: "redis", value: "REDIS", component: <LegentLabel color='#a41e11' name="REDIS" /> },]
interface ServiceMapModalProps {
isOpen: boolean;
onOpen: () => void;
onClose: () => void;
getServiceMapDataApi: () => Promise<any>
}
export const ServiceMapModal: React.FC<ServiceMapModalProps> = ({ isOpen, onClose, getServiceMapDataApi }) => {
const commonClasses = useCommonStyles();
const [isLoading, setIsLoading] = useState<boolean>(true);
const [graphData, setGraphData] = useState<GraphData>({ nodes: [], edges: [] });
const [filteredProtocols, setFilteredProtocols] = useState(protocols.map(x => x.key))
const [filteredServices, setFilteredServices] = useState([])
const [serviceMapApiData, setServiceMapApiData] = useState<ServiceMapGraph>({ edges: [], nodes: [] })
const [servicesSearchVal, setServicesSearchVal] = useState("")
const [graphOptions, setGraphOptions] = useState(ServiceMapOptions);
const getServiceMapData = useCallback(async () => {
try {
setIsLoading(true)
const serviceMapData: ServiceMapGraph = await getServiceMapDataApi()
setServiceMapApiData(serviceMapData)
const newGraphData: GraphData = { nodes: [], edges: [] }
if (serviceMapData.nodes) {
newGraphData.nodes = serviceMapData.nodes.map(mapNodesDatatoGraph)
}
if (serviceMapData.edges) {
newGraphData.edges = serviceMapData.edges.map(mapEdgesDatatoGraph)
}
setGraphData(newGraphData)
} catch (ex) {
toast.error("An error occurred while loading Mizu Service Map, see console for more details");
console.error(ex);
} finally {
setIsLoading(false)
}
// eslint-disable-next-line
}, [isOpen])
const mapNodesDatatoGraph = node => {
return {
id: node.id,
value: node.count,
label: (node.entry.name === "unresolved") ? node.name : `${node.entry.name} (${node.name})`,
title: "Count: " + node.name,
isResolved: node.entry.resolved
}
}
const mapEdgesDatatoGraph = edge => {
return {
from: edge.source.id,
to: edge.destination.id,
value: edge.count,
label: edge.count.toString(),
color: {
color: edge.protocol.backgroundColor,
highlight: edge.protocol.backgroundColor
},
}
}
const mapToKeyValForFilter = (arr) => arr.map(mapNodesDatatoGraph)
.map((edge) => { return { key: edge.label, value: edge.label } })
.sort((a, b) => { return a.key.localeCompare(b.key) });
const getServicesForFilter = useMemo(() => {
const resolved = mapToKeyValForFilter(serviceMapApiData.nodes?.filter(x => x.resolved))
const unResolved = mapToKeyValForFilter(serviceMapApiData.nodes?.filter(x => !x.resolved))
return [...resolved, ...unResolved]
}, [serviceMapApiData])
const filterServiceMap = (newProtocolsFilters?: any[], newServiceFilters?: string[]) => {
const filterProt = newProtocolsFilters || filteredProtocols
const filterService = newServiceFilters || filteredServices || getServicesForFilter.map(x => x.key)
setFilteredProtocols(filterProt)
setFilteredServices(filterService)
const newGraphData: GraphData = {
nodes: serviceMapApiData.nodes?.map(mapNodesDatatoGraph).filter(node => filterService.includes(node.label)),
edges: serviceMapApiData.edges?.filter(edge => filterProt.includes(edge.protocol.name)).map(mapEdgesDatatoGraph)
}
setGraphData(newGraphData);
}
useEffect(() => {
const resolvedServices = getServicesForFilter.map(x => x.key).filter(serviceName => !Utils.isIpAddress(serviceName))
setFilteredServices(resolvedServices)
filterServiceMap(filteredProtocols, resolvedServices)
}, [getServicesForFilter])
useEffect(() => {
getServiceMapData()
}, [getServiceMapData])
useEffect(() => {
if (graphData?.nodes?.length === 0) return;
let options = { ...graphOptions };
options.physics.barnesHut.avoidOverlap = graphData?.nodes?.length > 10 ? 0 : 1;
setGraphOptions(options);
// eslint-disable-next-line
}, [graphData?.nodes?.length])
const refreshServiceMap = debounce(() => {
getServiceMapData();
}, 500);
return (
<Modal
aria-labelledby="transition-modal-title"
aria-describedby="transition-modal-description"
open={isOpen}
onClose={onClose}
closeAfterTransition
BackdropComponent={Backdrop}
BackdropProps={{ timeout: 500 }}>
<Fade in={isOpen}>
<Box sx={modalStyle}>
<div className={styles.modalContainer}>
{/* TODO: remove error missing height */}
<ResizableBox width={200} style={{ height: '100%', minWidth: "200px" }} axis={"x"}>
<div className={styles.filterSection}>
<div className={styles.filterWrapper}>
<div className={styles.protocolsFilterList}>
<SelectList items={protocols} checkBoxWidth="5%" tableName={"Protocols"} multiSelect={true}
checkedValues={filteredProtocols} setCheckedValues={filterServiceMap} tableClassName={styles.filters} />
</div>
<div className={styles.separtorLine}></div>
<div className={styles.servicesFilter}>
<input className={commonClasses.textField + ` ${styles.servicesFilterSearch}`} placeholder="search service" value={servicesSearchVal} onChange={(event) => setServicesSearchVal(event.target.value)} />
<div className={styles.servicesFilterList}>
<SelectList items={getServicesForFilter} tableName={"Services"} tableClassName={styles.filters} multiSelect={true} searchValue={servicesSearchVal}
checkBoxWidth="5%" checkedValues={filteredServices} setCheckedValues={(newServicesForFilter) => filterServiceMap(null, newServicesForFilter)} />
</div>
</div>
</div>
</div>
</ResizableBox>
<div className={styles.graphSection}>
<div style={{ display: "flex", justifyContent: "space-between" }}>
<Button style={{ marginRight: "3%" }}
startIcon={<img src={refreshIcon} className="custom" alt="refresh" style={{ marginRight: "8%" }}></img>}
size="medium"
variant="contained"
className={commonClasses.outlinedButton + " " + commonClasses.imagedButton}
onClick={refreshServiceMap}
>
Refresh
</Button>
<img src={closeIcon} alt="close" onClick={() => onClose()} style={{ cursor: "pointer" }}></img>
</div>
{isLoading && <div className={spinnerStyle.spinnerContainer}>
<img alt="spinner" src={spinnerImg} style={{ height: 50 }} />
</div>}
{!isLoading && <div style={{ height: "100%", width: "100%" }}>
<Graph
graph={graphData}
options={graphOptions}
/>
</div>
}
</div>
</div>
</Box>
</Fade>
</Modal>
);
}


@@ -0,0 +1,60 @@
export interface GraphData {
nodes: Node[];
edges: Edge[];
}
export interface Node {
id: number;
value: number;
label: string;
title?: string;
color?: object;
}
export interface Edge {
from: number;
to: number;
value: number;
label: string;
title?: string;
color?: object;
}
export interface ServiceMapNode {
id: number;
name: string;
entry: Entry;
count: number;
resolved: boolean;
}
export interface ServiceMapEdge {
source: ServiceMapNode;
destination: ServiceMapNode;
count: number;
protocol: Protocol;
}
export interface ServiceMapGraph {
nodes: ServiceMapNode[];
edges: ServiceMapEdge[];
}
export interface Entry {
ip: string;
port: string;
name: string;
}
export interface Protocol {
name: string;
abbr: string;
macro: string;
version: string;
backgroundColor: string;
foregroundColor: string;
fontSize: number;
referenceLink: string;
ports: string[];
priority: number;
}


@@ -0,0 +1,83 @@
const ServiceMapOptions = {
physics: {
enabled: true,
solver: 'barnesHut',
barnesHut: {
theta: 0.5,
gravitationalConstant: -2000,
centralGravity: 0.3,
springLength: 180,
springConstant: 0.04,
damping: 0.09,
avoidOverlap: 0
},
},
layout: {
hierarchical: false,
randomSeed: 1 // always on node 1
},
nodes: {
shape: 'dot',
chosen: true,
color: {
background: '#27AE60',
border: '#000000',
highlight: {
background: '#27AE60',
border: '#000000',
},
},
font: {
color: '#343434',
size: 14, // px
face: 'arial',
background: 'none',
strokeWidth: 0, // px
strokeColor: '#ffffff',
align: 'center',
multi: false
},
borderWidth: 1.5,
borderWidthSelected: 2.5,
labelHighlightBold: true,
opacity: 1,
shadow: true,
},
edges: {
chosen: true,
dashes: false,
arrowStrikethrough: false,
arrows: {
to: {
enabled: true,
},
middle: {
enabled: false,
},
from: {
enabled: false,
}
},
smooth: {
enabled: true,
type: 'dynamic',
roundness: 1.0
},
font: {
color: '#343434',
size: 12, // px
face: 'arial',
background: 'none',
strokeWidth: 2, // px
strokeColor: '#ffffff',
align: 'horizontal',
multi: false,
},
labelHighlightBold: true,
selectionWidth: 1,
shadow: true,
},
autoResize: true,
};
export default ServiceMapOptions


@@ -0,0 +1,4 @@
<svg width="20" height="20" viewBox="0 0 20 20" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M18.591 9.99997C18.591 14.7446 14.7447 18.5909 10.0001 18.5909C5.25546 18.5909 1.40918 14.7446 1.40918 9.99997C1.40918 5.25534 5.25546 1.40906 10.0001 1.40906C14.7447 1.40906 18.591 5.25534 18.591 9.99997Z" fill="#E9EBF8" stroke="#BCCEFD"/>
<path d="M13.1604 8.23038L11.95 7.01994L10.1392 8.83078L8.32832 7.01994L7.11789 8.23038L8.92872 10.0412L7.12046 11.8495L8.33089 13.0599L10.1392 11.2517L11.9474 13.0599L13.1579 11.8495L11.3496 10.0412L13.1604 8.23038Z" fill="#205CF5"/>
</svg>


@@ -0,0 +1,3 @@
<svg width="26" height="26" viewBox="0 0 26 26" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.8337 11.9167H7.69308L7.69416 11.907C7.83561 11.2143 8.11247 10.5564 8.50883 9.97105C9.09865 9.10202 9.92598 8.42097 10.8922 8.00913C11.2193 7.87046 11.5606 7.7643 11.9083 7.69388C12.6297 7.54762 13.3731 7.54762 14.0945 7.69388C15.1312 7.90631 16.0825 8.41908 16.8299 9.1683L18.3639 7.63863C17.6725 6.94707 16.8546 6.39501 15.9546 6.01255C15.4956 5.81823 15.0184 5.67016 14.53 5.57055C13.5223 5.36581 12.4838 5.36581 11.4761 5.57055C10.9873 5.67057 10.5098 5.819 10.0504 6.01363C8.69682 6.58791 7.53808 7.54123 6.71374 8.7588C6.15895 9.5798 5.77099 10.5019 5.57191 11.4725C5.54158 11.6188 5.52533 11.7683 5.50366 11.9167H2.16699L6.50033 16.25L10.8337 11.9167ZM15.167 14.0834H18.3076L18.3065 14.092C18.0234 15.4806 17.205 16.7019 16.0282 17.4915C15.443 17.8882 14.7851 18.1651 14.0923 18.3062C13.3713 18.4525 12.6283 18.4525 11.9072 18.3062C11.2146 18.1648 10.5567 17.8879 9.97133 17.4915C9.68383 17.2971 9.41541 17.0758 9.16966 16.8307L7.63783 18.3625C8.32954 19.0539 9.14791 19.6056 10.0482 19.9875C10.5076 20.1825 10.9875 20.331 11.4728 20.4295C12.4801 20.6344 13.5184 20.6344 14.5257 20.4295C16.4676 20.0265 18.1757 18.8819 19.2869 17.2391C19.8412 16.4187 20.2288 15.4974 20.4277 14.5275C20.4569 14.3813 20.4742 14.2318 20.4959 14.0834H23.8337L19.5003 9.75005L15.167 14.0834Z" fill="#205CF5"/>
</svg>


@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" style="margin: auto; background: none; display: block; shape-rendering: auto;" width="200px" height="200px" viewBox="0 0 100 100" preserveAspectRatio="xMidYMid">
<circle cx="50" cy="50" fill="none" stroke="#1d3f72" stroke-width="10" r="35" stroke-dasharray="164.93361431346415 56.97787143782138" transform="rotate(275.903 50 50)">
<animateTransform attributeName="transform" type="rotate" repeatCount="indefinite" dur="1s" values="0 50 50;360 50 50" keyTimes="0;1"></animateTransform>
</circle>
<!-- [ldio] generated by https://loading.io/ --></svg>


@@ -5,152 +5,214 @@ import Moment from 'moment';
import {EntryItem} from "./EntryListItem/EntryListItem";
import down from "assets/downImg.svg";
import spinner from 'assets/spinner.svg';
import {RecoilState, useRecoilState, useRecoilValue} from "recoil";
import {RecoilState, useRecoilState, useRecoilValue, useSetRecoilState} from "recoil";
import entriesAtom from "../../recoil/entries";
import wsConnectionAtom, {WsConnectionStatus} from "../../recoil/wsConnection";
import queryAtom from "../../recoil/query";
import TrafficViewerApiAtom from "../../recoil/TrafficViewerApi";
import TrafficViewerApi from "./TrafficViewerApi";
import focusedEntryIdAtom from "../../recoil/focusedEntryId";
import {toast} from "react-toastify";
import {TOAST_CONTAINER_ID} from "../../configs/Consts";
import tappingStatusAtom from "../../recoil/tappingStatus";
import leftOffTopAtom from "../../recoil/leftOffTop";
interface EntriesListProps {
listEntryREF: any;
onSnapBrokenEvent: () => void;
isSnappedToBottom: boolean;
setIsSnappedToBottom: any;
queriedCurrent: number;
setQueriedCurrent: any;
queriedTotal: number;
setQueriedTotal: any;
startTime: number;
noMoreDataTop: boolean;
setNoMoreDataTop: (flag: boolean) => void;
leftOffTop: number;
setLeftOffTop: (leftOffTop: number) => void;
openWebSocket: (query: string, resetEntries: boolean) => void;
leftOffBottom: number;
truncatedTimestamp: number;
setTruncatedTimestamp: any;
scrollableRef: any;
listEntryREF: any;
onSnapBrokenEvent: () => void;
isSnappedToBottom: boolean;
setIsSnappedToBottom: any;
noMoreDataTop: boolean;
setNoMoreDataTop: (flag: boolean) => void;
openWebSocket: (query: string, resetEntries: boolean) => void;
scrollableRef: any;
ws: any;
}
export const EntriesList: React.FC<EntriesListProps> = ({listEntryREF, onSnapBrokenEvent, isSnappedToBottom, setIsSnappedToBottom, queriedCurrent, setQueriedCurrent, queriedTotal, setQueriedTotal, startTime, noMoreDataTop, setNoMoreDataTop, leftOffTop, setLeftOffTop, openWebSocket, leftOffBottom, truncatedTimestamp, setTruncatedTimestamp, scrollableRef}) => {
export const EntriesList: React.FC<EntriesListProps> = ({
listEntryREF,
onSnapBrokenEvent,
isSnappedToBottom,
setIsSnappedToBottom,
noMoreDataTop,
setNoMoreDataTop,
openWebSocket,
scrollableRef,
ws
}) => {
const [entries, setEntries] = useRecoilState(entriesAtom);
const wsConnection = useRecoilValue(wsConnectionAtom);
const query = useRecoilValue(queryAtom);
const isWsConnectionClosed = wsConnection === WsConnectionStatus.Closed;
const trafficViewerApi = useRecoilValue(TrafficViewerApiAtom as RecoilState<TrafficViewerApi>)
const [entries, setEntries] = useRecoilState(entriesAtom);
const query = useRecoilValue(queryAtom);
const isWsConnectionClosed = ws?.current?.readyState !== WebSocket.OPEN;
const [focusedEntryId, setFocusedEntryId] = useRecoilState(focusedEntryIdAtom);
const [leftOffTop, setLeftOffTop] = useRecoilState(leftOffTopAtom);
const setTappingStatus = useSetRecoilState(tappingStatusAtom);
const [loadMoreTop, setLoadMoreTop] = useState(false);
const [isLoadingTop, setIsLoadingTop] = useState(false);
const trafficViewerApi = useRecoilValue(TrafficViewerApiAtom as RecoilState<TrafficViewerApi>)
useEffect(() => {
const list = document.getElementById('list').firstElementChild;
list.addEventListener('scroll', (e) => {
const el: any = e.target;
if(el.scrollTop === 0) {
setLoadMoreTop(true);
} else {
setNoMoreDataTop(false);
setLoadMoreTop(false);
}
});
}, [setLoadMoreTop, setNoMoreDataTop]);
const [loadMoreTop, setLoadMoreTop] = useState(false);
const [isLoadingTop, setIsLoadingTop] = useState(false);
const [queriedTotal, setQueriedTotal] = useState(0);
const [startTime, setStartTime] = useState(0);
const [truncatedTimestamp, setTruncatedTimestamp] = useState(0);
const memoizedEntries = useMemo(() => {
return entries;
},[entries]);
const leftOffBottom = entries.length > 0 ? entries[entries.length - 1].id : -1;
const getOldEntries = useCallback(async () => {
useEffect(() => {
const list = document.getElementById('list').firstElementChild;
list.addEventListener('scroll', (e) => {
const el: any = e.target;
if (el.scrollTop === 0) {
setLoadMoreTop(true);
} else {
setNoMoreDataTop(false);
setLoadMoreTop(false);
if (leftOffTop === null || leftOffTop <= 0) {
return;
}
setIsLoadingTop(true);
const data = await trafficViewerApi.fetchEntries(leftOffTop, -1, query, 100, 3000);
if (!data || data.data === null || data.meta === null) {
setNoMoreDataTop(true);
setIsLoadingTop(false);
return;
}
setLeftOffTop(data.meta.leftOff);
}
});
}, [setLoadMoreTop, setNoMoreDataTop]);
let scrollTo: boolean;
if (data.meta.leftOff === 0) {
setNoMoreDataTop(true);
scrollTo = false;
} else {
scrollTo = true;
}
setIsLoadingTop(false);
const memoizedEntries = useMemo(() => {
return entries;
}, [entries]);
const newEntries = [...data.data.reverse(), ...entries];
setEntries(newEntries);
const getOldEntries = useCallback(async () => {
setLoadMoreTop(false);
if (leftOffTop === null || leftOffTop <= 0) {
return;
}
setIsLoadingTop(true);
const data = await trafficViewerApi.fetchEntries(leftOffTop, -1, query, 100, 3000);
if (!data || data.data === null || data.meta === null) {
setNoMoreDataTop(true);
setIsLoadingTop(false);
return;
}
setLeftOffTop(data.meta.leftOff);
setQueriedCurrent(queriedCurrent + data.meta.current);
setQueriedTotal(data.meta.total);
setTruncatedTimestamp(data.meta.truncatedTimestamp);
let scrollTo: boolean;
if (data.meta.leftOff === 0) {
setNoMoreDataTop(true);
scrollTo = false;
} else {
scrollTo = true;
}
setIsLoadingTop(false);
if (scrollTo) {
scrollableRef.current.scrollToIndex(data.data.length - 1);
}
},[setLoadMoreTop, setIsLoadingTop, entries, setEntries, query, setNoMoreDataTop, leftOffTop, setLeftOffTop, queriedCurrent, setQueriedCurrent, setQueriedTotal, setTruncatedTimestamp, scrollableRef]);
const newEntries = [...data.data.reverse(), ...entries];
setEntries(newEntries);
useEffect(() => {
if(!isWsConnectionClosed || !loadMoreTop || noMoreDataTop) return;
getOldEntries();
}, [loadMoreTop, noMoreDataTop, getOldEntries, isWsConnectionClosed]);
setQueriedTotal(data.meta.total);
setTruncatedTimestamp(data.meta.truncatedTimestamp);
const scrollbarVisible = scrollableRef.current?.childWrapperRef.current.clientHeight > scrollableRef.current?.wrapperRef.current.clientHeight;
if (scrollTo) {
scrollableRef.current.scrollToIndex(data.data.length - 1);
}
}, [setLoadMoreTop, setIsLoadingTop, entries, setEntries, query, setNoMoreDataTop, leftOffTop, setLeftOffTop, setQueriedTotal, setTruncatedTimestamp, scrollableRef]);
return <React.Fragment>
<div className={styles.list}>
<div id="list" ref={listEntryREF} className={styles.list}>
{isLoadingTop && <div className={styles.spinnerContainer}>
<img alt="spinner" src={spinner} style={{height: 25}}/>
</div>}
{noMoreDataTop && <div id="noMoreDataTop" className={styles.noMoreDataAvailable}>No more data available</div>}
<ScrollableFeedVirtualized ref={scrollableRef} itemHeight={48} marginTop={10} onSnapBroken={onSnapBrokenEvent}>
{false /* It's because the first child is ignored by ScrollableFeedVirtualized */}
{memoizedEntries.map(entry => <EntryItem
key={`entry-${entry.id}`}
entry={entry}
style={{}}
headingMode={false}
/>)}
</ScrollableFeedVirtualized>
<button type="button"
title="Fetch old records"
className={`${styles.btnOld} ${!scrollbarVisible && leftOffTop > 0 ? styles.showButton : styles.hideButton}`}
onClick={(_) => {
trafficViewerApi.webSocket.close()
getOldEntries();
}}>
<img alt="down" src={down} />
</button>
<button type="button"
title="Snap to bottom"
className={`${styles.btnLive} ${isSnappedToBottom && !isWsConnectionClosed ? styles.hideButton : styles.showButton}`}
onClick={(_) => {
if (isWsConnectionClosed) {
if (query) {
openWebSocket(`(${query}) and leftOff(${leftOffBottom})`, false);
} else {
openWebSocket(`leftOff(${leftOffBottom})`, false);
}
}
scrollableRef.current.jumpToBottom();
setIsSnappedToBottom(true);
}}>
<img alt="down" src={down} />
</button>
</div>
useEffect(() => {
if (!isWsConnectionClosed || !loadMoreTop || noMoreDataTop) return;
getOldEntries();
}, [loadMoreTop, noMoreDataTop, getOldEntries, isWsConnectionClosed]);
<div className={styles.footer}>
<div>Displaying <b id="entries-length">{entries?.length}</b> results out of <b id="total-entries">{queriedTotal}</b> total</div>
{startTime !== 0 && <div>Started listening at <span style={{marginRight: 5, fontWeight: 600, fontSize: 13}}>{Moment(truncatedTimestamp ? truncatedTimestamp : startTime).utc().format('MM/DD/YYYY, h:mm:ss.SSS A')}</span></div>}
</div>
</div>
</React.Fragment>;
const scrollbarVisible = scrollableRef.current?.childWrapperRef.current.clientHeight > scrollableRef.current?.wrapperRef.current.clientHeight;
if (ws.current) {
ws.current.onmessage = (e) => {
if (!e?.data) return;
const message = JSON.parse(e.data);
switch (message.messageType) {
case "entry":
const entry = message.data;
if (!focusedEntryId) setFocusedEntryId(entry.id.toString());
const newEntries = [...entries, entry];
if (newEntries.length === 10001) {
setLeftOffTop(newEntries[0].id);
newEntries.shift();
setNoMoreDataTop(false);
}
setEntries(newEntries);
break;
case "status":
setTappingStatus(message.tappingStatus);
break;
case "toast":
toast[message.data.type](message.data.text, {
theme: "colored",
autoClose: message.data.autoClose,
pauseOnHover: true,
progress: undefined,
containerId: TOAST_CONTAINER_ID
});
break;
case "queryMetadata":
setTruncatedTimestamp(message.data.truncatedTimestamp);
setQueriedTotal(message.data.total);
if (leftOffTop === null) {
setLeftOffTop(message.data.leftOff - 1);
}
break;
case "startTime":
setStartTime(message.data);
break;
};
}
}
return <React.Fragment>
<div className={styles.list}>
<div id="list" ref={listEntryREF} className={styles.list}>
{isLoadingTop && <div className={styles.spinnerContainer}>
<img alt="spinner" src={spinner} style={{height: 25}}/>
</div>}
{noMoreDataTop && <div id="noMoreDataTop" className={styles.noMoreDataAvailable}>No more data available</div>}
<ScrollableFeedVirtualized ref={scrollableRef} itemHeight={48} marginTop={10} onSnapBroken={onSnapBrokenEvent}>
{false /* It's because the first child is ignored by ScrollableFeedVirtualized */}
{memoizedEntries.map(entry => <EntryItem
key={`entry-${entry.id}`}
entry={entry}
style={{}}
headingMode={false}
/>)}
</ScrollableFeedVirtualized>
<button type="button"
title="Fetch old records"
className={`${styles.btnOld} ${!scrollbarVisible && leftOffTop > 0 ? styles.showButton : styles.hideButton}`}
onClick={(_) => {
trafficViewerApi.webSocket.close()
getOldEntries();
}}>
<img alt="down" src={down}/>
</button>
<button type="button"
title="Snap to bottom"
className={`${styles.btnLive} ${isSnappedToBottom && !isWsConnectionClosed ? styles.hideButton : styles.showButton}`}
onClick={(_) => {
if (isWsConnectionClosed) {
if (query) {
openWebSocket(`(${query}) and leftOff(${leftOffBottom})`, false);
} else {
openWebSocket(`leftOff(${leftOffBottom})`, false);
}
}
scrollableRef.current.jumpToBottom();
setIsSnappedToBottom(true);
}}>
<img alt="down" src={down}/>
</button>
</div>
<div className={styles.footer}>
<div>Displaying <b id="entries-length">{entries?.length}</b> results out of <b
id="total-entries">{queriedTotal}</b> total
</div>
{startTime !== 0 && <div>Started listening at <span style={{
marginRight: 5,
fontWeight: 600,
fontSize: 13
}}>{Moment(truncatedTimestamp ? truncatedTimestamp : startTime).utc().format('MM/DD/YYYY, h:mm:ss.SSS A')}</span>
</div>}
</div>
</div>
</React.Fragment>;
};


@@ -1,16 +1,18 @@
import React, {useEffect, useState} from "react";
import React, { useEffect, useState } from "react";
import EntryViewer from "./EntryDetailed/EntryViewer";
import {EntryItem} from "./EntryListItem/EntryListItem";
import {makeStyles} from "@material-ui/core";
import { EntryItem } from "./EntryListItem/EntryListItem";
import { makeStyles } from "@material-ui/core";
import Protocol from "../UI/Protocol"
import Queryable from "../UI/Queryable";
import {toast} from "react-toastify";
import {RecoilState, useRecoilState, useRecoilValue} from "recoil";
import { toast } from "react-toastify";
import { RecoilState, useRecoilValue } from "recoil";
import focusedEntryIdAtom from "../../recoil/focusedEntryId";
import trafficViewerApi from "../../recoil/TrafficViewerApi";
import TrafficViewerApi from "./TrafficViewerApi";
import TrafficViewerApiAtom from "../../recoil/TrafficViewerApi/atom";
import queryAtom from "../../recoil/query/atom";
import useWindowDimensions, { useRequestTextByWidth } from "../../hooks/WindowDimensionsHook";
import { TOAST_CONTAINER_ID } from "../../configs/Consts";
import spinner from "assets/spinner.svg";
const useStyles = makeStyles(() => ({
entryTitle: {
@@ -35,61 +37,65 @@ const useStyles = makeStyles(() => ({
}));
export const formatSize = (n: number) => n > 1000 ? `${Math.round(n / 1000)}KB` : `${n} B`;
const EntryTitle: React.FC<any> = ({protocol, data, elapsedTime}) => {
const minSizeDisplayRequestSize = 880;
const EntryTitle: React.FC<any> = ({ protocol, data, elapsedTime }) => {
const classes = useStyles();
const request = data.request;
const response = data.response;
const { width } = useWindowDimensions();
const { requestText, responseText, elapsedTimeText } = useRequestTextByWidth(width)
return <div className={classes.entryTitle}>
<Protocol protocol={protocol} horizontal={true}/>
<div style={{right: "30px", position: "absolute", display: "flex"}}>
<Protocol protocol={protocol} horizontal={true} />
{(width > minSizeDisplayRequestSize) && <div style={{ right: "30px", position: "absolute", display: "flex" }}>
{request && <Queryable
query={`requestSize == ${data.requestSize}`}
style={{margin: "0 18px"}}
style={{ margin: "0 18px" }}
displayIconOnMouseOver={true}
>
<div
style={{opacity: 0.5}}
style={{ opacity: 0.5 }}
id="entryDetailedTitleRequestSize"
>
{`Request: ${formatSize(data.requestSize)}`}
{`${requestText}${formatSize(data.requestSize)}`}
</div>
</Queryable>}
{response && <Queryable
query={`responseSize == ${data.responseSize}`}
style={{margin: "0 18px"}}
style={{ margin: "0 18px" }}
displayIconOnMouseOver={true}
>
<div
style={{opacity: 0.5}}
style={{ opacity: 0.5 }}
id="entryDetailedTitleResponseSize"
>
{`Response: ${formatSize(data.responseSize)}`}
{`${responseText}${formatSize(data.responseSize)}`}
</div>
</Queryable>}
{response && <Queryable
query={`elapsedTime >= ${elapsedTime}`}
style={{marginRight: 18}}
style={{ margin: "0 0 0 18px" }}
displayIconOnMouseOver={true}
>
<div
style={{opacity: 0.5}}
style={{ opacity: 0.5 }}
id="entryDetailedTitleElapsedTime"
>
{`Elapsed Time: ${Math.round(elapsedTime)}ms`}
{`${elapsedTimeText}${Math.round(elapsedTime)}ms`}
</div>
</Queryable>}
</div>
</div>}
</div>;
};
const EntrySummary: React.FC<any> = ({entry}) => {
const EntrySummary: React.FC<any> = ({ entry, namespace }) => {
return <EntryItem
key={`entry-${entry.id}`}
entry={entry}
style={{}}
headingMode={true}
namespace={namespace}
/>;
};
@@ -100,12 +106,13 @@ export const EntryDetailed = () => {
const focusedEntryId = useRecoilValue(focusedEntryIdAtom);
const trafficViewerApi = useRecoilValue(TrafficViewerApiAtom as RecoilState<TrafficViewerApi>)
const query = useRecoilValue(queryAtom);
const [isLoading, setIsLoading] = useState(false);
const [entryData, setEntryData] = useState(null);
useEffect(() => {
if (!focusedEntryId) return;
setEntryData(null);
setIsLoading(true);
(async () => {
try {
const entryData = await trafficViewerApi.getEntry(focusedEntryId, query);
@@ -113,31 +120,30 @@ export const EntryDetailed = () => {
} catch (error) {
if (error.response?.data?.type) {
toast[error.response.data.type](`Entry[${focusedEntryId}]: ${error.response.data.msg}`, {
position: "bottom-right",
theme: "colored",
autoClose: error.response.data.autoClose,
hideProgressBar: false,
closeOnClick: true,
pauseOnHover: true,
draggable: true,
progress: undefined,
containerId: TOAST_CONTAINER_ID
});
}
console.error(error);
} finally {
setIsLoading(false);
}
})();
// eslint-disable-next-line
}, [focusedEntryId]);
return <React.Fragment>
{entryData && <EntryTitle
{isLoading && <div style={{textAlign: "center", width: "100%", marginTop: 50}}><img alt="spinner" src={spinner} style={{height: 60}}/></div>}
{!isLoading && entryData && <EntryTitle
protocol={entryData.protocol}
data={entryData.data}
elapsedTime={entryData.data.elapsedTime}
/>}
{entryData && <EntrySummary entry={entryData.base}/>}
{!isLoading && entryData && <EntrySummary entry={entryData.base} namespace={entryData.data.namespace} />}
<React.Fragment>
{entryData && <EntryViewer
{!isLoading && entryData && <EntryViewer
representation={entryData.representation}
isRulesEnabled={entryData.isRulesEnabled}
rulesMatched={entryData.rulesMatched}


@@ -66,8 +66,10 @@
margin-top: -60px
.capture img
height: 20px
height: 14px
z-index: 1000
margin-top: 12px
margin-left: -2px
.endpointServiceContainer
display: flex
@@ -76,6 +78,7 @@
padding-right: 10px
padding-top: 4px
flex-grow: 1
padding-left: 10px
.separatorRight
display: flex


@@ -4,7 +4,7 @@ import SwapHorizIcon from '@material-ui/icons/SwapHoriz';
import styles from './EntryListItem.module.sass';
import StatusCode, {getClassification, StatusCodeClassification} from "../../UI/StatusCode";
import Protocol, {ProtocolInterface} from "../../UI/Protocol"
import eBPFLogo from '../../assets/ebpf.png';
import eBPFLogo from 'assets/lock.svg';
import {Summary} from "../../UI/Summary";
import Queryable from "../../UI/Queryable";
import ingoingIconSuccess from "assets/ingoing-traffic-success.svg"
@@ -52,6 +52,7 @@ interface EntryProps {
entry: Entry;
style: object;
headingMode: boolean;
namespace?: string;
}
enum CaptureTypes {
@@ -62,7 +63,7 @@ enum CaptureTypes {
Ebpf = "ebpf",
}
export const EntryItem: React.FC<EntryProps> = ({entry, style, headingMode}) => {
export const EntryItem: React.FC<EntryProps> = ({entry, style, headingMode, namespace}) => {
const [focusedEntryId, setFocusedEntryId] = useRecoilState(focusedEntryIdAtom);
const [queryState, setQuery] = useRecoilState(queryAtom);
@@ -140,8 +141,6 @@ export const EntryItem: React.FC<EntryProps> = ({entry, style, headingMode}) =>
const isStatusCodeEnabled = ((entry.proto.name === "http" && "status" in entry) || entry.status !== 0);
let endpointServiceContainer = "10px";
if (!isStatusCodeEnabled) endpointServiceContainer = "20px";
return <React.Fragment>
<div
@@ -178,7 +177,7 @@ export const EntryItem: React.FC<EntryProps> = ({entry, style, headingMode}) =>
{isStatusCodeEnabled && <div>
<StatusCode statusCode={entry.status} statusQuery={entry.statusQuery}/>
</div>}
<div className={styles.endpointServiceContainer} style={{paddingLeft: endpointServiceContainer}}>
<div className={styles.endpointServiceContainer}>
<Summary method={entry.method} methodQuery={entry.methodQuery} summary={entry.summary} summaryQuery={entry.summaryQuery}/>
<div className={styles.resolvedName}>
<Queryable
@@ -226,6 +225,19 @@ export const EntryItem: React.FC<EntryProps> = ({entry, style, headingMode}) =>
: ""
}
<div className={styles.separatorRight}>
{headingMode ? <Queryable
query={`namespace == "${namespace}"`}
displayIconOnMouseOver={true}
flipped={true}
iconStyle={{marginRight: "16px"}}
>
<span
className={`${styles.tcpInfo} ${styles.ip}`}
title="Namespace"
>
{namespace}
</span>
</Queryable> : null}
<Queryable
query={`src.ip == "${entry.src.ip}"`}
displayIconOnMouseOver={true}


@@ -0,0 +1,3 @@
<svg width="12" height="14" viewBox="0 0 12 14" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M3.33333 6.36364H8.66667V6.36011H10.6667V6.36364H11C11.2778 6.36364 11.5139 6.45644 11.7083 6.64205C11.9028 6.82765 12 7.05303 12 7.31818V13.0455C12 13.3106 11.9028 13.536 11.7083 13.7216C11.5139 13.9072 11.2778 14 11 14H1C0.722222 14 0.486111 13.9072 0.291666 13.7216C0.0972223 13.536 0 13.3106 0 13.0455V7.31818C0 7.05303 0.0972223 6.82765 0.291666 6.64205C0.486111 6.45644 0.722222 6.36364 1 6.36364H1.33333V4.45455C1.33333 3.23485 1.79167 2.1875 2.70833 1.3125C3.625 0.4375 4.72222 0 6 0C7.27778 0 8.375 0.4375 9.29167 1.3125C9.92325 1.91538 10.3373 2.60007 10.5337 3.36658L8.59659 3.85085C8.48731 3.40176 8.25026 3.00309 7.88542 2.65483C7.36458 2.15767 6.73611 1.90909 6 1.90909C5.26389 1.90909 4.63542 2.15767 4.11458 2.65483C3.59375 3.15199 3.33333 3.75189 3.33333 4.45455V6.36364Z" fill="#BCCEFD"/>
</svg>


@@ -16,21 +16,21 @@ import trafficViewerApiAtom from "../../recoil/TrafficViewerApi"
interface FiltersProps {
backgroundColor: string
openWebSocket: (query: string, resetEntries: boolean) => void;
reopenConnection: any;
}
export const Filters: React.FC<FiltersProps> = ({backgroundColor, openWebSocket}) => {
export const Filters: React.FC<FiltersProps> = ({backgroundColor, reopenConnection}) => {
return <div className={styles.container}>
<QueryForm
backgroundColor={backgroundColor}
openWebSocket={openWebSocket}
reopenConnection={reopenConnection}
/>
</div>;
};
interface QueryFormProps {
backgroundColor: string
openWebSocket: (query: string, resetEntries: boolean) => void;
reopenConnection: any;
}
export const modalStyle = {
@@ -47,11 +47,10 @@ export const modalStyle = {
color: '#000',
};
export const QueryForm: React.FC<QueryFormProps> = ({backgroundColor, openWebSocket}) => {
export const QueryForm: React.FC<QueryFormProps> = ({backgroundColor, reopenConnection}) => {
const formRef = useRef<HTMLFormElement>(null);
const [query, setQuery] = useRecoilState(queryAtom);
const trafficViewerApi = useRecoilValue(trafficViewerApiAtom)
const [openModal, setOpenModal] = useState(false);
@@ -63,12 +62,7 @@ export const QueryForm: React.FC<QueryFormProps> = ({backgroundColor, openWebSoc
}
const handleSubmit = (e) => {
trafficViewerApi.webSocket.close()
if (query) {
openWebSocket(`(${query}) and leftOff(-1)`, true);
} else {
openWebSocket(`leftOff(-1)`, true);
}
reopenConnection();
e.preventDefault();
}
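With openWebSocket gone from the props, the query form no longer builds leftOff() expressions or touches the socket itself. The submit handler appears to reduce to roughly the following, reconstructed from the lines above:

// QueryForm's submit handler after this change (reconstructed from the hunk).
const handleSubmit = (e) => {
  reopenConnection();
  e.preventDefault();
}

The query text still reaches the connection logic through the shared queryAtom, so TrafficViewer (further down in this diff) rebuilds the full `(query) and leftOff(-1)` filter when it reopens the socket.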

@@ -1,26 +1,26 @@
import React, { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { Filters } from "./Filters";
import { EntriesList } from "./EntriesList";
import { makeStyles } from "@material-ui/core";
import React, {useEffect, useMemo, useRef, useState} from "react";
import {Filters} from "./Filters";
import {EntriesList} from "./EntriesList";
import {makeStyles} from "@material-ui/core";
import TrafficViewerStyles from "./TrafficViewer.module.sass";
import styles from '../style/EntriesList.module.sass';
import { EntryDetailed } from "./EntryDetailed";
import {EntryDetailed} from "./EntryDetailed";
import playIcon from 'assets/run.svg';
import pauseIcon from 'assets/pause.svg';
import variables from '../../variables.module.scss';
import { toast,ToastContainer } from 'react-toastify';
import 'react-toastify/dist/ReactToastify.css';
import {ToastContainer} from 'react-toastify';
import debounce from 'lodash/debounce';
import { RecoilRoot, RecoilState, useRecoilState, useRecoilValue, useSetRecoilState } from "recoil";
import {RecoilRoot, RecoilState, useRecoilState, useRecoilValue, useSetRecoilState} from "recoil";
import entriesAtom from "../../recoil/entries";
import focusedEntryIdAtom from "../../recoil/focusedEntryId";
import websocketConnectionAtom, { WsConnectionStatus } from "../../recoil/wsConnection";
import queryAtom from "../../recoil/query";
import { TLSWarning } from "../TLSWarning/TLSWarning";
import {TLSWarning} from "../TLSWarning/TLSWarning";
import trafficViewerApiAtom from "../../recoil/TrafficViewerApi"
import TrafficViewerApi from "./TrafficViewerApi";
import { StatusBar } from "../UI/StatusBar";
import {StatusBar} from "../UI/StatusBar";
import tappingStatusAtom from "../../recoil/tappingStatus/atom";
import {TOAST_CONTAINER_ID} from "../../configs/Consts";
import leftOffTopAtom from "../../recoil/leftOffTop";
const useLayoutStyles = makeStyles(() => ({
details: {
@@ -48,34 +48,31 @@ interface TrafficViewerProps {
trafficViewerApiProp: TrafficViewerApi,
actionButtons?: JSX.Element,
isShowStatusBar?: boolean,
webSocketUrl : string,
isCloseWebSocket : boolean
webSocketUrl: string,
isCloseWebSocket: boolean,
isDemoBannerView: boolean
}
export const TrafficViewer : React.FC<TrafficViewerProps> = ({setAnalyzeStatus, trafficViewerApiProp,
actionButtons,isShowStatusBar,webSocketUrl,
isCloseWebSocket}) => {
export const TrafficViewer: React.FC<TrafficViewerProps> = ({
setAnalyzeStatus, trafficViewerApiProp,
actionButtons, isShowStatusBar, webSocketUrl,
isCloseWebSocket, isDemoBannerView
}) => {
const classes = useLayoutStyles();
const [entries, setEntries] = useRecoilState(entriesAtom);
const [focusedEntryId, setFocusedEntryId] = useRecoilState(focusedEntryIdAtom);
const [wsConnection, setWsConnection] = useRecoilState(websocketConnectionAtom);
const setEntries = useSetRecoilState(entriesAtom);
const setFocusedEntryId = useSetRecoilState(focusedEntryIdAtom);
const query = useRecoilValue(queryAtom);
const setTrafficViewerApiState = useSetRecoilState(trafficViewerApiAtom as RecoilState<TrafficViewerApi>)
const [tappingStatus, setTappingStatus] = useRecoilState(tappingStatusAtom);
const [noMoreDataTop, setNoMoreDataTop] = useState(false);
const [isSnappedToBottom, setIsSnappedToBottom] = useState(true);
const [wsReadyState, setWsReadyState] = useState(0);
const [queryBackgroundColor, setQueryBackgroundColor] = useState("#f5f5f5");
const [queriedCurrent, setQueriedCurrent] = useState(0);
const [queriedTotal, setQueriedTotal] = useState(0);
const [leftOffBottom, setLeftOffBottom] = useState(0);
const [leftOffTop, setLeftOffTop] = useState(null);
const [truncatedTimestamp, setTruncatedTimestamp] = useState(0);
const [startTime, setStartTime] = useState(0);
const setLeftOffTop = useSetRecoilState(leftOffTopAtom);
const scrollableRef = useRef(null);
const [showTLSWarning, setShowTLSWarning] = useState(false);
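Several pieces of paging state (queriedCurrent, queriedTotal, leftOffBottom, startTime, truncatedTimestamp) drop out of this component, and leftOffTop moves from local useState into a shared Recoil atom, presumably so EntriesList can read and update it directly instead of receiving it via props (those props are removed further down). The atom module itself is not part of this diff; a minimal sketch of what recoil/leftOffTop might export, with the key name and value type assumed from how the old state was used:

// recoil/leftOffTop -- assumed content, not shown in this diff.
import { atom } from "recoil";

const leftOffTopAtom = atom<number | null>({
  key: "leftOffTop",          // key name is an assumption
  default: null,              // matches the removed useState(null) default
});

export default leftOffTopAtom;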
@@ -106,36 +103,57 @@ export const TrafficViewer : React.FC<TrafficViewerProps> = ({setAnalyzeStatus,
handleQueryChange(query);
}, [query, handleQueryChange]);
useEffect(()=>{
useEffect(() => {
isCloseWebSocket && closeWebSocket()
},[isCloseWebSocket])
}, [isCloseWebSocket])
useEffect(() => {
reopenConnection()
}, [webSocketUrl])
const ws = useRef(null);
const openEmptyWebSocket = () => {
if (query) {
openWebSocket(`(${query}) and leftOff(-1)`, true);
} else {
openWebSocket(`leftOff(-1)`, true);
}
}
const closeWebSocket = () => {
if (ws?.current?.readyState === WebSocket.OPEN) {
ws.current.close();
return true;
}
}
const listEntry = useRef(null);
const openWebSocket = (query: string, resetEntries: boolean) => {
if (resetEntries) {
setFocusedEntryId(null);
setEntries([]);
setQueriedCurrent(0);
setLeftOffTop(null);
setNoMoreDataTop(false);
}
ws.current = new WebSocket(webSocketUrl);
ws.current.onopen = () => {
setWsConnection(WsConnectionStatus.Connected);
try {
ws.current = new WebSocket(webSocketUrl);
sendQueryWhenWsOpen(query);
}
ws.current.onclose = () => {
setWsConnection(WsConnectionStatus.Closed);
}
ws.current.onerror = (event) => {
console.error("WebSocket error:", event);
if (query) {
openWebSocket(`(${query}) and leftOff(${leftOffBottom})`, false);
} else {
openWebSocket(`leftOff(${leftOffBottom})`, false);
ws.current.onopen = () => {
setWsReadyState(ws?.current?.readyState);
}
ws.current.onclose = () => {
setWsReadyState(ws?.current?.readyState);
}
ws.current.onerror = (event) => {
console.error("WebSocket error:", event);
if (ws?.current?.readyState === WebSocket.OPEN) {
ws.current.close();
}
}
} catch (e) {
}
}
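openWebSocket now defers the filter to sendQueryWhenWsOpen instead of sending it inline from onopen; only the tail of that helper (`}, 500)`) is visible in the next hunk, so the sketch below is an assumption about its behaviour, not the actual implementation. It also assumes `ws` is the component's WebSocket ref declared above.

// Assumed behaviour of sendQueryWhenWsOpen: retry every 500 ms until the
// socket reports OPEN, then send the query once. Payload format not visible here.
const sendQueryWhenWsOpen = (query: string) => {
  setTimeout(() => {
    if (ws?.current?.readyState === WebSocket.OPEN) {
      ws.current.send(query);
    } else {
      sendQueryWhenWsOpen(query);
    }
  }, 500);
};

Compared with the old code, the onerror handler no longer reconnects immediately with a leftOff(leftOffBottom) query; it logs the error and closes the socket if it is still open, leaving reconnection to the explicit reopenConnection path.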
@@ -149,75 +167,14 @@ export const TrafficViewer : React.FC<TrafficViewerProps> = ({setAnalyzeStatus,
}, 500)
}
const closeWebSocket = () => {
ws.current && ws.current.close()
}
if (ws.current) {
ws.current.onmessage = (e) => {
if (!e?.data) return;
const message = JSON.parse(e.data);
switch (message.messageType) {
case "entry":
const entry = message.data;
if (!focusedEntryId) setFocusedEntryId(entry.id.toString());
const newEntries = [...entries, entry];
if (newEntries.length === 10001) {
setLeftOffTop(newEntries[0].entry.id);
newEntries.shift();
setNoMoreDataTop(false);
}
setEntries(newEntries);
break;
case "status":
setTappingStatus(message.tappingStatus);
break;
case "analyzeStatus":
setAnalyzeStatus(message.analyzeStatus);
break;
case "outboundLink":
onTLSDetected(message.Data.DstIP);
break;
case "toast":
toast[message.data.type](message.data.text, {
position: "bottom-right",
theme: "colored",
autoClose: message.data.autoClose,
hideProgressBar: false,
closeOnClick: true,
pauseOnHover: true,
draggable: true,
progress: undefined,
});
break;
case "queryMetadata":
setQueriedCurrent(queriedCurrent + message.data.current);
setQueriedTotal(message.data.total);
setLeftOffBottom(message.data.leftOff);
setTruncatedTimestamp(message.data.truncatedTimestamp);
if (leftOffTop === null) {
setLeftOffTop(message.data.leftOff - 1);
}
break;
case "startTime":
setStartTime(message.data);
break;
default:
console.error(
`unsupported websocket message type, Got: ${message.messageType}`
);
}
};
}
useEffect(() => {
setTrafficViewerApiState({...trafficViewerApiProp, webSocket : {close : closeWebSocket}});
setTrafficViewerApiState({...trafficViewerApiProp, webSocket: {close: closeWebSocket}});
(async () => {
openWebSocket("leftOff(-1)", true);
try{
try {
const tapStatusResponse = await trafficViewerApiProp.tapStatus();
setTappingStatus(tapStatusResponse);
if(setAnalyzeStatus) {
if (setAnalyzeStatus) {
const analyzeStatusResponse = await trafficViewerApiProp.analyzeStatus();
setAnalyzeStatus(analyzeStatusResponse);
}
@@ -225,47 +182,49 @@ export const TrafficViewer : React.FC<TrafficViewerProps> = ({setAnalyzeStatus,
console.error(error);
}
})()
// eslint-disable-next-line
}, []);
const toggleConnection = () => {
ws.current.close();
if (wsConnection !== WsConnectionStatus.Connected) {
if (query) {
openWebSocket(`(${query}) and leftOff(-1)`, true);
} else {
openWebSocket(`leftOff(-1)`, true);
}
if (!closeWebSocket()) {
openEmptyWebSocket();
scrollableRef.current.jumpToBottom();
setIsSnappedToBottom(true);
}
}
const onTLSDetected = (destAddress: string) => {
addressesWithTLS.add(destAddress);
setAddressesWithTLS(new Set(addressesWithTLS));
const reopenConnection = async () => {
closeWebSocket()
openEmptyWebSocket();
scrollableRef.current.jumpToBottom();
setIsSnappedToBottom(true);
}
if (!userDismissedTLSWarning) {
setShowTLSWarning(true);
}
};
useEffect(() => {
return () => {
if (ws?.current?.readyState === WebSocket.OPEN) {
ws.current.close();
}
};
}, []);
const getConnectionIndicator = () => {
switch (wsConnection) {
case WsConnectionStatus.Connected:
return <div className={`${TrafficViewerStyles.indicatorContainer} ${TrafficViewerStyles.greenIndicatorContainer}`}>
<div className={`${TrafficViewerStyles.indicator} ${TrafficViewerStyles.greenIndicator}`} />
switch (wsReadyState) {
case WebSocket.OPEN:
return <div
className={`${TrafficViewerStyles.indicatorContainer} ${TrafficViewerStyles.greenIndicatorContainer}`}>
<div className={`${TrafficViewerStyles.indicator} ${TrafficViewerStyles.greenIndicator}`}/>
</div>
default:
return <div className={`${TrafficViewerStyles.indicatorContainer} ${TrafficViewerStyles.redIndicatorContainer}`}>
<div className={`${TrafficViewerStyles.indicator} ${TrafficViewerStyles.redIndicator}`} />
return <div
className={`${TrafficViewerStyles.indicatorContainer} ${TrafficViewerStyles.redIndicatorContainer}`}>
<div className={`${TrafficViewerStyles.indicator} ${TrafficViewerStyles.redIndicator}`}/>
</div>
}
}
const getConnectionTitle = () => {
switch (wsConnection) {
case WsConnectionStatus.Connected:
switch (wsReadyState) {
case WebSocket.OPEN:
return "streaming live traffic"
default:
return "streaming paused";
@@ -274,20 +233,23 @@ export const TrafficViewer : React.FC<TrafficViewerProps> = ({setAnalyzeStatus,
const onSnapBrokenEvent = () => {
setIsSnappedToBottom(false);
if (wsConnection === WsConnectionStatus.Connected) {
if (ws?.current?.readyState === WebSocket.OPEN) {
ws.current.close();
}
}
return (
<div className={TrafficViewerStyles.TrafficPage}>
{tappingStatus && isShowStatusBar && <StatusBar />}
{tappingStatus && isShowStatusBar && <StatusBar isDemoBannerView={isDemoBannerView}/>}
<div className={TrafficViewerStyles.TrafficPageHeader}>
<div className={TrafficViewerStyles.TrafficPageStreamStatus}>
<img className={TrafficViewerStyles.playPauseIcon} style={{ visibility: wsConnection === WsConnectionStatus.Connected ? "visible" : "hidden" }} alt="pause"
src={pauseIcon} onClick={toggleConnection} />
<img className={TrafficViewerStyles.playPauseIcon} style={{ position: "absolute", visibility: wsConnection === WsConnectionStatus.Connected ? "hidden" : "visible" }} alt="play"
src={playIcon} onClick={toggleConnection} />
<img className={TrafficViewerStyles.playPauseIcon}
style={{visibility: wsReadyState === WebSocket.OPEN ? "visible" : "hidden"}} alt="pause"
src={pauseIcon} onClick={toggleConnection}/>
<img className={TrafficViewerStyles.playPauseIcon}
style={{position: "absolute", visibility: wsReadyState === WebSocket.OPEN ? "hidden" : "visible"}}
alt="play"
src={playIcon} onClick={toggleConnection}/>
<div className={TrafficViewerStyles.connectionText}>
{getConnectionTitle()}
{getConnectionIndicator()}
@@ -299,8 +261,7 @@ export const TrafficViewer : React.FC<TrafficViewerProps> = ({setAnalyzeStatus,
<div className={TrafficViewerStyles.TrafficPageListContainer}>
<Filters
backgroundColor={queryBackgroundColor}
openWebSocket={openWebSocket}
reopenConnection={reopenConnection}
/>
<div className={styles.container}>
<EntriesList
@@ -308,46 +269,48 @@ export const TrafficViewer : React.FC<TrafficViewerProps> = ({setAnalyzeStatus,
onSnapBrokenEvent={onSnapBrokenEvent}
isSnappedToBottom={isSnappedToBottom}
setIsSnappedToBottom={setIsSnappedToBottom}
queriedCurrent={queriedCurrent}
setQueriedCurrent={setQueriedCurrent}
queriedTotal={queriedTotal}
setQueriedTotal={setQueriedTotal}
startTime={startTime}
noMoreDataTop={noMoreDataTop}
setNoMoreDataTop={setNoMoreDataTop}
leftOffTop={leftOffTop}
setLeftOffTop={setLeftOffTop}
openWebSocket={openWebSocket}
leftOffBottom={leftOffBottom}
truncatedTimestamp={truncatedTimestamp}
setTruncatedTimestamp={setTruncatedTimestamp}
scrollableRef={scrollableRef}
ws={ws}
/>
</div>
</div>
<div className={classes.details} id="rightSideContainer">
{focusedEntryId && <EntryDetailed />}
<EntryDetailed/>
</div>
</div>}
<TLSWarning showTLSWarning={showTLSWarning}
setShowTLSWarning={setShowTLSWarning}
addressesWithTLS={addressesWithTLS}
setAddressesWithTLS={setAddressesWithTLS}
userDismissedTLSWarning={userDismissedTLSWarning}
setUserDismissedTLSWarning={setUserDismissedTLSWarning} />
<ToastContainer/>
setShowTLSWarning={setShowTLSWarning}
addressesWithTLS={addressesWithTLS}
setAddressesWithTLS={setAddressesWithTLS}
userDismissedTLSWarning={userDismissedTLSWarning}
setUserDismissedTLSWarning={setUserDismissedTLSWarning}/>
</div>
);
};
const MemoiedTrafficViewer = React.memo(TrafficViewer)
const TrafficViewerContainer: React.FC<TrafficViewerProps> = ({ setAnalyzeStatus, trafficViewerApiProp,
actionButtons, isShowStatusBar = true ,
webSocketUrl, isCloseWebSocket}) => {
const TrafficViewerContainer: React.FC<TrafficViewerProps> = ({
setAnalyzeStatus, trafficViewerApiProp,
actionButtons, isShowStatusBar = true,
webSocketUrl, isCloseWebSocket, isDemoBannerView
}) => {
return <RecoilRoot>
<MemoiedTrafficViewer actionButtons={actionButtons} isShowStatusBar={isShowStatusBar} webSocketUrl={webSocketUrl}
isCloseWebSocket={isCloseWebSocket} trafficViewerApiProp={trafficViewerApiProp}
setAnalyzeStatus={setAnalyzeStatus} />
setAnalyzeStatus={setAnalyzeStatus} isDemoBannerView={isDemoBannerView}/>
<ToastContainer enableMultiContainer containerId={TOAST_CONTAINER_ID}
position="bottom-right"
autoClose={5000}
hideProgressBar={false}
newestOnTop={false}
closeOnClick
rtl={false}
pauseOnFocusLoss
draggable
pauseOnHover/>
</RecoilRoot>
}
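The container keeps the same public surface apart from the new isDemoBannerView flag, which is threaded through TrafficViewer to StatusBar. A hypothetical consumer, with the URL and callback names illustrative rather than taken from this diff:

// Hypothetical usage after this change; values are placeholders.
<TrafficViewerContainer
  webSocketUrl="ws://localhost:8899/ws"      // illustrative URL
  trafficViewerApiProp={trafficViewerApi}    // app-specific TrafficViewerApi implementation
  actionButtons={<></>}
  isShowStatusBar={true}
  isCloseWebSocket={false}
  isDemoBannerView={false}                   // new prop introduced in this diff
  setAnalyzeStatus={setAnalyzeStatus}
/>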

Some files were not shown because too many files have changed in this diff.