Compare commits

...

68 Commits

Author SHA1 Message Date
Alex Haiut
0ef16bd55a fixed typo in install - helm command (#828)
#patch hotfix to main, release 27.2
2022-02-18 15:38:52 +02:00
Igor Gov
de42de9d62 Merge pull request #817 from up9inc/develop
Develop -> main (Release 27.1) #patch
2022-02-16 10:17:20 +02:00
Gustavo Massaneiro
bb3ae1ef70 Service Map node key as entry.Name instead of entry.IP (#818) 2022-02-16 09:01:59 +02:00
AmitUp9
5dfa94d76e service map - reset button and function deleted (#805)
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2022-02-15 22:57:56 +02:00
Alex Haiut
dfe63f2318 enabled service-map by default (#816) 2022-02-15 22:41:57 +02:00
gadotroee
026745ac8e #major Develop -> main (Release 27.0)
#major (Merge pull request #815 from up9inc/develop)
2022-02-15 17:46:59 +02:00
Alex Haiut
9cf64a43f5 modified namespace in helm command (#814) 2022-02-15 17:14:37 +02:00
gadotroee
5e07924aca Revert "Develop -> main (Release 27.0) (#812)" (#813)
This reverts commit 77078e78d1.

Co-authored-by: Igor Gov <iggvrv@gmail.com>
2022-02-15 17:13:08 +02:00
Igor Gov
77078e78d1 Develop -> main (Release 27.0) (#812)
* ws pause after buttons click (#798)

* ws pause after buttons click

* name change

Co-authored-by: Igor Gov <iggvrv@gmail.com>

* moved CHANGELOG to Mizu wiki in Github (#801)

* Adding logs of k8s client config (#800)

* Support rancher client config (#802)

* Move the request-response matcher's scope from global-level to TCP stream-level (#793)

* Create a new request-response matcher for each TCP stream

* Fix the `ident` formats in request-response matchers

* Don't sort the items in the HTTP tests

* Update tap/extensions/kafka/matcher.go

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>

* Temporarily change the bucket folder to the new expected

* Bring back the `deleteOlderThan` method

* Use `api.RequestResponseMatcher` instead of `interface{}` as type

* Use `api.RequestResponseMatcher` instead of `interface{}` as type (more)

* Update the key format comments

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>

* Force DaemonSet apply (#804)

Required for apply to work if the DaemonSet is created by another program e.g. Helm.

* Tag entries destination namespace (#803)

* Update main.go, socket_server_handlers.go, and 7 more files...

* Update cors.go

* omit empty namespace in json so tests pass

* fix indent

* Sending telemetry config to server (#808)

* Print http error response details (#792)

* View command - no version check (#810)

* Update mizu install command (#811)

Co-authored-by: AmitUp9 <96980485+AmitUp9@users.noreply.github.com>
Co-authored-by: Alex Haiut <alex@up9.com>
Co-authored-by: M. Mert Yıldıran <mehmet@up9.com>
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
Co-authored-by: Nimrod Gilboa Markevich <59927337+nimrod-up9@users.noreply.github.com>
Co-authored-by: RamiBerm <54766858+RamiBerm@users.noreply.github.com>
2022-02-15 16:54:48 +02:00
Igor Gov
bf2362d836 Update mizu install command (#811) 2022-02-15 16:05:28 +02:00
Igor Gov
1c11523d9d View command - no version check (#810) 2022-02-15 15:49:44 +02:00
Nimrod Gilboa Markevich
150a87413d Print http error response details (#792) 2022-02-15 12:29:34 +02:00
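The change in #792 surfaces in the diff at the bottom of this compare, where the CLI's API-server provider swaps raw `provider.client.Get`/`Post` calls for `provider.get`/`provider.post` helpers. A minimal sketch of such a helper, assuming it reads the body of a non-2xx response and folds it into the returned error; the package, `Provider` stand-in, and function names are illustrative, not the actual Mizu implementation:

```go
package apiserver

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

// Provider holds the HTTP client used to talk to the Mizu API server
// (a pared-down stand-in for the real struct in cli/apiserver).
type Provider struct {
	client *http.Client
}

// checkError converts a non-2xx response into an error that carries the
// response body, so callers can log why the API server rejected the request.
func checkError(response *http.Response, err error) (*http.Response, error) {
	if err != nil {
		return response, err
	}
	if response.StatusCode < 200 || response.StatusCode > 299 {
		defer response.Body.Close()
		body, readErr := ioutil.ReadAll(response.Body)
		if readErr != nil {
			return response, fmt.Errorf("status %d (failed reading response body: %v)", response.StatusCode, readErr)
		}
		return response, fmt.Errorf("status %d: %s", response.StatusCode, string(body))
	}
	return response, nil
}

// get is a thin wrapper of the kind that could back calls like provider.get(echoUrl).
func (provider *Provider) get(url string) (*http.Response, error) {
	return checkError(provider.client.Get(url))
}
```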
Igor Gov
f7221a7355 Sending telemetry config to server (#808) 2022-02-15 11:08:16 +02:00
RamiBerm
4197ec198c Tag entries destination namespace (#803)
* Update main.go, socket_server_handlers.go, and 7 more files...

* Update cors.go

* omit empty namespace in json so tests pass

* fix indent
2022-02-15 10:51:38 +02:00
Nimrod Gilboa Markevich
5484b7c491 Force DaemonSet apply (#804)
Required for apply to work if the DaemonSet is created by another program e.g. Helm.
2022-02-15 10:16:24 +02:00
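Per the note above, the apply has to take ownership of the DaemonSet even when another manager such as Helm created it. A minimal sketch of a forced server-side apply with client-go; whether Mizu uses exactly this mechanism is an assumption, and the DaemonSet name and field manager below are placeholders:

```go
package tapperapply

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// applyDaemonSet force-applies a DaemonSet manifest with server-side apply,
// taking over fields even if another field manager (e.g. Helm) owns them.
func applyDaemonSet(ctx context.Context, client kubernetes.Interface, namespace string, manifest []byte) error {
	force := true
	_, err := client.AppsV1().DaemonSets(namespace).Patch(
		ctx,
		"mizu-tapper-daemon-set", // placeholder name
		types.ApplyPatchType,     // server-side apply
		manifest,                 // full desired object as JSON or YAML
		metav1.PatchOptions{FieldManager: "mizu", Force: &force},
	)
	if err != nil {
		return fmt.Errorf("failed to apply DaemonSet: %w", err)
	}
	return nil
}
```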
M. Mert Yıldıran
041223b558 Move the request-response matcher's scope from global-level to TCP stream-level (#793)
* Create a new request-response matcher for each TCP stream

* Fix the `ident` formats in request-response matchers

* Don't sort the items in the HTTP tests

* Update tap/extensions/kafka/matcher.go

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>

* Temporarily change the bucket folder to the new expected

* Bring back the `deleteOlderThan` method

* Use `api.RequestResponseMatcher` instead of `interface{}` as type

* Use `api.RequestResponseMatcher` instead of `interface{}` as type (more)

* Update the key format comments

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2022-02-14 18:16:58 +03:00
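A minimal sketch of the scoping change in #793: instead of a single package-level map shared by every connection, each TCP stream constructs its own matcher, so request/response idents from different streams cannot collide and the map no longer needs global coordination. The type and method names here are illustrative, not the actual `tap/api` definitions:

```go
package tap

import "sync"

// requestResponseMatcher pairs requests with responses within one TCP stream.
// Keys are idents such as "srcIP:srcPort->dstIP:dstPort seq".
type requestResponseMatcher struct {
	mu              sync.Mutex
	pendingRequests map[string]interface{}
}

// newRequestResponseMatcher is called once per TCP stream, replacing the
// former global matcher that all streams shared.
func newRequestResponseMatcher() *requestResponseMatcher {
	return &requestResponseMatcher{pendingRequests: map[string]interface{}{}}
}

// registerRequest stores a request until its response shows up on the stream.
func (m *requestResponseMatcher) registerRequest(ident string, request interface{}) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.pendingRequests[ident] = request
}

// registerResponse returns and removes the matching request, if one exists.
func (m *requestResponseMatcher) registerResponse(ident string) (interface{}, bool) {
	m.mu.Lock()
	defer m.mu.Unlock()
	request, found := m.pendingRequests[ident]
	if found {
		delete(m.pendingRequests, ident)
	}
	return request, found
}

// tcpStream owns its matcher for the stream's whole lifetime.
type tcpStream struct {
	matcher *requestResponseMatcher
}

func newTCPStream() *tcpStream {
	return &tcpStream{matcher: newRequestResponseMatcher()}
}
```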
Igor Gov
71c04d20ef Support rancher client config (#802) 2022-02-14 16:48:33 +02:00
Igor Gov
8852bac77b Adding logs of k8s client config (#800) 2022-02-14 06:26:57 +02:00
Alex Haiut
59d21e19b7 moved CHANGELOG to Mizu wiki in Github (#801) 2022-02-14 00:16:33 +02:00
AmitUp9
884cb791fc ws pause after buttons click (#798)
* ws pause after buttons click

* name change

Co-authored-by: Igor Gov <iggvrv@gmail.com>
2022-02-13 17:03:33 +02:00
Igor Gov
727d75bccc Merge pull request #797 from up9inc/develop
Develop -> Main #major
2022-02-13 11:29:37 +02:00
M. Mert Yıldıran
cb332cedd4 Upgrade Basenine version to v0.4.16 (#796) 2022-02-13 06:22:04 +02:00
Igor Gov
391af95fb5 Fix: tapper count check (#791) 2022-02-10 17:00:13 +02:00
RoyUP9
9e62eaf4de Fixed view port (#790) 2022-02-10 16:17:09 +02:00
Igor Gov
81e830dd18 Check that API server and tappers are running in check cmd (#789)
* Check if API server and tapper are running in check cmd
2022-02-10 16:00:33 +02:00
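A minimal sketch of the kind of verification #789 adds to the check command, assuming it lists the API server and tapper pods by label and requires every matching pod to be in the Running phase; the label selectors are hypothetical:

```go
package check

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allPodsRunning returns true only if at least one pod matches the selector
// and all matching pods report the Running phase.
func allPodsRunning(ctx context.Context, client kubernetes.Interface, namespace, labelSelector string) (bool, error) {
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: labelSelector})
	if err != nil {
		return false, fmt.Errorf("listing pods: %w", err)
	}
	if len(pods.Items) == 0 {
		return false, nil
	}
	for _, pod := range pods.Items {
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return true, nil
}

// checkResources would be called for the API server pod and for the tapper
// DaemonSet pods (label values here are placeholders).
func checkResources(ctx context.Context, client kubernetes.Interface, namespace string) error {
	for _, selector := range []string{"app=mizu-api-server", "app=mizu-tapper"} {
		ok, err := allPodsRunning(ctx, client, namespace, selector)
		if err != nil {
			return err
		}
		if !ok {
			return fmt.Errorf("pods matching %q are not all running", selector)
		}
	}
	return nil
}
```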
Igor Gov
88e3fba1ef Fix: tapper status race condition (#788) 2022-02-10 12:13:40 +02:00
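#788 only says a race around the tapper status was fixed; the concrete change is not shown in this compare. As a purely illustrative sketch, the usual shape of such a fix is to serialize access to the shared status map behind a mutex:

```go
package tappers

import "sync"

// statusStore guards the per-node tapper status map so concurrent updates
// from tappers and reads from the UI cannot race on the same map.
type statusStore struct {
	mu       sync.Mutex
	statuses map[string]string // node name -> status
}

func newStatusStore() *statusStore {
	return &statusStore{statuses: map[string]string{}}
}

// Set records the latest status reported for a node.
func (s *statusStore) Set(node, status string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.statuses[node] = status
}

// Snapshot returns a copy so callers can range over it without holding the lock.
func (s *statusStore) Snapshot() map[string]string {
	s.mu.Lock()
	defer s.mu.Unlock()
	copied := make(map[string]string, len(s.statuses))
	for node, status := range s.statuses {
		copied[node] = status
	}
	return copied
}
```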
Igor Gov
d987bb17d2 Merge pull request #744 from up9inc/develop
Develop -> main (Release 0.25.0)
2022-02-01 19:30:13 +02:00
Igor Gov
89946041a1 Merge pull request #736 from up9inc/develop
Develop -> main (Release 0.24.0)
2022-02-01 08:23:45 +02:00
gadotroee
b919c93f28 [0.23.0 hotfix] - Redis extension issue
Merge pull request #712 from up9inc/hotfix/redis-error
2022-01-28 12:27:04 +02:00
Roee Gadot
eac751190d no message 2022-01-28 12:08:05 +02:00
Igor Gov
2bc2051a46 Merge pull request #651 from up9inc/develop
Develop -> main (Release 0.22.0)
2022-01-16 13:35:10 +02:00
Igor Gov
89ad4e0f3a Merge pull request #546 from up9inc/develop
Develop -> main
2021-12-19 16:48:38 +02:00
Igor Gov
72f4753620 Develop -> main (#544)
* Add support of listening to multiple netns (#418)

* multiple netns listen - initial commit

* multiple netns listen - actual work

* remove redundant log line

* map /proc of host to tapper

* changing kubernetes provider again after big conflict

* revert node-sass version back to 5.0.0

* Rename host_source to hostSource

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>

* PR fixes - adding comment + typos + naming conventions

* go fmt + making procfs read only

* setns back to the original value after packet source initialized

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>

* TRA-3842 daemon acceptance tests (#429)

* Update tap_test.go and testsUtils.go

* Update tap_test.go

* Update testsUtils.go

* Update tap_test.go and testsUtils.go

* Update tap_test.go and testsUtils.go

* Update testsUtils.go

* Update tap_test.go

* gofmt

* TRA-3913 support mizu via expose service (#440)

* Update README.md, tapRunner.go, and 4 more files...

* Update testsUtils.go

* Update proxy.go

* Update README.md, testsUtils.go, and 3 more files...

* Update testsUtils.go and provider.go

* fix readme titles (#442)

* Auto close inactive issues  (#441)

* Migrate from SQLite to Basenine and introduce a new filtering syntax (#279)

* Fix the OOMKilled error by calling `debug.FreeOSMemory` periodically

* Remove `MAX_NUMBER_OF_GOROUTINES` environment variable

* Change the line

* Increase the default value of `TCP_STREAM_CHANNEL_TIMEOUT_MS` to `10000`

* Write the client and integrate to the new real-time database

* Refactor the WebSocket implementation for `/ws`

* Adapt the UI to the new filtering system

* Fix the rest of the issues in the UI

* Increase the buffer of the scanner

* Implement accessing single records

* Increase the buffer of another scanner

* Populate `Request` and `Response` fields of `MizuEntry`

* Add syntax highlighting for the query

* Add database to `Dockerfile`

* Fix some issues

* Update the `realtime_dbms` Git module commit hash

* Upgrade Gin version and print the query string

* Revert "Upgrade Gin version and print the query string"

This reverts commit aa09f904ee.

* Use WebSocket's itself to query instead of the query string

* Fix some errors related to conversion to HAR

* Fix the issues caused by the latest merge

* Fix the build error

* Fix PR validation GitHub workflow

* Replace the git submodule with latest Basenine version `0.1.0`

Remove `realtime_client.go` and use the official client library `github.com/up9inc/basenine/client/go` instead.

* Move Basenine host and port constants to `shared` module

* Reliably execute and wait for Basenine to become available

* Upgrade Basenine version

* Properly close WebSocket and data channel

* Fix the issues caused by the recent merge commit

* Clean up the TypeScript code

* Update `.gitignore`

* Limit the database size

* Add `Macros` method signature to `Dissector` interface and set the macros provided by the protocol extensions

* Run `go mod tidy` on `agent`

* Upgrade `github.com/up9inc/basenine/client/go` version

* Implement a mechanism to update the query using click events in the UI and use it for protocol macros

* Update the query on click to timestamps

* Fix some issues in the WebSocket and channel handling

* Update the query on clicks to status code

* Update the query on clicks to method, path and service

* Update the query on clicks to is outgoing, source and destination ports

* Add an API endpoint to validate the query against syntax errors

* Move the query background color state into `TrafficPage`

* Fix the logic in `setQuery`

* Display a toast message in case of a syntax error in the query

* Remove a call to `fmt.Printf`

* Upgrade Basenine version to `0.1.3`

* Fix an issue related to getting `MAX_ENTRIES_DB_BYTES` environment variable

* Have the `path` key in request details, in HTTP

* Rearrange the HTTP headers for the querying

* Do the same thing for `cookies` and `queryString`

* Update the query on click to table elements

Add the selectors for `TABLE` type representations in HTTP extension.

* Update the query on click to `bodySize` and `elapsedTime` in `EntryTitle`

* Add the selectors for `TABLE` type representations in AMQP extension

* Add the selectors for `TABLE` type representations in Kafka extension

* Add the selectors for `TABLE` type representations in Redis extension

* Define a struct in `tap/api.go` for the section representation data

* Add the selectors for `BODY` type representations

* Add `request.path` to the HTTP request details

* Change the summary string's field name from `path` to `summary`

* Introduce `queryable` CSS class for queryable UI elements and underline them on hover

* Instead of `N requests` at the bottom, make it `Displaying N results (queried X/Y)` and live update the values

Upgrade Basenine version to `0.2.0`.

* Verify the sha256sum of Basenine executable inside `Dockerfile`

* Pass the start time to web UI through WebSocket and always show the `EntriesList` footer

* Pipe the `stderr` of Basenine as well

* Fix the layout issues related to `CodeEditor` in the UI

* Use the correct `shasum` command in `Dockerfile`

* Upgrade Basenine version to `0.2.1`

* Limit the height of `CodeEditor` container

* Remove `Paused` enum `ConnectionStatus` in UI

* Fix the issue caused by the recent merge

* Add the filtering guide (cheatsheet)

* Update open cheatsheet button's title

* Update cheatsheet content

* Remove the old SQLite code, adapt the `--analyze` related code to Basenine

* Change the method signature of `NewEntry`

* Change the method signature of `Represent`

* Introduce `HTTPPair` field in `MizuEntry` specific to HTTP

* Remove `Entry`, `EntryId` and `EstimatedSizeBytes` fields from `MizuEntry`

Also remove the `getEstimatedEntrySizeBytes` method.

* Remove `gorm.io/gorm` dependency

* Remove unused `sensitiveDataFiltering` folder

* Increase the left margin of open cheatsheet button

* Add `overflow: auto` to the cheatsheet `Modal`

* Fix `GetEntry` method

* Fix the macro for gRPC

* Fix an interface conversion in case of AMQP

* Fix two more interface conversion errors in AMQP

* Make the `syncEntriesImpl` method blocking

* Fix a grammar mistake in the cheatsheet

* Adapt to the changes in the recent merge commit

* Improve the cheatsheet text

* Always display the timestamp in `en-US`

* Upgrade Basenine version to `0.2.2`

* Fix the order of closing Basenine connections and channels

* Don't close the Basenine channels at all

* Upgrade Basenine version to `0.2.3`

* Set the initial filter to `rlimit(100)`

* Make Basenine persistent

* Upgrade Basenine version to `0.2.4`

* Update `debug.Dockerfile`

* Fix a failing test

* Upgrade Basenine version to `0.2.5`

* Revert "Do not show play icon when disconnected (#428)"

This reverts commit 8af2e562f8.

* Upgrade Basenine version to `0.2.6`

* Make all non-informative things informative

* Make `100` a constant

* Use `===` in JavaScript no matter what

* Remove a forgotten `console.log`

* Add a comment and update the `query` in `syncEntriesImpl`

* Don't call `panic` in `GetEntry`

* Replace `panic` calls in `startBasenineServer` with `logger.Log.Panicf`

* Remove unnecessary `\n` characters in the logs

* Remove the `Reconnect` button (#444)

* Upgrade `github.com/up9inc/basenine/client/go` version (#446)

* Fix the `Analysis` button's style into its original state (#447)

* Fix the `Analysis` button's style into its original state

* Fix the MUI button style into its original state

* Fix the acceptance tests after the merger of #279 (#443)

* Enable acceptance tests

* Fix the acceptance tests

* Move `--headless` from `getDefaultCommandArgs` to `getDefaultTapCommandArgs`

* Fix rest of the failing acceptance tests

* Revert "Enable acceptance tests"

This reverts commit 3f919e865a.

* Revert "Revert "Enable acceptance tests""

This reverts commit c0bfe54b70.

* Ignore `--headless` in `mizu view`

* Make all non-informative things informative

* Remove `github.com/stretchr/testify` dependency from the acceptance tests

* Move the helper methods `waitTimeout` and `checkDBHasEntries` from `tap_test.go` to `testsUtils.go`

* Split `checkDBHasEntries` method into `getDBEntries` and `assertEntriesAtLeast` methods

* Revert "Revert "Revert "Enable acceptance tests"""

This reverts commit c13342671c.

* Revert "Revert "Revert "Revert "Enable acceptance tests""""

This reverts commit 0f8c436926.

* Make `getDBEntries` and `checkEntriesAtLeast` methods return errors instead

* Revert "Revert "Revert "Revert "Revert "Enable acceptance tests"""""

This reverts commit 643fdde009.

* Send the message into this WebSocket connection instead of all (#449)

* Fix the CSS issues in the cheatsheet modal (#448)

* Fix the CSS issues in the cheatsheet modal

* Change the Sass variable names

* moved headless to root config, use headless in view (#450)

* extend cleanup timeout to solve context timeout problem in dump logs (#453)

* Add link to exposing mizu wiki page in README (#455)

* changed logger debug mode to log level (#456)

* fixed acceptance test go sum (#458)

* Ignore `SNYK-JS-JSONSCHEMA-1920922` (#462)

Dependency tree:
`node-sass@5.0.0 > node-gyp@7.1.2 > request@2.88.2 > http-signature@1.2.0 > jsprim@1.4.1 > json-schema@0.2.3`

`node-sass` should fix it first.

* Optimize UI entry feed performance (#452)

* Optimize the React code for feeding the entries

By building `EntryItem` only once and updating the `entries` state on meta query messages.

* Upgrade `react-scrollable-feed-virtualized` version from `1.4.3` to `1.4.8`

* Fix the `isSelected` state

* Set the query text before deciding the background to prevent lags while typing

* Upgrade Basenine version from `0.2.6` to `0.2.7`

* Set the query background color only if the query is same after the HTTP request and use `useEffect` instead

* Upgrade Basenine version from `0.2.7` to `0.2.8`

* Use `CancelToken` of `axios` instead of trying to check the query state

* Turn `updateQuery` function into a state hook

* Update the macro for `http`

* Do the `source.cancel()` call in `axios.CancelToken`

* Reduce client-side logging

* Upgrade Basenine version from `0.2.8` to `0.2.9` (#465)

Fixes `limit` helper being not finished because of lack of meta updates.

* Set `response.bodySize` to `0` if it's negative (#466)

* Prevent `elapsedTime` to be negative (#467)

Also fix the `elapsedTime` for Redis.

* changes log format to be more readable (#463)

* Stop reduction of user agent header (#468)

* remove newline in logs, fixed logs time format (#469)

* TRA-3903 better health endpoint for daemon mode (#471)

* Update main.go, status_controller.go, and 2 more files...

* Update status_controller.go and mizuTapperSyncer.go

* fixed redact acceptance test (#472)

* Return `404` instead of `500` if the entry could not be found and display a toast message (#464)

* TRA-3903 add flag to disable pvc creation for daemon mode (#474)

* Update tapRunner.go and tapConfig.go

* Update tapConfig.go

* Revert "Update tapConfig.go"

This reverts commit 5c7c02c4ab.

* TRA-3903 - display targeted pods before waiting for all daemon resources to be created (#475)

* WIP

* Update tapRunner.go

* Update tapRunner.go

* Update the UI screenshots (#476)

* Update the UI screenshots

* Update `mizu-ui.png`

* TRA-3903 fix daemon mode in permission restricted configs (#473)

* Update tapRunner.go, permissions-all-namespaces-daemon.yaml, and 2 more files...

* Update tapRunner.go

* Update tapRunner.go and permissions-ns-daemon.yaml

* Update tapRunner.go

* Update tapRunner.go

* Update tapRunner.go

* TRA-3903 minor daemon mode refactor (#479)

* Update common.go and tapRunner.go

* Update common.go

* Don't omit the key-value pair if the value is `false` in `EntryTableSection` (#478)

* Sync entries in batches just as before (using `uploadIntervalSec` parameter) (#477)

* Sync entries in batches just as before (using `uploadIntervalSec` parameter)

* Replace `lastTimeSynced` value with `time.Time{}`

Since it will be overwritten by the very first iteration.

* Clear `focusedEntryId` state in case of a filter is applied (#482)

* Prevent the crash on client-side in case of `text` being undefined in `FancyTextDisplay` (#481)

* Prevent the crash on client-side in case of `text` being undefined in `FancyTextDisplay`

* Use `String(text)` instead

* Refactor watch pods to allow reusing watch wrapper (#470)

Currently shared/kubernetes/watch.go:FilteredWatch only watches pods.
This PR makes it reusable for other types of resources.
This is done in preparation for watching k8s events.

* Show the source and destination IP in the entry feed (#485)

* Upgrade Basenine version from `0.2.9` to `0.2.10` (#484)

* Upgrade Basenine version from `0.2.9` to `0.2.10`

Fixes the issues in `limit` and `rlimit` helpers that occur when they are on the left operand of a binary expression.

* Upgrade the client hash to latest

* Remove unnecessary `tcpdump` dependency from `Dockerfile` (#491)

* Ignore gob files (#488)

* Ignore gob files

* Remove `*.db` from `.gitignore`

* Update README (#486)

* Add token validity check (#483)

* Add support to auto discover envoy processes (#459)

* discover envoy pids using cluster ips

* add istio flag to cli + rename mtls flag to istio

* add istio.md to docs

* Fixing typos

* Fix minor typos and grammar in docs

Co-authored-by: Nimrod Gilboa Markevich <nimrod@up9.com>

* Improving daemon documentation (#457)

* Some changes to the doc (#494)

* Warn pods not starting (#493)

Print warning event related to mizu k8s resources.
In non-daemon print to CLI. In Daemon print to API-Server logs.

* Remove `tap/tester/` directory (#489)

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>

* Disable IPv4 defragmentation and support IPv6 (#487)

* Remove the extra negation on `nodefrag` flag's value

* Support IPv4 fragmentation and IPv6 at the same time

* Re-enable `nodefrag` flag

* Make the `gRPC` and `HTTP/2` distinction (#492)

* Remove the extra negation on `nodefrag` flag's value

* Support IPv4 fragmentation and IPv6 at the same time

* Set `Method` and `StatusCode` fields correctly for `HTTP/2`

* Replace unnecessary `grpc` naming with `http2`

* Make the `gRPC` and `HTTP/2` distinction

* Fix the macros of `http` extension

* Fix the macros of other protocol extensions

* Update the method signature of `Represent`

* Fix the `HTTP/2` support

* Fix some minor issues

* Upgrade Basenine version from `0.2.10` to `0.2.11`

Sorts macros before expanding them and prioritize the long macros.

* Don't regex split the gRPC method name

* Re-enable `nodefrag` flag

* Remove `SetHostname` method in HTTP extension (#496)

* Remove prevPodPhase (#497)

prevPodPhase does not take into account the fact that there may be more
than one tapper pod. Therefore it is not clear what its value
represents. It is only used in a debug print. It is not worth the effort
to fix for that one debug print.

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>

* minor logging changes (#499)

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>

* Use one channel for events instead of three (#495)

Use one channel for events instead of three separate channels by event type

* Add response body to the error in case of failure (#503)

* add response body to the error in case of failure

* fix typo + make inline condition

* Remove local dev instruction from readme (#507)

* Rename `URL` field to `Target URI` in the UI to prevent confusion (#509)

* Add HTTP2 Over Cleartext (H2C) support (#510)

* Add HTTP2 Over Cleartext (H2C) support

* Remove a parameter which is a remnant of debugging

* Hide `Encoding` field if it's `undefined` or empty in the UI (#511)

* Show the `EntryItem` as `EntrySummary` in `EntryDetailed` (#506)

* Fix the selected entry behavior by propagating the `focusedEntryId` through WebSocket (before #452) TRA-3983 (#513)

* Revert the select entry behavior into its original state RACING! (before #452) [TRA-3983 alternative 3]

* Remove the remaining `forceSelect`(s)

* Add a missing `focusedEntryId` prop

* Fix the race condition

* Propagate the `focusedEntryId` through WebSocket to prevent racing

* Handle unexpected socket close and replace the default `rlimit(100)` filter with `leftOff(-1)` filter (#508)

* Handle unexpected socket close and replace the default `rlimit(100)` filter with `leftOff(-1)` filter

* Rename `dontClear` parameter to `resetEntriesBuffer` and remove negation

* Add `Queryable` component to show a green add circle icon for the queryable UI elements (#512)

* Add `Queryable` component to show a green circle and use it in `EntryViewLine`

* Refactor `Queryable` component

* Use the `Queryable` component `EntryDetailed`

* Use the `Queryable` component `Summary`

* Instead of passing the style to `Queryable`, pass the children components directly

* Make `useTooltip = true` by default in `Queryable`

* Refactor a lot of styling to achieve using `Queryable` in `Protocol` component

* Migrate the last queryable elements in `EntryListItem` to `Queryable` component

* Fix some of the styling issues

* Make horizontal `Protocol` `Queryable` too

* Remove unnecessary child constants

* Revert some of the changes in 2a93f365f5

* Fix rest of the styling issues

* Fix one more styling issue

* Update the screenshots and text in the cheatsheet according to the change

* Use `let` not `var`

* Add missing dependencies to the React hook

* Bring back `GetEntries` HTTP endpoint (#515)

* Bring back `GetEntries` HTTP endpoint

* Upgrade Basenine version from `0.2.12` to `0.2.13`

* Accept negative `leftOff` value

* Remove `max`es from the validations

* Make `timeoutMs` optional

* Update the route comment

* Add `EntriesResponse` struct

* Disable telemetry by env var MIZU_DISABLE_TELEMTRY (#517)

* Replace `privileged` with specific CAPABILITIES requests  (#514)

* Fix the styling of `Queryable` under `StatusCode` and `Summary` components (#519)

* Fix the CSS issue in `Queryable` inside `EntryViewLine` (#521)

* TRA-4017 Bring back `getOldEntries` method using fetch API and always start streaming from now (#518)

* Bring back `getOldEntries` method using fetch API

* Determine no more data on top based on `leftOff` value

* Remove `entriesBuffer` state

* Always open WebSocket with some `leftOff` value

* Rename `leftOff` state to `leftOffBottom`

* Don't set the `focusedEntryId` through WebSocket if the WebSocket is closed

* Call `setQueriedCurrent` with addition

* Close WebSocket upon reaching to top

* Open WebSocket upon snapping to bottom

* Close the WebSocket on snap broken event instead

* Set queried current value to zero upon filter submit

* Upgrade `react-scrollable-feed-virtualized` version and use `scrollToIndex` function

* Change the footer text format

* Improve no more data top logic

* Fix `closeWebSocket()` call logic in `onSnapBrokenEvent` and handle `data.meta` being `null` in `getOldEntries`

* Fix the issues around fetching old records

* Clean up `EntriesList.module.sass`

* Decrement initial `leftOffTop` value by `2`

* Fix the order of `incomingEntries` in `getOldEntries`

* Request `leftOffTop - 1` from `fetchEntries`

* Limit the front-end total entries fetched through WebSocket count to `10000`

* Lose the UI performance gain that's provided by #452

* Revert "Fix the selected entry behavior by propagating the `focusedEntryId` through WebSocket (before #452) TRA-3983 (#513)"

This reverts commit 873f252544.

* Fix the issues caused by 09371f141f

* Upgrade Basenine version from `0.2.13` to `0.2.14`

* Upgrade Basenine version from `0.2.14` to `0.2.15`

* Fix the condition of "Fetch old records" button visibility

* Upgrade Basenine version from `0.2.15` to `0.2.16` and fix the UI code related to fetching old records

* Make `newEntries` constant

* Add type switch for `Base` field of `MizuEntry` (#520)

* Disable version check for devs (#522)

* Report the platform in telemetry (#523)

Co-authored-by: Igor Gov <igor.govorov1@gmail.com>

* Include milliseconds information into the timestamps in the UI (#524)

* Include milliseconds information into the timestamps in the UI

* Upgrade Basenine version from `0.2.16` to `0.2.17`

* Increase the `width` of timestamp

* Fix the CSS issues in queryable vertical protocol element (#526)

* Remove unnecessary fields and split `service` into `src.name` and `dst.name` (#525)

* Remove unnecessary fields and split `service` into `src.name` and `dst.name`

* Don't fall back to IP address but instead display `[Unresolved]` text

* Fix the CSS issues in the plus icon position and replace the separator `->` text with `SwapHorizIcon`

* make description of mizu config options public (#527)

* Fix the glitch (#529)

* Fix the glitch

* Bring back the functionality to "Fetch old records" and "Snap to bottom" buttons

* Fix the CSS issue in `Queryable` component for `src.name` field on heading mode (#530)

* API server stores tappers status (#531)

* Decreased API server boot time (#536)

* Change the connection status text and the toggle connection behavior (#534)

* Update the "Started listening at" timestamp and `queriedTotal` state based on database truncation (#533)

* Send pod info to tapper (#532)

* Alert on acceptance tests failure (#537)

* Fix health tapper status count (#538)

* Fix: acceptance tests (#539)

* Fix a JavaScript error in case of `null` attribute and an interface conversion error in the API server (#540)

* Bringing back the pod watch api server events to make acceptance test more stable (#541)

* TRA-4060 fix proxying error (#542)

* TRA-4062 remove duplicate target pod print (#543)

* Report pods "isTapped" to FE (#535)

* Fix acceptance tests (after pods status request change) (#545)

Co-authored-by: David Levanon <dvdlevanon@gmail.com>
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
Co-authored-by: RamiBerm <54766858+RamiBerm@users.noreply.github.com>
Co-authored-by: M. Mert Yıldıran <mehmet@up9.com>
Co-authored-by: RoyUP9 <87927115+RoyUP9@users.noreply.github.com>
Co-authored-by: Nimrod Gilboa Markevich <59927337+nimrod-up9@users.noreply.github.com>
Co-authored-by: Nimrod Gilboa Markevich <nimrod@up9.com>
Co-authored-by: Alon Girmonsky <1990761+alongir@users.noreply.github.com>
Co-authored-by: Igor Gov <igor.govorov1@gmail.com>
Co-authored-by: Alex Haiut <alex@up9.com>
2021-12-19 15:28:01 +02:00
Roy Island
4badaadcc1 Merge remote-tracking branch 'origin/develop' 2021-11-07 12:31:42 +02:00
Alex Haiut
3f01f20f0c upgrade alpine base image (#413) 2021-10-28 17:00:42 +03:00
RoyUP9
1fbb00f8f0 Merge pull request #398 from up9inc/develop
Develop -> Main #patch
2021-10-25 13:15:41 +03:00
Igor Gov
da7d3590fc Merge pull request #394 from up9inc/develop
Develop -> main
2021-10-24 14:35:06 +03:00
gadotroee
256006ca3e Merge pull request #332 - update download link fix
#minor
2021-10-07 19:45:05 +03:00
Roee Gadot
213528c619 no message 2021-10-07 19:41:51 +03:00
Igor Gov
8b47dba05d Merge pull request #326 from up9inc/develop
Develop -> Main
2021-10-07 12:28:21 +03:00
RoyUP9
5e5d5de91a Merge pull request #297 from up9inc/develop
Develop -> main
2021-09-22 12:14:07 +03:00
Igor Gov
680ea71958 Merge branch 'develop'
# Conflicts:
#	acceptanceTests/tap_test.go
#	cli/apiserver/provider.go
#	cli/cmd/common.go
#	cli/cmd/fetch.go
#	cli/cmd/fetchRunner.go
#	cli/cmd/tapRunner.go
#	cli/cmd/viewRunner.go
#	cli/config/config.go
#	cli/mizu/fsUtils/mizuLogsUtils.go
2021-09-02 12:17:57 +03:00
Igor Gov
5fb5dbbbf5 Fixing call to analysis (#248) 2021-08-30 11:16:55 +03:00
RoyUP9
b3fe448ff1 added custom config path option (#247) 2021-08-30 11:16:55 +03:00
RoyUP9
101a54e8da added tap acceptance tests, fixed duplicate namespace problem (#244) 2021-08-30 11:16:55 +03:00
Igor Gov
3308cab826 Introducing API server provider (#243) 2021-08-30 11:16:55 +03:00
RoyUP9
5fdd8288f4 added tapper count route and wait time for tappers in test (#226) 2021-08-30 11:16:55 +03:00
Alon Girmonsky
4cb32b40e6 some changes in the read me (#241)
change prerequisites to permissions and kubeconfig. These are more FYIs, as Mizu requires very few prerequisites.
Change the description to match getmizu.io
2021-08-30 11:16:55 +03:00
Igor Gov
afa81c7ec2 Fixing bad conflict resolution 2021-08-19 13:33:14 +03:00
Igor Gov
e84c7d3310 Merge branch 'develop' 2021-08-19 13:18:06 +03:00
Igor Gov
7d0a90cb78 Merge branch 'main' into develop
# Conflicts:
#	cli/config/configStruct.go
#	cli/mizu/config.go
#	tap/http_reader.go
2021-08-19 13:16:19 +03:00
Nimrod Gilboa Markevich
24f79922e9 Add to periodic stats print in tapper (#221)
#patch
2021-08-16 15:50:04 +03:00
RoyUP9
c3995009ee Hotfix - ignore not allowed set flags (#192)
#patch
2021-08-10 14:21:16 +03:00
RoyUP9
6e9fe2986e removed duplicate har page header (#187) 2021-08-09 13:31:53 +03:00
RoyUP9
603240fedb temp fix - ignore agent image in config command (#185) 2021-08-09 11:55:45 +03:00
Igor Gov
e61871a68e Merge pull request #182 from up9inc/develop
Release 2021-08-08
2021-08-08 14:50:30 +03:00
nimrod-up9
379af59f07 Merge pull request #121 from up9inc/develop
Missing request body (#120)
2021-07-19 13:53:49 +03:00
gadotroee
ef9afe31a4 Merge pull request #119 from up9inc/develop
Mizu release
2021-07-18 16:54:31 +03:00
gadotroee
dca636b0fd Merge pull request #94 from up9inc/develop
Mizu release
2021-07-06 21:05:40 +03:00
Roee Gadot
9b72cc7aa6 Merge branch 'develop' into main
# Conflicts:
#	README.md
#	api/main.go
#	api/pkg/api/main.go
#	api/pkg/models/models.go
#	api/pkg/resolver/resolver.go
#	cli/Makefile
#	cli/cmd/tap.go
#	cli/cmd/tapRunner.go
#	tap/http_matcher.go
#	tap/http_reader.go
#	tap/tcp_stream_factory.go
2021-06-29 11:16:47 +03:00
Alex Haiut
d3c023b3ba mizu release 2021-06-21 (#79)
* Show pod name and namespace (#61)

* WIP

* Update main.go, consts.go, and 2 more files...

* Update messageSensitiveDataCleaner.go

* Update consts.go and messageSensitiveDataCleaner.go

* Update messageSensitiveDataCleaner.go

* Update main.go, consts.go, and 3 more files...

* WIP

* Update main.go, messageSensitiveDataCleaner.go, and 6 more files...

* Update main.go, messageSensitiveDataCleaner.go, and 3 more files...

* Update consts.go, messageSensitiveDataCleaner.go, and tap.go

* Update provider.go

* Update serializableRegexp.go

* Update tap.go

* TRA-3234 fetch with _source + no hard limit (#64)

* remove the HARD limit of 5000

* TRA-3299 Reduce footprint and Add Tolerances(#65)

* Use lib const for DNSClusterFirstWithHostNet.

* Whitespace.

* Break lines.

* Added affinity to pod names.

* Added tolerations to NoExecute and NoSchedule taints.

* Implementation of Mizu view command

* .

* .

* Update main.go and messageSensitiveDataCleaner.go

* Update main.go

* String and not pointers (#68)

* TRA-3318 - Cookies not null and fix har file names  (#69)

* no message

* TRA-3212 Passive-Tapper and Mizu share code (#70)

* Use log in tap package instead of fmt.

* Moved api/pkg/tap to root.

* Added go.mod and go.sum for tap.

* Added replace for shared.

* api uses tap module instead of tap package.

* Removed dependency of tap in shared by moving env var out of tap.

* Fixed compilation bugs.

* Fixed: Forgot to export struct field HostMode.

* Removed unused flag.

* Close har output channel when done.

* Moved websocket out of mizu and into passive-tapper.

* Send connection details over har output channel.

* Fixed compilation errors.

* Removed unused info from request response cache.

* Renamed connection -> connectionID.

* Fixed rename bug.

* Export setters and getters for filter ips and ports.

* Added tap dependency to Dockerfile.

* Uncomment error messages.

* Renamed `filterIpAddresses` -> `filterAuthorities`.

* Renamed ConnectionID -> ConnectionInfo.

* Fixed: Missed one replace.

* TRA-3342 Mizu/tap dump to har directory fails on Linux (#71)

* Instead of saving incomplete temp har files in a temp dir, save them in the output dir with a *.har.tmp suffix.

* API only loads har from *.har files (by extension).

* Add export entries endpoint for better up9 connect functionality (#72)

* no message
* no message
* no message

* Filter 'cookie' header

* Release action  (#73)

* Create main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* trying new approach

* no message

* yaml error

* no message

* no message

* no message

* missing )

* no message

* no message

* remove main.yml and fix branches

* Create tag-temp.yaml

* Update tag-temp.yaml

* Update tag-temp.yaml

* no message

* no message

* no message

* no message

* no message

* no message

* no message

* #minor

* no message

* no message

* added checksum calc to CLI makefile

* fixed build error - created bin directory upfront

* using markdown for release text

* use separate checksum files

* fixed release readme

* #minor

* readme updated

Co-authored-by: Alex Haiut <alex@up9.com>

* TRA-3360 Fix: Mizu ignores -n namespace flag and records traffic from all pods (#75)

Do not tap pods in namespaces which were not requested.

* added apple/m1 binary, updated readme (#77)

Co-authored-by: Alex Haiut <alex@up9.com>

* Update README.md (#78)

Co-authored-by: lirazyehezkel <61656597+lirazyehezkel@users.noreply.github.com>
Co-authored-by: RamiBerm <rami.berman@up9.com>
Co-authored-by: RamiBerm <54766858+RamiBerm@users.noreply.github.com>
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
Co-authored-by: nimrod-up9 <59927337+nimrod-up9@users.noreply.github.com>
Co-authored-by: Igor Gov <igor.govorov1@gmail.com>
Co-authored-by: Alex Haiut <alex@up9.com>
2021-06-21 15:17:31 +03:00
Roee Gadot
5f2a4deb19 remove file 2021-05-26 18:08:37 +03:00
Roee Gadot
91f290987e Merge branch 'develop' into main
# Conflicts:
#	cli/cmd/tap.go
#	cli/cmd/version.go
#	cli/kubernetes/provider.go
#	cli/mizu/consts.go
#	cli/mizu/mizuRunner.go
#	debug.Dockerfile
#	ui/src/components/HarPage.tsx
2021-05-26 17:58:17 +03:00
gadotroee
2f3215b71a Fix mizu image parameter (#53) 2021-05-23 13:34:32 +03:00
Alex Haiut
2e87a01346 end of week - develop to master (#50)
* Provide cli version as git hash from makefile

* Update Makefile, version.go, and 3 more files...

* Update mizuRunner.go

* Update README.md, resolver.go, and 2 more files...

* Update provider.go

* Feature/UI/light theme (#44)

* light theme

* css polish

* unused code

* css

* text shadow

* footer style

* Update mizuRunner.go

* Handle nullable vars (#47)

* Decode gRPC body (#48)

* Decode grpc.

* Better variable names.

* Added protobuf-decoder dependency.

* Updated protobuf-decoder's version.

Co-authored-by: RamiBerm <rami.berman@up9.com>
Co-authored-by: RamiBerm <54766858+RamiBerm@users.noreply.github.com>
Co-authored-by: lirazyehezkel <61656597+lirazyehezkel@users.noreply.github.com>
Co-authored-by: nimrod-up9 <59927337+nimrod-up9@users.noreply.github.com>
2021-05-13 20:29:31 +03:00
gadotroee
453003bf14 remove leftovers (#43) 2021-05-10 17:35:59 +03:00
Roee Gadot
80ca377668 Merge branch 'develop' into main
# Conflicts:
#	Dockerfile
#	Makefile
#	api/go.mod
#	api/go.sum
#	api/main.go
#	api/pkg/controllers/entries_controller.go
#	api/pkg/inserter/main.go
#	api/pkg/models/models.go
#	api/pkg/tap/grpc_assembler.go
#	api/pkg/tap/har_writer.go
#	api/pkg/tap/http_matcher.go
#	api/pkg/tap/http_reader.go
#	api/pkg/tap/passive_tapper.go
#	api/pkg/utils/utils.go
#	cli/Makefile
#	cli/cmd/tap.go
#	cli/cmd/version.go
#	cli/config/config.go
#	cli/kubernetes/provider.go
#	cli/mizu/mizuRunner.go
2021-05-10 17:27:32 +03:00
gadotroee
d21297bc9c 0.9 (#37)
* Update .gitignore

* WIP

* WIP

* Update README.md, root.go, and 4 more files...

* Update README.md

* Update README.md

* Update root.go

* Update provider.go

* Update provider.go

* Update root.go, go.mod, and go.sum

* Update mizu.go

* Update go.sum and provider.go

* Update portForward.go, watch.go, and mizu.go

* Update README.md

* Update watch.go

* Update mizu.go

* Update mizu.go

* no message

* no message

* remove unused things and use external for object id (instead of copy)

* no message

* Update mizu.go

* Update go.mod, go.sum, and 2 more files...

* no message

* Update README.md, go.mod, and resolver.go

* Update README.md

* Update go.mod

* Update loader.go

* some refactor

* Update loader.go

* no message

* status to statusCode

* return data directly

* Traffic viewer

* cleaning

* css

* no message

* Clean warnings

* Makefile - first draft

* Update Makefile

* Update Makefile

* Update Makefile, README.md, and 4 more files...

* Add api build and clean to makefile (files restructure) (#9)

* no message
* add clean api command

* no message

* starting with web socket

* Add tap as a separate executable (#10)

* Added tap.

* Ignore build directories.

* Added tapper build to Makefile.

* Improvements  (#12)

* no message

* no message

* Feature/makefile (#11)

* minor fixes

* makefile fixes - docker build

* minor fix in Makefile
Co-authored-by: Alex Haiut <alex@up9.com>

* Update Dockerfile, multi-runner.sh, and 31 more files...

* Update multi-runner.sh

* no message

* Update .dockerignore, Dockerfile, and 30 more files...

* Update cleaner.go, grpc_assembler.go, and 2 more files...

* start the pod with host network and privileged

* fix multi runner passive tapper command

* add HOST_MODE env var

* do not return true in the should tap function

* remove line in the end

* default value in api is input
fix description and pass the parameter in the multi runner script

* missing flag.parse

* no message

* fix image

* Create main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Small fixes - permission + har writing exception (#17)

* Select node by pod (#18)

* Select node by pod.

* Removed watch pod by regex. Irrelevant for now.

* Changed default image to develop:latest.

* Features/clifix (#19)

* makefile fixes - docker build

* readme update, CLI usage fix

* added chmod

Co-authored-by: Alex Haiut <alex@up9.com>

* meta information

* Only record traffic of the requested pod. Filtered by pod IP. (#21)

* fixed readme and reduced batch size to 5 (#22)

Co-authored-by: Alex Haiut <alex@up9.com>

* API and TAP in single process  (#24)

* no message
* no message

* CLI make --pod required flag and faster api image build (#25)

* makefile fixes - docker build

* readme update, CLI usage fix

* added chmod

* typo

* run example incorrect in makefile

* no message

* no message

* no message

Co-authored-by: Alex Haiut <alex@up9.com>

* Reduce delay between tap and UI - Skip dump to file (#26)

* Pass HARs between tap and api via channel.

* Fixed make docker command.

* Various fixes.

* Added .DS_Store to .gitignore.

* Parse flags in Mizu main instead of in tap_output.go.

* Use channel to pass HAR by default instead of files.

* Infinite scroll (#28)

* no message

* infinite scroll + new ws implementation

* no message

* scrolling top

* fetch button

* more Backend changes

* fix go mod and sum

* more fixes against develop

* unused code

* small ui refactor

Co-authored-by: Roee Gadot <roee.gadot@up9.com>

* Fix gRPC crash, display gRPC as base64, display gRPC URL and status code (#27)

* Added Method (POST) and URL (empty) to gRPC requests.

* Removed quickfix that skips writing HTTP/2 to HAR.

* Use HTTP/2 body to fill out http.Request and http.Response.

* Make sure that in HARs request.postData.mimeType and response.content.mimeType are application/grpc in case of grpc.

* Comment.

* Add URL and status code for gRPC.

* Don't assume http scheme.

* Use http.Header.Set instead of manually accessing the underlying map.

* General stats api fix  (#29)

* refactor and validation

* Show gRPC as ASCII (#31)

* Moved try-catch up one block.

* Display grpc as ASCII.

* Better code in entries fetch endpoint (#30)

* no message
* no message

* Feature/UI/filters (#32)

* UI filters

* refactor

* Revert "refactor"

This reverts commit 70e7d4b6ac.

* remove recursive func

* CLI cleanup (#33)

* Moved cli root command to tap subcommand.

* tap subcommand works.

* Added view and fetch placeholders.

* Updated descriptions.

* Fixed indentation.

* Added version subcommand.

* Removed version flag.

* gofmt.

* Changed pod from flag to arg.

* Commented out "all namespaces" flag.

* CLI cleanup 2 (#34)

* Renamed dashboard -> GUI/web interface.

* Commented out --quiet, removed unused config variables.

* Quieter output when calling unimplemented subcommands.

* Leftovers from PR #30 (#36)

Co-authored-by: up9-github <info@up9.com>
Co-authored-by: RamiBerm <54766858+RamiBerm@users.noreply.github.com>
Co-authored-by: Liraz Yehezkel <lirazy@up9.com>
Co-authored-by: Alex Haiut <alex@testr.io>
Co-authored-by: lirazyehezkel <61656597+lirazyehezkel@users.noreply.github.com>
Co-authored-by: Alex Haiut <alex@up9.com>
Co-authored-by: nimrod-up9 <59927337+nimrod-up9@users.noreply.github.com>
Co-authored-by: RamiBerm <rami.berman@up9.com>
Co-authored-by: Alex Haiut <alex.haiut@gmail.com>
2021-05-09 11:45:39 +03:00
45 changed files with 371 additions and 397 deletions

View File

@@ -78,8 +78,8 @@ RUN go build -ldflags="-extldflags=-static -s -w \
 -X 'github.com/up9inc/mizu/agent/pkg/version.Ver=${VER}'" -o mizuagent .
 # Download Basenine executable, verify the sha1sum
-ADD https://github.com/up9inc/basenine/releases/download/v0.4.14/basenine_linux_${GOARCH} ./basenine_linux_${GOARCH}
-ADD https://github.com/up9inc/basenine/releases/download/v0.4.14/basenine_linux_${GOARCH}.sha256 ./basenine_linux_${GOARCH}.sha256
+ADD https://github.com/up9inc/basenine/releases/download/v0.4.16/basenine_linux_${GOARCH} ./basenine_linux_${GOARCH}
+ADD https://github.com/up9inc/basenine/releases/download/v0.4.16/basenine_linux_${GOARCH}.sha256 ./basenine_linux_${GOARCH}.sha256
 RUN shasum -a 256 -c basenine_linux_${GOARCH}.sha256
 RUN chmod +x ./basenine_linux_${GOARCH}
 RUN mv ./basenine_linux_${GOARCH} ./basenine

View File

@@ -118,8 +118,8 @@ func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extension
for item := range outputItems {
extension := extensionsMap[item.Protocol.Name]
resolvedSource, resolvedDestionation := resolveIP(item.ConnectionInfo)
mizuEntry := extension.Dissector.Analyze(item, resolvedSource, resolvedDestionation)
resolvedSource, resolvedDestionation, namespace := resolveIP(item.ConnectionInfo)
mizuEntry := extension.Dissector.Analyze(item, resolvedSource, resolvedDestionation, namespace)
if extension.Protocol.Name == "http" {
if !disableOASValidation {
var httpPair tapApi.HTTPRequestResponsePair
@@ -158,26 +158,32 @@ func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extension
}
}
func resolveIP(connectionInfo *tapApi.ConnectionInfo) (resolvedSource string, resolvedDestination string) {
func resolveIP(connectionInfo *tapApi.ConnectionInfo) (resolvedSource string, resolvedDestination string, namespace string) {
if k8sResolver != nil {
unresolvedSource := connectionInfo.ClientIP
resolvedSource = k8sResolver.Resolve(unresolvedSource)
if resolvedSource == "" {
resolvedSourceObject := k8sResolver.Resolve(unresolvedSource)
if resolvedSourceObject == nil {
logger.Log.Debugf("Cannot find resolved name to source: %s", unresolvedSource)
if os.Getenv("SKIP_NOT_RESOLVED_SOURCE") == "1" {
return
}
} else {
resolvedSource = resolvedSourceObject.FullAddress
}
unresolvedDestination := fmt.Sprintf("%s:%s", connectionInfo.ServerIP, connectionInfo.ServerPort)
resolvedDestination = k8sResolver.Resolve(unresolvedDestination)
if resolvedDestination == "" {
resolvedDestinationObject := k8sResolver.Resolve(unresolvedDestination)
if resolvedDestinationObject == nil {
logger.Log.Debugf("Cannot find resolved name to dest: %s", unresolvedDestination)
if os.Getenv("SKIP_NOT_RESOLVED_DEST") == "1" {
return
}
} else {
resolvedDestination = resolvedDestinationObject.FullAddress
namespace = resolvedDestinationObject.Namespace
}
}
return resolvedSource, resolvedDestination
return resolvedSource, resolvedDestination, namespace
}
func CheckIsServiceIP(address string) bool {

View File

@@ -34,9 +34,12 @@ func (h *RoutesEventHandlers) WebSocketConnect(socketId int, isTapper bool) {
tappers.Connected()
} else {
logger.Log.Infof("Websocket event - Browser socket connected, socket ID: %d", socketId)
socketListLock.Lock()
browserClientSocketUUIDs = append(browserClientSocketUUIDs, socketId)
socketListLock.Unlock()
BroadcastTappedPodsStatus()
}
}
@@ -101,9 +104,9 @@ func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
}
func handleTLSLink(outboundLinkMessage models.WebsocketOutboundLinkMessage) {
resolvedName := k8sResolver.Resolve(outboundLinkMessage.Data.DstIP)
if resolvedName != "" {
outboundLinkMessage.Data.DstIP = resolvedName
resolvedNameObject := k8sResolver.Resolve(outboundLinkMessage.Data.DstIP)
if resolvedNameObject != nil {
outboundLinkMessage.Data.DstIP = resolvedNameObject.FullAddress
} else if outboundLinkMessage.Data.SuggestedResolvedName != "" {
outboundLinkMessage.Data.DstIP = outboundLinkMessage.Data.SuggestedResolvedName
}

agent/pkg/api/utils.go (new file, +20 lines)

View File

@@ -0,0 +1,20 @@
package api
import (
"encoding/json"
"github.com/up9inc/mizu/agent/pkg/providers/tappedPods"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
)
func BroadcastTappedPodsStatus() {
tappedPodsStatus := tappedPods.GetTappedPodsStatus()
message := shared.CreateWebSocketStatusMessage(tappedPodsStatus)
if jsonBytes, err := json.Marshal(message); err != nil {
logger.Log.Errorf("Could not Marshal message %v", err)
} else {
BroadcastToBrowserClients(jsonBytes)
}
}

View File

@@ -7,6 +7,7 @@ import (
"time"
"github.com/gin-gonic/gin"
"github.com/up9inc/mizu/agent/pkg/api"
"github.com/up9inc/mizu/agent/pkg/config"
"github.com/up9inc/mizu/agent/pkg/models"
"github.com/up9inc/mizu/agent/pkg/providers"
@@ -36,7 +37,7 @@ func PostTapConfig(c *gin.Context) {
tappedPods.Set([]*shared.PodInfo{})
tappers.ResetStatus()
broadcastTappedPodsStatus()
api.BroadcastTappedPodsStatus()
}
var tappedNamespaces []string
@@ -135,7 +136,7 @@ func startMizuTapperSyncer(ctx context.Context, provider *kubernetes.Provider, t
}
tappedPods.Set(kubernetes.GetPodInfosForPods(tapperSyncer.CurrentlyTappedPods))
broadcastTappedPodsStatus()
api.BroadcastTappedPodsStatus()
case tapperStatus, ok := <-tapperSyncer.TapperStatusChangedOut:
if !ok {
logger.Log.Debug("mizuTapperSyncer tapper status changed channel closed, ending listener loop")
@@ -143,7 +144,7 @@ func startMizuTapperSyncer(ctx context.Context, provider *kubernetes.Provider, t
}
tappers.SetStatus(&tapperStatus)
broadcastTappedPodsStatus()
api.BroadcastTappedPodsStatus()
case <-ctx.Done():
logger.Log.Debug("mizuTapperSyncer event listener loop exiting due to context done")
return

View File

@@ -102,13 +102,13 @@ func (s *ServiceMapControllerSuite) TestGet() {
 // response nodes
 aNode := servicemap.ServiceMapNode{
 Id: 1,
-Name: TCPEntryA.IP,
+Name: TCPEntryA.Name,
 Entry: TCPEntryA,
 Count: 1,
 }
 bNode := servicemap.ServiceMapNode{
 Id: 2,
-Name: TCPEntryB.IP,
+Name: TCPEntryB.Name,
 Entry: TCPEntryB,
 Count: 1,
 }

View File

@@ -1,7 +1,6 @@
package controllers
import (
"encoding/json"
"net/http"
"github.com/gin-gonic/gin"
@@ -39,18 +38,7 @@ func PostTappedPods(c *gin.Context) {
logger.Log.Infof("[Status] POST request: %d tapped pods", len(requestTappedPods))
tappedPods.Set(requestTappedPods)
broadcastTappedPodsStatus()
}
func broadcastTappedPodsStatus() {
tappedPodsStatus := tappedPods.GetTappedPodsStatus()
message := shared.CreateWebSocketStatusMessage(tappedPodsStatus)
if jsonBytes, err := json.Marshal(message); err != nil {
logger.Log.Errorf("Could not Marshal message %v", err)
} else {
api.BroadcastToBrowserClients(jsonBytes)
}
api.BroadcastTappedPodsStatus()
}
func PostTapperStatus(c *gin.Context) {
@@ -67,7 +55,7 @@ func PostTapperStatus(c *gin.Context) {
logger.Log.Infof("[Status] POST request, tapper status: %v", tapperStatus)
tappers.SetStatus(tapperStatus)
broadcastTappedPodsStatus()
api.BroadcastTappedPodsStatus()
}
func GetConnectedTappersCount(c *gin.Context) {

View File

@@ -7,7 +7,7 @@ func CORSMiddleware() gin.HandlerFunc {
 c.Writer.Header().Set("Access-Control-Allow-Origin", "*")
 c.Writer.Header().Set("Access-Control-Allow-Credentials", "true")
 c.Writer.Header().Set("Access-Control-Allow-Headers", "Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, accept, origin, Cache-Control, X-Requested-With, x-session-token")
-c.Writer.Header().Set("Access-Control-Allow-Methods", "POST, OPTIONS, GET, PUT")
+c.Writer.Header().Set("Access-Control-Allow-Methods", "POST, OPTIONS, GET, PUT, DELETE")
 if c.Request.Method == "OPTIONS" {
 c.AbortWithStatus(204)
View File

@@ -30,6 +30,11 @@ type Resolver struct {
namespace string
}
type ResolvedObjectInfo struct {
FullAddress string
Namespace string
}
func (resolver *Resolver) Start(ctx context.Context) {
if !resolver.isStarted {
resolver.isStarted = true
@@ -40,12 +45,12 @@ func (resolver *Resolver) Start(ctx context.Context) {
}
}
func (resolver *Resolver) Resolve(name string) string {
func (resolver *Resolver) Resolve(name string) *ResolvedObjectInfo {
resolvedName, isFound := resolver.nameMap.Get(name)
if !isFound {
return ""
return nil
}
return resolvedName.(string)
return resolvedName.(*ResolvedObjectInfo)
}
func (resolver *Resolver) GetMap() cmap.ConcurrentMap {
@@ -71,7 +76,7 @@ func (resolver *Resolver) watchPods(ctx context.Context) error {
}
if event.Type == watch.Deleted {
pod := event.Object.(*corev1.Pod)
resolver.saveResolvedName(pod.Status.PodIP, "", event.Type)
resolver.saveResolvedName(pod.Status.PodIP, "", pod.Namespace, event.Type)
}
case <-ctx.Done():
watcher.Stop()
@@ -106,10 +111,10 @@ func (resolver *Resolver) watchEndpoints(ctx context.Context) error {
}
if subset.Addresses != nil {
for _, address := range subset.Addresses {
resolver.saveResolvedName(address.IP, serviceHostname, event.Type)
resolver.saveResolvedName(address.IP, serviceHostname, endpoint.Namespace, event.Type)
for _, port := range ports {
ipWithPort := fmt.Sprintf("%s:%d", address.IP, port)
resolver.saveResolvedName(ipWithPort, serviceHostname, event.Type)
resolver.saveResolvedName(ipWithPort, serviceHostname, endpoint.Namespace, event.Type)
}
}
}
@@ -139,19 +144,19 @@ func (resolver *Resolver) watchServices(ctx context.Context) error {
service := event.Object.(*corev1.Service)
serviceHostname := fmt.Sprintf("%s.%s", service.Name, service.Namespace)
if service.Spec.ClusterIP != "" && service.Spec.ClusterIP != kubClientNullString {
resolver.saveResolvedName(service.Spec.ClusterIP, serviceHostname, event.Type)
resolver.saveResolvedName(service.Spec.ClusterIP, serviceHostname, service.Namespace, event.Type)
if service.Spec.Ports != nil {
for _, port := range service.Spec.Ports {
if port.Port > 0 {
resolver.saveResolvedName(fmt.Sprintf("%s:%d", service.Spec.ClusterIP, port.Port), serviceHostname, event.Type)
resolver.saveResolvedName(fmt.Sprintf("%s:%d", service.Spec.ClusterIP, port.Port), serviceHostname, service.Namespace, event.Type)
}
}
}
resolver.saveServiceIP(service.Spec.ClusterIP, serviceHostname, event.Type)
resolver.saveServiceIP(service.Spec.ClusterIP, serviceHostname, service.Namespace, event.Type)
}
if service.Status.LoadBalancer.Ingress != nil {
for _, ingress := range service.Status.LoadBalancer.Ingress {
resolver.saveResolvedName(ingress.IP, serviceHostname, event.Type)
resolver.saveResolvedName(ingress.IP, serviceHostname, service.Namespace, event.Type)
}
}
case <-ctx.Done():
@@ -161,21 +166,22 @@ func (resolver *Resolver) watchServices(ctx context.Context) error {
}
}
func (resolver *Resolver) saveResolvedName(key string, resolved string, eventType watch.EventType) {
func (resolver *Resolver) saveResolvedName(key string, resolved string, namespace string, eventType watch.EventType) {
if eventType == watch.Deleted {
resolver.nameMap.Remove(key)
logger.Log.Infof("setting %s=nil", key)
} else {
resolver.nameMap.Set(key, resolved)
resolver.nameMap.Set(key, &ResolvedObjectInfo{FullAddress: resolved, Namespace: namespace})
logger.Log.Infof("setting %s=%s", key, resolved)
}
}
func (resolver *Resolver) saveServiceIP(key string, resolved string, eventType watch.EventType) {
func (resolver *Resolver) saveServiceIP(key string, resolved string, namespace string, eventType watch.EventType) {
if eventType == watch.Deleted {
resolver.serviceMap.Remove(key)
} else {
resolver.serviceMap.Set(key, resolved)
resolver.nameMap.Set(key, &ResolvedObjectInfo{FullAddress: resolved, Namespace: namespace})
}
}

View File

@@ -172,20 +172,33 @@ func (s *serviceMap) NewTCPEntry(src *tapApi.TCP, dst *tapApi.TCP, p *tapApi.Pro
return
}
srcEntry := &entryData{
key: key(src.IP),
entry: src,
}
if len(srcEntry.entry.Name) == 0 {
var srcEntry *entryData
var dstEntry *entryData
if len(src.Name) == 0 {
srcEntry = &entryData{
key: key(src.IP),
entry: src,
}
srcEntry.entry.Name = UnresolvedNodeName
} else {
srcEntry = &entryData{
key: key(src.Name),
entry: src,
}
}
dstEntry := &entryData{
key: key(dst.IP),
entry: dst,
}
if len(dstEntry.entry.Name) == 0 {
if len(dst.Name) == 0 {
dstEntry = &entryData{
key: key(dst.IP),
entry: dst,
}
dstEntry.entry.Name = UnresolvedNodeName
} else {
dstEntry = &entryData{
key: key(dst.Name),
entry: dst,
}
}
s.addEdge(srcEntry, dstEntry, p)

View File

@@ -268,9 +268,14 @@ func (s *ServiceMapEnabledSuite) TestServiceMap() {
assert.LessOrEqual(node.Id, expectedNodeCount)
// entry
// node.Name is the key of the node, key = entry.IP
// node.Name is the key of the node, key = entry.Name by default
// entry.Name is the name of the service and could be unresolved
assert.Equal(node.Name, node.Entry.IP)
// when entry.Name is unresolved, key = entry.IP
if node.Entry.Name == UnresolvedNodeName {
assert.Equal(node.Name, node.Entry.IP)
} else {
assert.Equal(node.Name, node.Entry.Name)
}
assert.Equal(Port, node.Entry.Port)
assert.Equal(entryName, node.Entry.Name)
@@ -320,16 +325,24 @@ func (s *ServiceMapEnabledSuite) TestServiceMap() {
cdEdge := -1
acEdge := -1
var validateEdge = func(edge ServiceMapEdge, sourceEntryName string, destEntryName string, protocolName string, protocolCount int) {
// source
// source node
assert.Contains(nodeIds, edge.Source.Id)
assert.LessOrEqual(edge.Source.Id, expectedNodeCount)
assert.Equal(edge.Source.Name, edge.Source.Entry.IP)
if edge.Source.Entry.Name == UnresolvedNodeName {
assert.Equal(edge.Source.Name, edge.Source.Entry.IP)
} else {
assert.Equal(edge.Source.Name, edge.Source.Entry.Name)
}
assert.Equal(sourceEntryName, edge.Source.Entry.Name)
// destination
// destination node
assert.Contains(nodeIds, edge.Destination.Id)
assert.LessOrEqual(edge.Destination.Id, expectedNodeCount)
assert.Equal(edge.Destination.Name, edge.Destination.Entry.IP)
if edge.Destination.Entry.Name == UnresolvedNodeName {
assert.Equal(edge.Destination.Name, edge.Destination.Entry.IP)
} else {
assert.Equal(edge.Destination.Name, edge.Destination.Entry.Name)
}
assert.Equal(destEntryName, edge.Destination.Entry.Name)
// protocol

View File

@@ -1,5 +1,5 @@
# Mizu release _VER_
Full changelog for stable release see in [docs](https://github.com/up9inc/mizu/blob/main/docs/CHANGELOG.md)
Mizu CHANGELOG is now part of [Mizu wiki](https://github.com/up9inc/mizu/wiki/CHANGELOG)
## Download Mizu for your platform

View File

@@ -4,9 +4,11 @@ import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
"time"
"github.com/up9inc/mizu/shared/kubernetes"
@@ -57,10 +59,8 @@ func (provider *Provider) TestConnection() error {
func (provider *Provider) isReachable() (bool, error) {
echoUrl := fmt.Sprintf("%s/echo", provider.url)
if response, err := provider.client.Get(echoUrl); err != nil {
if _, err := provider.get(echoUrl); err != nil {
return false, err
} else if response.StatusCode != 200 {
return false, fmt.Errorf("invalid status code %v", response.StatusCode)
} else {
return true, nil
}
@@ -72,10 +72,8 @@ func (provider *Provider) ReportTapperStatus(tapperStatus shared.TapperStatus) e
if jsonValue, err := json.Marshal(tapperStatus); err != nil {
return fmt.Errorf("failed Marshal the tapper status %w", err)
} else {
if response, err := provider.client.Post(tapperStatusUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {
if _, err := provider.post(tapperStatusUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {
return fmt.Errorf("failed sending to API server the tapped pods %w", err)
} else if response.StatusCode != 200 {
return fmt.Errorf("failed sending to API server the tapper status, response status code %v", response.StatusCode)
} else {
logger.Log.Debugf("Reported to server API about tapper status: %v", tapperStatus)
return nil
@@ -91,10 +89,8 @@ func (provider *Provider) ReportTappedPods(pods []core.Pod) error {
if jsonValue, err := json.Marshal(podInfos); err != nil {
return fmt.Errorf("failed Marshal the tapped pods %w", err)
} else {
if response, err := provider.client.Post(tappedPodsUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {
if _, err := provider.post(tappedPodsUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {
return fmt.Errorf("failed sending to API server the tapped pods %w", err)
} else if response.StatusCode != 200 {
return fmt.Errorf("failed sending to API server the tapped pods, response status code %v", response.StatusCode)
} else {
logger.Log.Debugf("Reported to server API about %d taped pods successfully", len(podInfos))
return nil
@@ -105,11 +101,9 @@ func (provider *Provider) ReportTappedPods(pods []core.Pod) error {
func (provider *Provider) GetGeneralStats() (map[string]interface{}, error) {
generalStatsUrl := fmt.Sprintf("%s/status/general", provider.url)
response, requestErr := provider.client.Get(generalStatsUrl)
response, requestErr := provider.get(generalStatsUrl)
if requestErr != nil {
return nil, fmt.Errorf("failed to get general stats for telemetry, err: %w", requestErr)
} else if response.StatusCode != 200 {
return nil, fmt.Errorf("failed to get general stats for telemetry, status code: %v", response.StatusCode)
}
defer response.Body.Close()
@@ -132,7 +126,7 @@ func (provider *Provider) GetVersion() (string, error) {
Method: http.MethodGet,
URL: versionUrl,
}
statusResp, err := provider.client.Do(req)
statusResp, err := provider.do(req)
if err != nil {
return "", err
}
@@ -145,3 +139,40 @@ func (provider *Provider) GetVersion() (string, error) {
return versionResponse.Ver, nil
}
// When err is nil, resp always contains a non-nil resp.Body.
// Caller should close resp.Body when done reading from it.
func (provider *Provider) get(url string) (*http.Response, error) {
return provider.checkError(provider.client.Get(url))
}
// When err is nil, resp always contains a non-nil resp.Body.
// Caller should close resp.Body when done reading from it.
func (provider *Provider) post(url, contentType string, body io.Reader) (*http.Response, error) {
return provider.checkError(provider.client.Post(url, contentType, body))
}
// When err is nil, resp always contains a non-nil resp.Body.
// Caller should close resp.Body when done reading from it.
func (provider *Provider) do(req *http.Request) (*http.Response, error) {
return provider.checkError(provider.client.Do(req))
}
func (provider *Provider) checkError(response *http.Response, errInOperation error) (*http.Response, error) {
if (errInOperation != nil) {
return response, errInOperation
// Check only if status != 200 (and not status >= 300). Agent APIs return only 200 on success.
} else if response.StatusCode != http.StatusOK {
body, err := ioutil.ReadAll(response.Body)
response.Body.Close()
response.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
if err != nil {
return response, err
}
errorMsg := strings.ReplaceAll((string(body)), "\n", ";")
return response, fmt.Errorf("got response with status code: %d, body: %s", response.StatusCode, errorMsg)
}
return response, nil
}
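A minimal usage sketch (hypothetical caller, assuming the helpers above): because checkError folds non-200 responses into the returned error, callers branch on err alone and still get the status code and response body in the message.

    resp, err := provider.get(fmt.Sprintf("%s/echo", provider.url))
    if err != nil {
        return err // err already carries the status code and body returned by the agent
    }
    defer resp.Body.Close()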

View File

@@ -31,7 +31,7 @@ func runMizuCheck() {
}
if checkPassed {
checkPassed = checkAllResourcesExist(ctx, kubernetesProvider, isInstallCommand)
checkPassed = checkK8sResources(ctx, kubernetesProvider, isInstallCommand)
}
if checkPassed {
@@ -66,7 +66,7 @@ func checkKubernetesApi() (*kubernetes.Provider, *semver.SemVersion, bool) {
}
func checkMizuMode(ctx context.Context, kubernetesProvider *kubernetes.Provider) (bool, bool) {
logger.Log.Infof("\nmizu-mode\n--------------------")
logger.Log.Infof("\nmode\n--------------------")
if exist, err := kubernetesProvider.DoesDeploymentExist(ctx, config.Config.MizuResourcesNamespace, kubernetes.ApiServerPodName); err != nil {
logger.Log.Errorf("%v can't check mizu command, err: %v", fmt.Sprintf(uiUtils.Red, "✗"), err)
@@ -79,7 +79,7 @@ func checkMizuMode(ctx context.Context, kubernetesProvider *kubernetes.Provider)
return false, false
} else if exist {
logger.Log.Infof("%v mizu running with tap command", fmt.Sprintf(uiUtils.Green, "√"))
return true, true
return true, false
} else {
logger.Log.Infof("%v mizu is not running", fmt.Sprintf(uiUtils.Red, "✗"))
return false, false
@@ -99,9 +99,9 @@ func checkKubernetesVersion(kubernetesVersion *semver.SemVersion) bool {
}
func checkServerConnection(kubernetesProvider *kubernetes.Provider) bool {
logger.Log.Infof("\nmizu-connectivity\n--------------------")
logger.Log.Infof("\nAPI-server-connectivity\n--------------------")
serverUrl := GetApiServerUrl()
serverUrl := GetApiServerUrl(config.Config.Tap.GuiPort)
apiServerProvider := apiserver.NewProvider(serverUrl, 1, apiserver.DefaultTimeout)
if err := apiServerProvider.TestConnection(); err == nil {
@@ -169,8 +169,8 @@ func checkPortForward(serverUrl string, kubernetesProvider *kubernetes.Provider)
return nil
}
func checkAllResourcesExist(ctx context.Context, kubernetesProvider *kubernetes.Provider, isInstallCommand bool) bool {
logger.Log.Infof("\nmizu-existence\n--------------------")
func checkK8sResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, isInstallCommand bool) bool {
logger.Log.Infof("\nk8s-components\n--------------------")
exist, err := kubernetesProvider.DoesNamespaceExist(ctx, config.Config.MizuResourcesNamespace)
allResourcesExist := checkResourceExist(config.Config.MizuResourcesNamespace, "namespace", exist, err)
@@ -227,7 +227,43 @@ func checkTapResourcesExist(ctx context.Context, kubernetesProvider *kubernetes.
exist, err := kubernetesProvider.DoesPodExist(ctx, config.Config.MizuResourcesNamespace, kubernetes.ApiServerPodName)
tapResourcesExist := checkResourceExist(kubernetes.ApiServerPodName, "pod", exist, err)
return tapResourcesExist
if !tapResourcesExist {
return false
}
if pod, err := kubernetesProvider.GetPod(ctx, config.Config.MizuResourcesNamespace, kubernetes.ApiServerPodName); err != nil {
logger.Log.Errorf("%v error checking if '%v' pod exists, err: %v", fmt.Sprintf(uiUtils.Red, "✗"), kubernetes.ApiServerPodName, err)
return false
} else if kubernetes.IsPodRunning(pod) {
logger.Log.Infof("%v '%v' pod running", fmt.Sprintf(uiUtils.Green, "√"), kubernetes.ApiServerPodName)
} else {
logger.Log.Errorf("%v '%v' pod not running", fmt.Sprintf(uiUtils.Red, "✗"), kubernetes.ApiServerPodName)
return false
}
tapperRegex := regexp.MustCompile(fmt.Sprintf("^%s.*", kubernetes.TapperPodName))
if pods, err := kubernetesProvider.ListAllPodsMatchingRegex(ctx, tapperRegex, []string{config.Config.MizuResourcesNamespace}); err != nil {
logger.Log.Errorf("%v error listing '%v' pods, err: %v", fmt.Sprintf(uiUtils.Red, "✗"), kubernetes.TapperPodName, err)
return false
} else {
tappers := 0
notRunningTappers := 0
for _, pod := range pods {
tappers += 1
if !kubernetes.IsPodRunning(&pod) {
notRunningTappers += 1
}
}
if notRunningTappers > 0 {
logger.Log.Errorf("%v '%v' %v/%v pods are not running", fmt.Sprintf(uiUtils.Red, "✗"), kubernetes.TapperPodName, notRunningTappers, tappers)
return false
}
logger.Log.Infof("%v '%v' %v pods running", fmt.Sprintf(uiUtils.Green, "√"), kubernetes.TapperPodName, tappers)
return true
}
}
func checkResourceExist(resourceName string, resourceType string, exist bool, err error) bool {

View File

@@ -23,12 +23,12 @@ import (
"github.com/up9inc/mizu/shared/logger"
)
func GetApiServerUrl() string {
return fmt.Sprintf("http://%s", kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Tap.GuiPort))
func GetApiServerUrl(port uint16) string {
return fmt.Sprintf("http://%s", kubernetes.GetMizuApiServerProxiedHostAndPath(port))
}
func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, ctx context.Context, cancel context.CancelFunc) {
httpServer, err := kubernetes.StartProxy(kubernetesProvider, config.Config.Tap.ProxyHost, config.Config.Tap.GuiPort, config.Config.MizuResourcesNamespace, kubernetes.ApiServerPodName, cancel)
func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, ctx context.Context, cancel context.CancelFunc, port uint16) {
httpServer, err := kubernetes.StartProxy(kubernetesProvider, config.Config.Tap.ProxyHost, port, config.Config.MizuResourcesNamespace, kubernetes.ApiServerPodName, cancel)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error occured while running k8s proxy %v\n"+
"Try setting different port by using --%s", errormessage.FormatError(err), configStructs.GuiPortTapName))
@@ -36,7 +36,7 @@ func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, ctx con
return
}
apiProvider = apiserver.NewProvider(GetApiServerUrl(), apiserver.DefaultRetries, apiserver.DefaultTimeout)
apiProvider = apiserver.NewProvider(GetApiServerUrl(port), apiserver.DefaultRetries, apiserver.DefaultTimeout)
if err := apiProvider.TestConnection(); err != nil {
logger.Log.Debugf("Couldn't connect using proxy, stopping proxy and trying to create port-forward")
if err := httpServer.Shutdown(ctx); err != nil {
@@ -44,14 +44,14 @@ func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, ctx con
}
podRegex, _ := regexp.Compile(kubernetes.ApiServerPodName)
if _, err := kubernetes.NewPortForward(kubernetesProvider, config.Config.MizuResourcesNamespace, podRegex, config.Config.Tap.GuiPort, ctx, cancel); err != nil {
if _, err := kubernetes.NewPortForward(kubernetesProvider, config.Config.MizuResourcesNamespace, podRegex, port, ctx, cancel); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error occured while running port forward %v\n"+
"Try setting different port by using --%s", errormessage.FormatError(err), configStructs.GuiPortTapName))
cancel()
return
}
apiProvider = apiserver.NewProvider(GetApiServerUrl(), apiserver.DefaultRetries, apiserver.DefaultTimeout)
apiProvider = apiserver.NewProvider(GetApiServerUrl(port), apiserver.DefaultRetries, apiserver.DefaultTimeout)
if err := apiProvider.TestConnection(); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Couldn't connect to API server, for more info check logs at %s", fsUtils.GetLogFilePath()))
cancel()

View File

@@ -1,10 +1,9 @@
package cmd
import (
"fmt"
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/shared/logger"
)
var installCmd = &cobra.Command{
@@ -12,13 +11,13 @@ var installCmd = &cobra.Command{
Short: "Installs mizu components",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("install", nil)
runMizuInstall()
return nil
},
PreRunE: func(cmd *cobra.Command, args []string) error {
if config.Config.IsNsRestrictedMode() {
return fmt.Errorf("install is not supported in restricted namespace mode")
}
logger.Log.Infof("This command has been deprecated, please use helm as described below.\n\n")
logger.Log.Infof("To install stable build of Mizu on your cluster using helm, run the following command:")
logger.Log.Infof(" helm install mizu mizu --repo https://static.up9.com/mizu/helm --namespace=mizu --create-namespace\n\n")
logger.Log.Infof("To install development build of Mizu on your cluster using helm, run the following command:")
logger.Log.Infof(" helm install mizu mizu --repo https://static.up9.com/mizu/helm-develop --namespace=mizu --create-namespace")
return nil
},
@@ -27,4 +26,3 @@ var installCmd = &cobra.Command{
func init() {
rootCmd.AddCommand(installCmd)
}

View File

@@ -1,81 +0,0 @@
package cmd
import (
"context"
"errors"
"fmt"
"github.com/creasty/defaults"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/errormessage"
"github.com/up9inc/mizu/cli/resources"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func runMizuInstall() {
kubernetesProvider, err := getKubernetesProviderForCli()
if err != nil {
return
}
ctx, cancel := context.WithCancel(context.Background())
defer cancel() // cancel will be called when this function exits
var serializedValidationRules string
var serializedContract string
var defaultMaxEntriesDBSizeBytes int64 = 200 * 1000 * 1000
defaultResources := shared.Resources{}
if err := defaults.Set(&defaultResources); err != nil {
logger.Log.Debug(err)
}
mizuAgentConfig := getInstallMizuAgentConfig(defaultMaxEntriesDBSizeBytes, defaultResources)
serializedMizuConfig, err := getSerializedMizuAgentConfig(mizuAgentConfig)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error serializing mizu config: %v", errormessage.FormatError(err)))
return
}
if err = resources.CreateInstallMizuResources(ctx, kubernetesProvider, serializedValidationRules,
serializedContract, serializedMizuConfig, config.Config.IsNsRestrictedMode(),
config.Config.MizuResourcesNamespace, config.Config.AgentImage,
config.Config.KratosImage, config.Config.KetoImage,
nil, defaultMaxEntriesDBSizeBytes, defaultResources, config.Config.ImagePullPolicy(),
config.Config.LogLevel(), false); err != nil {
var statusError *k8serrors.StatusError
if errors.As(err, &statusError) && (statusError.ErrStatus.Reason == metav1.StatusReasonAlreadyExists) {
logger.Log.Info("Mizu is already running in this namespace, run `mizu clean` to remove the currently running Mizu instance")
} else {
defer resources.CleanUpMizuResources(ctx, cancel, kubernetesProvider, config.Config.IsNsRestrictedMode(), config.Config.MizuResourcesNamespace)
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error creating resources: %v", errormessage.FormatError(err)))
}
return
}
logger.Log.Infof(uiUtils.Magenta, "Installation completed, run `mizu view` to connect to the mizu daemon instance")
}
func getInstallMizuAgentConfig(maxDBSizeBytes int64, tapperResources shared.Resources) *shared.MizuAgentConfig {
mizuAgentConfig := shared.MizuAgentConfig{
MaxDBSizeBytes: maxDBSizeBytes,
AgentImage: config.Config.AgentImage,
PullPolicy: config.Config.ImagePullPolicyStr,
LogLevel: config.Config.LogLevel(),
TapperResources: tapperResources,
MizuResourcesNamespace: config.Config.MizuResourcesNamespace,
AgentDatabasePath: shared.DataDirPath,
StandaloneMode: true,
ServiceMap: config.Config.ServiceMap,
OAS: config.Config.OAS,
Elastic: config.Config.Elastic,
}
return &mizuAgentConfig
}

View File

@@ -45,7 +45,7 @@ var apiProvider *apiserver.Provider
func RunMizuTap() {
state.startTime = time.Now()
apiProvider = apiserver.NewProvider(GetApiServerUrl(), apiserver.DefaultRetries, apiserver.DefaultTimeout)
apiProvider = apiserver.NewProvider(GetApiServerUrl(config.Config.Tap.GuiPort), apiserver.DefaultRetries, apiserver.DefaultTimeout)
var err error
var serializedValidationRules string
@@ -162,6 +162,7 @@ func getTapMizuAgentConfig() *shared.MizuAgentConfig {
AgentDatabasePath: shared.DataDirPath,
ServiceMap: config.Config.ServiceMap,
OAS: config.Config.OAS,
Telemetry: config.Config.Telemetry,
Elastic: config.Config.Elastic,
}
@@ -421,7 +422,7 @@ func watchApiServerEvents(ctx context.Context, kubernetesProvider *kubernetes.Pr
}
func postApiServerStarted(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
startProxyReportErrorIfAny(kubernetesProvider, ctx, cancel)
startProxyReportErrorIfAny(kubernetesProvider, ctx, cancel, config.Config.Tap.GuiPort)
options, _ := getMizuApiFilteringOptions()
if err := startTapperSyncer(ctx, cancel, kubernetesProvider, state.targetNamespaces, *options, state.startTime); err != nil {
@@ -429,7 +430,7 @@ func postApiServerStarted(ctx context.Context, kubernetesProvider *kubernetes.Pr
cancel()
}
url := GetApiServerUrl()
url := GetApiServerUrl(config.Config.Tap.GuiPort)
logger.Log.Infof("Mizu is available at %s", url)
if !config.Config.HeadlessMode {
uiUtils.OpenBrowser(url)

View File

@@ -3,13 +3,13 @@ package cmd
import (
"context"
"fmt"
"github.com/up9inc/mizu/cli/utils"
"net/http"
"github.com/up9inc/mizu/cli/utils"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/mizu/version"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared/kubernetes"
"github.com/up9inc/mizu/shared/logger"
@@ -39,7 +39,7 @@ func runMizuView() {
return
}
url = GetApiServerUrl()
url = GetApiServerUrl(config.Config.View.GuiPort)
response, err := http.Get(fmt.Sprintf("%s/", url))
if err == nil && response.StatusCode == 200 {
@@ -47,7 +47,7 @@ func runMizuView() {
return
}
logger.Log.Infof("Establishing connection to k8s cluster...")
startProxyReportErrorIfAny(kubernetesProvider, ctx, cancel)
startProxyReportErrorIfAny(kubernetesProvider, ctx, cancel, config.Config.View.GuiPort)
}
apiServerProvider := apiserver.NewProvider(url, apiserver.DefaultRetries, apiserver.DefaultTimeout)
@@ -62,14 +62,5 @@ func runMizuView() {
uiUtils.OpenBrowser(url)
}
if isCompatible, err := version.CheckVersionCompatibility(apiServerProvider); err != nil {
logger.Log.Errorf("Failed to check versions compatibility %v", err)
cancel()
return
} else if !isCompatible {
cancel()
return
}
utils.WaitForFinish(ctx, cancel)
}

View File

@@ -38,7 +38,7 @@ type ConfigStruct struct {
ConfigFilePath string `yaml:"config-path,omitempty" readonly:""`
HeadlessMode bool `yaml:"headless" default:"false"`
LogLevelStr string `yaml:"log-level,omitempty" default:"INFO" readonly:""`
ServiceMap bool `yaml:"service-map" default:"false"`
ServiceMap bool `yaml:"service-map" default:"true"`
OAS bool `yaml:"oas,omitempty" default:"false" readonly:""`
Elastic shared.ElasticConfig `yaml:"elastic"`
}

View File

@@ -1,72 +1 @@
# CHANGELOG
This document summarizes main and fixes changes published in stable (aka `main`) branch of this project.
Ongoing work and development releases are under `develop` branch.
## 0.24.0
### main features
* ARM64 support -- Mizu is now available for ARM 64bit architecture
* Now you can run Mizu with `minikube` on your Apple M1 laptop or any other ARM-based hosts
* New command helps user verify Mizu deployment
* Run `mizu check` to verify Mizu was deployed successfully
* `mizu check` verifies version compatibility, resources and permissions required by Mizu
* EXPERIMENTAL: Service Map - graph of all service interactions
* Arrow direction show client to server connection
* Graph edge width reflects volume of traffic captured between the services
* to enable this experimental feature use `--set service-map=true` flag
### improvements
* Mizu container images are now served from [Docker Hub](https://hub.docker.com/r/up9inc/mizu), as multi-architecture images (arm64, amd64)
* in Mizu GUI the filter query can now be applied by pressing CONTROL/COMMAND + ENTER
* try port-forwarding if http-proxy connection to Mizu API server is not available
### notable bug fixes
* Fixed HTTP/1.0 presentation which was shown as HTTP/1.1
* Fixed handling of long-living TCP connections, improves capturing gRPC and HTTP/2 traffic, and helps in service-mesh setups (istio, linkerd)
## 0.23.0
### notable bug fixes
* fixed errors in Redis protocol parser (better handling of Array and Bulk String message types)
## 0.22.0
### main features
* Service Mesh support -- mizu is now capable to tap mTLS traffic between pods connected by Istio service mesh
* Use `--service-mesh` option to enable this feature
* New installation option - have the same Mizu functionality as long living pods in your cluster, with password protection
* To install use `mizu install` command
* To access use `mizu view` or `kubectl -n mizu port-forward svc/mizu-api-server`
* To uninstall run `mizu clean`
* At first login
* Set admin password as prompted, use it to login to mizu later on.
* After login, user should select cluster namespaces to tap: by default all namespaces in the cluster are selected, user can select/unselect according to their needs. These settings are retained and can be modified at any time via Settings menu (cog icon on the top-right)
### improvements
* improved Mizu permissions/roles logic to support clusters with strict PodSecurityPolicy (PSP) -- see [PERMISSIONS](PERMISSIONS.md) doc for more details
### notable bug fixes
* mizu now works properly when API service is exposed via HTTPS url
* mizu now properly displays KAFKA message body
## 0.21.0
### main features
* New traffic search & stream exprience
* Rich query language with full-text search capabilities on headers & body
* Distinct live-streaming vs paging/browsing modes, all with filter applied
### improvements
* GUI - source and destination IP addresses & service names for each traffic item
* GUI - Mizu health - display warning sign in top bar when not all requested pods are successfully tapped
* GUI - pod tapping status reflected in the list (ok or problem)
* Mizu telemetry - report platform type
### fixes
* Request duration and body size properly shown in GUI (instead of -1)
Mizu CHANGELOG is now part of [Mizu wiki](https://github.com/up9inc/mizu/wiki/CHANGELOG)

View File

@@ -76,6 +76,8 @@ func NewProvider(kubeConfigPath string) (*Provider, error) {
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
logger.Log.Debugf("K8s client config, host: %s, api path: %s, user agent: %s", restClientConfig.Host, restClientConfig.APIPath, restClientConfig.UserAgent)
return &Provider{
clientSet: clientSet,
kubernetesConfig: kubernetesConfig,
@@ -952,6 +954,11 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
labelSelector := applyconfmeta.LabelSelector()
labelSelector.WithMatchLabels(map[string]string{"app": tapperPodName})
applyOptions := metav1.ApplyOptions{
Force: true,
FieldManager: fieldManagerName,
}
daemonSet := applyconfapp.DaemonSet(daemonSetName, namespace)
daemonSet.
WithLabels(map[string]string{
@@ -960,7 +967,7 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
}).
WithSpec(applyconfapp.DaemonSetSpec().WithSelector(labelSelector).WithTemplate(podTemplate))
_, err = provider.clientSet.AppsV1().DaemonSets(namespace).Apply(ctx, daemonSet, metav1.ApplyOptions{FieldManager: fieldManagerName})
_, err = provider.clientSet.AppsV1().DaemonSets(namespace).Apply(ctx, daemonSet, applyOptions)
return err
}
@@ -1000,7 +1007,7 @@ func (provider *Provider) ListAllRunningPodsMatchingRegex(ctx context.Context, r
matchingPods := make([]core.Pod, 0)
for _, pod := range pods {
if isPodRunning(&pod) {
if IsPodRunning(&pod) {
matchingPods = append(matchingPods, pod)
}
}
@@ -1190,6 +1197,6 @@ func loadKubernetesConfiguration(kubeConfigPath string) clientcmd.ClientConfig {
)
}
func isPodRunning(pod *core.Pod) bool {
func IsPodRunning(pod *core.Pod) bool {
return pod.Status.Phase == core.PodRunning
}

View File

@@ -128,9 +128,14 @@ func getHttpDialer(kubernetesProvider *Provider, namespace string, podName strin
return nil, err
}
path := fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/portforward", namespace, podName)
hostIP := strings.TrimLeft(kubernetesProvider.clientConfig.Host, "htps:/") // no need specify "t" twice
serverURL := url.URL{Scheme: "https", Path: path, Host: hostIP}
clientConfigHostUrl, err := url.Parse(kubernetesProvider.clientConfig.Host)
if err != nil {
return nil, fmt.Errorf("Failed parsing client config host URL %s, error %w", kubernetesProvider.clientConfig.Host, err)
}
path := fmt.Sprintf("%s/api/v1/namespaces/%s/pods/%s/portforward", clientConfigHostUrl.Path, namespace, podName)
serverURL := url.URL{Scheme: "https", Path: path, Host: clientConfigHostUrl.Host}
logger.Log.Debugf("Http dialer url %v", serverURL)
return spdy.NewDialer(upgrader, &http.Client{Transport: roundTripper}, http.MethodPost, &serverURL), nil
}
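A worked example of why the URL parsing change matters (host value is hypothetical): strings.TrimLeft with the cutset "htps:/" only strips leading scheme characters, leaves any path glued onto the host, and can even over-trim hosts whose name begins with one of those letters, whereas url.Parse separates Host and Path cleanly, which matters when the cluster API is reached behind a path prefix.

    // sketch; uses "net/url" and "strings"
    host := "https://rancher.example.com/k8s/clusters/c-abc123"
    old := strings.TrimLeft(host, "htps:/") // "rancher.example.com/k8s/clusters/c-abc123": path ends up inside Host
    u, _ := url.Parse(host)                 // u.Host == "rancher.example.com", u.Path == "/k8s/clusters/c-abc123"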

View File

@@ -43,6 +43,7 @@ type MizuAgentConfig struct {
StandaloneMode bool `json:"standaloneMode"`
ServiceMap bool `json:"serviceMap"`
OAS bool `json:"oas"`
Telemetry bool `json:"telemetry"`
Elastic ElasticConfig `json:"elastic"`
}

View File

@@ -39,10 +39,9 @@ type TCP struct {
}
type Extension struct {
Protocol *Protocol
Path string
Dissector Dissector
MatcherMap *sync.Map
Protocol *Protocol
Path string
Dissector Dissector
}
type ConnectionInfo struct {
@@ -62,7 +61,6 @@ type TcpID struct {
}
type CounterPair struct {
StreamId int64
Request uint
Response uint
sync.Mutex
@@ -100,10 +98,15 @@ type SuperIdentifier struct {
type Dissector interface {
Register(*Extension)
Ping()
Dissect(b *bufio.Reader, isClient bool, tcpID *TcpID, counterPair *CounterPair, superTimer *SuperTimer, superIdentifier *SuperIdentifier, emitter Emitter, options *TrafficFilteringOptions) error
Analyze(item *OutputChannelItem, resolvedSource string, resolvedDestination string) *Entry
Dissect(b *bufio.Reader, isClient bool, tcpID *TcpID, counterPair *CounterPair, superTimer *SuperTimer, superIdentifier *SuperIdentifier, emitter Emitter, options *TrafficFilteringOptions, reqResMatcher RequestResponseMatcher) error
Analyze(item *OutputChannelItem, resolvedSource string, resolvedDestination string, namespace string) *Entry
Represent(request map[string]interface{}, response map[string]interface{}) (object []byte, bodySize int64, err error)
Macros() map[string]string
NewResponseRequestMatcher() RequestResponseMatcher
}
type RequestResponseMatcher interface {
GetMap() *sync.Map
}
type Emitting struct {
@@ -125,6 +128,7 @@ type Entry struct {
Protocol Protocol `json:"proto"`
Source *TCP `json:"src"`
Destination *TCP `json:"dst"`
Namespace string `json:"namespace,omitempty"`
Outgoing bool `json:"outgoing"`
Timestamp int64 `json:"timestamp"`
StartTime time.Time `json:"startTime"`

View File

@@ -22,6 +22,7 @@ type Cleaner struct {
connectionTimeout time.Duration
stats CleanerStats
statsMutex sync.Mutex
streamsMap *tcpStreamMap
}
func (cl *Cleaner) clean() {
@@ -32,10 +33,15 @@ func (cl *Cleaner) clean() {
flushed, closed := cl.assembler.FlushCloseOlderThan(startCleanTime.Add(-cl.connectionTimeout))
cl.assemblerMutex.Unlock()
for _, extension := range extensions {
deleted := deleteOlderThan(extension.MatcherMap, startCleanTime.Add(-cl.connectionTimeout))
cl.streamsMap.streams.Range(func(k, v interface{}) bool {
reqResMatcher := v.(*tcpStreamWrapper).reqResMatcher
if reqResMatcher == nil {
return true
}
deleted := deleteOlderThan(reqResMatcher.GetMap(), startCleanTime.Add(-cl.connectionTimeout))
cl.stats.deleted += deleted
}
return true
})
cl.statsMutex.Lock()
logger.Log.Debugf("Assembler Stats after cleaning %s", cl.assembler.Dump())

View File

@@ -42,7 +42,7 @@ func (d dissecting) Ping() {
const amqpRequest string = "amqp_request"
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions, _reqResMatcher api.RequestResponseMatcher) error {
r := AmqpReader{b}
var remaining int
@@ -212,7 +212,7 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
}
}
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string) *api.Entry {
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string, namespace string) *api.Entry {
request := item.Pair.Request.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
@@ -254,6 +254,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
IP: item.ConnectionInfo.ServerIP,
Port: item.ConnectionInfo.ServerPort,
},
Namespace: namespace,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Method: request["method"].(string),
@@ -300,6 +301,10 @@ func (d dissecting) Macros() map[string]string {
}
}
func (d dissecting) NewResponseRequestMatcher() api.RequestResponseMatcher {
return nil
}
var Dissector dissecting
func NewDissector() api.Dissector {

View File

@@ -13,4 +13,4 @@ test-pull-bin:
test-pull-expect:
@mkdir -p expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect/http/\* expect
@[ "${skipexpect}" ] && echo "Skipping downloading expected JSONs" || gsutil -o 'GSUtil:parallel_process_count=5' -o 'GSUtil:parallel_thread_count=5' -m cp -r gs://static.up9.io/mizu/test-pcap/expect2/http/\* expect

View File

@@ -47,7 +47,7 @@ func replaceForwardedFor(item *api.OutputChannelItem) {
item.ConnectionInfo.ClientPort = ""
}
func handleHTTP2Stream(http2Assembler *Http2Assembler, tcpID *api.TcpID, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
func handleHTTP2Stream(http2Assembler *Http2Assembler, tcpID *api.TcpID, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions, reqResMatcher *requestResponseMatcher) error {
streamID, messageHTTP1, isGrpc, err := http2Assembler.readMessage()
if err != nil {
return err
@@ -58,7 +58,7 @@ func handleHTTP2Stream(http2Assembler *Http2Assembler, tcpID *api.TcpID, superTi
switch messageHTTP1 := messageHTTP1.(type) {
case http.Request:
ident := fmt.Sprintf(
"%s->%s %s->%s %d %s",
"%s_%s_%s_%s_%d_%s",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
@@ -78,7 +78,7 @@ func handleHTTP2Stream(http2Assembler *Http2Assembler, tcpID *api.TcpID, superTi
}
case http.Response:
ident := fmt.Sprintf(
"%s->%s %s->%s %d %s",
"%s_%s_%s_%s_%d_%s",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,
@@ -110,7 +110,7 @@ func handleHTTP2Stream(http2Assembler *Http2Assembler, tcpID *api.TcpID, superTi
return nil
}
func handleHTTP1ClientStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) (switchingProtocolsHTTP2 bool, req *http.Request, err error) {
func handleHTTP1ClientStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions, reqResMatcher *requestResponseMatcher) (switchingProtocolsHTTP2 bool, req *http.Request, err error) {
req, err = http.ReadRequest(b)
if err != nil {
return
@@ -130,8 +130,7 @@ func handleHTTP1ClientStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api
req.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
ident := fmt.Sprintf(
"%d_%s:%s_%s:%s_%d_%s",
counterPair.StreamId,
"%s_%s_%s_%s_%d_%s",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
@@ -153,7 +152,7 @@ func handleHTTP1ClientStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api
return
}
func handleHTTP1ServerStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) (switchingProtocolsHTTP2 bool, err error) {
func handleHTTP1ServerStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions, reqResMatcher *requestResponseMatcher) (switchingProtocolsHTTP2 bool, err error) {
var res *http.Response
res, err = http.ReadResponse(b, nil)
if err != nil {
@@ -174,8 +173,7 @@ func handleHTTP1ServerStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api
res.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
ident := fmt.Sprintf(
"%d_%s:%s_%s:%s_%d_%s",
counterPair.StreamId,
"%s_%s_%s_%s_%d_%s",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,

View File

@@ -84,14 +84,15 @@ type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &http11protocol
extension.MatcherMap = reqResMatcher.openMessagesMap
}
func (d dissecting) Ping() {
log.Printf("pong %s", http11protocol.Name)
}
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions, _reqResMatcher api.RequestResponseMatcher) error {
reqResMatcher := _reqResMatcher.(*requestResponseMatcher)
var err error
isHTTP2, _ := checkIsHTTP2Connection(b, isClient)
@@ -124,7 +125,7 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
}
if isHTTP2 {
err = handleHTTP2Stream(http2Assembler, tcpID, superTimer, emitter, options)
err = handleHTTP2Stream(http2Assembler, tcpID, superTimer, emitter, options, reqResMatcher)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
@@ -133,7 +134,7 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
superIdentifier.Protocol = &http11protocol
} else if isClient {
var req *http.Request
switchingProtocolsHTTP2, req, err = handleHTTP1ClientStream(b, tcpID, counterPair, superTimer, emitter, options)
switchingProtocolsHTTP2, req, err = handleHTTP1ClientStream(b, tcpID, counterPair, superTimer, emitter, options, reqResMatcher)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
@@ -144,7 +145,7 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
// In case of an HTTP2 upgrade, duplicate the HTTP1 request into HTTP2 with stream ID 1
if switchingProtocolsHTTP2 {
ident := fmt.Sprintf(
"%s->%s %s->%s 1 %s",
"%s_%s_%s_%s_1_%s",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
@@ -164,7 +165,7 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
}
}
} else {
switchingProtocolsHTTP2, err = handleHTTP1ServerStream(b, tcpID, counterPair, superTimer, emitter, options)
switchingProtocolsHTTP2, err = handleHTTP1ServerStream(b, tcpID, counterPair, superTimer, emitter, options, reqResMatcher)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
@@ -181,7 +182,7 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
return nil
}
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string) *api.Entry {
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string, namespace string) *api.Entry {
var host, authority, path string
request := item.Pair.Request.Payload.(map[string]interface{})
@@ -279,6 +280,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
IP: item.ConnectionInfo.ServerIP,
Port: item.ConnectionInfo.ServerPort,
},
Namespace: namespace,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: resDetails,
@@ -472,6 +474,10 @@ func (d dissecting) Macros() map[string]string {
}
}
func (d dissecting) NewResponseRequestMatcher() api.RequestResponseMatcher {
return createResponseRequestMatcher()
}
var Dissector dissecting
func NewDissector() api.Dissector {

View File

@@ -11,7 +11,6 @@ import (
"os"
"path"
"path/filepath"
"sort"
"testing"
"time"
@@ -39,7 +38,6 @@ func TestRegister(t *testing.T) {
extension := &api.Extension{}
dissector.Register(extension)
assert.Equal(t, "http", extension.Protocol.Name)
assert.NotNil(t, extension.MatcherMap)
}
func TestMacros(t *testing.T) {
@@ -123,7 +121,8 @@ func TestDissect(t *testing.T) {
SrcPort: "1",
DstPort: "2",
}
err = dissector.Dissect(bufferClient, true, tcpIDClient, counterPair, &api.SuperTimer{}, superIdentifier, emitter, options)
reqResMatcher := dissector.NewResponseRequestMatcher()
err = dissector.Dissect(bufferClient, true, tcpIDClient, counterPair, &api.SuperTimer{}, superIdentifier, emitter, options, reqResMatcher)
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
panic(err)
}
@@ -141,7 +140,7 @@ func TestDissect(t *testing.T) {
SrcPort: "2",
DstPort: "1",
}
err = dissector.Dissect(bufferServer, false, tcpIDServer, counterPair, &api.SuperTimer{}, superIdentifier, emitter, options)
err = dissector.Dissect(bufferServer, false, tcpIDServer, counterPair, &api.SuperTimer{}, superIdentifier, emitter, options, reqResMatcher)
if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
panic(err)
}
@@ -155,14 +154,6 @@ func TestDissect(t *testing.T) {
stop <- true
sort.Slice(items, func(i, j int) bool {
iMarshaled, err := json.Marshal(items[i])
assert.Nil(t, err)
jMarshaled, err := json.Marshal(items[j])
assert.Nil(t, err)
return len(iMarshaled) < len(jMarshaled)
})
marshaled, err := json.Marshal(items)
assert.Nil(t, err)
@@ -214,7 +205,7 @@ func TestAnalyze(t *testing.T) {
var entries []*api.Entry
for _, item := range items {
entry := dissector.Analyze(item, "", "")
entry := dissector.Analyze(item, "", "", "")
entries = append(entries, entry)
}

View File

@@ -8,16 +8,17 @@ import (
"github.com/up9inc/mizu/tap/api"
)
var reqResMatcher = createResponseRequestMatcher() // global
// Key is {client_addr}:{client_port}->{dest_addr}:{dest_port}_{incremental_counter}
// Key is {client_addr}_{client_port}_{dest_addr}_{dest_port}_{incremental_counter}_{proto_ident}
type requestResponseMatcher struct {
openMessagesMap *sync.Map
}
func createResponseRequestMatcher() requestResponseMatcher {
newMatcher := &requestResponseMatcher{openMessagesMap: &sync.Map{}}
return *newMatcher
func createResponseRequestMatcher() api.RequestResponseMatcher {
return &requestResponseMatcher{openMessagesMap: &sync.Map{}}
}
func (matcher *requestResponseMatcher) GetMap() *sync.Map {
return matcher.openMessagesMap
}
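For illustration, a key in the new underscore-separated format (all values made up), built so that the request side and the response side of the same exchange produce matching keys for the per-stream matcher:

    // sketch, mirroring the "%s_%s_%s_%s_%d_%s" format used by the HTTP dissector above
    ident := fmt.Sprintf("%s_%s_%s_%s_%d_%s",
        "10.0.0.5", "10.0.0.9", "34567", "8080", 3, "HTTP1")
    // ident == "10.0.0.5_10.0.0.9_34567_8080_3_HTTP1"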
func (matcher *requestResponseMatcher) registerRequest(ident string, request *http.Request, captureTime time.Time, protoMinor int) *api.OutputChannelItem {

View File

@@ -33,27 +33,27 @@ type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &_protocol
extension.MatcherMap = reqResMatcher.openMessagesMap
}
func (d dissecting) Ping() {
log.Printf("pong %s", _protocol.Name)
}
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions, _reqResMatcher api.RequestResponseMatcher) error {
reqResMatcher := _reqResMatcher.(*requestResponseMatcher)
for {
if superIdentifier.Protocol != nil && superIdentifier.Protocol != &_protocol {
return errors.New("Identified by another protocol")
}
if isClient {
_, _, err := ReadRequest(b, tcpID, counterPair, superTimer)
_, _, err := ReadRequest(b, tcpID, counterPair, superTimer, reqResMatcher)
if err != nil {
return err
}
superIdentifier.Protocol = &_protocol
} else {
err := ReadResponse(b, tcpID, counterPair, superTimer, emitter)
err := ReadResponse(b, tcpID, counterPair, superTimer, emitter, reqResMatcher)
if err != nil {
return err
}
@@ -62,7 +62,7 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
}
}
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string) *api.Entry {
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string, namespace string) *api.Entry {
request := item.Pair.Request.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
apiKey := ApiKey(reqDetails["apiKey"].(float64))
@@ -158,6 +158,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
IP: item.ConnectionInfo.ServerIP,
Port: item.ConnectionInfo.ServerPort,
},
Namespace: namespace,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: item.Pair.Response.Payload.(map[string]interface{})["details"].(map[string]interface{}),
@@ -215,6 +216,10 @@ func (d dissecting) Macros() map[string]string {
}
}
func (d dissecting) NewResponseRequestMatcher() api.RequestResponseMatcher {
return createResponseRequestMatcher()
}
var Dissector dissecting
func NewDissector() api.Dissector {

View File

@@ -3,9 +3,10 @@ package kafka
import (
"sync"
"time"
"github.com/up9inc/mizu/tap/api"
)
var reqResMatcher = CreateResponseRequestMatcher() // global
const maxTry int = 3000
type RequestResponsePair struct {
@@ -13,14 +14,17 @@ type RequestResponsePair struct {
Response Response
}
// Key is {client_addr}:{client_port}->{dest_addr}:{dest_port}::{correlation_id}
// Key is {client_addr}_{client_port}_{dest_addr}_{dest_port}_{correlation_id}
type requestResponseMatcher struct {
openMessagesMap *sync.Map
}
func CreateResponseRequestMatcher() requestResponseMatcher {
newMatcher := &requestResponseMatcher{openMessagesMap: &sync.Map{}}
return *newMatcher
func createResponseRequestMatcher() api.RequestResponseMatcher {
return &requestResponseMatcher{openMessagesMap: &sync.Map{}}
}
func (matcher *requestResponseMatcher) GetMap() *sync.Map {
return matcher.openMessagesMap
}
func (matcher *requestResponseMatcher) registerRequest(key string, request *Request) *RequestResponsePair {

View File

@@ -19,7 +19,7 @@ type Request struct {
CaptureTime time.Time `json:"captureTime"`
}
func ReadRequest(r io.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer) (apiKey ApiKey, apiVersion int16, err error) {
func ReadRequest(r io.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, reqResMatcher *requestResponseMatcher) (apiKey ApiKey, apiVersion int16, err error) {
d := &decoder{reader: r, remain: 4}
size := d.readInt32()
@@ -214,8 +214,7 @@ func ReadRequest(r io.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, su
}
key := fmt.Sprintf(
"%d_%s:%s_%s:%s_%d",
counterPair.StreamId,
"%s_%s_%s_%s_%d",
tcpID.SrcIP,
tcpID.SrcPort,
tcpID.DstIP,

View File

@@ -16,7 +16,7 @@ type Response struct {
CaptureTime time.Time `json:"captureTime"`
}
func ReadResponse(r io.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter) (err error) {
func ReadResponse(r io.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, reqResMatcher *requestResponseMatcher) (err error) {
d := &decoder{reader: r, remain: 4}
size := d.readInt32()
@@ -44,8 +44,7 @@ func ReadResponse(r io.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, s
}
key := fmt.Sprintf(
"%d_%s:%s_%s:%s_%d",
counterPair.StreamId,
"%s_%s_%s_%s_%d",
tcpID.DstIP,
tcpID.DstPort,
tcpID.SrcIP,

View File

@@ -6,15 +6,14 @@ import (
"github.com/up9inc/mizu/tap/api"
)
func handleClientStream(tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, request *RedisPacket) error {
func handleClientStream(tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, request *RedisPacket, reqResMatcher *requestResponseMatcher) error {
counterPair.Lock()
counterPair.Request++
requestCounter := counterPair.Request
counterPair.Unlock()
ident := fmt.Sprintf(
"%d_%s:%s_%s:%s_%d",
counterPair.StreamId,
"%s_%s_%s_%s_%d",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
@@ -36,15 +35,14 @@ func handleClientStream(tcpID *api.TcpID, counterPair *api.CounterPair, superTim
return nil
}
func handleServerStream(tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, response *RedisPacket) error {
func handleServerStream(tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, response *RedisPacket, reqResMatcher *requestResponseMatcher) error {
counterPair.Lock()
counterPair.Response++
responseCounter := counterPair.Response
counterPair.Unlock()
ident := fmt.Sprintf(
"%d_%s:%s_%s:%s_%d",
counterPair.StreamId,
"%s_%s_%s_%s_%d",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,

View File

@@ -32,14 +32,14 @@ type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &protocol
extension.MatcherMap = reqResMatcher.openMessagesMap
}
func (d dissecting) Ping() {
log.Printf("pong %s", protocol.Name)
}
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions, _reqResMatcher api.RequestResponseMatcher) error {
reqResMatcher := _reqResMatcher.(*requestResponseMatcher)
is := &RedisInputStream{
Reader: b,
Buf: make([]byte, 8192),
@@ -52,9 +52,9 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
}
if isClient {
err = handleClientStream(tcpID, counterPair, superTimer, emitter, redisPacket)
err = handleClientStream(tcpID, counterPair, superTimer, emitter, redisPacket, reqResMatcher)
} else {
err = handleServerStream(tcpID, counterPair, superTimer, emitter, redisPacket)
err = handleServerStream(tcpID, counterPair, superTimer, emitter, redisPacket, reqResMatcher)
}
if err != nil {
@@ -63,7 +63,7 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
}
}
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string) *api.Entry {
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string, namespace string) *api.Entry {
request := item.Pair.Request.Payload.(map[string]interface{})
response := item.Pair.Response.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
@@ -96,6 +96,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
IP: item.ConnectionInfo.ServerIP,
Port: item.ConnectionInfo.ServerPort,
},
Namespace: namespace,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: resDetails,
@@ -127,6 +128,10 @@ func (d dissecting) Macros() map[string]string {
}
}
func (d dissecting) NewResponseRequestMatcher() api.RequestResponseMatcher {
return createResponseRequestMatcher()
}
var Dissector dissecting
func NewDissector() api.Dissector {

View File

@@ -7,16 +7,17 @@ import (
"github.com/up9inc/mizu/tap/api"
)
var reqResMatcher = createResponseRequestMatcher() // global
// Key is `{stream_id}_{src_ip}:{dst_ip}_{src_ip}:{src_port}_{incremental_counter}`
// Key is `{src_ip}_{dst_ip}_{src_ip}_{src_port}_{incremental_counter}`
type requestResponseMatcher struct {
openMessagesMap *sync.Map
}
func createResponseRequestMatcher() requestResponseMatcher {
newMatcher := &requestResponseMatcher{openMessagesMap: &sync.Map{}}
return *newMatcher
func createResponseRequestMatcher() api.RequestResponseMatcher {
return &requestResponseMatcher{openMessagesMap: &sync.Map{}}
}
func (matcher *requestResponseMatcher) GetMap() *sync.Map {
return matcher.openMessagesMap
}
func (matcher *requestResponseMatcher) registerRequest(ident string, request *RedisPacket, captureTime time.Time) *api.OutputChannelItem {

View File

@@ -210,6 +210,7 @@ func startPassiveTapper(opts *TapOpts, outputItems chan *api.OutputChannelItem)
assemblerMutex: &assembler.assemblerMutex,
cleanPeriod: cleanPeriod,
connectionTimeout: staleConnectionTimeout,
streamsMap: streamsMap,
}
cleaner.start()

View File

@@ -47,6 +47,7 @@ type tcpReader struct {
extension *api.Extension
emitter api.Emitter
counterPair *api.CounterPair
reqResMatcher api.RequestResponseMatcher
sync.Mutex
}
@@ -94,7 +95,7 @@ func (h *tcpReader) Close() {
func (h *tcpReader) run(wg *sync.WaitGroup) {
defer wg.Done()
b := bufio.NewReader(h)
err := h.extension.Dissector.Dissect(b, h.isClient, h.tcpID, h.counterPair, h.superTimer, h.parent.superIdentifier, h.emitter, filteringOptions)
err := h.extension.Dissector.Dissect(b, h.isClient, h.tcpID, h.counterPair, h.superTimer, h.parent.superIdentifier, h.emitter, filteringOptions, h.reqResMatcher)
if err != nil {
_, err = io.Copy(ioutil.Discard, b)
if err != nil {

View File

@@ -29,8 +29,9 @@ type tcpStreamFactory struct {
}
type tcpStreamWrapper struct {
stream *tcpStream
createdAt time.Time
stream *tcpStream
reqResMatcher api.RequestResponseMatcher
createdAt time.Time
}
func NewTcpStreamFactory(emitter api.Emitter, streamsMap *tcpStreamMap, opts *TapOpts) *tcpStreamFactory {
@@ -81,8 +82,8 @@ func (factory *tcpStreamFactory) New(net, transport gopacket.Flow, tcp *layers.T
if stream.isTapTarget {
stream.id = factory.streamsMap.nextId()
for i, extension := range extensions {
reqResMatcher := extension.Dissector.NewResponseRequestMatcher()
counterPair := &api.CounterPair{
StreamId: stream.id,
Request: 0,
Response: 0,
}
@@ -103,6 +104,7 @@ func (factory *tcpStreamFactory) New(net, transport gopacket.Flow, tcp *layers.T
extension: extension,
emitter: factory.Emitter,
counterPair: counterPair,
reqResMatcher: reqResMatcher,
})
stream.servers = append(stream.servers, tcpReader{
msgQueue: make(chan tcpReaderDataMsg),
@@ -121,11 +123,13 @@ func (factory *tcpStreamFactory) New(net, transport gopacket.Flow, tcp *layers.T
extension: extension,
emitter: factory.Emitter,
counterPair: counterPair,
reqResMatcher: reqResMatcher,
})
factory.streamsMap.Store(stream.id, &tcpStreamWrapper{
stream: stream,
createdAt: time.Now(),
stream: stream,
reqResMatcher: reqResMatcher,
createdAt: time.Now(),
})
factory.wg.Add(2)

View File

@@ -76,7 +76,7 @@ export const TrafficPage: React.FC<TrafficPageProps> = ({setAnalyzeStatus}) => {
const scrollableRef = useRef(null);
const [openOasModal, setOpenOasModal] = useState(false);
const handleOpenModal = () => setOpenOasModal(true);
const handleCloseModal = () => setOpenOasModal(false);
const [showTLSWarning, setShowTLSWarning] = useState(false);
@@ -258,8 +258,14 @@ export const TrafficPage: React.FC<TrafficPageProps> = ({setAnalyzeStatus}) => {
}
}
const handleOpenOasModal = () => {
ws.current.close();
setOpenOasModal(true);
}
const openServiceMapModalDebounce = debounce(() => {
setServiceMapModalOpen(true)
ws.current.close();
setServiceMapModalOpen(true);
}, 500);
return (
@@ -285,7 +291,7 @@ export const TrafficPage: React.FC<TrafficPageProps> = ({setAnalyzeStatus}) => {
variant="contained"
className={commonClasses.outlinedButton + " " + commonClasses.imagedButton}
style={{ marginRight: 25 }}
onClick={handleOpenModal}
onClick={handleOpenOasModal}
>
Show OAS
</Button>}

View File

@@ -10,8 +10,7 @@ import debounce from 'lodash/debounce';
import ServiceMapOptions from './ServiceMapOptions'
import { useCommonStyles } from "../../helpers/commonStyle";
import refresh from "../assets/refresh.svg";
import reset from "../assets/reset.svg";
import close from "../assets/close.svg"
import close from "../assets/close.svg";
interface GraphData {
nodes: Node[];
@@ -154,19 +153,6 @@ export const ServiceMapModal: React.FC<ServiceMapModalProps> = ({ isOpen, onOpen
return () => setGraphData({ nodes: [], edges: [] })
}, [getServiceMapData])
const resetServiceMap = debounce(async () => {
try {
const serviceMapResetResponse = await api.serviceMapReset();
if (serviceMapResetResponse["status"] === "enabled") {
refreshServiceMap()
}
} catch (ex) {
toast.error("An error occurred while resetting Mizu Service Map, see console for mode details");
console.error(ex);
}
}, 500);
const refreshServiceMap = debounce(() => {
getServiceMapData();
}, 500);
@@ -192,16 +178,6 @@ export const ServiceMapModal: React.FC<ServiceMapModalProps> = ({ isOpen, onOpen
{!isLoading && <div style={{ height: "100%", width: "100%" }}>
<div style={{ display: "flex", justifyContent: "space-between" }}>
<div>
<Button
startIcon={<img src={reset} className="custom" alt="reset" style={{ marginRight:"8%"}}></img>}
size="large"
variant="contained"
className={commonClasses.outlinedButton + " " + commonClasses.imagedButton}
style={{ marginRight: 25, paddingTop: "3px", paddingBottom: "1px"}}
onClick={resetServiceMap}
>
Reset
</Button>
<Button
startIcon={<img src={refresh} className="custom" alt="refresh" style={{ marginRight:"8%"}}></img>}
size="medium"

View File

@@ -1,4 +0,0 @@
<svg width="22" height="22" viewBox="0 0 22 22" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M11 14C12.114 14 13 13.1127 13 12C13 10.8873 12.114 10 11 10C9.886 10 9 10.8873 9 12C9 13.1127 9.886 14 11 14Z" fill="#205CF5"/>
<path d="M19.0823 10.2539C18.8662 9.1982 18.4442 8.19548 17.8402 7.30312C17.2467 6.42521 16.4906 5.66909 15.6127 5.07562C14.7202 4.47184 13.7175 4.04978 12.6619 3.83354C12.1074 3.72109 11.5429 3.6658 10.9771 3.66854V1.83337L7.33333 4.58337L10.9771 7.33337V5.50187C11.4208 5.50004 11.8644 5.54221 12.2925 5.63021C13.1129 5.79833 13.8923 6.12632 14.586 6.59546C15.2702 7.05676 15.859 7.64561 16.3203 8.32979C17.0366 9.38875 17.4185 10.6383 17.4167 11.9167C17.4165 12.7746 17.2451 13.6238 16.9125 14.4146C16.7507 14.7955 16.5531 15.1602 16.3222 15.5036C16.0904 15.845 15.827 16.1639 15.5357 16.456C14.6483 17.3417 13.5219 17.9492 12.2943 18.2041C11.4407 18.3765 10.5612 18.3765 9.7075 18.2041C8.88668 18.0358 8.10704 17.7075 7.41308 17.238C6.72969 16.7771 6.14148 16.1888 5.68058 15.5055C4.96518 14.4454 4.58306 13.1956 4.58333 11.9167H2.75C2.75098 13.5609 3.24215 15.1675 4.16075 16.5312C4.75461 17.4077 5.50996 18.163 6.38642 18.7569C7.74824 19.6786 9.3556 20.1697 11 20.1667C11.5585 20.1667 12.1156 20.1105 12.6628 19.999C13.7177 19.7811 14.7197 19.3592 15.6127 18.7569C16.0511 18.4615 16.4597 18.1241 16.8328 17.7495C17.2063 17.3749 17.5439 16.9661 17.8411 16.5285C18.762 15.167 19.2528 13.5604 19.25 11.9167C19.25 11.3582 19.1938 10.8011 19.0823 10.2539Z" fill="#205CF5"/>
</svg>
