Compare commits

...

63 Commits

Author SHA1 Message Date
Igor Gov
90c210452d API server stores tappers status (#531) 2021-12-15 14:52:49 +02:00
M. Mert Yıldıran
0a915b3fe7 Fix the CSS issue in Queryable component for src.name field on heading mode (#530) 2021-12-15 12:28:46 +03:00
M. Mert Yıldıran
a830bbe023 Fix the glitch (#529)
* Fix the glitch

* Bring back the functionality to "Fetch old records" and "Snap to bottom" buttons
2021-12-15 12:26:18 +03:00
Alex Haiut
f1ba397543 make description of mizu config options public (#527) 2021-12-14 20:03:26 +02:00
M. Mert Yıldıran
4e17ac5654 Remove unnecessary fields and split service into src.name and dst.name (#525)
* Remove unnecessary fields and split `service` into `src.name` and `dst.name`

* Don't fall back to IP address but instead display `[Unresolved]` text

* Fix the CSS issues in the plus icon position and replace the separator `->` text with `SwapHorizIcon`
2021-12-14 11:36:02 +03:00
M. Mert Yıldıran
d274db2d87 Fix the CSS issues in queryable vertical protocol element (#526) 2021-12-12 19:38:14 +03:00
M. Mert Yıldıran
0a2aacfb02 Include milliseconds information into the timestamps in the UI (#524)
* Include milliseconds information into the timestamps in the UI

* Upgrade Basenine version from `0.2.16` to `0.2.17`

* Increase the `width` of timestamp
2021-12-10 18:03:17 +03:00
Igor Gov
3c64c1c7ca Report the platform in telemetry (#523)
Co-authored-by: Igor Gov <igor.govorov1@gmail.com>
2021-12-09 13:12:15 +02:00
Igor Gov
005f000ef6 Disable version check for devs (#522) 2021-12-09 12:11:36 +02:00
M. Mert Yıldıran
1ef3778051 Add type switch for Base field of MizuEntry (#520) 2021-12-08 16:53:57 +03:00
M. Mert Yıldıran
9f1e311689 TRA-4017 Bring back getOldEntries method using fetch API and always start streaming from now (#518)
* Bring back `getOldEntries` method using fetch API

* Determine no more data on top based on `leftOff` value

* Remove `entriesBuffer` state

* Always open WebSocket with some `leftOff` value

* Rename `leftOff` state to `leftOffBottom`

* Don't set the `focusedEntryId` through WebSocket if the WebSocket is closed

* Call `setQueriedCurrent` with addition

* Close WebSocket upon reaching to top

* Open WebSocket upon snapping to bottom

* Close the WebSocket on snap broken event instead

* Set queried current value to zero upon filter submit

* Upgrade `react-scrollable-feed-virtualized` version and use `scrollToIndex` function

* Change the footer text format

* Improve no more data top logic

* Fix `closeWebSocket()` call logic in `onSnapBrokenEvent` and handle `data.meta` being `null` in `getOldEntries`

* Fix the issues around fetching old records

* Clean up `EntriesList.module.sass`

* Decrement initial `leftOffTop` value by `2`

* Fix the order of `incomingEntries` in `getOldEntries`

* Request `leftOffTop - 1` from `fetchEntries`

* Limit the front-end total entries fetched through WebSocket count to `10000`

* Lose the UI performance gain that's provided by #452

* Revert "Fix the selected entry behavior by propagating the `focusedEntryId` through WebSocket (before #452) TRA-3983 (#513)"

This reverts commit 873f252544.

* Fix the issues caused by 09371f141f

* Upgrade Basenine version from `0.2.13` to `0.2.14`

* Upgrade Basenine version from `0.2.14` to `0.2.15`

* Fix the condition of "Fetch old records" button visibility

* Upgrade Basenine version from `0.2.15` to `0.2.16` and fix the UI code related to fetching old records

* Make `newEntries` constant
2021-12-08 15:19:35 +03:00
M. Mert Yıldıran
9aaf18842b Fix the CSS issue in Queryable inside EntryViewLine (#521) 2021-12-07 14:15:49 +03:00
M. Mert Yıldıran
880842c39f Fix the styling of Queryable under StatusCode and Summary components (#519) 2021-12-04 20:25:22 +03:00
David Levanon
296e1bb667 Replace privileged with specific CAPABILITIES requests (#514) 2021-12-02 11:41:13 +02:00
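Commit #514 swaps the tapper's `privileged: true` security context for specific Linux capabilities. Below is a minimal, hypothetical sketch of what such a container security context can look like when built with client-go types; the exact capability names (`NET_RAW`, `NET_ADMIN`) are assumptions for illustration, not taken from this change set.

```go
package main

import (
	"fmt"

	core "k8s.io/api/core/v1"
)

// buildTapperSecurityContext returns a container security context that grants
// only the capabilities a packet-capturing tapper is likely to need, instead
// of running the container as privileged. The capability list is an
// assumption for illustration; the real set lives in the tapper pod spec.
func buildTapperSecurityContext() *core.SecurityContext {
	privileged := false
	return &core.SecurityContext{
		Privileged: &privileged,
		Capabilities: &core.Capabilities{
			Add: []core.Capability{"NET_RAW", "NET_ADMIN"},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", buildTapperSecurityContext())
}
```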
Igor Gov
2910611111 Disable telemetry by env var MIZU_DISABLE_TELEMTRY (#517) 2021-12-02 09:20:27 +02:00
M. Mert Yıldıran
c47959dbd8 Bring back GetEntries HTTP endpoint (#515)
* Bring back `GetEntries` HTTP endpoint

* Upgrade Basenine version from `0.2.12` to `0.2.13`

* Accept negative `leftOff` value

* Remove `max`es from the validations

* Make `timeoutMs` optional

* Update the route comment

* Add `EntriesResponse` struct
2021-12-01 11:55:13 +03:00
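Commit #515 restores a plain HTTP endpoint alongside the WebSocket stream. The sketch below shows how a client could query it, based on the `EntriesRequest` form fields visible further down in this diff (`leftOff`, `direction`, `query`, `limit`, `timeoutMs`); the base URL and the response shape beyond `data`/`meta` are assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// fetchOldEntries queries the restored GET /entries/ endpoint. The query
// parameters mirror the EntriesRequest struct in this change set; the base
// URL is an assumption (the API server is normally reached via `mizu view`).
func fetchOldEntries(baseURL, query string, leftOff, limit int) (map[string]interface{}, error) {
	params := url.Values{}
	params.Set("leftOff", fmt.Sprint(leftOff)) // -1 means "start from the latest record"
	params.Set("direction", "-1")              // walk backwards in time
	params.Set("query", query)                 // Basenine filter, e.g. "http"
	params.Set("limit", fmt.Sprint(limit))
	params.Set("timeoutMs", "3000")

	resp, err := http.Get(fmt.Sprintf("%s/entries/?%s", baseURL, params.Encode()))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	var decoded map[string]interface{} // expected keys: "data" and "meta"
	return decoded, json.Unmarshal(body, &decoded)
}

func main() {
	entries, err := fetchOldEntries("http://localhost:8899", "http", -1, 100)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(entries))
}
```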
M. Mert Yıldıran
af557f7052 Add Queryable component to show a green add circle icon for the queryable UI elements (#512)
* Add `Queryable` component to show a green circle and use it in `EntryViewLine`

* Refactor `Queryable` component

* Use the `Queryable` component `EntryDetailed`

* Use the `Queryable` component `Summary`

* Instead of passing the style to `Queryable`, pass the children components directly

* Make `useTooltip = true` by default in `Queryable`

* Refactor a lot of styling to achieve using `Queryable` in `Protocol` component

* Migrate the last queryable elements in `EntryListItem` to `Queryable` component

* Fix some of the styling issues

* Make horizontal `Protocol` `Queryable` too

* Remove unnecessary child constants

* Revert some of the changes in 2a93f365f5

* Fix rest of the styling issues

* Fix one more styling issue

* Update the screenshots and text in the cheatsheet according to the change

* Use `let` not `var`

* Add missing dependencies to the React hook
2021-11-30 17:52:21 +03:00
M. Mert Yıldıran
b745f65971 Handle unexpected socket close and replace the default rlimit(100) filter with leftOff(-1) filter (#508)
* Handle unexpected socket close and replace the default `rlimit(100)` filter with `leftOff(-1)` filter

* Rename `dontClear` parameter to `resetEntriesBuffer` and remove negation
2021-11-30 16:30:18 +03:00
M. Mert Yıldıran
873f252544 Fix the selected entry behavior by propagating the focusedEntryId through WebSocket (before #452) TRA-3983 (#513)
* Revert the select entry behavior into its original state RACING! (before #452) [TRA-3983 alternative 3]

* Remove the remaining `forceSelect`(s)

* Add a missing `focusedEntryId` prop

* Fix the race condition

* Propagate the `focusedEntryId` through WebSocket to prevent racing
2021-11-30 15:27:10 +03:00
M. Mert Yıldıran
9696ad9bad Show the EntryItem as EntrySummary in EntryDetailed (#506) 2021-11-28 10:59:40 +03:00
M. Mert Yıldıran
a1bda0a6c3 Hide Encoding field if it's undefined or empty in the UI (#511) 2021-11-26 09:40:44 +03:00
M. Mert Yıldıran
a62842ac9f Add HTTP2 Over Cleartext (H2C) support (#510)
* Add HTTP2 Over Cleartext (H2C) support

* Remove a parameter which is a remnant of debugging
2021-11-25 20:36:13 +03:00
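Commit #510 teaches the dissector to recognize HTTP/2 Over Cleartext (h2c). As a point of reference, here is a minimal Go server (not Mizu code) that produces h2c traffic, using the standard `golang.org/x/net/http2/h2c` helper.

```go
package main

import (
	"fmt"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

// A minimal h2c (HTTP/2 over cleartext) server that generates the kind of
// traffic commit #510 teaches the tapper to dissect. Clients reach it with
// prior knowledge, e.g. `curl --http2-prior-knowledge http://localhost:8080/`.
func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello over %s\n", r.Proto) // prints "HTTP/2.0" for h2c clients
	})
	// h2c.NewHandler upgrades cleartext HTTP/2 connections without TLS.
	if err := http.ListenAndServe(":8080", h2c.NewHandler(handler, &http2.Server{})); err != nil {
		panic(err)
	}
}
```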
M. Mert Yıldıran
e667597e6e Rename URL field to Target URI in the UI to prevent confusion (#509) 2021-11-25 20:15:43 +03:00
Igor Gov
86240e4121 Remove local dev instruction from readme (#507) 2021-11-24 10:46:07 +02:00
David Levanon
b0c8c0c192 Add response body to the error in case of failure (#503)
* add response body to the error in case of failure

* fix typo + make inline condition
2021-11-23 20:16:07 +02:00
Nimrod Gilboa Markevich
1c18eb1b84 Use one channel for events instead of three (#495)
Use one channel for events instead of three separate channels by event type
2021-11-23 15:06:27 +02:00
David Levanon
01d6005a7b minor logging changes (#499)
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2021-11-23 14:21:53 +02:00
Nimrod Gilboa Markevich
4c97316c02 Remove prevPodPhase (#497)
prevPodPhase does not take into account the fact that there may be more
than one tapper pod. Therefore it is not clear what its value
represents. It is only used in a debug print. It is not worth the effort
to fix for that one debug print.

Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2021-11-23 10:03:36 +02:00
M. Mert Yıldıran
d66c7445e6 Remove SetHostname method in HTTP extension (#496) 2021-11-22 19:30:06 +03:00
M. Mert Yıldıran
12ca3d8779 Make the gRPC and HTTP/2 distinction (#492)
* Remove the extra negation on `nodefrag` flag's value

* Support IPv4 fragmentation and IPv6 at the same time

* Set `Method` and `StatusCode` fields correctly for `HTTP/2`

* Replace unnecessary `grpc` naming with `http2`

* Make the `gRPC` and `HTTP/2` distinction

* Fix the macros of `http` extension

* Fix the macros of other protocol extensions

* Update the method signature of `Represent`

* Fix the `HTTP/2` support

* Fix some minor issues

* Upgrade Basenine version from `0.2.10` to `0.2.11`

Sorts macros before expanding them and prioritize the long macros.

* Don't regex split the gRPC method name

* Re-enable `nodefrag` flag
2021-11-22 17:46:35 +03:00
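Commit #492 separates gRPC from generic HTTP/2. A common way to make that distinction is the request `content-type` header (`application/grpc...`); the sketch below assumes that heuristic for illustration, since the diff shown here does not spell out which signal Mizu's `http` extension actually uses.

```go
package main

import (
	"fmt"
	"strings"
)

// isGRPC reports whether an HTTP/2 request looks like gRPC. gRPC requests are
// HTTP/2 POSTs whose content-type starts with "application/grpc". This is a
// common heuristic assumed for illustration; it is not necessarily the exact
// check performed by Mizu's http extension.
func isGRPC(method string, headers map[string]string) bool {
	return method == "POST" && strings.HasPrefix(headers["content-type"], "application/grpc")
}

func main() {
	fmt.Println(isGRPC("POST", map[string]string{"content-type": "application/grpc+proto"})) // true
	fmt.Println(isGRPC("GET", map[string]string{"content-type": "text/html"}))               // false
}
```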
M. Mert Yıldıran
02a125bb86 Disable IPv4 defragmentation and support IPv6 (#487)
* Remove the extra negation on `nodefrag` flag's value

* Support IPv4 fragmentation and IPv6 at the same time

* Re-enable `nodefrag` flag
2021-11-22 17:35:17 +03:00
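Commit #487 keeps IPv4 defragmentation while adding IPv6 handling. A rough sketch of that combination using gopacket's `ip4defrag` package follows; it is illustrative, not the tapper's actual packet loop.

```go
package main

import (
	"fmt"

	"github.com/google/gopacket"
	"github.com/google/gopacket/ip4defrag"
	"github.com/google/gopacket/layers"
)

// handlePacket reassembles fragmented IPv4 packets and passes IPv6 packets
// through untouched, roughly the behavior described by commit #487.
func handlePacket(defragger *ip4defrag.IPv4Defragmenter, packet gopacket.Packet) {
	if ip4Layer := packet.Layer(layers.LayerTypeIPv4); ip4Layer != nil {
		ip4 := ip4Layer.(*layers.IPv4)
		reassembled, err := defragger.DefragIPv4(ip4)
		if err != nil || reassembled == nil {
			return // error, or still waiting for more fragments
		}
		fmt.Println("complete IPv4 packet from", reassembled.SrcIP)
		return
	}
	if ip6Layer := packet.Layer(layers.LayerTypeIPv6); ip6Layer != nil {
		ip6 := ip6Layer.(*layers.IPv6)
		fmt.Println("IPv6 packet from", ip6.SrcIP) // no defragmentation step needed here
	}
}

func main() {
	_ = handlePacket // wire this into a pcap/af_packet capture loop in real code
}
```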
M. Mert Yıldıran
08d7fa988e Remove tap/tester/ directory (#489)
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
2021-11-22 17:32:38 +03:00
Nimrod Gilboa Markevich
b1ad2efb96 Warn pods not starting (#493)
Print warning events related to mizu k8s resources.
In non-daemon mode, print to the CLI; in daemon mode, print to the API server logs.
2021-11-22 15:30:10 +02:00
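Commit #493 surfaces Kubernetes warning events for mizu resources that fail to start. A hedged sketch of fetching such events with client-go is below; the namespace, pod name, and field selector are assumptions for illustration.

```go
package main

import (
	"context"
	"fmt"

	core "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// printWarningEvents lists Warning-type events attached to a pod, roughly the
// information commit #493 surfaces for mizu resources that fail to start.
func printWarningEvents(clientset *kubernetes.Clientset, namespace, podName string) error {
	events, err := clientset.CoreV1().Events(namespace).List(context.TODO(), metav1.ListOptions{
		FieldSelector: fmt.Sprintf("involvedObject.name=%s,type=%s", podName, core.EventTypeWarning),
	})
	if err != nil {
		return err
	}
	for _, ev := range events.Items {
		fmt.Printf("%s: %s (%v)\n", ev.Reason, ev.Message, ev.LastTimestamp)
	}
	return nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Hypothetical namespace and pod name, for illustration only.
	if err := printWarningEvents(clientset, "mizu", "mizu-api-server"); err != nil {
		panic(err)
	}
}
```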
Alon Girmonsky
ed7b754eca Some changes to the doc (#494) 2021-11-22 09:02:33 +02:00
Igor Gov
c026656b5e Improving daemon documentation (#457) 2021-11-21 19:37:02 +02:00
David Levanon
6caa94f08f Add support to auto discover envoy processes (#459)
* discover envoy pids using cluster ips

* add istio flag to cli + rename mtls flag to istio

* add istio.md to docs

* Fixing typos

* Fix minor typos and grammar in docs

Co-authored-by: Nimrod Gilboa Markevich <nimrod@up9.com>
2021-11-21 15:45:07 +02:00
RoyUP9
b77ea63f42 Add token validity check (#483) 2021-11-21 15:14:02 +02:00
gadotroee
2635964a28 Update README (#486) 2021-11-21 14:09:21 +02:00
M. Mert Yıldıran
a16faca5fb Ignore gob files (#488)
* Ignore gob files

* Remove `*.db` from `.gitignore`
2021-11-21 09:29:01 +03:00
M. Mert Yıldıran
8cf6f56a3c Remove unnecessary tcpdump dependency from Dockerfile (#491) 2021-11-21 09:12:51 +03:00
M. Mert Yıldıran
a849aae94c Upgrade Basenine version from 0.2.9 to 0.2.10 (#484)
* Upgrade Basenine version from `0.2.9` to `0.2.10`

Fixes the issues in the `limit` and `rlimit` helpers that occur when they are the left operand of a binary expression.

* Upgrade the client hash to latest
2021-11-19 18:57:19 +03:00
M. Mert Yıldıran
8118569460 Show the source and destination IP in the entry feed (#485) 2021-11-18 20:21:51 +03:00
Nimrod Gilboa Markevich
2e75834dd0 Refactor watch pods to allow reusing watch wrapper (#470)
Currently shared/kubernetes/watch.go:FilteredWatch only watches pods.
This PR makes it reusable for other types of resources.
This is done in preparation for watching k8s events.
2021-11-18 11:53:11 +02:00
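Commit #470 generalizes `FilteredWatch` beyond pods. One way such a reusable wrapper can be shaped is sketched below: the caller supplies a function that starts the typed watch and the wrapper only does the name filtering; the names here are illustrative and may not match the repo's final interface.

```go
package main

import (
	"context"
	"regexp"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watcherFunc abstracts "start a watch on some resource kind" so that a single
// filtering loop can serve pods, events, or any other watchable resource.
type watcherFunc func(ctx context.Context) (watch.Interface, error)

// filteredWatch forwards only events whose object name matches nameRegex.
func filteredWatch(ctx context.Context, startWatch watcherFunc, nameRegex *regexp.Regexp, out chan<- watch.Event) error {
	w, err := startWatch(ctx)
	if err != nil {
		return err
	}
	defer w.Stop()
	for {
		select {
		case ev, ok := <-w.ResultChan():
			if !ok {
				return nil
			}
			obj, err := meta.Accessor(ev.Object)
			if err != nil {
				continue // not an object with metadata (e.g. a Status)
			}
			if nameRegex.MatchString(obj.GetName()) {
				out <- ev
			}
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

// podWatcher adapts the pod API to watcherFunc; an Events watcher (or any
// other resource) can be plugged into filteredWatch the same way.
func podWatcher(clientset *kubernetes.Clientset, namespace string) watcherFunc {
	return func(ctx context.Context) (watch.Interface, error) {
		return clientset.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{})
	}
}

func main() {
	// Wiring a real clientset and consuming the out channel is left out of this sketch.
	_ = podWatcher
	_ = filteredWatch
}
```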
M. Mert Yıldıran
dd53a36d5f Prevent the crash on client-side in case of text being undefined in FancyTextDisplay (#481)
* Prevent the crash on client-side in case of `text` being undefined in `FancyTextDisplay`

* Use `String(text)` instead
2021-11-17 18:50:09 +03:00
M. Mert Yıldıran
ad78f1dcd7 Clear focusedEntryId state in case of a filter is applied (#482) 2021-11-17 18:20:23 +03:00
M. Mert Yıldıran
a13fec3dae Sync entries in batches just as before (using uploadIntervalSec parameter) (#477)
* Sync entries in batches just as before (using `uploadIntervalSec` parameter)

* Replace `lastTimeSynced` value with `time.Time{}`

Since it will be overwritten by the very first iteration.
2021-11-17 15:16:49 +03:00
M. Mert Yıldıran
bb85312b9f Don't omit the key-value pair if the value is false in EntryTableSection (#478) 2021-11-17 15:02:23 +03:00
RamiBerm
18be46809e TRA-3903 minor daemon mode refactor (#479)
* Update common.go and tapRunner.go

* Update common.go
2021-11-17 11:18:08 +02:00
RamiBerm
b7f7daa05c TRA-3903 fix daemon mode in permission restricted configs (#473)
* Update tapRunner.go, permissions-all-namespaces-daemon.yaml, and 2 more files...

* Update tapRunner.go

* Update tapRunner.go and permissions-ns-daemon.yaml

* Update tapRunner.go

* Update tapRunner.go

* Update tapRunner.go
2021-11-17 11:14:43 +02:00
M. Mert Yıldıran
95d2a868e1 Update the UI screenshots (#476)
* Update the UI screenshots

* Update `mizu-ui.png`
2021-11-16 22:44:31 +03:00
RamiBerm
36077a9985 TRA-3903 - display targeted pods before waiting for all daemon resources to be created (#475)
* WIP

* Update tapRunner.go

* Update tapRunner.go
2021-11-16 17:53:38 +02:00
RamiBerm
51e0dd8ba9 TRA-3903 add flag to disable pvc creation for daemon mode (#474)
* Update tapRunner.go and tapConfig.go

* Update tapConfig.go

* Revert "Update tapConfig.go"

This reverts commit 5c7c02c4ab.
2021-11-16 17:11:47 +02:00
M. Mert Yıldıran
7f265dc4c5 Return 404 instead of 500 if the entry could not be found and display a toast message (#464) 2021-11-16 17:13:07 +03:00
RoyUP9
1c75ce314b fixed redact acceptance test (#472) 2021-11-16 15:49:08 +02:00
RamiBerm
89836d8d75 TRA-3903 better health endpoint for daemon mode (#471)
* Update main.go, status_controller.go, and 2 more files...

* Update status_controller.go and mizuTapperSyncer.go
2021-11-16 15:44:27 +02:00
RoyUP9
763f72a640 remove newline in logs, fixed logs time format (#469) 2021-11-16 12:07:48 +02:00
Igor Gov
a6ec246dd1 Stop reduction of user agent header (#468) 2021-11-16 11:33:31 +02:00
RoyUP9
3e30815fb4 changes log format to be more readable (#463) 2021-11-16 11:01:40 +02:00
M. Mert Yıldıran
a6bf39fad5 Prevent elapsedTime to be negative (#467)
Also fix the `elapsedTime` for Redis.
2021-11-16 02:52:48 +03:00
M. Mert Yıldıran
58a1eac247 Set response.bodySize to 0 if it's negative (#466) 2021-11-16 01:58:22 +03:00
M. Mert Yıldıran
ad574956df Upgrade Basenine version from 0.2.8 to 0.2.9 (#465)
Fixes the `limit` helper never finishing due to a lack of meta updates.
2021-11-16 00:53:29 +03:00
M. Mert Yıldıran
618cb3a409 Optimize UI entry feed performance (#452)
* Optimize the React code for feeding the entries

By building `EntryItem` only once and updating the `entries` state on meta query messages.

* Upgrade `react-scrollable-feed-virtualized` version from `1.4.3` to `1.4.8`

* Fix the `isSelected` state

* Set the query text before deciding the background to prevent lags while typing

* Upgrade Basenine version from `0.2.6` to `0.2.7`

* Set the query background color only if the query is same after the HTTP request and use `useEffect` instead

* Upgrade Basenine version from `0.2.7` to `0.2.8`

* Use `CancelToken` of `axios` instead of trying to check the query state

* Turn `updateQuery` function into a state hook

* Update the macro for `http`

* Do the `source.cancel()` call in `axios.CancelToken`

* Reduce client-side logging
2021-11-15 17:32:05 +03:00
M. Mert Yıldıran
2582b7a65c Ignore SNYK-JS-JSONSCHEMA-1920922 (#462)
Dependency tree:
`node-sass@5.0.0 > node-gyp@7.1.2 > request@2.88.2 > http-signature@1.2.0 > jsprim@1.4.1 > json-schema@0.2.3`

`node-sass` should fix it first.
2021-11-15 17:29:20 +03:00
106 changed files with 2416 additions and 1280 deletions

5
.gitignore vendored
View File

@@ -15,7 +15,6 @@
# vendor/
.idea/
build
*.db
# Mac OS
.DS_Store
@@ -32,3 +31,7 @@ pprof/*
# Database Files
*.bin
*.gob
# Nohup Files - https://man7.org/linux/man-pages/man1/nohup.1p.html
nohup.*

View File

@@ -42,8 +42,8 @@ RUN go build -ldflags="-s -w \
-X 'mizuserver/pkg/version.SemVer=${SEM_VER}'" -o mizuagent .
# Download Basenine executable, verify the sha1sum and move it to a directory in $PATH
ADD https://github.com/up9inc/basenine/releases/download/v0.2.6/basenine_linux_amd64 ./basenine_linux_amd64
ADD https://github.com/up9inc/basenine/releases/download/v0.2.6/basenine_linux_amd64.sha256 ./basenine_linux_amd64.sha256
ADD https://github.com/up9inc/basenine/releases/download/v0.2.17/basenine_linux_amd64 ./basenine_linux_amd64
ADD https://github.com/up9inc/basenine/releases/download/v0.2.17/basenine_linux_amd64.sha256 ./basenine_linux_amd64.sha256
RUN shasum -a 256 -c basenine_linux_amd64.sha256
RUN chmod +x ./basenine_linux_amd64
@@ -52,7 +52,7 @@ RUN cd .. && /bin/bash build_extensions.sh
FROM alpine:3.14
RUN apk add bash libpcap-dev tcpdump
RUN apk add bash libpcap-dev
WORKDIR /app

View File

@@ -4,16 +4,21 @@
A simple-yet-powerful API traffic viewer for Kubernetes enabling you to view all API communication between microservices to help you debug and troubleshoot regressions.
Think TCPDump and Chrome Dev Tools combined.
Think TCPDump and Wireshark re-invented for Kubernetes.
![Simple UI](assets/mizu-ui.png)
## Features
- Simple and powerful CLI
- Real-time view of all HTTP requests, REST and gRPC API calls
- No installation or code instrumentation
- Works completely on premises
- Monitoring network traffic in real-time. Supported protocols:
- [HTTP/1.1](https://datatracker.ietf.org/doc/html/rfc2616) (REST, etc.)
- [HTTP/2](https://datatracker.ietf.org/doc/html/rfc7540) (gRPC)
- [AMQP](https://www.rabbitmq.com/amqp-0-9-1-reference.html) (RabbitMQ, Apache Qpid, etc.)
- [Apache Kafka](https://kafka.apache.org/protocol)
- [Redis](https://redis.io/topics/protocol)
- Works with Kubernetes APIs. No installation or code instrumentation
- Rich filtering
## Requirements
@@ -44,15 +49,6 @@ SHA256 checksums are available on the [Releases](https://github.com/up9inc/mizu/
### Development (unstable) Build
Pick one from the [Releases](https://github.com/up9inc/mizu/releases) page
## Kubeconfig & Permissions
While `mizu`most often works out of the box, you can influence its behavior:
1. [OPTIONAL] Set `KUBECONFIG` environment variable to your Kubernetes configuration. If this is not set, Mizu assumes that configuration is at `${HOME}/.kube/config`
2. `mizu` assumes user running the command has permissions to create resources (such as pods, services, namespaces) on your Kubernetes cluster (no worries - `mizu` resources are cleaned up upon termination)
For detailed list of k8s permissions see [PERMISSIONS](docs/PERMISSIONS.md) document
## How to Run
1. Find pods you'd like to tap to in your Kubernetes cluster
@@ -111,45 +107,25 @@ To tap all pods in current namespace -
Web interface is now available at http://localhost:8899
^C
```
### To run mizu in daemon mode (detached from cli)
```bash
$ mizu tap "^ca.*" --daemon
Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
Tapping pods in namespaces "sock-shop"
Waiting for mizu to be ready... (may take a few minutes)
+carts-66c77f5fbb-fq65r
+catalogue-5f4cb7cf5-7zrmn
..
$ mizu view
Establishing connection to k8s cluster...
Mizu is available at http://localhost:8899
^C
..
$ mizu clean # mizu will continue running in cluster until clean is executed
Removing mizu resources
```
`mizu view` provides one way to access Mizu. For other options, see [Accessing Mizu Wiki Page](https://github.com/up9inc/mizu/wiki/Accessing-Mizu).
## Configuration
Mizu can work with config file which should be stored in ${HOME}/.mizu/config.yaml (macOS: ~/.mizu/config.yaml) <br />
In case no config file found, defaults will be used <br />
Mizu can optionally work with a config file that can be provided as a CLI argument (using `--set config-path=<PATH>`) or if not provided, will be stored at ${HOME}/.mizu/config.yaml
In case of partial configuration defined, all other fields will be used with defaults <br />
You can always override the defaults or config file with CLI flags
To get the default config params run `mizu config` <br />
To generate a new config file with default values use `mizu config -r`
### Telemetry
By default, mizu reports usage telemetry. It can be disabled by adding a line of `telemetry: false` in the `${HOME}/.mizu/config.yaml` file
## Advanced Usage
### Kubeconfig
It is possible to change the kubeconfig path using the `KUBECONFIG` environment variable or the command-line flag
with `--set kube-config-path=<PATH>`. <br />
If both are not set - Mizu assumes that configuration is at `${HOME}/.kube/config`
### Namespace-Restricted Mode
Some users have permission to only manage resources in one particular namespace assigned to them
@@ -163,6 +139,8 @@ using the `--namespace` flag or by setting `tap.namespaces` in the config file
Setting `mizu-resources-namespace=mizu` resets Mizu to its default behavior
For detailed list of k8s permissions see [PERMISSIONS](docs/PERMISSIONS.md) document
### User agent filtering
User-agent filtering (like health checks) - can be configured using command-line options:
@@ -203,23 +181,7 @@ and when changed it will support accessing by IP
### Run in daemon mode
Mizu can be ran detached from the cli using the daemon flag: `mizu tap --daemon`. This type of mizu instance will run indefinitely in the cluster.
Mizu can be run detached from the cli using the daemon flag: `mizu tap --daemon`. This type of mizu instance will run
indefinitely in the cluster.
Please note that daemon mode requires you to have RBAC creation permissions, see the [permissions](docs/PERMISSIONS.md) doc for more details.
In order to access a daemon mizu you will have to run `mizu view` after running the `tap --daemon` command.
To stop the detached mizu instance and clean all cluster side resources, run `mizu clean`
## How to Run local UI
- run from mizu/agent `go run main.go --hars-read --hars-dir <folder>`
- copy Har files into the folder from last command
- change `MizuWebsocketURL` and `apiURL` in `api.js` file
- run from mizu/ui - `npm run start`
- open browser on `localhost:3000`
For more information please refer to [DAEMON MODE](docs/DAEMON_MODE.md)

View File

@@ -427,9 +427,10 @@ func TestTapRedact(t *testing.T) {
}
proxyUrl := getProxyUrl(defaultNamespaceName, defaultServiceName)
requestHeaders := map[string]string{"User-Header": "Mizu"}
requestBody := map[string]string{"User": "Mizu"}
for i := 0; i < defaultEntriesCount; i++ {
if _, requestErr := executeHttpPostRequest(fmt.Sprintf("%v/post", proxyUrl), requestBody); requestErr != nil {
if _, requestErr := executeHttpPostRequestWithHeaders(fmt.Sprintf("%v/post", proxyUrl), requestHeaders, requestBody); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
@@ -460,12 +461,12 @@ func TestTapRedact(t *testing.T) {
headers := request["_headers"].([]interface{})
for _, headerInterface := range headers {
header := headerInterface.(map[string]interface{})
if header["name"].(string) != "User-Agent" {
if header["name"].(string) != "User-Header" {
continue
}
userAgent := header["value"].(string)
if userAgent != "[REDACTED]" {
userHeader := header["value"].(string)
if userHeader != "[REDACTED]" {
return fmt.Errorf("unexpected result - user agent is not redacted")
}
}
@@ -530,9 +531,10 @@ func TestTapNoRedact(t *testing.T) {
}
proxyUrl := getProxyUrl(defaultNamespaceName, defaultServiceName)
requestHeaders := map[string]string{"User-Header": "Mizu"}
requestBody := map[string]string{"User": "Mizu"}
for i := 0; i < defaultEntriesCount; i++ {
if _, requestErr := executeHttpPostRequest(fmt.Sprintf("%v/post", proxyUrl), requestBody); requestErr != nil {
if _, requestErr := executeHttpPostRequestWithHeaders(fmt.Sprintf("%v/post", proxyUrl), requestHeaders, requestBody); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
@@ -563,12 +565,12 @@ func TestTapNoRedact(t *testing.T) {
headers := request["_headers"].([]interface{})
for _, headerInterface := range headers {
header := headerInterface.(map[string]interface{})
if header["name"].(string) != "User-Agent" {
if header["name"].(string) != "User-Header" {
continue
}
userAgent := header["value"].(string)
if userAgent == "[REDACTED]" {
userHeader := header["value"].(string)
if userHeader == "[REDACTED]" {
return fmt.Errorf("unexpected result - user agent is redacted")
}
}

View File

@@ -240,13 +240,24 @@ func executeHttpGetRequest(url string) (interface{}, error) {
return executeHttpRequest(response, requestErr)
}
func executeHttpPostRequest(url string, body interface{}) (interface{}, error) {
func executeHttpPostRequestWithHeaders(url string, headers map[string]string, body interface{}) (interface{}, error) {
requestBody, jsonErr := json.Marshal(body)
if jsonErr != nil {
return nil, jsonErr
}
response, requestErr := http.Post(url, "application/json", bytes.NewBuffer(requestBody))
request, err := http.NewRequest(http.MethodPost, url, bytes.NewBuffer(requestBody))
if err != nil {
return nil, err
}
request.Header.Add("Content-Type", "application/json")
for headerKey, headerValue := range headers {
request.Header.Add(headerKey, headerValue)
}
client := &http.Client{}
response, requestErr := client.Do(request)
return executeHttpRequest(response, requestErr)
}

View File

@@ -16,7 +16,7 @@ require (
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7
github.com/orcaman/concurrent-map v0.0.0-20210106121528-16402b402231
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/up9inc/basenine/client/go v0.0.0-20211109233221-12b405471084
github.com/up9inc/basenine/client/go v0.0.0-20211207165834-2ced7577f9e6
github.com/up9inc/mizu/shared v0.0.0
github.com/up9inc/mizu/tap v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0

View File

@@ -450,8 +450,8 @@ github.com/ugorji/go v1.1.7 h1:/68gy2h+1mWMrwZFeD1kQialdSzAb432dtpeJ42ovdo=
github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
github.com/ugorji/go/codec v1.1.7 h1:2SvQaVZ1ouYrrKKwoSk2pzd4A9evlKJb9oTL+OaLUSs=
github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=
github.com/up9inc/basenine/client/go v0.0.0-20211109233221-12b405471084 h1:gLoP7AyS/c6pYuBQOgALWpzzc5/aSrq98Lr49JRfmfs=
github.com/up9inc/basenine/client/go v0.0.0-20211109233221-12b405471084/go.mod h1:SvJGPoa/6erhUQV7kvHBwM/0x5LyO6XaG2lUaCaKiUI=
github.com/up9inc/basenine/client/go v0.0.0-20211207165834-2ced7577f9e6 h1:8JOkoaZHhUPi4r7vSL/xo83foSz8BHPSabTDpxmtHFU=
github.com/up9inc/basenine/client/go v0.0.0-20211207165834-2ced7577f9e6/go.mod h1:SvJGPoa/6erhUQV7kvHBwM/0x5LyO6XaG2lUaCaKiUI=
github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw=
github.com/vishvananda/netns v0.0.0-20210104183010-2eb08e3e575f h1:p4VB7kIXpOQvVn1ZaTIVp+3vuYAXFe3OJEvjbUYJLaA=
github.com/vishvananda/netns v0.0.0-20210104183010-2eb08e3e575f/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=

View File

@@ -94,17 +94,17 @@ func main() {
panic("API server address must be provided with --api-server-address when using --tap")
}
hostMode := os.Getenv(shared.HostModeEnvVar) == "1"
tapOpts := &tap.TapOpts{HostMode: hostMode}
tapTargets := getTapTargets()
if tapTargets != nil {
tap.SetFilterAuthorities(tapTargets)
logger.Log.Infof("Filtering for the following authorities: %v", tap.GetFilterIPs())
tapOpts.FilterAuthorities = tapTargets
logger.Log.Infof("Filtering for the following authorities: %v", tapOpts.FilterAuthorities)
}
filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
filteringOptions := getTrafficFilteringOptions()
hostMode := os.Getenv(shared.HostModeEnvVar) == "1"
tapOpts := &tap.TapOpts{HostMode: hostMode}
tap.StartPassiveTapper(tapOpts, filteredOutputItemsChannel, extensions, filteringOptions)
socketConnection, err := dialSocketWithRetry(*apiServerAddress, socketConnectionRetries, socketConnectionRetryDelay)
if err != nil {
@@ -207,7 +207,7 @@ func loadExtensions() {
extensionsMap = make(map[string]*tapApi.Extension)
for i, file := range files {
filename := file.Name()
logger.Log.Infof("Loading extension: %s\n", filename)
logger.Log.Infof("Loading extension: %s", filename)
extension := &tapApi.Extension{
Path: path.Join(extensionsDir, filename),
}
@@ -219,7 +219,7 @@ func loadExtensions() {
var ok bool
dissector, ok = symDissector.(tapApi.Dissector)
if err != nil || !ok {
panic(fmt.Sprintf("Failed to load the extension: %s\n", extension.Path))
panic(fmt.Sprintf("Failed to load the extension: %s", extension.Path))
}
dissector.Register(extension)
extension.Dissector = dissector
@@ -232,7 +232,7 @@ func loadExtensions() {
})
for _, extension := range extensions {
logger.Log.Infof("Extension Properties: %+v\n", extension)
logger.Log.Infof("Extension Properties: %+v", extension)
}
controllers.InitExtensionsMap(extensionsMap)
@@ -264,7 +264,12 @@ func hostApi(socketHarOutputChannel chan<- *tapApi.OutputChannelItem) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
if _, err := startMizuTapperSyncer(ctx); err != nil {
kubernetesProvider, err := kubernetes.NewProviderInCluster()
if err != nil {
logger.Log.Fatalf("error creating k8s provider: %+v", err)
}
if _, err := startMizuTapperSyncer(ctx, kubernetesProvider); err != nil {
logger.Log.Fatalf("error initializing tapper syncer: %+v", err)
}
}
@@ -426,12 +431,7 @@ func dialSocketWithRetry(socketAddress string, retryAmount int, retryDelay time.
return nil, lastErr
}
func startMizuTapperSyncer(ctx context.Context) (*kubernetes.MizuTapperSyncer, error) {
provider, err := kubernetes.NewProviderInCluster()
if err != nil {
return nil, err
}
func startMizuTapperSyncer(ctx context.Context, provider *kubernetes.Provider) (*kubernetes.MizuTapperSyncer, error) {
tapperSyncer, err := kubernetes.CreateAndStartMizuTapperSyncer(ctx, provider, kubernetes.TapperSyncerConfig{
TargetNamespaces: config.Config.TargetNamespaces,
PodFilterRegex: config.Config.TapTargetRegex.Regexp,
@@ -443,7 +443,8 @@ func startMizuTapperSyncer(ctx context.Context) (*kubernetes.MizuTapperSyncer, e
IgnoredUserAgents: config.Config.IgnoredUserAgents,
MizuApiFilteringOptions: config.Config.MizuApiFilteringOptions,
MizuServiceAccountExists: true, //assume service account exists since daemon mode will not function without it anyway
})
Istio: config.Config.Istio,
}, time.Now())
if err != nil {
return nil, err
@@ -459,7 +460,7 @@ func startMizuTapperSyncer(ctx context.Context) (*kubernetes.MizuTapperSyncer, e
return
}
logger.Log.Fatalf("fatal tap syncer error: %v", syncerErr)
case _, ok := <-tapperSyncer.TapPodChangesOut:
case tapPodChangeEvent, ok := <-tapperSyncer.TapPodChangesOut:
if !ok {
logger.Log.Debug("mizuTapperSyncer pod changes channel closed, ending listener loop")
return
@@ -472,6 +473,17 @@ func startMizuTapperSyncer(ctx context.Context) (*kubernetes.MizuTapperSyncer, e
}
api.BroadcastToBrowserClients(serializedTapStatus)
providers.TapStatus.Pods = tapStatus.Pods
providers.ExpectedTapperAmount = tapPodChangeEvent.ExpectedTapperAmount
case tapperStatus, ok := <-tapperSyncer.TapperStatusChangedOut:
if !ok {
logger.Log.Debug("mizuTapperSyncer tapper status changed channel closed, ending listener loop")
return
}
if providers.TappersStatus == nil {
providers.TappersStatus = make(map[string]shared.TapperStatus)
}
providers.TappersStatus[tapperStatus.NodeName] = tapperStatus
case <-ctx.Done():
logger.Log.Debug("mizuTapperSyncer event listener loop exiting due to context done")
return

View File

@@ -76,7 +76,7 @@ func startReadingFiles(workingDir string) {
sort.Sort(utils.ByModTime(harFiles))
if len(harFiles) == 0 {
logger.Log.Infof("Waiting for new files\n")
logger.Log.Infof("Waiting for new files")
time.Sleep(3 * time.Second)
continue
}
@@ -109,7 +109,7 @@ func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extension
ctx := context.Background()
doc, contractContent, router, err := loadOAS(ctx)
if err != nil {
logger.Log.Infof("Disabled OAS validation: %s\n", err.Error())
logger.Log.Infof("Disabled OAS validation: %s", err.Error())
disableOASValidation = true
}
@@ -136,7 +136,7 @@ func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extension
harEntry, err := utils.NewEntry(mizuEntry.Request, mizuEntry.Response, mizuEntry.StartTime, mizuEntry.ElapsedTime)
if err == nil {
rules, _, _ := models.RunValidationRulesState(*harEntry, mizuEntry.Service)
rules, _, _ := models.RunValidationRulesState(*harEntry, mizuEntry.Destination.Name)
baseEntry.Rules = rules
}
}
@@ -154,7 +154,7 @@ func resolveIP(connectionInfo *tapApi.ConnectionInfo) (resolvedSource string, re
unresolvedSource := connectionInfo.ClientIP
resolvedSource = k8sResolver.Resolve(unresolvedSource)
if resolvedSource == "" {
logger.Log.Debugf("Cannot find resolved name to source: %s\n", unresolvedSource)
logger.Log.Debugf("Cannot find resolved name to source: %s", unresolvedSource)
if os.Getenv("SKIP_NOT_RESOLVED_SOURCE") == "1" {
return
}
@@ -162,7 +162,7 @@ func resolveIP(connectionInfo *tapApi.ConnectionInfo) (resolvedSource string, re
unresolvedDestination := fmt.Sprintf("%s:%s", connectionInfo.ServerIP, connectionInfo.ServerPort)
resolvedDestination = k8sResolver.Resolve(unresolvedDestination)
if resolvedDestination == "" {
logger.Log.Debugf("Cannot find resolved name to dest: %s\n", unresolvedDestination)
logger.Log.Debugf("Cannot find resolved name to dest: %s", unresolvedDestination)
if os.Getenv("SKIP_NOT_RESOLVED_DEST") == "1" {
return
}

View File

@@ -127,8 +127,15 @@ func websocketHandler(w http.ResponseWriter, r *http.Request, eventHandlers Even
var dataMap map[string]interface{}
err = json.Unmarshal(bytes, &dataMap)
base := dataMap["base"].(map[string]interface{})
base["id"] = uint(dataMap["id"].(float64))
var base map[string]interface{}
switch dataMap["base"].(type) {
case map[string]interface{}:
base = dataMap["base"].(map[string]interface{})
base["id"] = uint(dataMap["id"].(float64))
default:
logger.Log.Debugf("Base field has an unrecognized type: %+v", dataMap)
continue
}
baseEntryBytes, _ := models.CreateBaseEntryWebSocketMessage(base)
SendToSocket(socketId, baseEntryBytes)
@@ -146,7 +153,7 @@ func websocketHandler(w http.ResponseWriter, r *http.Request, eventHandlers Even
var metadata *basenine.Metadata
err = json.Unmarshal(bytes, &metadata)
if err != nil {
logger.Log.Debugf("Error recieving metadata: %v\n", err.Error())
logger.Log.Debugf("Error recieving metadata: %v", err.Error())
}
metadataBytes, _ := models.CreateWebsocketQueryMetadataMessage(metadata)
@@ -167,7 +174,7 @@ func websocketHandler(w http.ResponseWriter, r *http.Request, eventHandlers Even
func socketCleanup(socketId int, socketConnection *SocketConnection) {
err := socketConnection.connection.Close()
if err != nil {
logger.Log.Errorf("Error closing socket connection for socket id %d: %v\n", socketId, err)
logger.Log.Errorf("Error closing socket connection for socket id %d: %v", socketId, err)
}
websocketIdsLock.Lock()

View File

@@ -65,14 +65,14 @@ func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
var socketMessageBase shared.WebSocketMessageMetadata
err := json.Unmarshal(message, &socketMessageBase)
if err != nil {
logger.Log.Infof("Could not unmarshal websocket message %v\n", err)
logger.Log.Infof("Could not unmarshal websocket message %v", err)
} else {
switch socketMessageBase.MessageType {
case shared.WebSocketMessageTypeTappedEntry:
var tappedEntryMessage models.WebSocketTappedEntryMessage
err := json.Unmarshal(message, &tappedEntryMessage)
if err != nil {
logger.Log.Infof("Could not unmarshal message of message type %s %v\n", socketMessageBase.MessageType, err)
logger.Log.Infof("Could not unmarshal message of message type %s %v", socketMessageBase.MessageType, err)
} else {
// NOTE: This is where the message comes back from the intermediate WebSocket to code.
h.SocketOutChannel <- tappedEntryMessage.Data
@@ -81,7 +81,7 @@ func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
var statusMessage shared.WebSocketStatusMessage
err := json.Unmarshal(message, &statusMessage)
if err != nil {
logger.Log.Infof("Could not unmarshal message of message type %s %v\n", socketMessageBase.MessageType, err)
logger.Log.Infof("Could not unmarshal message of message type %s %v", socketMessageBase.MessageType, err)
} else {
providers.TapStatus.Pods = statusMessage.TappingStatus.Pods
BroadcastToBrowserClients(message)
@@ -90,7 +90,7 @@ func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
var outboundLinkMessage models.WebsocketOutboundLinkMessage
err := json.Unmarshal(message, &outboundLinkMessage)
if err != nil {
logger.Log.Infof("Could not unmarshal message of message type %s %v\n", socketMessageBase.MessageType, err)
logger.Log.Infof("Could not unmarshal message of message type %s %v", socketMessageBase.MessageType, err)
} else {
handleTLSLink(outboundLinkMessage)
}

View File

@@ -4,8 +4,10 @@ import (
"encoding/json"
"mizuserver/pkg/models"
"mizuserver/pkg/utils"
"mizuserver/pkg/validation"
"net/http"
"strconv"
"time"
"github.com/gin-gonic/gin"
@@ -25,12 +27,73 @@ func Error(c *gin.Context, err error) bool {
if err != nil {
logger.Log.Errorf("Error getting entry: %v", err)
c.Error(err)
c.AbortWithStatusJSON(http.StatusInternalServerError, gin.H{"error": true, "msg": err.Error()})
c.AbortWithStatusJSON(http.StatusInternalServerError, gin.H{
"error": true,
"type": "error",
"autoClose": "5000",
"msg": err.Error(),
})
return true // signal that there was an error and the caller should return
}
return false // no error, can continue
}
func GetEntries(c *gin.Context) {
entriesRequest := &models.EntriesRequest{}
if err := c.BindQuery(entriesRequest); err != nil {
c.JSON(http.StatusBadRequest, err)
}
validationError := validation.Validate(entriesRequest)
if validationError != nil {
c.JSON(http.StatusBadRequest, validationError)
}
if entriesRequest.TimeoutMs == 0 {
entriesRequest.TimeoutMs = 3000
}
data, meta, err := basenine.Fetch(shared.BasenineHost, shared.BaseninePort,
entriesRequest.LeftOff, entriesRequest.Direction, entriesRequest.Query,
entriesRequest.Limit, time.Duration(entriesRequest.TimeoutMs)*time.Millisecond)
if err != nil {
c.JSON(http.StatusInternalServerError, validationError)
}
response := &models.EntriesResponse{}
var dataSlice []interface{}
for _, row := range data {
var dataMap map[string]interface{}
err = json.Unmarshal(row, &dataMap)
if err != nil {
c.JSON(http.StatusBadRequest, gin.H{
"error": true,
"type": "error",
"autoClose": "5000",
"msg": string(row),
})
return // exit
}
base := dataMap["base"].(map[string]interface{})
base["id"] = uint(dataMap["id"].(float64))
dataSlice = append(dataSlice, base)
}
var metadata *basenine.Metadata
err = json.Unmarshal(meta, &metadata)
if err != nil {
logger.Log.Debugf("Error recieving metadata: %v", err.Error())
}
response.Data = dataSlice
response.Meta = metadata
c.JSON(http.StatusOK, response)
}
func GetEntry(c *gin.Context) {
id, _ := strconv.Atoi(c.Param("id"))
var entry tapApi.MizuEntry
@@ -39,25 +102,31 @@ func GetEntry(c *gin.Context) {
return // exit
}
err = json.Unmarshal(bytes, &entry)
if Error(c, err) {
if err != nil {
c.JSON(http.StatusNotFound, gin.H{
"error": true,
"type": "error",
"autoClose": "5000",
"msg": string(bytes),
})
return // exit
}
extension := extensionsMap[entry.Protocol.Name]
protocol, representation, bodySize, _ := extension.Dissector.Represent(entry.Protocol, entry.Request, entry.Response)
representation, bodySize, _ := extension.Dissector.Represent(entry.Request, entry.Response)
var rules []map[string]interface{}
var isRulesEnabled bool
if entry.Protocol.Name == "http" {
harEntry, _ := utils.NewEntry(entry.Request, entry.Response, entry.StartTime, entry.ElapsedTime)
_, rulesMatched, _isRulesEnabled := models.RunValidationRulesState(*harEntry, entry.Service)
_, rulesMatched, _isRulesEnabled := models.RunValidationRulesState(*harEntry, entry.Destination.Name)
isRulesEnabled = _isRulesEnabled
inrec, _ := json.Marshal(rulesMatched)
json.Unmarshal(inrec, &rules)
}
c.JSON(http.StatusOK, tapApi.MizuEntryWrapper{
Protocol: protocol,
Protocol: entry.Protocol,
Representation: string(representation),
BodySize: bodySize,
Data: entry,

View File

@@ -2,7 +2,9 @@ package controllers
import (
"encoding/json"
"fmt"
"mizuserver/pkg/api"
"mizuserver/pkg/config"
"mizuserver/pkg/holder"
"mizuserver/pkg/providers"
"mizuserver/pkg/up9"
@@ -15,14 +17,26 @@ import (
)
func HealthCheck(c *gin.Context) {
if config.Config.DaemonMode {
if providers.ExpectedTapperAmount != providers.TappersCount {
c.JSON(http.StatusInternalServerError, fmt.Sprintf("expecting more tappers than are actually connected (%d expected, %d connected)", providers.ExpectedTapperAmount, providers.TappersCount))
return
}
}
tappers := make([]shared.TapperStatus, len(providers.TappersStatus))
for _, value := range providers.TappersStatus {
tappers = append(tappers, value)
}
response := shared.HealthResponse{
TapStatus: providers.TapStatus,
TappersCount: providers.TappersCount,
TapStatus: providers.TapStatus,
TappersCount: providers.TappersCount,
TappersStatus: tappers,
}
c.JSON(http.StatusOK, response)
}
func PostTappedPods(c *gin.Context) {
tapStatus := &shared.TapStatus{}
if err := c.Bind(tapStatus); err != nil {
@@ -37,12 +51,29 @@ func PostTappedPods(c *gin.Context) {
providers.TapStatus.Pods = tapStatus.Pods
message := shared.CreateWebSocketStatusMessage(*tapStatus)
if jsonBytes, err := json.Marshal(message); err != nil {
logger.Log.Errorf("Could not Marshal message %v\n", err)
logger.Log.Errorf("Could not Marshal message %v", err)
} else {
api.BroadcastToBrowserClients(jsonBytes)
}
}
func PostTapperStatus(c *gin.Context) {
tapperStatus := &shared.TapperStatus{}
if err := c.Bind(tapperStatus); err != nil {
c.JSON(http.StatusBadRequest, err)
return
}
if err := validation.Validate(tapperStatus); err != nil {
c.JSON(http.StatusBadRequest, err)
return
}
logger.Log.Infof("[Status] POST request, tapper status: %v", tapperStatus)
if providers.TappersStatus == nil {
providers.TappersStatus = make(map[string]shared.TapperStatus)
}
providers.TappersStatus[tapperStatus.NodeName] = *tapperStatus
}
func GetTappersCount(c *gin.Context) {
c.JSON(http.StatusOK, providers.TappersCount)
}

View File

@@ -16,6 +16,19 @@ func GetEntry(r *tapApi.MizuEntry, v tapApi.DataUnmarshaler) error {
return v.UnmarshalData(r)
}
type EntriesRequest struct {
LeftOff int `form:"leftOff" validate:"required,min=-1"`
Direction int `form:"direction" validate:"required,oneof='1' '-1'"`
Query string `form:"query"`
Limit int `form:"limit" validate:"required,min=1"`
TimeoutMs int `form:"timeoutMs" validate:"min=1"`
}
type EntriesResponse struct {
Data []interface{} `json:"data"`
Meta *basenine.Metadata `json:"meta"`
}
type WebSocketEntryMessage struct {
*shared.WebSocketMessageMetadata
Data map[string]interface{} `json:"data,omitempty"`

View File

@@ -15,12 +15,13 @@ import (
const tlsLinkRetainmentTime = time.Minute * 15
var (
TappersCount int
TapStatus shared.TapStatus
authStatus *models.AuthStatus
RecentTLSLinks = cache.New(tlsLinkRetainmentTime, tlsLinkRetainmentTime)
tappersCountLock = sync.Mutex{}
TappersCount int
TapStatus shared.TapStatus
TappersStatus map[string]shared.TapperStatus
authStatus *models.AuthStatus
RecentTLSLinks = cache.New(tlsLinkRetainmentTime, tlsLinkRetainmentTime)
ExpectedTapperAmount = -1 //only relevant in daemon mode as cli manages tappers otherwise
tappersCountLock = sync.Mutex{}
)
func GetAuthStatus() (*models.AuthStatus, error) {

View File

@@ -164,10 +164,10 @@ func (resolver *Resolver) watchServices(ctx context.Context) error {
func (resolver *Resolver) saveResolvedName(key string, resolved string, eventType watch.EventType) {
if eventType == watch.Deleted {
resolver.nameMap.Remove(key)
logger.Log.Infof("setting %s=nil\n", key)
logger.Log.Infof("setting %s=nil", key)
} else {
resolver.nameMap.Set(key, resolved)
logger.Log.Infof("setting %s=%s\n", key, resolved)
logger.Log.Infof("setting %s=%s", key, resolved)
}
}
@@ -188,7 +188,7 @@ func (resolver *Resolver) infiniteErrorHandleRetryFunc(ctx context.Context, fun
var statusError *k8serrors.StatusError
if errors.As(err, &statusError) {
if statusError.ErrStatus.Reason == metav1.StatusReasonForbidden {
logger.Log.Infof("Resolver loop encountered permission error, aborting event listening - %v\n", err)
logger.Log.Infof("Resolver loop encountered permission error, aborting event listening - %v", err)
return
}
}

View File

@@ -10,5 +10,6 @@ import (
func EntriesRoutes(ginApp *gin.Engine) {
routeGroup := ginApp.Group("/entries")
routeGroup.GET("/", controllers.GetEntries) // get entries (base/thin entries) and metadata
routeGroup.GET("/:id", controllers.GetEntry) // get single (full) entry
}

View File

@@ -11,6 +11,7 @@ func StatusRoutes(ginApp *gin.Engine) {
routeGroup.GET("/health", controllers.HealthCheck)
routeGroup.POST("/tappedPods", controllers.PostTappedPods)
routeGroup.POST("/tapperStatus", controllers.PostTapperStatus)
routeGroup.GET("/tappersCount", controllers.GetTappersCount)
routeGroup.GET("/tap", controllers.GetTappingStatus)

View File

@@ -112,14 +112,14 @@ func GetAnalyzeInfo() *shared.AnalyzeStatus {
}
func SyncEntries(syncEntriesConfig *shared.SyncEntriesConfig) error {
logger.Log.Infof("Sync entries - started\n")
logger.Log.Infof("Sync entries - started")
var (
token, model string
guestMode bool
)
if syncEntriesConfig.Token == "" {
logger.Log.Infof("Sync entries - creating anonymous token. env %s\n", syncEntriesConfig.Env)
logger.Log.Infof("Sync entries - creating anonymous token. env %s", syncEntriesConfig.Env)
guestToken, err := createAnonymousToken(syncEntriesConfig.Env)
if err != nil {
return fmt.Errorf("failed creating anonymous token, err: %v", err)
@@ -133,7 +133,7 @@ func SyncEntries(syncEntriesConfig *shared.SyncEntriesConfig) error {
model = syncEntriesConfig.Workspace
guestMode = false
logger.Log.Infof("Sync entries - upserting model. env %s, model %s\n", syncEntriesConfig.Env, model)
logger.Log.Infof("Sync entries - upserting model. env %s, model %s", syncEntriesConfig.Env, model)
if err := upsertModel(token, model, syncEntriesConfig.Env); err != nil {
return fmt.Errorf("failed upserting model, err: %v", err)
}
@@ -144,7 +144,7 @@ func SyncEntries(syncEntriesConfig *shared.SyncEntriesConfig) error {
return fmt.Errorf("invalid model name, model name: %s", model)
}
logger.Log.Infof("Sync entries - syncing. token: %s, model: %s, guest mode: %v\n", token, model, guestMode)
logger.Log.Infof("Sync entries - syncing. token: %s, model: %s, guest mode: %v", token, model, guestMode)
go syncEntriesImpl(token, model, syncEntriesConfig.Env, syncEntriesConfig.UploadIntervalSec, guestMode)
return nil
@@ -209,7 +209,7 @@ func syncEntriesImpl(token string, model string, envPrefix string, uploadInterva
// "http or grpc" filter indicates that we're only interested in HTTP and gRPC entries
query := "http or grpc"
logger.Log.Infof("Getting entries from the database\n")
logger.Log.Infof("Getting entries from the database")
var connection *basenine.Connection
var err error
@@ -227,6 +227,10 @@ func syncEntriesImpl(token string, model string, envPrefix string, uploadInterva
connection.Close()
}()
lastTimeSynced := time.Time{}
batch := make([]har.Entry, 0)
handleDataChannel := func(wg *sync.WaitGroup, connection *basenine.Connection, data chan []byte) {
defer wg.Done()
for {
@@ -239,7 +243,6 @@ func syncEntriesImpl(token string, model string, envPrefix string, uploadInterva
var dataMap map[string]interface{}
err = json.Unmarshal(dataBytes, &dataMap)
result := make([]har.Entry, 0)
var entry tapApi.MizuEntry
if err := json.Unmarshal([]byte(dataBytes), &entry); err != nil {
continue
@@ -248,12 +251,12 @@ func syncEntriesImpl(token string, model string, envPrefix string, uploadInterva
if err != nil {
continue
}
if entry.ResolvedSource != "" {
harEntry.Request.Headers = append(harEntry.Request.Headers, har.Header{Name: "x-mizu-source", Value: entry.ResolvedSource})
if entry.Source.Name != "" {
harEntry.Request.Headers = append(harEntry.Request.Headers, har.Header{Name: "x-mizu-source", Value: entry.Source.Name})
}
if entry.ResolvedDestination != "" {
harEntry.Request.Headers = append(harEntry.Request.Headers, har.Header{Name: "x-mizu-destination", Value: entry.ResolvedDestination})
harEntry.Request.URL = utils.SetHostname(harEntry.Request.URL, entry.ResolvedDestination)
if entry.Destination.Name != "" {
harEntry.Request.Headers = append(harEntry.Request.Headers, har.Header{Name: "x-mizu-destination", Value: entry.Destination.Name})
harEntry.Request.URL = utils.SetHostname(harEntry.Request.URL, entry.Destination.Name)
}
// go's default marshal behavior is to encode []byte fields to base64, python's default unmarshal behavior is to not decode []byte fields from base64
@@ -261,14 +264,22 @@ func syncEntriesImpl(token string, model string, envPrefix string, uploadInterva
continue
}
result = append(result, *harEntry)
batch = append(batch, *harEntry)
body, jMarshalErr := json.Marshal(result)
now := time.Now()
if lastTimeSynced.Add(time.Duration(uploadIntervalSec) * time.Second).After(now) {
continue
}
lastTimeSynced = now
body, jMarshalErr := json.Marshal(batch)
batchSize := len(batch)
if jMarshalErr != nil {
analyzeInformation.Reset()
logger.Log.Infof("Stopping sync entries")
logger.Log.Fatal(jMarshalErr)
}
batch = make([]har.Entry, 0)
var in bytes.Buffer
w := zlib.NewWriter(&in)
@@ -293,7 +304,7 @@ func syncEntriesImpl(token string, model string, envPrefix string, uploadInterva
logger.Log.Info("Stopping sync entries")
logger.Log.Fatal(postErr)
}
analyzeInformation.SentCount += 1
analyzeInformation.SentCount += batchSize
if analyzeInformation.SentCount%SentCountLogInterval == 0 {
logger.Log.Infof("Uploaded %v entries until now", analyzeInformation.SentCount)

Binary file not shown. Before: 491 KiB, After: 640 KiB

Binary file not shown. Before: 55 KiB, After: 110 KiB

Binary file not shown. Before: 43 KiB, After: 53 KiB

View File

@@ -18,8 +18,9 @@ build: ## Build mizu CLI binary (select platform via GOOS / GOARCH env variables
go build -ldflags="-X 'github.com/up9inc/mizu/cli/mizu.GitCommitHash=$(COMMIT_HASH)' \
-X 'github.com/up9inc/mizu/cli/mizu.Branch=$(GIT_BRANCH)' \
-X 'github.com/up9inc/mizu/cli/mizu.BuildTimestamp=$(BUILD_TIMESTAMP)' \
-X 'github.com/up9inc/mizu/cli/mizu.Platform=$(SUFFIX)' \
-X 'github.com/up9inc/mizu/cli/mizu.SemVer=$(SEM_VER)'" \
-o bin/mizu_$(SUFFIX) mizu.go
-o bin/mizu_$(SUFFIX) mizu.go
(cd bin && shasum -a 256 mizu_${SUFFIX} > mizu_${SUFFIX}.sha256)
build-all: ## Build for all supported platforms.

View File

@@ -3,11 +3,12 @@ package apiserver
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"strings"
"time"
"github.com/up9inc/mizu/shared/kubernetes"
@@ -41,7 +42,7 @@ func (provider *Provider) TestConnection() error {
retriesLeft := provider.retries
for retriesLeft > 0 {
if _, err := provider.GetHealthStatus(); err != nil {
logger.Log.Debugf("[ERROR] api server not ready yet %v", err)
logger.Log.Debugf("api server not ready yet %v", err)
} else {
logger.Log.Debugf("connection test to api server passed successfully")
break
@@ -61,7 +62,14 @@ func (provider *Provider) GetHealthStatus() (*shared.HealthResponse, error) {
if response, err := provider.client.Get(healthUrl); err != nil {
return nil, err
} else if response.StatusCode > 299 {
return nil, errors.New(fmt.Sprintf("status code: %d", response.StatusCode))
responseBody := new(strings.Builder)
if _, err := io.Copy(responseBody, response.Body); err != nil {
return nil, fmt.Errorf("status code: %d - (bad response - %v)", response.StatusCode, err)
} else {
singleLineResponse := strings.ReplaceAll(responseBody.String(), "\n", "")
return nil, fmt.Errorf("status code: %d - (response - %v)", response.StatusCode, singleLineResponse)
}
} else {
defer response.Body.Close()
@@ -73,6 +81,23 @@ func (provider *Provider) GetHealthStatus() (*shared.HealthResponse, error) {
}
}
func (provider *Provider) ReportTapperStatus(tapperStatus shared.TapperStatus) error {
tapperStatusUrl := fmt.Sprintf("%s/status/tapperStatus", provider.url)
if jsonValue, err := json.Marshal(tapperStatus); err != nil {
return fmt.Errorf("failed Marshal the tapper status %w", err)
} else {
if response, err := provider.client.Post(tapperStatusUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {
return fmt.Errorf("failed sending to API server the tapped pods %w", err)
} else if response.StatusCode != 200 {
return fmt.Errorf("failed sending to API server the tapper status, response status code %v", response.StatusCode)
} else {
logger.Log.Debugf("Reported to server API about tapper status: %v", tapperStatus)
return nil
}
}
}
func (provider *Provider) ReportTappedPods(pods []core.Pod) error {
tappedPodsUrl := fmt.Sprintf("%s/status/tappedPods", provider.url)

View File

@@ -4,9 +4,15 @@ import (
"context"
"errors"
"fmt"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/telemetry"
"os"
"os/signal"
"path"
"syscall"
"time"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
@@ -64,3 +70,42 @@ func handleKubernetesProviderError(err error) {
logger.Log.Error(err)
}
}
func finishMizuExecution(kubernetesProvider *kubernetes.Provider, apiProvider *apiserver.Provider) {
telemetry.ReportAPICalls(apiProvider)
removalCtx, cancel := context.WithTimeout(context.Background(), cleanupTimeout)
defer cancel()
dumpLogsIfNeeded(removalCtx, kubernetesProvider)
cleanUpMizuResources(removalCtx, cancel, kubernetesProvider)
}
func dumpLogsIfNeeded(ctx context.Context, kubernetesProvider *kubernetes.Provider) {
if !config.Config.DumpLogs {
return
}
mizuDir := mizu.GetMizuFolderPath()
filePath := path.Join(mizuDir, fmt.Sprintf("mizu_logs_%s.zip", time.Now().Format("2006_01_02__15_04_05")))
if err := fsUtils.DumpLogs(ctx, kubernetesProvider, filePath); err != nil {
logger.Log.Errorf("Failed dump logs %v", err)
}
}
func cleanUpMizuResources(ctx context.Context, cancel context.CancelFunc, kubernetesProvider *kubernetes.Provider) {
logger.Log.Infof("\nRemoving mizu resources")
var leftoverResources []string
if config.Config.IsNsRestrictedMode() {
leftoverResources = cleanUpRestrictedMode(ctx, kubernetesProvider)
} else {
leftoverResources = cleanUpNonRestrictedMode(ctx, cancel, kubernetesProvider)
}
if len(leftoverResources) > 0 {
errMsg := fmt.Sprintf("Failed to remove the following resources, for more info check logs at %s:", fsUtils.GetLogFilePath())
for _, resource := range leftoverResources {
errMsg += "\n- " + resource
}
logger.Log.Errorf(uiUtils.Error, errMsg)
}
}

View File

@@ -3,6 +3,7 @@ package cmd
import (
"errors"
"fmt"
"github.com/up9inc/mizu/cli/up9"
"os"
"github.com/creasty/defaults"
@@ -62,6 +63,12 @@ Supported protocols are HTTP and gRPC.`,
logger.Log.Errorf("failed to log in, err: %v", err)
return nil
}
} else if isValidToken := up9.IsTokenValid(config.Config.Auth.Token, config.Config.Auth.EnvName); !isValidToken {
logger.Log.Errorf("Token is not valid, please log in again to continue")
if err := auth.Login(); err != nil {
logger.Log.Errorf("failed to log in, err: %v", err)
return nil
}
}
}
}
@@ -113,4 +120,5 @@ func init() {
tapCmd.Flags().String(configStructs.EnforcePolicyFile, defaultTapConfig.EnforcePolicyFile, "Yaml file path with policy rules")
tapCmd.Flags().String(configStructs.ContractFile, defaultTapConfig.ContractFile, "OAS/Swagger file to validate to monitor the contracts")
tapCmd.Flags().Bool(configStructs.DaemonModeTapName, defaultTapConfig.DaemonMode, "Run mizu in daemon mode, detached from the cli")
tapCmd.Flags().Bool(configStructs.IstioName, defaultTapConfig.Istio, "Record decrypted traffic if the cluster configured with istio and mtls")
}

View File

@@ -5,27 +5,24 @@ import (
"errors"
"fmt"
"io/ioutil"
"path"
"regexp"
"strings"
"time"
"github.com/up9inc/mizu/cli/cmd/goUtils"
"github.com/getkin/kin-openapi/openapi3"
"gopkg.in/yaml.v3"
core "k8s.io/api/core/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/util/wait"
"github.com/getkin/kin-openapi/openapi3"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/cmd/goUtils"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/errormessage"
"gopkg.in/yaml.v3"
core "k8s.io/api/core/v1"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/kubernetes"
@@ -36,6 +33,9 @@ import (
const cleanupTimeout = time.Minute
type tapState struct {
startTime time.Time
targetNamespaces []string
apiServerService *core.Service
tapperSyncer *kubernetes.MizuTapperSyncer
mizuServiceAccountExists bool
@@ -45,6 +45,8 @@ var state tapState
var apiProvider *apiserver.Provider
func RunMizuTap() {
state.startTime = time.Now()
mizuApiFilteringOptions, err := getMizuApiFilteringOptions()
apiProvider = apiserver.NewProvider(GetApiServerUrl(), apiserver.DefaultRetries, apiserver.DefaultTimeout)
if err != nil {
@@ -93,16 +95,16 @@ func RunMizuTap() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel() // cancel will be called when this function exits
targetNamespaces := getNamespaces(kubernetesProvider)
state.targetNamespaces = getNamespaces(kubernetesProvider)
serializedMizuConfig, err := config.GetSerializedMizuAgentConfig(targetNamespaces, mizuApiFilteringOptions)
serializedMizuConfig, err := config.GetSerializedMizuAgentConfig(state.targetNamespaces, mizuApiFilteringOptions)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error composing mizu config: %v", errormessage.FormatError(err)))
return
}
if config.Config.IsNsRestrictedMode() {
if len(targetNamespaces) != 1 || !shared.Contains(targetNamespaces, config.Config.MizuResourcesNamespace) {
if len(state.targetNamespaces) != 1 || !shared.Contains(state.targetNamespaces, config.Config.MizuResourcesNamespace) {
logger.Log.Errorf("Not supported mode. Mizu can't resolve IPs in other namespaces when running in namespace restricted mode.\n"+
"You can use the same namespace for --%s and --%s", configStructs.NamespacesTapName, config.MizuResourcesNamespaceConfigName)
return
@@ -110,14 +112,18 @@ func RunMizuTap() {
}
var namespacesStr string
if !shared.Contains(targetNamespaces, kubernetes.K8sAllNamespaces) {
namespacesStr = fmt.Sprintf("namespaces \"%s\"", strings.Join(targetNamespaces, "\", \""))
if !shared.Contains(state.targetNamespaces, kubernetes.K8sAllNamespaces) {
namespacesStr = fmt.Sprintf("namespaces \"%s\"", strings.Join(state.targetNamespaces, "\", \""))
} else {
namespacesStr = "all namespaces"
}
logger.Log.Infof("Tapping pods in %s", namespacesStr)
if err := printTappedPodsPreview(ctx, kubernetesProvider, state.targetNamespaces); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error listing pods: %v", errormessage.FormatError(err)))
}
if config.Config.Tap.DryRun {
return
}
@@ -134,7 +140,7 @@ func RunMizuTap() {
return
}
if config.Config.Tap.DaemonMode {
if err := handleDaemonModePostCreation(cancel, kubernetesProvider); err != nil {
if err := handleDaemonModePostCreation(ctx, cancel, kubernetesProvider, state.targetNamespaces); err != nil {
defer finishMizuExecution(kubernetesProvider, apiProvider)
cancel()
} else {
@@ -143,41 +149,44 @@ func RunMizuTap() {
} else {
defer finishMizuExecution(kubernetesProvider, apiProvider)
if err = startTapperSyncer(ctx, cancel, kubernetesProvider, targetNamespaces, *mizuApiFilteringOptions); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error starting mizu tapper syncer: %v", err))
cancel()
}
go goUtils.HandleExcWrapper(watchApiServerPod, ctx, kubernetesProvider, cancel)
go goUtils.HandleExcWrapper(watchTapperPod, ctx, kubernetesProvider, cancel)
// block until exit signal or error
waitForFinish(ctx, cancel)
}
}
func handleDaemonModePostCreation(cancel context.CancelFunc, kubernetesProvider *kubernetes.Provider) error {
func handleDaemonModePostCreation(ctx context.Context, cancel context.CancelFunc, kubernetesProvider *kubernetes.Provider, namespaces []string) error {
if err := printTappedPodsPreview(ctx, kubernetesProvider, namespaces); err != nil {
return err
}
apiProvider := apiserver.NewProvider(GetApiServerUrl(), 90, 1*time.Second)
if err := waitForDaemonModeToBeReady(cancel, kubernetesProvider, apiProvider); err != nil {
return err
}
if err := printDaemonModeTappedPods(apiProvider); err != nil {
return err
}
return nil
}
func printDaemonModeTappedPods(apiProvider *apiserver.Provider) error {
if healthStatus, err := apiProvider.GetHealthStatus(); err != nil {
/*
this function is a bit problematic as it might be detached from the actual pods the mizu api server will tap.
The alternative would be to wait for api server to be ready and then query it for the pods it listens to, this has
the arguably worse drawback of taking a relatively long time before the user sees which pods are targeted, if any.
*/
func printTappedPodsPreview(ctx context.Context, kubernetesProvider *kubernetes.Provider, namespaces []string) error {
if matchingPods, err := kubernetesProvider.ListAllRunningPodsMatchingRegex(ctx, config.Config.Tap.PodRegex(), namespaces); err != nil {
return err
} else {
for _, tappedPod := range healthStatus.TapStatus.Pods {
if len(matchingPods) == 0 {
printNoPodsFoundSuggestion(namespaces)
}
for _, tappedPod := range matchingPods {
logger.Log.Infof(uiUtils.Green, fmt.Sprintf("+%s", tappedPod.Name))
}
return nil
}
return nil
}
func waitForDaemonModeToBeReady(cancel context.CancelFunc, kubernetesProvider *kubernetes.Provider, apiProvider *apiserver.Provider) error {
@@ -192,7 +201,7 @@ func waitForDaemonModeToBeReady(cancel context.CancelFunc, kubernetesProvider *k
return nil
}
func startTapperSyncer(ctx context.Context, cancel context.CancelFunc, provider *kubernetes.Provider, targetNamespaces []string, mizuApiFilteringOptions api.TrafficFilteringOptions) error {
func startTapperSyncer(ctx context.Context, cancel context.CancelFunc, provider *kubernetes.Provider, targetNamespaces []string, mizuApiFilteringOptions api.TrafficFilteringOptions, startTime time.Time) error {
tapperSyncer, err := kubernetes.CreateAndStartMizuTapperSyncer(ctx, provider, kubernetes.TapperSyncerConfig{
TargetNamespaces: targetNamespaces,
PodFilterRegex: *config.Config.Tap.PodRegex(),
@@ -204,24 +213,13 @@ func startTapperSyncer(ctx context.Context, cancel context.CancelFunc, provider
IgnoredUserAgents: config.Config.Tap.IgnoredUserAgents,
MizuApiFilteringOptions: mizuApiFilteringOptions,
MizuServiceAccountExists: state.mizuServiceAccountExists,
})
Istio: config.Config.Tap.Istio,
}, startTime)
if err != nil {
return err
}
for _, tappedPod := range tapperSyncer.CurrentlyTappedPods {
logger.Log.Infof(uiUtils.Green, fmt.Sprintf("+%s", tappedPod.Name))
}
if len(tapperSyncer.CurrentlyTappedPods) == 0 {
var suggestionStr string
if !shared.Contains(targetNamespaces, kubernetes.K8sAllNamespaces) {
suggestionStr = ". Select a different namespace with -n or tap all namespaces with -A"
}
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("Did not find any pods matching the regex argument%s", suggestionStr))
}
go func() {
for {
select {
@@ -240,6 +238,14 @@ func startTapperSyncer(ctx context.Context, cancel context.CancelFunc, provider
if err := apiProvider.ReportTappedPods(tapperSyncer.CurrentlyTappedPods); err != nil {
logger.Log.Debugf("[Error] failed update tapped pods %v", err)
}
case tapperStatus, ok := <-tapperSyncer.TapperStatusChangedOut:
if !ok {
logger.Log.Debug("mizuTapperSyncer tapper status changed channel closed, ending listener loop")
return
}
if err := apiProvider.ReportTapperStatus(tapperStatus); err != nil {
logger.Log.Debugf("[Error] failed update tapper status %v", err)
}
case <-ctx.Done():
logger.Log.Debug("mizuTapperSyncer event listener loop exiting due to context done")
return
@@ -252,6 +258,14 @@ func startTapperSyncer(ctx context.Context, cancel context.CancelFunc, provider
return nil
}
func printNoPodsFoundSuggestion(targetNamespaces []string) {
var suggestionStr string
if !shared.Contains(targetNamespaces, kubernetes.K8sAllNamespaces) {
suggestionStr = ". You can also try selecting a different namespace with -n or tap all namespaces with -A"
}
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("Did not find any currently running pods that match the regex argument, mizu will automatically tap matching pods if any are created later%s", suggestionStr))
}
func getErrorDisplayTextForK8sTapManagerError(err kubernetes.K8sTapManagerError) string {
switch err.TapManagerReason {
case kubernetes.TapManagerPodListError:
@@ -282,7 +296,7 @@ func createMizuResources(ctx context.Context, cancel context.CancelFunc, kuberne
}
if err := createMizuConfigmap(ctx, kubernetesProvider, serializedValidationRules, serializedContract, serializedMizuConfig); err != nil {
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("Failed to create resources required for policy validation. Mizu will not validate policy rules. error: %v\n", errormessage.FormatError(err)))
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("Failed to create resources required for policy validation. Mizu will not validate policy rules. error: %v", errormessage.FormatError(err)))
}
var err error
@@ -359,20 +373,9 @@ func createMizuApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.
}
func createMizuApiServerDeployment(ctx context.Context, kubernetesProvider *kubernetes.Provider, opts *kubernetes.ApiServerOptions) error {
isDefaultStorageClassAvailable, err := kubernetesProvider.IsDefaultStorageProviderAvailable(ctx)
volumeClaimCreated := false
if err != nil {
return err
}
if isDefaultStorageClassAvailable {
if _, err = kubernetesProvider.CreatePersistentVolumeClaim(ctx, config.Config.MizuResourcesNamespace, kubernetes.PersistentVolumeClaimName, config.Config.Tap.MaxEntriesDBSizeBytes()+mizu.DaemonModePersistentVolumeSizeBufferBytes); err != nil {
logger.Log.Warningf(uiUtils.Yellow, "An error has occured while creating a persistent volume claim for mizu, this will mean that mizu's data will be lost on pod restart")
logger.Log.Debugf("error creating persistent volume claim: %v", err)
} else {
volumeClaimCreated = true
}
} else {
logger.Log.Warningf(uiUtils.Yellow, "Could not find default volume provider in this cluster, this will mean that mizu's data will be lost on pod restart")
if !config.Config.Tap.NoPersistentVolumeClaim {
volumeClaimCreated = TryToCreatePersistentVolumeClaim(ctx, kubernetesProvider)
}
pod, err := kubernetesProvider.GetMizuApiServerPodObject(opts, volumeClaimCreated, kubernetes.PersistentVolumeClaimName)
@@ -387,6 +390,26 @@ func createMizuApiServerDeployment(ctx context.Context, kubernetesProvider *kube
return nil
}
func TryToCreatePersistentVolumeClaim(ctx context.Context, kubernetesProvider *kubernetes.Provider) bool {
isDefaultStorageClassAvailable, err := kubernetesProvider.IsDefaultStorageProviderAvailable(ctx)
if err != nil {
logger.Log.Warningf(uiUtils.Yellow, "An error occured when checking if a default storage provider exists in this cluster, this means mizu data will be lost on mizu-api-server pod restart")
logger.Log.Debugf("error checking if default storage class exists: %v", err)
return false
} else if !isDefaultStorageClassAvailable {
logger.Log.Warningf(uiUtils.Yellow, "Could not find default storage provider in this cluster, this means mizu data will be lost on mizu-api-server pod restart")
return false
}
if _, err = kubernetesProvider.CreatePersistentVolumeClaim(ctx, config.Config.MizuResourcesNamespace, kubernetes.PersistentVolumeClaimName, config.Config.Tap.MaxEntriesDBSizeBytes()+mizu.DaemonModePersistentVolumeSizeBufferBytes); err != nil {
logger.Log.Warningf(uiUtils.Yellow, "An error has occured while creating a persistent volume claim for mizu, this means mizu data will be lost on mizu-api-server pod restart")
logger.Log.Debugf("error creating persistent volume claim: %v", err)
return false
}
return true
}
func getMizuApiFilteringOptions() (*api.TrafficFilteringOptions, error) {
var compiledRegexSlice []*api.SerializableRegexp
@@ -421,45 +444,6 @@ func getSyncEntriesConfig() *shared.SyncEntriesConfig {
}
}
func finishMizuExecution(kubernetesProvider *kubernetes.Provider, apiProvider *apiserver.Provider) {
telemetry.ReportAPICalls(apiProvider)
removalCtx, cancel := context.WithTimeout(context.Background(), cleanupTimeout)
defer cancel()
dumpLogsIfNeeded(removalCtx, kubernetesProvider)
cleanUpMizuResources(removalCtx, cancel, kubernetesProvider)
}
func dumpLogsIfNeeded(ctx context.Context, kubernetesProvider *kubernetes.Provider) {
if !config.Config.DumpLogs {
return
}
mizuDir := mizu.GetMizuFolderPath()
filePath := path.Join(mizuDir, fmt.Sprintf("mizu_logs_%s.zip", time.Now().Format("2006_01_02__15_04_05")))
if err := fsUtils.DumpLogs(ctx, kubernetesProvider, filePath); err != nil {
logger.Log.Errorf("Failed dump logs %v", err)
}
}
func cleanUpMizuResources(ctx context.Context, cancel context.CancelFunc, kubernetesProvider *kubernetes.Provider) {
logger.Log.Infof("\nRemoving mizu resources\n")
var leftoverResources []string
if config.Config.IsNsRestrictedMode() {
leftoverResources = cleanUpRestrictedMode(ctx, kubernetesProvider)
} else {
leftoverResources = cleanUpNonRestrictedMode(ctx, cancel, kubernetesProvider)
}
if len(leftoverResources) > 0 {
errMsg := fmt.Sprintf("Failed to remove the following resources, for more info check logs at %s:", fsUtils.GetLogFilePath())
for _, resource := range leftoverResources {
errMsg += "\n- " + resource
}
logger.Log.Errorf(uiUtils.Error, errMsg)
}
}
func cleanUpRestrictedMode(ctx context.Context, kubernetesProvider *kubernetes.Provider) []string {
leftoverResources := make([]string, 0)
@@ -569,54 +553,37 @@ func waitUntilNamespaceDeleted(ctx context.Context, cancel context.CancelFunc, k
}
func watchApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s$", kubernetes.ApiServerPodName))
added, modified, removed, errorChan := kubernetes.FilteredWatch(ctx, kubernetesProvider, []string{config.Config.MizuResourcesNamespace}, podExactRegex)
isPodReady := false
timeAfter := time.After(25 * time.Second)
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s", kubernetes.ApiServerPodName))
eventWatchHelper := kubernetes.NewEventWatchHelper(kubernetesProvider, podExactRegex, "pod")
eventChan, errorChan := kubernetes.FilteredWatch(ctx, eventWatchHelper, []string{config.Config.MizuResourcesNamespace}, eventWatchHelper)
for {
select {
case _, ok := <-added:
case wEvent, ok := <-eventChan:
if !ok {
added = nil
eventChan = nil
continue
}
logger.Log.Debugf("Watching API Server pod loop, added")
case _, ok := <-removed:
if !ok {
removed = nil
event, err := wEvent.ToEvent()
if err != nil {
logger.Log.Errorf(fmt.Sprintf("Error parsing Mizu resource event: %+v", err))
}
if state.startTime.After(event.CreationTimestamp.Time) {
continue
}
logger.Log.Infof("%s removed", kubernetes.ApiServerPodName)
cancel()
return
case modifiedPod, ok := <-modified:
if !ok {
modified = nil
continue
}
logger.Log.Debugf(
fmt.Sprintf("Watching API server events loop, event %s, time: %v, resource: %s (%s), reason: %s, note: %s",
event.Name,
event.CreationTimestamp.Time,
event.Regarding.Name,
event.Regarding.Kind,
event.Reason,
event.Note))
logger.Log.Debugf("Watching API Server pod loop, modified: %v", modifiedPod.Status.Phase)
if modifiedPod.Status.Phase == core.PodPending {
if modifiedPod.Status.Conditions[0].Type == core.PodScheduled && modifiedPod.Status.Conditions[0].Status != core.ConditionTrue {
logger.Log.Debugf("Wasn't able to deploy the API server. Reason: \"%s\"", modifiedPod.Status.Conditions[0].Message)
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Wasn't able to deploy the API server, for more info check logs at %s", fsUtils.GetLogFilePath()))
cancel()
break
}
if len(modifiedPod.Status.ContainerStatuses) > 0 && modifiedPod.Status.ContainerStatuses[0].State.Waiting != nil && modifiedPod.Status.ContainerStatuses[0].State.Waiting.Reason == "ErrImagePull" {
logger.Log.Debugf("Wasn't able to deploy the API server. (ErrImagePull) Reason: \"%s\"", modifiedPod.Status.ContainerStatuses[0].State.Waiting.Message)
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Wasn't able to deploy the API server: failed to pull the image, for more info check logs at %v", fsUtils.GetLogFilePath()))
cancel()
break
}
}
if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
isPodReady = true
switch event.Reason {
case "Started":
go startProxyReportErrorIfAny(kubernetesProvider, cancel)
url := GetApiServerUrl()
@@ -625,97 +592,30 @@ func watchApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.Provi
cancel()
break
}
options, _ := getMizuApiFilteringOptions()
if err = startTapperSyncer(ctx, cancel, kubernetesProvider, state.targetNamespaces, *options, state.startTime); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error starting mizu tapper syncer: %v", err))
cancel()
}
logger.Log.Infof("Mizu is available at %s\n", url)
logger.Log.Infof("Mizu is available at %s", url)
if !config.Config.HeadlessMode {
uiUtils.OpenBrowser(url)
}
if err := apiProvider.ReportTappedPods(state.tapperSyncer.CurrentlyTappedPods); err != nil {
logger.Log.Debugf("[Error] failed update tapped pods %v", err)
}
}
case err, ok := <-errorChan:
if !ok {
errorChan = nil
continue
}
logger.Log.Errorf("[ERROR] Agent creation, watching %v namespace, error: %v", config.Config.MizuResourcesNamespace, err)
cancel()
case <-timeAfter:
if !isPodReady {
logger.Log.Errorf(uiUtils.Error, "Mizu API server was not ready in time")
cancel()
}
case <-ctx.Done():
logger.Log.Debugf("Watching API Server pod loop, ctx done")
return
}
}
}
func watchTapperPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s.*", kubernetes.TapperDaemonSetName))
added, modified, removed, errorChan := kubernetes.FilteredWatch(ctx, kubernetesProvider, []string{config.Config.MizuResourcesNamespace}, podExactRegex)
var prevPodPhase core.PodPhase
for {
select {
case addedPod, ok := <-added:
if !ok {
added = nil
continue
}
logger.Log.Debugf("Tapper is created [%s]", addedPod.Name)
case removedPod, ok := <-removed:
if !ok {
removed = nil
continue
}
logger.Log.Debugf("Tapper is removed [%s]", removedPod.Name)
case modifiedPod, ok := <-modified:
if !ok {
modified = nil
continue
}
if modifiedPod.Status.Phase == core.PodPending && modifiedPod.Status.Conditions[0].Type == core.PodScheduled && modifiedPod.Status.Conditions[0].Status != core.ConditionTrue {
logger.Log.Infof(uiUtils.Red, fmt.Sprintf("Wasn't able to deploy the tapper %s. Reason: \"%s\"", modifiedPod.Name, modifiedPod.Status.Conditions[0].Message))
case "FailedScheduling", "Failed", "Killing":
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Mizu API Server status: %s - %s", event.Reason, event.Note))
cancel()
break
}
podStatus := modifiedPod.Status
if podStatus.Phase == core.PodPending && prevPodPhase == podStatus.Phase {
logger.Log.Debugf("Tapper %s is %s", modifiedPod.Name, strings.ToLower(string(podStatus.Phase)))
continue
}
prevPodPhase = podStatus.Phase
if podStatus.Phase == core.PodRunning {
state := podStatus.ContainerStatuses[0].State
if state.Terminated != nil {
switch state.Terminated.Reason {
case "OOMKilled":
logger.Log.Infof(uiUtils.Red, fmt.Sprintf("Tapper %s was terminated (reason: OOMKilled). You should consider increasing machine resources.", modifiedPod.Name))
}
}
}
logger.Log.Debugf("Tapper %s is %s", modifiedPod.Name, strings.ToLower(string(podStatus.Phase)))
case err, ok := <-errorChan:
if !ok {
errorChan = nil
continue
}
logger.Log.Errorf("[Error] Error in mizu tapper watch, err: %v", err)
cancel()
logger.Log.Errorf("Watching API server events loop, error: %+v", err)
case <-ctx.Done():
logger.Log.Debugf("Watching tapper pod loop, ctx done")
logger.Log.Debugf("Watching API server events loop, ctx done")
return
}
}

View File

@@ -56,7 +56,7 @@ func runMizuView() {
return
}
logger.Log.Infof("Mizu is available at %s\n", url)
logger.Log.Infof("Mizu is available at %s", url)
if !config.Config.HeadlessMode {
uiUtils.OpenBrowser(url)

View File

@@ -402,6 +402,7 @@ func getMizuAgentConfig(targetNamespaces []string, mizuApiFilteringOptions *api.
MizuResourcesNamespace: Config.MizuResourcesNamespace,
MizuApiFilteringOptions: *mizuApiFilteringOptions,
AgentDatabasePath: shared.DataDirPath,
Istio: Config.Tap.Istio,
}
return &config, nil
}

View File

@@ -3,6 +3,8 @@ package configStructs
import (
"errors"
"fmt"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared/logger"
"regexp"
"github.com/up9inc/mizu/shared"
@@ -23,28 +25,31 @@ const (
EnforcePolicyFile = "traffic-validation-file"
ContractFile = "contract"
DaemonModeTapName = "daemon"
IstioName = "istio"
)
type TapConfig struct {
UploadIntervalSec int `yaml:"upload-interval" default:"10"`
PodRegexStr string `yaml:"regex" default:".*"`
GuiPort uint16 `yaml:"gui-port" default:"8899"`
ProxyHost string `yaml:"proxy-host" default:"127.0.0.1"`
Namespaces []string `yaml:"namespaces"`
Analysis bool `yaml:"analysis" default:"false"`
AllNamespaces bool `yaml:"all-namespaces" default:"false"`
PlainTextFilterRegexes []string `yaml:"regex-masking"`
IgnoredUserAgents []string `yaml:"ignored-user-agents"`
DisableRedaction bool `yaml:"no-redact" default:"false"`
HumanMaxEntriesDBSize string `yaml:"max-entries-db-size" default:"200MB"`
DryRun bool `yaml:"dry-run" default:"false"`
Workspace string `yaml:"workspace"`
EnforcePolicyFile string `yaml:"traffic-validation-file"`
ContractFile string `yaml:"contract"`
AskUploadConfirmation bool `yaml:"ask-upload-confirmation" default:"true"`
ApiServerResources shared.Resources `yaml:"api-server-resources"`
TapperResources shared.Resources `yaml:"tapper-resources"`
DaemonMode bool `yaml:"daemon" default:"false"`
UploadIntervalSec int `yaml:"upload-interval" default:"10"`
PodRegexStr string `yaml:"regex" default:".*"`
GuiPort uint16 `yaml:"gui-port" default:"8899"`
ProxyHost string `yaml:"proxy-host" default:"127.0.0.1"`
Namespaces []string `yaml:"namespaces"`
Analysis bool `yaml:"analysis" default:"false"`
AllNamespaces bool `yaml:"all-namespaces" default:"false"`
PlainTextFilterRegexes []string `yaml:"regex-masking"`
IgnoredUserAgents []string `yaml:"ignored-user-agents"`
DisableRedaction bool `yaml:"no-redact" default:"false"`
HumanMaxEntriesDBSize string `yaml:"max-entries-db-size" default:"200MB"`
DryRun bool `yaml:"dry-run" default:"false"`
Workspace string `yaml:"workspace"`
EnforcePolicyFile string `yaml:"traffic-validation-file"`
ContractFile string `yaml:"contract"`
AskUploadConfirmation bool `yaml:"ask-upload-confirmation" default:"true"`
ApiServerResources shared.Resources `yaml:"api-server-resources"`
TapperResources shared.Resources `yaml:"tapper-resources"`
DaemonMode bool `yaml:"daemon" default:"false"`
NoPersistentVolumeClaim bool `yaml:"no-persistent-volume-claim" default:"false"`
Istio bool `yaml:"istio" default:"false"`
}
func (config *TapConfig) PodRegex() *regexp.Regexp {
@@ -79,5 +84,9 @@ func (config *TapConfig) Validate() error {
return errors.New(fmt.Sprintf("Can't run with both --%s and --%s flags", AnalysisTapName, WorkspaceTapName))
}
if config.NoPersistentVolumeClaim && !config.DaemonMode {
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("the --set tap.no-persistent-volume-claim=true flag has no effect without the --%s flag, the claim will not be created anyway.", DaemonModeTapName))
}
return nil
}

View File

@@ -11,9 +11,12 @@ var (
GitCommitHash = "" // this var is overridden using ldflags in makefile when building
BuildTimestamp = "" // this var is overridden using ldflags in makefile when building
RBACVersion = "v1"
Platform = ""
DaemonModePersistentVolumeSizeBufferBytes = int64(500 * 1000 * 1000) //500mb
)
const DEVENVVAR = "MIZU_DISABLE_TELEMTRY"
func GetMizuFolderPath() string {
home, homeDirErr := os.UserHomeDir()
if homeDirErr != nil {

View File

@@ -78,6 +78,6 @@ func DumpLogs(ctx context.Context, provider *kubernetes.Provider, filePath strin
logger.Log.Debugf("Successfully added file %s", GetLogFilePath())
}
logger.Log.Infof("You can find the zip file with all logs in %s\n", filePath)
logger.Log.Infof("You can find the zip file with all logs in %s", filePath)
return nil
}

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"io/ioutil"
"net/http"
"os"
"runtime"
"strings"
"time"
@@ -39,6 +40,10 @@ func CheckVersionCompatibility(apiServerProvider *apiserver.Provider) (bool, err
}
func CheckNewerVersion(versionChan chan string) {
if _, present := os.LookupEnv(mizu.DEVENVVAR); present {
versionChan <- ""
return
}
logger.Log.Debugf("Checking for newer version...")
start := time.Now()
client := github.NewClient(nil)

View File

@@ -5,6 +5,7 @@ import (
"encoding/json"
"fmt"
"net/http"
"os"
"github.com/denisbrodbeck/machineid"
"github.com/up9inc/mizu/cli/apiserver"
@@ -62,6 +63,9 @@ func ReportAPICalls(apiProvider *apiserver.Provider) {
}
func shouldRunTelemetry() bool {
if _, present := os.LookupEnv(mizu.DEVENVVAR); present {
return false
}
if !config.Config.Telemetry {
return false
}
@@ -79,6 +83,7 @@ func sendTelemetry(telemetryType string, argsMap map[string]interface{}) error {
argsMap["buildTimestamp"] = mizu.BuildTimestamp
argsMap["branch"] = mizu.Branch
argsMap["version"] = mizu.SemVer
argsMap["Platform"] = mizu.Platform
if machineId, err := machineid.ProtectedID("mizu"); err == nil {
argsMap["machineId"] = machineId

cli/up9/provider.go Normal file
View File

@@ -0,0 +1,31 @@
package up9
import (
"fmt"
"net/http"
"net/url"
)
// IsTokenValid checks the given token against the UP9 "whoami" endpoint of the given environment and reports whether it is still accepted.
func IsTokenValid(tokenString string, envName string) bool {
whoAmIUrl, _ := url.Parse(fmt.Sprintf("https://trcc.%s/admin/whoami", envName))
req := &http.Request{
Method: http.MethodGet,
URL: whoAmIUrl,
Header: map[string][]string{
"Authorization": {fmt.Sprintf("bearer %s", tokenString)},
},
}
response, err := http.DefaultClient.Do(req)
if err != nil {
return false
}
defer response.Body.Close()
if response.StatusCode != http.StatusOK {
return false
}
return true
}

View File

@@ -37,8 +37,8 @@ COPY agent .
RUN go build -gcflags="all=-N -l" -o mizuagent .
# Download Basenine executable, verify the sha1sum and move it to a directory in $PATH
ADD https://github.com/up9inc/basenine/releases/download/v0.2.6/basenine_linux_amd64 ./basenine_linux_amd64
ADD https://github.com/up9inc/basenine/releases/download/v0.2.6/basenine_linux_amd64.sha256 ./basenine_linux_amd64.sha256
ADD https://github.com/up9inc/basenine/releases/download/v0.2.17/basenine_linux_amd64 ./basenine_linux_amd64
ADD https://github.com/up9inc/basenine/releases/download/v0.2.17/basenine_linux_amd64.sha256 ./basenine_linux_amd64.sha256
RUN shasum -a 256 -c basenine_linux_amd64.sha256
RUN chmod +x ./basenine_linux_amd64
@@ -48,7 +48,7 @@ RUN cd .. && /bin/bash build_extensions_debug.sh
FROM golang:1.16-alpine
RUN apk add bash libpcap-dev tcpdump
RUN apk add bash libpcap-dev
WORKDIR /app

docs/CONFIGURATION.md Normal file
View File

@@ -0,0 +1,91 @@
![Mizu: The API Traffic Viewer for Kubernetes](../assets/mizu-logo.svg)
# Configuration options for Mizu
Mizu has many configuration options and flags that affect its behavior. Their values can be modified via the command-line interface or via a configuration file.
The list below covers the most useful configuration options.
### Config file
Mizu behaviour can be modified via a YAML configuration file located at `$HOME/.mizu/config.yaml`.
Default values for the file can be viewed with the `mizu config` command.
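For illustration, a config file overriding a few of the options described below might look like the following (a hypothetical excerpt; run `mizu config` to see the exact layout and the full set of defaults):
```
# hypothetical excerpt of $HOME/.mizu/config.yaml
dump-logs: false
telemetry: true
tap:
  dry-run: false
  gui-port: 8899
  max-entries-db-size: 200MB
```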
### Applying config options via command line
To apply any configuration option via the command line, use `--set` followed by the config option name and value, as in the following example:
```
mizu tap --set tap.dry-run=true
```
Please make sure to use the full option name (`tap.dry-run` as opposed to just `dry-run`), including the section prefix (`tap`, in this example).
## General section
* `agent-image` - full path to Mizu container image, in format `full.path.to/your/image:tag`. Default value is set at compilation time to `gcr.io/up9-docker-hub/mizu/<branch>:<version>`
* `dump-logs` - if set to `true`, saves log files for all Mizu components (tapper, api-server, CLI) in a zip file under `$HOME/.mizu`. Default value is `false`
* `image-pull-policy` - container image pull policy for Kubernetes, default value `Always`. Other accepted values are `Never` and `IfNotPresent`. Please mind the implications when changing this.
* `kube-config-path` - path to alternative kubeconfig file to use for all interactions with Kubernetes cluster. By default - `$HOME/.kubeconfig`
* `mizu-resources-namespace` - Kubernetes namespace where all Mizu-related resources are created. Default value `mizu`
* `telemetry` - report anonymous usage statistics. Default value `true`
## section `tap`
* `namespaces` - list of namespace names, in which pods are tapped. Default value is empty, meaning only pods in the current namespace are tapped. Typically supplied as command line options.
* `all-namespaces` - special flag indicating whether Mizu should search and tap pods, matching the regex, in all namespaces. Default is `false`. Please use with caution, tapping too many pods can affect resource consumption.
* `daemon` - instructs Mizu whether to run in daemon mode (where the CLI command exits after launch, and the tapper & api-server pods in Kubernetes continue to run without a controlling CLI). Typically supplied as the command-line option `--daemon`. Default value is `false`
* `dry-run` - if true, Mizu will print the list of pods matching the supplied (or default) regex and exit without actually tapping the traffic. Default value is `false`. Typically supplied as the command-line option `--dry-run`
* `proxy-host` - IP address on which proxy to Mizu API service is launched; should be accessible at `proxy-host:gui-port`. Default value is `127.0.0.1`
* `gui-port` - port on which Mizu GUI is accessible, default value is `8899` (stands for `8899/tcp`)
* `regex` - regular expression used to match pods to tap when no regex is given in the command line; default value is `.*`, which means `mizu tap` with no additional arguments runs as `mizu tap .*` (i.e. taps all pods found in the current namespace)
* `no-redact` - when set to `true`, disables redaction of certain sensitive fields in the collected traffic. Default value is `false`, i.e. Mizu replaces sensitive data values with a *REDACTED* placeholder.
* `ignored-user-agents` - array of strings, describing HTTP *User-Agent* header values to be ignored. Useful to ignore Kubernetes healthcheck and other similar noisy periodic probes. Default value is empty.
* `max-entries-db-size` - maximum size of traffic stored locally in the `mizu-api-server` pod. When this size is reached, older traffic is overwritten with new entries. Default value is `200MB`
### section `tap.api-server-resources`
Kubernetes request and limit values for the `mizu-api-server` pod.
Parameters and their default values are the same as those used natively in Kubernetes pods:
```
cpu-limit: 750m
memory-limit: 1Gi
cpu-requests: 50m
memory-requests: 50Mi
```
### section `tap.tapper-resources`
Kubernetes request and limit values for the `mizu-tapper` pods (launched via daemonset).
Parameters and their default values are the same as those used natively in Kubernetes pods:
```
cpu-limit: 750m
memory-limit: 1Gi
cpu-requests: 50m
memory-requests: 50Mi
```
--
* `analysis` - enables advanced analysis of the collected traffic in the UP9 cloud platform. Default value is `false`
* `upload-interval` - in *analysis* mode, push traffic to the UP9 cloud every `upload-interval` seconds. Default value is `10` seconds
* `ask-upload-confirmation` - request user confirmation before uploading tapped data to the UP9 cloud. Default value is `true`
## section `version`
* `debug` - print additional version and build information when the `mizu version` command is invoked. Default value is `false`.

docs/DAEMON_MODE.md Normal file
View File

@@ -0,0 +1,80 @@
# Mizu daemon mode
Mizu can be run detached from the cli using the daemon flag: `mizu tap --daemon`. This type of mizu instance will run
indefinitely in the cluster.
Please note that daemon mode requires you to have RBAC creation permissions, see the [permissions](PERMISSIONS.md)
doc for more details.
```bash
$ mizu tap "^ca.*" --daemon
Mizu will store up to 200MB of traffic, old traffic will be cleared once the limit is reached.
Tapping pods in namespaces "sock-shop"
Waiting for mizu to be ready... (may take a few minutes)
+carts-66c77f5fbb-fq65r
+catalogue-5f4cb7cf5-7zrmn
..
```
## Stop mizu daemon
To stop the detached mizu instance and clean all cluster side resources, run `mizu clean`
```bash
$ mizu clean # mizu will continue running in cluster until clean is executed
Removing mizu resources
```
## Expose mizu web app
Mizu can be exposed at a later stage in any of the following ways:
### Using mizu view command
On a machine that can access both the cluster and a browser, you can run the `mizu view` command, which creates a proxy.
Besides that, all the regular ways to expose k8s pods are valid.
```bash
$ mizu view
Establishing connection to k8s cluster...
Mizu is available at http://localhost:8899
^C
..
```
### Port Forward
```bash
$ kubectl port-forward -n mizu deployment/mizu-api-server 8899:8899
```
### NodePort
```bash
$ kubectl expose -n mizu deployment mizu-api-server --name mizu-node-port --type NodePort --port 80 --target-port 8899
```
Mizu's IP is the IP of any node (get the IP with `kubectl get nodes -o wide`) and the port is the target port of the new
service (`kubectl get services -n mizu mizu-node-port`). Note that this method will expose Mizu to public access if your
nodes are public.
### LoadBalancer
```bash
$ kubectl expose deployment -n mizu --port 80 --target-port 8899 mizu-api-server --type=LoadBalancer --name=mizu-lb
service/mizu-lb exposed
..
$ kubectl get services -n mizu
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mizu-api-server ClusterIP 10.107.200.100 <none> 80/TCP 5m5s
mizu-lb LoadBalancer 10.107.200.101 34.77.120.116 80:30141/TCP 76s
```
Note that `LoadBalancer` services only work on supported clusters (usually cloud providers) and might incur extra costs.
If you changed the `mizu-resources-namespace` value, make sure the `-n mizu` flag of the `kubectl expose` command is
changed to the value of `mizu-resources-namespace`.
Mizu will now be available either by running `mizu view` or by accessing the `EXTERNAL-IP` of the `mizu-lb` service
through your browser.

docs/ISTIO.md Normal file
View File

@@ -0,0 +1,46 @@
![Mizu: The API Traffic Viewer for Kubernetes](../assets/mizu-logo.svg)
# Istio mutual TLS (mTLS) with Mizu
This document describes how the Mizu tapper handles workloads configured with mTLS, which encrypts the internal traffic between services in a cluster.
Besides Istio there are other service meshes that implement mTLS. However, as of now Istio is the most widely used one, which is why we are focusing on it.
In order to create an Istio setup for development, follow these steps:
1. Deploy a sample application to a Kubernetes cluster; the sample application needs to make internal service-to-service calls
2. SSH to one of the nodes and run `tcpdump`
3. Make sure you see the internal service-to-service calls in plain text
4. Deploy Istio to the cluster - make sure it is attached to all pods of the sample application, and that it is configured with mTLS (the default)
5. Run `tcpdump` again and make sure you no longer see the internal service-to-service calls in plain text
## The connection between Istio and Envoy
In order to implement its service mesh capabilities, [Istio](https://istio.io) uses an [Envoy](https://www.envoyproxy.io) sidecar in front of every pod in the cluster. Envoy is responsible for the mTLS communication, and that's why we are focusing on the Envoy proxy.
In the future we might see more players in that field; then we'll have to either add support for each of them or go with a unified eBPF solution.
## Network namespaces
A [Linux network namespace](https://man7.org/linux/man-pages/man7/network_namespaces.7.html) is an isolation mechanism that limits a process's view of the network. In the container world it is used to isolate one container from another; in the Kubernetes world it is used to isolate one pod from another. That means two containers running in the same pod share the same network namespace, and a container can reach another container in the same pod by accessing `localhost`.
An Envoy proxy configured with mTLS receives the inbound traffic directed to the pod, decrypts it and sends it via `localhost` to the target container.
## Tapping mtls traffic
In order for Mizu to be able to see the decrypted traffic it needs to listen in the same network namespace as the target pod. Multiple threads of the same process can have different network namespaces.
[gopacket](https://github.com/google/gopacket) uses [libpcap](https://github.com/the-tcpdump-group/libpcap) by default for capturing the traffic. Libpcap doesn't support network namespaces, so we can't ask it to listen to traffic in a different namespace. However, we can change the network namespace of the calling thread and then start libpcap to see the traffic in that namespace (see the sketch at the end of the next section).
## Finding the network namespace of a running process
The network namespace of a running process can be found via the `/proc/PID/ns/net` link. Once we have this link, we can ask Linux to change the network namespace of a thread to this one.
This means that Mizu needs to have access to the `/proc` filesystem (procfs) of the running node.
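Below is a minimal sketch of this approach (not Mizu's actual tapper code): it pins the goroutine to its OS thread, switches that thread into the network namespace referenced by `/proc/<PID>/ns/net`, and only then opens a libpcap handle. It assumes Linux, access to the node's procfs, and the `golang.org/x/sys/unix` and `github.com/google/gopacket/pcap` packages.
```go
package mtlstap

import (
	"fmt"
	"os"
	"runtime"
	"time"

	"github.com/google/gopacket/pcap"
	"golang.org/x/sys/unix"
)

// captureInNetns switches the current OS thread into the network namespace of
// the given PID and opens a libpcap handle there. This is a sketch: restoring
// the original namespace and unlocking the thread are omitted for brevity.
func captureInNetns(pid int, device string) (*pcap.Handle, error) {
	// setns affects a single thread, so keep this goroutine pinned to it.
	runtime.LockOSThread()

	nsFile, err := os.Open(fmt.Sprintf("/proc/%d/ns/net", pid))
	if err != nil {
		return nil, err
	}
	defer nsFile.Close()

	// Move this thread into the pod's network namespace.
	if err := unix.Setns(int(nsFile.Fd()), unix.CLONE_NEWNET); err != nil {
		return nil, err
	}

	// libpcap now sees the interfaces of the target namespace.
	return pcap.OpenLive(device, 65536, true, time.Second)
}
```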
## Finding the network namespace of a running pod
In order for Mizu to be able to listen to mTLS traffic, it needs to get the PIDs of the running pods, filter them according to the user filters and then start listening to the traffic in their internal network namespaces.
There is no official way in Kubernetes to get from a pod to a PID. The CRI implementation purposefully doesn't force a pod to be a process on the host. It can also be a virtual machine, like [Kata containers](https://katacontainers.io)
While we could provide a solution for various CRIs (like Docker, Containerd and CRI-O), it's better to have a unified solution. In order to achieve that, Mizu scans all the processes on the host and finds the Envoy processes using their `/proc/PID/exe` link.
Once Mizu detects an Envoy process, it needs to check whether this specific Envoy process is relevant according to the user filters. The user filters are a list of `CLUSTER_IPS`. The tapper gets them via the `TapOpts.FilterAuthorities` list.
Istio sets an `INSTANCE_IP` environment variable for every Envoy proxy process. By examining the Envoy process's environment variables we can see whether it's relevant or not. Examining a process's environment variables is done by reading the `/proc/PID/environ` file.
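A simplified sketch of this detection logic (hypothetical, not the tapper's exact implementation): walk `/proc`, resolve each process's `exe` link to spot Envoy, then scan the NUL-separated `environ` file for an `INSTANCE_IP` value that matches one of the filtered cluster IPs.
```go
package mtlstap

import (
	"bytes"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// findRelevantEnvoyPIDs returns the PIDs of Envoy processes whose INSTANCE_IP
// environment variable matches one of the given cluster IPs. Hypothetical
// sketch; error handling is intentionally minimal.
func findRelevantEnvoyPIDs(clusterIPs map[string]bool) []int {
	var pids []int
	entries, _ := os.ReadDir("/proc")
	for _, entry := range entries {
		pid, err := strconv.Atoi(entry.Name())
		if err != nil {
			continue // not a numeric PID directory
		}

		// An Envoy process is recognized by the executable its exe link points to.
		exe, err := os.Readlink(filepath.Join("/proc", entry.Name(), "exe"))
		if err != nil || !strings.Contains(exe, "envoy") {
			continue
		}

		// /proc/PID/environ holds NUL-separated KEY=VALUE pairs.
		environ, err := os.ReadFile(filepath.Join("/proc", entry.Name(), "environ"))
		if err != nil {
			continue
		}
		for _, kv := range bytes.Split(environ, []byte{0}) {
			pair := string(kv)
			if strings.HasPrefix(pair, "INSTANCE_IP=") && clusterIPs[strings.TrimPrefix(pair, "INSTANCE_IP=")] {
				pids = append(pids, pid)
				break
			}
		}
	}
	return pids
}
```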
## Edge cases
The method we use to find Envoy processes and correlate them to the cluster IPs may be inaccurate in certain situations. If, for example, a user runs an Envoy process manually and sets its `INSTANCE_IP` environment variable to one of the `CLUSTER_IPS` the tapper gets, then Mizu will capture traffic for it.

View File

@@ -7,15 +7,15 @@ rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "delete"]
- apiGroups: [ "" ]
- apiGroups: [ "apps" ]
resources: [ "deployments" ]
verbs: [ "create", "delete" ]
verbs: [ "get", "create", "delete" ]
- apiGroups: [""]
resources: ["services"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["create", "patch", "delete"]
verbs: ["get", "create", "patch", "delete", "list"]
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list", "watch", "create", "delete"]
@@ -49,6 +49,9 @@ rules:
- apiGroups: ["", "apps", "extensions"]
resources: ["endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1

View File

@@ -23,6 +23,9 @@ rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "delete"]
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1

View File

@@ -46,6 +46,9 @@ rules:
- apiGroups: ["", "apps", "extensions"]
resources: ["endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1

View File

@@ -8,7 +8,7 @@ rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "delete"]
- apiGroups: [ "" ]
- apiGroups: [ "apps" ]
resources: [ "deployments" ]
verbs: [ "get", "create", "delete" ]
- apiGroups: [""]
@@ -16,7 +16,7 @@ rules:
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: ["apps"]
resources: ["daemonsets"]
verbs: ["get", "create", "patch", "delete"]
verbs: ["get", "create", "patch", "delete", "list"]
- apiGroups: [""]
resources: ["services/proxy"]
verbs: ["get"]
@@ -32,7 +32,7 @@ rules:
- apiGroups: ["rbac.authorization.k8s.io"]
resources: ["rolebindings"]
verbs: ["get", "create", "delete"]
- apiGroups: ["apps", "extensions"]
- apiGroups: ["apps", "extensions", ""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps", "extensions"]
@@ -41,6 +41,9 @@ rules:
- apiGroups: ["", "apps", "extensions"]
resources: ["endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1

View File

@@ -38,6 +38,9 @@ rules:
- apiGroups: ["", "apps", "extensions"]
resources: ["endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1

View File

@@ -20,6 +20,9 @@ rules:
- apiGroups: [""]
resources: ["configmaps"]
verbs: ["get", "create", "delete"]
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1

View File

@@ -38,6 +38,9 @@ rules:
- apiGroups: ["", "apps", "extensions"]
resources: ["endpoints"]
verbs: ["get", "list", "watch"]
- apiGroups: ["events.k8s.io"]
resources: ["events"]
verbs: ["list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1

View File

@@ -0,0 +1,52 @@
package kubernetes
import (
"context"
"regexp"
"strings"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch"
)
type EventWatchHelper struct {
kubernetesProvider *Provider
NameRegexFilter *regexp.Regexp
Kind string
}
func NewEventWatchHelper(kubernetesProvider *Provider, NameRegexFilter *regexp.Regexp, kind string) *EventWatchHelper {
return &EventWatchHelper{
kubernetesProvider: kubernetesProvider,
NameRegexFilter: NameRegexFilter,
Kind: kind,
}
}
// Implements the EventFilterer Interface
func (wh *EventWatchHelper) Filter(wEvent *WatchEvent) (bool, error) {
event, err := wEvent.ToEvent()
if err != nil {
return false, nil
}
if !wh.NameRegexFilter.MatchString(event.Name) {
return false, nil
}
if strings.ToLower(event.Regarding.Kind) != strings.ToLower(wh.Kind) {
return false, nil
}
return true, nil
}
// Implements the WatchCreator Interface
func (wh *EventWatchHelper) NewWatcher(ctx context.Context, namespace string) (watch.Interface, error) {
watcher, err := wh.kubernetesProvider.clientSet.EventsV1().Events(namespace).Watch(ctx, metav1.ListOptions{Watch: true})
if err != nil {
return nil, err
}
return watcher, nil
}

View File

@@ -16,18 +16,22 @@ import (
const updateTappersDelay = 5 * time.Second
type TappedPodChangeEvent struct {
Added []core.Pod
Removed []core.Pod
Added []core.Pod
Removed []core.Pod
ExpectedTapperAmount int
}
// MizuTapperSyncer uses a k8s pod watch to update tapper daemonsets when targeted pods are removed or created
type MizuTapperSyncer struct {
context context.Context
CurrentlyTappedPods []core.Pod
config TapperSyncerConfig
kubernetesProvider *Provider
TapPodChangesOut chan TappedPodChangeEvent
ErrorOut chan K8sTapManagerError
startTime time.Time
context context.Context
CurrentlyTappedPods []core.Pod
config TapperSyncerConfig
kubernetesProvider *Provider
TapPodChangesOut chan TappedPodChangeEvent
TapperStatusChangedOut chan shared.TapperStatus
ErrorOut chan K8sTapManagerError
nodeToTappedPodIPMap map[string][]string
}
type TapperSyncerConfig struct {
@@ -41,16 +45,19 @@ type TapperSyncerConfig struct {
IgnoredUserAgents []string
MizuApiFilteringOptions api.TrafficFilteringOptions
MizuServiceAccountExists bool
Istio bool
}
func CreateAndStartMizuTapperSyncer(ctx context.Context, kubernetesProvider *Provider, config TapperSyncerConfig) (*MizuTapperSyncer, error) {
func CreateAndStartMizuTapperSyncer(ctx context.Context, kubernetesProvider *Provider, config TapperSyncerConfig, startTime time.Time) (*MizuTapperSyncer, error) {
syncer := &MizuTapperSyncer{
context: ctx,
CurrentlyTappedPods: make([]core.Pod, 0),
config: config,
kubernetesProvider: kubernetesProvider,
TapPodChangesOut: make(chan TappedPodChangeEvent, 100),
ErrorOut: make(chan K8sTapManagerError, 100),
startTime: startTime.Truncate(time.Second), // Round down because k8s CreationTimestamp is given in 1 sec resolution.
context: ctx,
CurrentlyTappedPods: make([]core.Pod, 0),
config: config,
kubernetesProvider: kubernetesProvider,
TapPodChangesOut: make(chan TappedPodChangeEvent, 100),
TapperStatusChangedOut: make(chan shared.TapperStatus, 100),
ErrorOut: make(chan K8sTapManagerError, 100),
}
if err, _ := syncer.updateCurrentlyTappedPods(); err != nil {
@@ -62,11 +69,75 @@ func CreateAndStartMizuTapperSyncer(ctx context.Context, kubernetesProvider *Pro
}
go syncer.watchPodsForTapping()
go syncer.watchTapperEvents()
return syncer, nil
}
func (tapperSyncer *MizuTapperSyncer) watchTapperEvents() {
mizuResourceRegex := regexp.MustCompile(fmt.Sprintf("^%s.*", TapperPodName))
eventWatchHelper := NewEventWatchHelper(tapperSyncer.kubernetesProvider, mizuResourceRegex, "pod")
eventChan, errorChan := FilteredWatch(tapperSyncer.context, eventWatchHelper, []string{tapperSyncer.config.MizuResourcesNamespace}, eventWatchHelper)
for {
select {
case wEvent, ok := <-eventChan:
if !ok {
eventChan = nil
continue
}
event, err := wEvent.ToEvent()
if err != nil {
logger.Log.Errorf(fmt.Sprintf("Error parsing Mizu resource event: %+v", err))
}
if tapperSyncer.startTime.After(event.CreationTimestamp.Time) {
continue
}
logger.Log.Debugf(
fmt.Sprintf("Watching tapper events loop, event %s, time: %v, resource: %s (%s), reason: %s, note: %s",
event.Name,
event.CreationTimestamp.Time,
event.Regarding.Name,
event.Regarding.Kind,
event.Reason,
event.Note))
pod, err1 := tapperSyncer.kubernetesProvider.GetPod(tapperSyncer.context, tapperSyncer.config.MizuResourcesNamespace, event.Regarding.Name)
if err1 != nil {
logger.Log.Debugf(fmt.Sprintf("Failed to get tapper pod %s", event.Regarding.Name))
continue
}
nodeName := ""
if event.Reason != "FailedScheduling" {
nodeName = pod.Spec.NodeName
} else {
nodeName = pod.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms[0].MatchFields[0].Values[0]
}
taperStatus := shared.TapperStatus{TapperName: pod.Name, NodeName: nodeName, Status: event.Reason}
tapperSyncer.TapperStatusChangedOut <- taperStatus
case err, ok := <-errorChan:
if !ok {
errorChan = nil
continue
}
logger.Log.Errorf("Watching tapper events loop, error: %+v", err)
case <-tapperSyncer.context.Done():
logger.Log.Debugf("Watching tapper events loop, ctx done")
return
}
}
}
func (tapperSyncer *MizuTapperSyncer) watchPodsForTapping() {
added, modified, removed, errorChan := FilteredWatch(tapperSyncer.context, tapperSyncer.kubernetesProvider, tapperSyncer.config.TargetNamespaces, &tapperSyncer.config.PodFilterRegex)
podWatchHelper := NewPodWatchHelper(tapperSyncer.kubernetesProvider, &tapperSyncer.config.PodFilterRegex)
eventChan, errorChan := FilteredWatch(tapperSyncer.context, podWatchHelper, tapperSyncer.config.TargetNamespaces, podWatchHelper)
restartTappers := func() {
err, changeFound := tapperSyncer.updateCurrentlyTappedPods()
@@ -92,37 +163,40 @@ func (tapperSyncer *MizuTapperSyncer) watchPodsForTapping() {
for {
select {
case pod, ok := <-added:
case wEvent, ok := <-eventChan:
if !ok {
added = nil
eventChan = nil
continue
}
logger.Log.Debugf("Added matching pod %s, ns: %s", pod.Name, pod.Namespace)
restartTappersDebouncer.SetOn()
case pod, ok := <-removed:
if !ok {
removed = nil
pod, err := wEvent.ToPod()
if err != nil {
tapperSyncer.handleErrorInWatchLoop(err, restartTappersDebouncer)
continue
}
logger.Log.Debugf("Removed matching pod %s, ns: %s", pod.Name, pod.Namespace)
restartTappersDebouncer.SetOn()
case pod, ok := <-modified:
if !ok {
modified = nil
continue
}
logger.Log.Debugf("Modified matching pod %s, ns: %s, phase: %s, ip: %s", pod.Name, pod.Namespace, pod.Status.Phase, pod.Status.PodIP)
// Act only if the modified pod has already obtained an IP address.
// After filtering for IPs, on a normal pod restart this includes the following events:
// - Pod deletion
// - Pod reaches start state
// - Pod reaches ready state
// Ready/unready transitions might also trigger this event.
if pod.Status.PodIP != "" {
switch wEvent.Type {
case EventAdded:
logger.Log.Debugf("Added matching pod %s, ns: %s", pod.Name, pod.Namespace)
restartTappersDebouncer.SetOn()
case EventDeleted:
logger.Log.Debugf("Removed matching pod %s, ns: %s", pod.Name, pod.Namespace)
restartTappersDebouncer.SetOn()
case EventModified:
logger.Log.Debugf("Modified matching pod %s, ns: %s, phase: %s, ip: %s", pod.Name, pod.Namespace, pod.Status.Phase, pod.Status.PodIP)
// Act only if the modified pod has already obtained an IP address.
// After filtering for IPs, on a normal pod restart this includes the following events:
// - Pod deletion
// - Pod reaches start state
// - Pod reaches ready state
// Ready/unready transitions might also trigger this event.
if pod.Status.PodIP != "" {
restartTappersDebouncer.SetOn()
}
case EventBookmark:
break
case EventError:
break
}
case err, ok := <-errorChan:
if !ok {
@@ -130,12 +204,8 @@ func (tapperSyncer *MizuTapperSyncer) watchPodsForTapping() {
continue
}
logger.Log.Debugf("Watching pods loop, got error %v, stopping `restart tappers debouncer`", err)
restartTappersDebouncer.Cancel()
tapperSyncer.ErrorOut <- K8sTapManagerError{
OriginalError: err,
TapManagerReason: TapManagerPodWatchError,
}
tapperSyncer.handleErrorInWatchLoop(err, restartTappersDebouncer)
continue
case <-tapperSyncer.context.Done():
logger.Log.Debugf("Watching pods loop, context done, stopping `restart tappers debouncer`")
@@ -146,6 +216,15 @@ func (tapperSyncer *MizuTapperSyncer) watchPodsForTapping() {
}
}
func (tapperSyncer *MizuTapperSyncer) handleErrorInWatchLoop(err error, restartTappersDebouncer *debounce.Debouncer) {
logger.Log.Debugf("Watching pods loop, got error %v, stopping `restart tappers debouncer`", err)
restartTappersDebouncer.Cancel()
tapperSyncer.ErrorOut <- K8sTapManagerError{
OriginalError: err,
TapManagerReason: TapManagerPodWatchError,
}
}
func (tapperSyncer *MizuTapperSyncer) updateCurrentlyTappedPods() (err error, changesFound bool) {
if matchingPods, err := tapperSyncer.kubernetesProvider.ListAllRunningPodsMatchingRegex(tapperSyncer.context, &tapperSyncer.config.PodFilterRegex, tapperSyncer.config.TargetNamespaces); err != nil {
return err, false
@@ -160,9 +239,11 @@ func (tapperSyncer *MizuTapperSyncer) updateCurrentlyTappedPods() (err error, ch
}
if len(addedPods) > 0 || len(removedPods) > 0 {
tapperSyncer.CurrentlyTappedPods = podsToTap
tapperSyncer.nodeToTappedPodIPMap = GetNodeHostToTappedPodIpsMap(tapperSyncer.CurrentlyTappedPods)
tapperSyncer.TapPodChangesOut <- TappedPodChangeEvent{
Added: addedPods,
Removed: removedPods,
Added: addedPods,
Removed: removedPods,
ExpectedTapperAmount: len(tapperSyncer.nodeToTappedPodIPMap),
}
return nil, true
}
@@ -171,9 +252,7 @@ func (tapperSyncer *MizuTapperSyncer) updateCurrentlyTappedPods() (err error, ch
}
func (tapperSyncer *MizuTapperSyncer) updateMizuTappers() error {
nodeToTappedPodIPMap := GetNodeHostToTappedPodIpsMap(tapperSyncer.CurrentlyTappedPods)
if len(nodeToTappedPodIPMap) > 0 {
if len(tapperSyncer.nodeToTappedPodIPMap) > 0 {
var serviceAccountName string
if tapperSyncer.config.MizuServiceAccountExists {
serviceAccountName = ServiceAccountName
@@ -188,16 +267,17 @@ func (tapperSyncer *MizuTapperSyncer) updateMizuTappers() error {
tapperSyncer.config.AgentImage,
TapperPodName,
fmt.Sprintf("%s.%s.svc.cluster.local", ApiServerPodName, tapperSyncer.config.MizuResourcesNamespace),
nodeToTappedPodIPMap,
tapperSyncer.nodeToTappedPodIPMap,
serviceAccountName,
tapperSyncer.config.TapperResources,
tapperSyncer.config.ImagePullPolicy,
tapperSyncer.config.MizuApiFilteringOptions,
tapperSyncer.config.LogLevel,
tapperSyncer.config.Istio,
); err != nil {
return err
}
logger.Log.Debugf("Successfully created %v tappers", len(nodeToTappedPodIPMap))
logger.Log.Debugf("Successfully created %v tappers", len(tapperSyncer.nodeToTappedPodIPMap))
} else {
if err := tapperSyncer.kubernetesProvider.RemoveDaemonSet(tapperSyncer.context, tapperSyncer.config.MizuResourcesNamespace, TapperDaemonSetName); err != nil {
return err

View File

@@ -0,0 +1,45 @@
package kubernetes
import (
"context"
"regexp"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch"
)
type PodWatchHelper struct {
kubernetesProvider *Provider
NameRegexFilter *regexp.Regexp
}
func NewPodWatchHelper(kubernetesProvider *Provider, NameRegexFilter *regexp.Regexp) *PodWatchHelper {
return &PodWatchHelper{
kubernetesProvider: kubernetesProvider,
NameRegexFilter: NameRegexFilter,
}
}
// Implements the EventFilterer Interface
func (wh *PodWatchHelper) Filter(wEvent *WatchEvent) (bool, error) {
pod, err := wEvent.ToPod()
if err != nil {
return false, nil
}
if !wh.NameRegexFilter.MatchString(pod.Name) {
return false, nil
}
return true, nil
}
// Implements the WatchCreator Interface
func (wh *PodWatchHelper) NewWatcher(ctx context.Context, namespace string) (watch.Interface, error) {
watcher, err := wh.kubernetesProvider.clientSet.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{Watch: true})
if err != nil {
return nil, err
}
return watcher, nil
}

View File

@@ -6,12 +6,16 @@ import (
"encoding/json"
"errors"
"fmt"
"io"
"net/url"
"path/filepath"
"regexp"
"github.com/op/go-logging"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/logger"
"github.com/up9inc/mizu/shared/semver"
"github.com/up9inc/mizu/tap/api"
"io"
v1 "k8s.io/api/apps/v1"
core "k8s.io/api/core/v1"
rbac "k8s.io/api/rbac/v1"
@@ -32,9 +36,6 @@ import (
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
watchtools "k8s.io/client-go/tools/watch"
"net/url"
"path/filepath"
"regexp"
)
type Provider struct {
@@ -153,14 +154,6 @@ func (provider *Provider) WaitUtilNamespaceDeleted(ctx context.Context, name str
return err
}
func (provider *Provider) GetPodWatcher(ctx context.Context, namespace string) watch.Interface {
watcher, err := provider.clientSet.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{Watch: true})
if err != nil {
panic(err.Error())
}
return watcher
}
func (provider *Provider) CreateNamespace(ctx context.Context, name string) (*core.Namespace, error) {
namespaceSpec := &core.Namespace{
ObjectMeta: metav1.ObjectMeta{
@@ -358,8 +351,8 @@ func (provider *Provider) CreateService(ctx context.Context, namespace string, s
}
func (provider *Provider) DoesServicesExist(ctx context.Context, namespace string, name string) (bool, error) {
resource, err := provider.clientSet.CoreV1().Services(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(resource, err)
serviceResource, err := provider.clientSet.CoreV1().Services(namespace).Get(ctx, name, metav1.GetOptions{})
return provider.doesResourceExist(serviceResource, err)
}
func (provider *Provider) doesResourceExist(resource interface{}, err error) (bool, error) {
@@ -493,6 +486,11 @@ func (provider *Provider) CreateDaemonsetRBAC(ctx context.Context, namespace str
Resources: []string{"daemonsets"},
Verbs: []string{"patch", "get", "list", "create", "delete"},
},
{
APIGroups: []string{"events.k8s.io"},
Resources: []string{"events"},
Verbs: []string{"list", "watch"},
},
},
}
roleBinding := &rbac.RoleBinding{
@@ -579,6 +577,11 @@ func (provider *Provider) RemoveDaemonSet(ctx context.Context, namespace string,
return provider.handleRemovalError(err)
}
func (provider *Provider) RemovePersistentVolumeClaim(ctx context.Context, namespace string, volumeClaimName string) error {
err := provider.clientSet.CoreV1().PersistentVolumeClaims(namespace).Delete(ctx, volumeClaimName, metav1.DeleteOptions{})
return provider.handleRemovalError(err)
}
func (provider *Provider) handleRemovalError(err error) error {
// Ignore NotFound - There is nothing to delete.
// Ignore Forbidden - Assume that a user could not have created the resource in the first place.
@@ -615,7 +618,7 @@ func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string,
return nil
}
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeToTappedPodIPMap map[string][]string, serviceAccountName string, resources shared.Resources, imagePullPolicy core.PullPolicy, mizuApiFilteringOptions api.TrafficFilteringOptions, logLevel logging.Level) error {
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeToTappedPodIPMap map[string][]string, serviceAccountName string, resources shared.Resources, imagePullPolicy core.PullPolicy, mizuApiFilteringOptions api.TrafficFilteringOptions, logLevel logging.Level, istio bool) error {
logger.Log.Debugf("Applying %d tapper daemon sets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeToTappedPodIPMap), namespace, daemonSetName, podImage, tapperPodName)
if len(nodeToTappedPodIPMap) == 0 {
@@ -638,14 +641,27 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
"--tap",
"--api-server-address", fmt.Sprintf("ws://%s/wsTapper", apiServerPodIp),
"--nodefrag",
"--procfs", procfsMountPath,
}
if istio {
mizuCmd = append(mizuCmd, "--procfs", procfsMountPath, "--istio")
}
agentContainer := applyconfcore.Container()
agentContainer.WithName(tapperPodName)
agentContainer.WithImage(podImage)
agentContainer.WithImagePullPolicy(imagePullPolicy)
agentContainer.WithSecurityContext(applyconfcore.SecurityContext().WithPrivileged(true))
caps := applyconfcore.Capabilities().WithDrop("ALL").WithAdd("NET_RAW").WithAdd("NET_ADMIN")
if istio {
caps = caps.WithAdd("SYS_ADMIN") // for reading /proc/PID/net/ns
caps = caps.WithAdd("SYS_PTRACE") // for setting netns to other process
caps = caps.WithAdd("DAC_OVERRIDE") // for reading /proc/PID/environ
}
agentContainer.WithSecurityContext(applyconfcore.SecurityContext().WithCapabilities(caps))
agentContainer.WithCommand(mizuCmd...)
agentContainer.WithEnv(
applyconfcore.EnvVar().WithName(shared.LogLevelEnvVar).WithValue(logLevel.String()),
@@ -764,10 +780,10 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
return err
}
func (provider *Provider) ListAllPodsMatchingRegex(ctx context.Context, regex *regexp.Regexp, namespaces []string) ([]core.Pod, error) {
func (provider *Provider) listPodsImpl(ctx context.Context, regex *regexp.Regexp, namespaces []string, listOptions metav1.ListOptions) ([]core.Pod, error) {
var pods []core.Pod
for _, namespace := range namespaces {
namespacePods, err := provider.clientSet.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
namespacePods, err := provider.clientSet.CoreV1().Pods(namespace).List(ctx, listOptions)
if err != nil {
return nil, fmt.Errorf("failed to get pods in ns: [%s], %w", namespace, err)
}
@@ -784,6 +800,14 @@ func (provider *Provider) ListAllPodsMatchingRegex(ctx context.Context, regex *r
return matchingPods, nil
}
func (provider *Provider) ListAllPodsMatchingRegex(ctx context.Context, regex *regexp.Regexp, namespaces []string) ([]core.Pod, error) {
return provider.listPodsImpl(ctx, regex, namespaces, metav1.ListOptions{})
}
func (provider *Provider) GetPod(ctx context.Context, namespaces string, podName string) (*core.Pod, error) {
return provider.clientSet.CoreV1().Pods(namespaces).Get(ctx, podName, metav1.GetOptions{})
}
func (provider *Provider) ListAllRunningPodsMatchingRegex(ctx context.Context, regex *regexp.Regexp, namespaces []string) ([]core.Pod, error) {
pods, err := provider.ListAllPodsMatchingRegex(ctx, regex, namespaces)
if err != nil {
@@ -859,10 +883,6 @@ func (provider *Provider) CreatePersistentVolumeClaim(ctx context.Context, names
return provider.clientSet.CoreV1().PersistentVolumeClaims(namespace).Create(ctx, volumeClaim, metav1.CreateOptions{})
}
func (provider *Provider) RemovePersistentVolumeClaim(ctx context.Context, namespace string, volumeClaimName string) error {
return provider.clientSet.CoreV1().PersistentVolumeClaims(namespace).Delete(ctx, volumeClaimName, metav1.DeleteOptions{})
}
func getClientSet(config *restclient.Config) (*kubernetes.Clientset, error) {
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {

View File

@@ -57,11 +57,10 @@ func getMissingPods(pods1 []core.Pod, pods2 []core.Pod) []core.Pod {
return missingPods
}
func GetPodInfosForPods(pods []core.Pod) []shared.PodInfo {
podInfos := make([]shared.PodInfo, 0)
for _, pod := range pods {
podInfos = append(podInfos, shared.PodInfo{Name: pod.Name, Namespace: pod.Namespace})
podInfos = append(podInfos, shared.PodInfo{Name: pod.Name, Namespace: pod.Namespace, NodeName: pod.Spec.NodeName})
}
return podInfos
}

View File

@@ -6,19 +6,22 @@ import (
"fmt"
"github.com/up9inc/mizu/shared/debounce"
"github.com/up9inc/mizu/shared/logger"
"regexp"
"sync"
"time"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/watch"
)
func FilteredWatch(ctx context.Context, kubernetesProvider *Provider, targetNamespaces []string, podFilter *regexp.Regexp) (chan *corev1.Pod, chan *corev1.Pod, chan *corev1.Pod, chan error) {
addedChan := make(chan *corev1.Pod)
modifiedChan := make(chan *corev1.Pod)
removedChan := make(chan *corev1.Pod)
type EventFilterer interface {
Filter(*WatchEvent) (bool, error)
}
type WatchCreator interface {
NewWatcher(ctx context.Context, namespace string) (watch.Interface, error)
}
func FilteredWatch(ctx context.Context, watcherCreator WatchCreator, targetNamespaces []string, filterer EventFilterer) (<-chan *WatchEvent, <-chan error) {
eventChan := make(chan *WatchEvent)
errorChan := make(chan error)
var wg sync.WaitGroup
@@ -31,8 +34,13 @@ func FilteredWatch(ctx context.Context, kubernetesProvider *Provider, targetName
watchRestartDebouncer := debounce.NewDebouncer(1 * time.Minute, func() {})
for {
watcher := kubernetesProvider.GetPodWatcher(ctx, targetNamespace)
err := startWatchLoop(ctx, watcher, podFilter, addedChan, modifiedChan, removedChan) // blocking
watcher, err := watcherCreator.NewWatcher(ctx, targetNamespace)
if err != nil {
errorChan <- fmt.Errorf("error in k8s watch: %v", err)
break
}
err = startWatchLoop(ctx, watcher, filterer, eventChan) // blocking
watcher.Stop()
select {
@@ -43,7 +51,7 @@ func FilteredWatch(ctx context.Context, kubernetesProvider *Provider, targetName
}
if err != nil {
errorChan <- fmt.Errorf("error in k8 watch: %v", err)
errorChan <- fmt.Errorf("error in k8s watch: %v", err)
break
} else {
if !watchRestartDebouncer.IsOn() {
@@ -63,16 +71,14 @@ func FilteredWatch(ctx context.Context, kubernetesProvider *Provider, targetName
go func() {
<-ctx.Done()
wg.Wait()
close(addedChan)
close(modifiedChan)
close(removedChan)
close(eventChan)
close(errorChan)
}()
return addedChan, modifiedChan, removedChan, errorChan
return eventChan, errorChan
}
func startWatchLoop(ctx context.Context, watcher watch.Interface, podFilter *regexp.Regexp, addedChan chan *corev1.Pod, modifiedChan chan *corev1.Pod, removedChan chan *corev1.Pod) error {
func startWatchLoop(ctx context.Context, watcher watch.Interface, filterer EventFilterer, eventChan chan<- *WatchEvent) error {
resultChan := watcher.ResultChan()
for {
select {
@@ -81,27 +87,19 @@ func startWatchLoop(ctx context.Context, watcher watch.Interface, podFilter *reg
return nil
}
if e.Type == watch.Error {
return apierrors.FromObject(e.Object)
wEvent := WatchEvent(e)
if wEvent.Type == watch.Error {
return wEvent.ToError()
}
pod, ok := e.Object.(*corev1.Pod)
if !ok {
if pass, err := filterer.Filter(&wEvent); err != nil {
return err
} else if !pass {
continue
}
if !podFilter.MatchString(pod.Name) {
continue
}
switch e.Type {
case watch.Added:
addedChan <- pod
case watch.Modified:
modifiedChan <- pod
case watch.Deleted:
removedChan <- pod
}
eventChan <- &wEvent
case <-ctx.Done():
return nil
}

View File

@@ -0,0 +1,52 @@
package kubernetes
import (
"fmt"
"reflect"
corev1 "k8s.io/api/core/v1"
eventsv1 "k8s.io/api/events/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/watch"
)
const (
EventAdded watch.EventType = watch.Added
EventModified watch.EventType = watch.Modified
EventDeleted watch.EventType = watch.Deleted
EventBookmark watch.EventType = watch.Bookmark
EventError watch.EventType = watch.Error
)
type InvalidObjectType struct {
RequestedType reflect.Type
}
// Implements the error interface
func (iot *InvalidObjectType) Error() string {
return fmt.Sprintf("Cannot convert event to type %s", iot.RequestedType)
}
type WatchEvent watch.Event
func (we *WatchEvent) ToPod() (*corev1.Pod, error) {
pod, ok := we.Object.(*corev1.Pod)
if !ok {
return nil, &InvalidObjectType{RequestedType: reflect.TypeOf(pod)}
}
return pod, nil
}
func (we *WatchEvent) ToEvent() (*eventsv1.Event, error) {
event, ok := we.Object.(*eventsv1.Event)
if !ok {
return nil, &InvalidObjectType{RequestedType: reflect.TypeOf(event)}
}
return event, nil
}
func (we *WatchEvent) ToError() error {
return apierrors.FromObject(we.Object)
}
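For orientation, here is a minimal sketch of how the WatchCreator, EventFilterer and WatchEvent pieces above are meant to fit together. The filterer type and the watchPods wrapper are illustrative assumptions, not Mizu's actual call sites:

package kubernetes

import (
	"context"
	"regexp"
)

// Hypothetical filterer: passes only Pod events whose name matches a regex.
type regexPodFilterer struct {
	re *regexp.Regexp
}

func (f *regexPodFilterer) Filter(e *WatchEvent) (bool, error) {
	pod, err := e.ToPod()
	if err != nil {
		return false, nil // not a Pod object; drop the event instead of failing
	}
	return f.re.MatchString(pod.Name), nil
}

// Illustrative consumer draining the single event channel returned by FilteredWatch.
func watchPods(ctx context.Context, creator WatchCreator, namespaces []string, re *regexp.Regexp) {
	eventChan, errChan := FilteredWatch(ctx, creator, namespaces, &regexPodFilterer{re: re})
	for {
		select {
		case ev, ok := <-eventChan:
			if !ok {
				return
			}
			_ = ev // dispatch on ev.Type: EventAdded, EventModified, EventDeleted
		case err, ok := <-errChan:
			if !ok {
				return
			}
			_ = err // log and decide whether to keep watching
		}
	}
}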

View File

@@ -9,7 +9,7 @@ import (
var Log = logging.MustGetLogger("mizu")
var format = logging.MustStringFormatter(
`%{time} %{level:.5s} ▶ %{pid} %{shortfile} %{shortfunc} ▶ %{message}`,
`[%{time:2006-01-02T15:04:05.000-0700}] %{level:-5s} ▶ %{message} ▶ [%{pid} %{shortfile} %{shortfunc}]`,
)
func InitLogger(logPath string) {

View File

@@ -2,9 +2,9 @@ package shared
import (
"github.com/op/go-logging"
"github.com/up9inc/mizu/shared/logger"
"github.com/up9inc/mizu/tap/api"
"io/ioutil"
"log"
"strings"
"gopkg.in/yaml.v3"
@@ -43,6 +43,7 @@ type MizuAgentConfig struct {
MizuResourcesNamespace string `json:"mizuResourceNamespace"`
MizuApiFilteringOptions api.TrafficFilteringOptions `json:"mizuApiFilteringOptions"`
AgentDatabasePath string `json:"agentDatabasePath"`
Istio bool `json:"istio"`
}
type WebSocketMessageMetadata struct {
@@ -66,6 +67,12 @@ type WebSocketStatusMessage struct {
TappingStatus TapStatus `json:"tappingStatus"`
}
type TapperStatus struct {
TapperName string `json:"tapperName"`
NodeName string `json:"nodeName"`
Status string `json:"status"`
}
type TapStatus struct {
Pods []PodInfo `json:"pods"`
TLSLinks []TLSLinkInfo `json:"tlsLinks"`
@@ -74,6 +81,7 @@ type TapStatus struct {
type PodInfo struct {
Namespace string `json:"namespace"`
Name string `json:"name"`
NodeName string `json:"nodeName"`
}
type TLSLinkInfo struct {
@@ -109,8 +117,9 @@ func CreateWebSocketMessageTypeAnalyzeStatus(analyzeStatus AnalyzeStatus) WebSoc
}
type HealthResponse struct {
TapStatus TapStatus `json:"tapStatus"`
TappersCount int `json:"tappersCount"`
TapStatus TapStatus `json:"tapStatus"`
TappersCount int `json:"tappersCount"`
TappersStatus []TapperStatus `json:"tappersStatus"`
}
type VersionResponse struct {
@@ -141,14 +150,12 @@ func (r *RulePolicy) validateType() bool {
permitedTypes := []string{"json", "header", "slo"}
_, found := Find(permitedTypes, r.Type)
if !found {
log.Printf("Error: %s. ", r.Name)
log.Printf("Only json, header and slo types are supported on rule definition. This rule will be ignored\n")
logger.Log.Errorf("Only json, header and slo types are supported on rule definition. This rule will be ignored. rule name: %s", r.Name)
found = false
}
if strings.ToLower(r.Type) == "slo" {
if r.ResponseTime <= 0 {
log.Printf("Error: %s. ", r.Name)
log.Printf("When type=slo, the field response-time should be specified and have a value >= 1\n\n")
logger.Log.Errorf("When rule type is slo, the field response-time should be specified and have a value >= 1. rule name: %s", r.Name)
found = false
}
}

View File

@@ -99,7 +99,7 @@ type Dissector interface {
Dissect(b *bufio.Reader, isClient bool, tcpID *TcpID, counterPair *CounterPair, superTimer *SuperTimer, superIdentifier *SuperIdentifier, emitter Emitter, options *TrafficFilteringOptions) error
Analyze(item *OutputChannelItem, resolvedSource string, resolvedDestination string) *MizuEntry
Summarize(entry *MizuEntry) *BaseEntryDetails
Represent(pIn Protocol, request map[string]interface{}, response map[string]interface{}) (pOut Protocol, object []byte, bodySize int64, err error)
Represent(request map[string]interface{}, response map[string]interface{}) (object []byte, bodySize int64, err error)
Macros() map[string]string
}
@@ -129,19 +129,10 @@ type MizuEntry struct {
Response map[string]interface{} `json:"response"`
Base *BaseEntryDetails `json:"base"`
Summary string `json:"summary"`
Url string `json:"url"`
Method string `json:"method"`
Status int `json:"status"`
RequestSenderIp string `json:"requestSenderIp"`
Service string `json:"service"`
ElapsedTime int64 `json:"elapsedTime"`
Path string `json:"path"`
ResolvedSource string `json:"resolvedSource,omitempty"`
ResolvedDestination string `json:"resolvedDestination,omitempty"`
SourceIp string `json:"sourceIp,omitempty"`
DestinationIp string `json:"destinationIp,omitempty"`
SourcePort string `json:"sourcePort,omitempty"`
DestinationPort string `json:"destinationPort,omitempty"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
ContractStatus ContractStatus `json:"contractStatus,omitempty"`
ContractRequestReason string `json:"contractRequestReason,omitempty"`
@@ -160,24 +151,20 @@ type MizuEntryWrapper struct {
}
type BaseEntryDetails struct {
Id uint `json:"id"`
Protocol Protocol `json:"protocol,omitempty"`
Url string `json:"url,omitempty"`
RequestSenderIp string `json:"requestSenderIp,omitempty"`
Service string `json:"service,omitempty"`
Path string `json:"path,omitempty"`
Summary string `json:"summary,omitempty"`
StatusCode int `json:"statusCode"`
Method string `json:"method,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
SourceIp string `json:"sourceIp,omitempty"`
DestinationIp string `json:"destinationIp,omitempty"`
SourcePort string `json:"sourcePort,omitempty"`
DestinationPort string `json:"destinationPort,omitempty"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency"`
Rules ApplicableRules `json:"rules,omitempty"`
ContractStatus ContractStatus `json:"contractStatus"`
Id uint `json:"id"`
Protocol Protocol `json:"protocol,omitempty"`
Url string `json:"url,omitempty"`
Path string `json:"path,omitempty"`
Summary string `json:"summary,omitempty"`
StatusCode int `json:"statusCode"`
Method string `json:"method,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
Source *TCP `json:"src"`
Destination *TCP `json:"dst"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency"`
Rules ApplicableRules `json:"rules,omitempty"`
ContractStatus ContractStatus `json:"contractStatus"`
}
type ApplicableRules struct {
@@ -202,18 +189,13 @@ type DataUnmarshaler interface {
func (bed *BaseEntryDetails) UnmarshalData(entry *MizuEntry) error {
bed.Protocol = entry.Protocol
bed.Id = entry.Id
bed.Url = entry.Url
bed.RequestSenderIp = entry.RequestSenderIp
bed.Service = entry.Service
bed.Path = entry.Path
bed.Summary = entry.Path
bed.Summary = entry.Summary
bed.StatusCode = entry.Status
bed.Method = entry.Method
bed.Timestamp = entry.Timestamp
bed.SourceIp = entry.SourceIp
bed.DestinationIp = entry.DestinationIp
bed.SourcePort = entry.SourcePort
bed.DestinationPort = entry.DestinationPort
bed.Source = entry.Source
bed.Destination = entry.Destination
bed.IsOutgoing = entry.IsOutgoing
bed.Latency = entry.ElapsedTime
bed.ContractStatus = entry.ContractStatus
@@ -271,7 +253,6 @@ func (h HTTPPayload) MarshalJSON() ([]byte, error) {
}
return json.Marshal(&HTTPWrapper{
Method: harRequest.Method,
Url: "",
Details: harRequest,
RawRequest: &HTTPRequestWrapper{Request: h.Data.(*http.Request)},
})
@@ -287,7 +268,7 @@ func (h HTTPPayload) MarshalJSON() ([]byte, error) {
RawResponse: &HTTPResponseWrapper{Response: h.Data.(*http.Response)},
})
default:
panic(fmt.Sprintf("HTTP payload cannot be marshaled: %s\n", h.Type))
panic(fmt.Sprintf("HTTP payload cannot be marshaled: %s", h.Type))
}
}

View File

@@ -39,7 +39,7 @@ func StartMemoryProfiler(envDumpPath string, envTimeInterval string) {
filename := fmt.Sprintf("%s/%s__mem.prof", dumpPath, t.Format("15_04_05"))
logger.Log.Infof("Writing memory profile to %s\n", filename)
logger.Log.Infof("Writing memory profile to %s", filename)
f, err := os.Create(filename)
if err != nil {

View File

@@ -37,7 +37,7 @@ func (d dissecting) Register(extension *api.Extension) {
}
func (d dissecting) Ping() {
log.Printf("pong %s\n", protocol.Name)
log.Printf("pong %s", protocol.Name)
}
const amqpRequest string = "amqp_request"
@@ -218,7 +218,7 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
}
default:
// log.Printf("unexpected frame: %+v\n", f)
// log.Printf("unexpected frame: %+v", f)
}
}
}
@@ -226,12 +226,6 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string) *api.MizuEntry {
request := item.Pair.Request.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
service := "amqp"
if resolvedDestination != "" {
service = resolvedDestination
} else if resolvedSource != "" {
service = resolvedSource
}
summary := ""
switch request["method"] {
@@ -279,45 +273,31 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
IP: item.ConnectionInfo.ServerIP,
Port: item.ConnectionInfo.ServerPort,
},
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Url: fmt.Sprintf("%s%s", service, summary),
Method: request["method"].(string),
Status: 0,
RequestSenderIp: item.ConnectionInfo.ClientIP,
Service: service,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: 0,
Summary: summary,
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
SourceIp: item.ConnectionInfo.ClientIP,
DestinationIp: item.ConnectionInfo.ServerIP,
SourcePort: item.ConnectionInfo.ClientPort,
DestinationPort: item.ConnectionInfo.ServerPort,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Method: request["method"].(string),
Status: 0,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: 0,
Summary: summary,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
}
}
func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
return &api.BaseEntryDetails{
Id: entry.Id,
Protocol: protocol,
Url: entry.Url,
RequestSenderIp: entry.RequestSenderIp,
Service: entry.Service,
Summary: entry.Summary,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
SourceIp: entry.SourceIp,
DestinationIp: entry.DestinationIp,
SourcePort: entry.SourcePort,
DestinationPort: entry.DestinationPort,
IsOutgoing: entry.IsOutgoing,
Latency: entry.ElapsedTime,
Id: entry.Id,
Protocol: protocol,
Summary: entry.Summary,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
Source: entry.Source,
Destination: entry.Destination,
IsOutgoing: entry.IsOutgoing,
Latency: entry.ElapsedTime,
Rules: api.ApplicableRules{
Latency: 0,
Status: false,
@@ -325,8 +305,7 @@ func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
}
}
func (d dissecting) Represent(protoIn api.Protocol, request map[string]interface{}, response map[string]interface{}) (protoOut api.Protocol, object []byte, bodySize int64, err error) {
protoOut = protocol
func (d dissecting) Represent(request map[string]interface{}, response map[string]interface{}) (object []byte, bodySize int64, err error) {
bodySize = 0
representation := make(map[string]interface{}, 0)
var repRequest []interface{}
@@ -363,7 +342,7 @@ func (d dissecting) Represent(protoIn api.Protocol, request map[string]interface
func (d dissecting) Macros() map[string]string {
return map[string]string{
`amqp`: fmt.Sprintf(`proto.abbr == "%s"`, protocol.Abbreviation),
`amqp`: fmt.Sprintf(`proto.name == "%s"`, protocol.Name),
}
}

View File

@@ -7,6 +7,7 @@ import (
"io"
"io/ioutil"
"net/http"
"strings"
"github.com/up9inc/mizu/tap/api"
)
@@ -23,8 +24,8 @@ func filterAndEmit(item *api.OutputChannelItem, emitter api.Emitter, options *ap
emitter.Emit(item)
}
func handleHTTP2Stream(grpcAssembler *GrpcAssembler, tcpID *api.TcpID, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
streamID, messageHTTP1, err := grpcAssembler.readMessage()
func handleHTTP2Stream(http2Assembler *Http2Assembler, tcpID *api.TcpID, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
streamID, messageHTTP1, isGrpc, err := http2Assembler.readMessage()
if err != nil {
return err
}
@@ -34,12 +35,13 @@ func handleHTTP2Stream(grpcAssembler *GrpcAssembler, tcpID *api.TcpID, superTime
switch messageHTTP1 := messageHTTP1.(type) {
case http.Request:
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
"%s->%s %s->%s %d %s",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
tcpID.DstPort,
streamID,
"HTTP2",
)
item = reqResMatcher.registerRequest(ident, &messageHTTP1, superTimer.CaptureTime)
if item != nil {
@@ -53,12 +55,13 @@ func handleHTTP2Stream(grpcAssembler *GrpcAssembler, tcpID *api.TcpID, superTime
}
case http.Response:
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
"%s->%s %s->%s %d %s",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,
tcpID.SrcPort,
streamID,
"HTTP2",
)
item = reqResMatcher.registerResponse(ident, &messageHTTP1, superTimer.CaptureTime)
if item != nil {
@@ -73,30 +76,41 @@ func handleHTTP2Stream(grpcAssembler *GrpcAssembler, tcpID *api.TcpID, superTime
}
if item != nil {
item.Protocol = http2Protocol
if isGrpc {
item.Protocol = grpcProtocol
} else {
item.Protocol = http2Protocol
}
filterAndEmit(item, emitter, options)
}
return nil
}
func handleHTTP1ClientStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
req, err := http.ReadRequest(b)
func handleHTTP1ClientStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) (switchingProtocolsHTTP2 bool, req *http.Request, err error) {
req, err = http.ReadRequest(b)
if err != nil {
return err
return
}
counterPair.Request++
body, err := ioutil.ReadAll(req.Body)
// Check HTTP2 upgrade - HTTP2 Over Cleartext (H2C)
if strings.Contains(strings.ToLower(req.Header.Get("Connection")), "upgrade") && strings.ToLower(req.Header.Get("Upgrade")) == "h2c" {
switchingProtocolsHTTP2 = true
}
var body []byte
body, err = ioutil.ReadAll(req.Body)
req.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
"%s->%s %s->%s %d %s",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
tcpID.DstPort,
counterPair.Request,
"HTTP1",
)
item := reqResMatcher.registerRequest(ident, req, superTimer.CaptureTime)
if item != nil {
@@ -109,26 +123,34 @@ func handleHTTP1ClientStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api
}
filterAndEmit(item, emitter, options)
}
return nil
return
}
func handleHTTP1ServerStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
res, err := http.ReadResponse(b, nil)
func handleHTTP1ServerStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) (switchingProtocolsHTTP2 bool, err error) {
var res *http.Response
res, err = http.ReadResponse(b, nil)
if err != nil {
return err
return
}
counterPair.Response++
body, err := ioutil.ReadAll(res.Body)
// Check HTTP2 upgrade - HTTP2 Over Cleartext (H2C)
if res.StatusCode == 101 && strings.Contains(strings.ToLower(res.Header.Get("Connection")), "upgrade") && strings.ToLower(res.Header.Get("Upgrade")) == "h2c" {
switchingProtocolsHTTP2 = true
}
var body []byte
body, err = ioutil.ReadAll(res.Body)
res.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
"%s->%s %s->%s %d %s",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,
tcpID.SrcPort,
counterPair.Response,
"HTTP1",
)
item := reqResMatcher.registerResponse(ident, res, superTimer.CaptureTime)
if item != nil {
@@ -141,5 +163,5 @@ func handleHTTP1ServerStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api
}
filterAndEmit(item, emitter, options)
}
return nil
return
}
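For background on the two switchingProtocolsHTTP2 checks above: a cleartext HTTP/2 upgrade (h2c, RFC 7540 section 3.2) is negotiated over HTTP/1.1 roughly as in the illustrative exchange below, and these are exactly the headers the client and server handlers look for before the dissector switches to the HTTP/2 framer.

GET / HTTP/1.1
Host: example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url-encoded SETTINGS payload>

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c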

View File

@@ -10,6 +10,7 @@ import (
"math"
"net/http"
"net/url"
"strconv"
"strings"
"golang.org/x/net/http2"
@@ -27,6 +28,26 @@ const protoMinorHTTP2 = 0
var maxHTTP2DataLen = 1 * 1024 * 1024 // 1MB
var grpcStatusCodes = []string{
"OK",
"CANCELLED",
"UNKNOWN",
"INVALID_ARGUMENT",
"DEADLINE_EXCEEDED",
"NOT_FOUND",
"ALREADY_EXISTS",
"PERMISSION_DENIED",
"RESOURCE_EXHAUSTED",
"FAILED_PRECONDITION",
"ABORTED",
"OUT_OF_RANGE",
"UNIMPLEMENTED",
"INTERNAL",
"UNAVAILABLE",
"DATA_LOSS",
"UNAUTHENTICATED",
}
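This table is indexed by the numeric grpc-status value carried in the response headers or trailers (0 = OK through 16 = UNAUTHENTICATED), which is how the dissector later converts the code into a readable status text. A hypothetical lookup:

// a response carrying "grpc-status: 5" resolves to
statusText := grpcStatusCodes[5] // "NOT_FOUND"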
type messageFragment struct {
headers []hpack.HeaderField
data []byte
@@ -71,37 +92,38 @@ func (fbs *fragmentsByStream) pop(streamID uint32) ([]hpack.HeaderField, []byte)
return headers, data
}
func createGrpcAssembler(b *bufio.Reader) *GrpcAssembler {
func createHTTP2Assembler(b *bufio.Reader) *Http2Assembler {
var framerOutput bytes.Buffer
framer := http2.NewFramer(&framerOutput, b)
framer.ReadMetaHeaders = hpack.NewDecoder(initialHeaderTableSize, nil)
return &GrpcAssembler{
return &Http2Assembler{
fragmentsByStream: make(fragmentsByStream),
framer: framer,
}
}
type GrpcAssembler struct {
type Http2Assembler struct {
fragmentsByStream fragmentsByStream
framer *http2.Framer
}
func (ga *GrpcAssembler) readMessage() (uint32, interface{}, error) {
func (ga *Http2Assembler) readMessage() (streamID uint32, messageHTTP1 interface{}, isGrpc bool, err error) {
// Exactly one Framer is used for each half connection.
// (Instead of creating a new Framer for each ReadFrame operation)
// This is needed in order to decompress the headers,
// because the compression context is updated with each request/response.
frame, err := ga.framer.ReadFrame()
if err != nil {
return 0, nil, err
return
}
streamID := frame.Header().StreamID
streamID = frame.Header().StreamID
ga.fragmentsByStream.appendFrame(streamID, frame)
if !(ga.isStreamEnd(frame)) {
return 0, nil, nil
streamID = 0
return
}
headers, data := ga.fragmentsByStream.pop(streamID)
@@ -115,13 +137,29 @@ func (ga *GrpcAssembler) readMessage() (uint32, interface{}, error) {
dataString := base64.StdEncoding.EncodeToString(data)
// Use http1 types only because they are expected in http_matcher.
// TODO: Create an interface that will be used by http_matcher:registerRequest and http_matcher:registerResponse
// to accept both HTTP/1.x and HTTP/2 requests and responses
var messageHTTP1 interface{}
if _, ok := headersHTTP1[":method"]; ok {
method := headersHTTP1.Get(":method")
status := headersHTTP1.Get(":status")
// gRPC detection
grpcStatus := headersHTTP1.Get("Grpc-Status")
if grpcStatus != "" {
isGrpc = true
status = grpcStatus
}
if strings.Contains(headersHTTP1.Get("Content-Type"), "application/grpc") {
isGrpc = true
grpcPath := headersHTTP1.Get(":path")
pathSegments := strings.Split(grpcPath, "/")
if len(pathSegments) > 0 {
method = pathSegments[len(pathSegments)-1]
}
}
if method != "" {
messageHTTP1 = http.Request{
URL: &url.URL{},
Method: "POST",
Method: method,
Header: headersHTTP1,
Proto: protoHTTP2,
ProtoMajor: protoMajorHTTP2,
@@ -129,8 +167,16 @@ func (ga *GrpcAssembler) readMessage() (uint32, interface{}, error) {
Body: io.NopCloser(strings.NewReader(dataString)),
ContentLength: int64(len(dataString)),
}
} else if _, ok := headersHTTP1[":status"]; ok {
} else if status != "" {
var statusCode int
statusCode, err = strconv.Atoi(status)
if err != nil {
return
}
messageHTTP1 = http.Response{
StatusCode: statusCode,
Header: headersHTTP1,
Proto: protoHTTP2,
ProtoMajor: protoMajorHTTP2,
@@ -139,13 +185,14 @@ func (ga *GrpcAssembler) readMessage() (uint32, interface{}, error) {
ContentLength: int64(len(dataString)),
}
} else {
return 0, nil, errors.New("failed to assemble stream: neither a request nor a message")
err = errors.New("failed to assemble stream: neither a request nor a message")
return
}
return streamID, messageHTTP1, nil
return
}
func (ga *GrpcAssembler) isStreamEnd(frame http2.Frame) bool {
func (ga *Http2Assembler) isStreamEnd(frame http2.Frame) bool {
switch frame := frame.(type) {
case *http2.MetaHeadersFrame:
if frame.StreamEnded() {

View File

@@ -7,6 +7,7 @@ import (
"fmt"
"io"
"log"
"net/http"
"net/url"
"time"
@@ -23,21 +24,35 @@ var protocol api.Protocol = api.Protocol{
ForegroundColor: "#ffffff",
FontSize: 12,
ReferenceLink: "https://datatracker.ietf.org/doc/html/rfc2616",
Ports: []string{"80", "8080", "50051"},
Ports: []string{"80", "443", "8080"},
Priority: 0,
}
var http2Protocol api.Protocol = api.Protocol{
Name: "http",
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2) (gRPC)",
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2)",
Abbreviation: "HTTP/2",
Macro: "grpc",
Macro: "http2",
Version: "2.0",
BackgroundColor: "#244c5a",
ForegroundColor: "#ffffff",
FontSize: 11,
ReferenceLink: "https://datatracker.ietf.org/doc/html/rfc7540",
Ports: []string{"80", "8080"},
Ports: []string{"80", "443", "8080"},
Priority: 0,
}
var grpcProtocol api.Protocol = api.Protocol{
Name: "http",
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2) [ gRPC over HTTP/2 ]",
Abbreviation: "gRPC",
Macro: "grpc",
Version: "2.0",
BackgroundColor: "#244c5a",
ForegroundColor: "#ffffff",
FontSize: 11,
ReferenceLink: "https://grpc.github.io/grpc/core/md_doc_statuscodes.html",
Ports: []string{"80", "443", "8080", "50051"},
Priority: 0,
}
@@ -58,26 +73,34 @@ func (d dissecting) Register(extension *api.Extension) {
}
func (d dissecting) Ping() {
log.Printf("pong %s\n", protocol.Name)
log.Printf("pong %s", protocol.Name)
}
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
isHTTP2, err := checkIsHTTP2Connection(b, isClient)
var grpcAssembler *GrpcAssembler
var http2Assembler *Http2Assembler
if isHTTP2 {
prepareHTTP2Connection(b, isClient)
grpcAssembler = createGrpcAssembler(b)
http2Assembler = createHTTP2Assembler(b)
}
dissected := false
switchingProtocolsHTTP2 := false
for {
if switchingProtocolsHTTP2 {
switchingProtocolsHTTP2 = false
isHTTP2, err = checkIsHTTP2Connection(b, isClient)
prepareHTTP2Connection(b, isClient)
http2Assembler = createHTTP2Assembler(b)
}
if superIdentifier.Protocol != nil && superIdentifier.Protocol != &protocol {
return errors.New("Identified by another protocol")
}
if isHTTP2 {
err = handleHTTP2Stream(grpcAssembler, tcpID, superTimer, emitter, options)
err = handleHTTP2Stream(http2Assembler, tcpID, superTimer, emitter, options)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
@@ -85,15 +108,39 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
}
dissected = true
} else if isClient {
err = handleHTTP1ClientStream(b, tcpID, counterPair, superTimer, emitter, options)
var req *http.Request
switchingProtocolsHTTP2, req, err = handleHTTP1ClientStream(b, tcpID, counterPair, superTimer, emitter, options)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
continue
}
dissected = true
// In case of an HTTP2 upgrade, duplicate the HTTP1 request into HTTP2 with stream ID 1
if switchingProtocolsHTTP2 {
ident := fmt.Sprintf(
"%s->%s %s->%s 1 %s",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
tcpID.DstPort,
"HTTP2",
)
item := reqResMatcher.registerRequest(ident, req, superTimer.CaptureTime)
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: tcpID.SrcIP,
ClientPort: tcpID.SrcPort,
ServerIP: tcpID.DstIP,
ServerPort: tcpID.DstPort,
IsOutgoing: true,
}
filterAndEmit(item, emitter, options)
}
}
} else {
err = handleHTTP1ServerStream(b, tcpID, counterPair, superTimer, emitter, options)
switchingProtocolsHTTP2, err = handleHTTP1ServerStream(b, tcpID, counterPair, superTimer, emitter, options)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
@@ -110,23 +157,16 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
return nil
}
func SetHostname(address, newHostname string) string {
replacedUrl, err := url.Parse(address)
if err != nil {
return address
}
replacedUrl.Host = newHostname
return replacedUrl.String()
}
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string) *api.MizuEntry {
var host, scheme, authority, path, service string
var host, authority, path string
request := item.Pair.Request.Payload.(map[string]interface{})
response := item.Pair.Response.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
resDetails := response["details"].(map[string]interface{})
isRequestUpgradedH2C := false
for _, header := range reqDetails["headers"].([]interface{}) {
h := header.(map[string]interface{})
if h["name"] == "Host" {
@@ -135,18 +175,29 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
if h["name"] == ":authority" {
authority = h["value"].(string)
}
if h["name"] == ":scheme" {
scheme = h["value"].(string)
}
if h["name"] == ":path" {
path = h["value"].(string)
}
if h["name"] == "Upgrade" {
if h["value"].(string) == "h2c" {
isRequestUpgradedH2C = true
}
}
}
if item.Protocol.Version == "2.0" {
service = fmt.Sprintf("%s://%s", scheme, authority)
if resDetails["bodySize"].(float64) < 0 {
resDetails["bodySize"] = 0
}
if item.Protocol.Version == "2.0" && !isRequestUpgradedH2C {
if resolvedDestination == "" {
resolvedDestination = authority
}
if resolvedDestination == "" {
resolvedDestination = host
}
} else {
service = fmt.Sprintf("http://%s", host)
u, err := url.Parse(reqDetails["url"].(string))
if err != nil {
path = reqDetails["url"].(string)
@@ -156,6 +207,7 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
}
request["url"] = reqDetails["url"].(string)
reqDetails["targetUri"] = reqDetails["url"]
reqDetails["path"] = path
reqDetails["summary"] = path
@@ -173,18 +225,24 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
reqDetails["_queryString"] = reqDetails["queryString"]
reqDetails["queryString"] = mapSliceRebuildAsMap(reqDetails["_queryString"].([]interface{}))
if resolvedDestination != "" {
service = SetHostname(service, resolvedDestination)
} else if resolvedSource != "" {
service = SetHostname(service, resolvedSource)
method := reqDetails["method"].(string)
statusCode := int(resDetails["status"].(float64))
if item.Protocol.Abbreviation == "gRPC" {
resDetails["statusText"] = grpcStatusCodes[statusCode]
}
if item.Protocol.Version == "2.0" && !isRequestUpgradedH2C {
reqDetails["url"] = path
request["url"] = path
}
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
if elapsedTime < 0 {
elapsedTime = 0
}
httpPair, _ := json.Marshal(item.Pair)
_protocol := protocol
_protocol.Version = item.Protocol.Version
return &api.MizuEntry{
Protocol: _protocol,
Protocol: item.Protocol,
Source: &api.TCP{
Name: resolvedSource,
IP: item.ConnectionInfo.ClientIP,
@@ -195,53 +253,33 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
IP: item.ConnectionInfo.ServerIP,
Port: item.ConnectionInfo.ServerPort,
},
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: resDetails,
Url: fmt.Sprintf("%s%s", service, path),
Method: reqDetails["method"].(string),
Status: int(resDetails["status"].(float64)),
RequestSenderIp: item.ConnectionInfo.ClientIP,
Service: service,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: elapsedTime,
Summary: path,
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
SourceIp: item.ConnectionInfo.ClientIP,
DestinationIp: item.ConnectionInfo.ServerIP,
SourcePort: item.ConnectionInfo.ClientPort,
DestinationPort: item.ConnectionInfo.ServerPort,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
HTTPPair: string(httpPair),
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: resDetails,
Method: method,
Status: statusCode,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: elapsedTime,
Summary: path,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
HTTPPair: string(httpPair),
}
}
func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
var p api.Protocol
if entry.Protocol.Version == "2.0" {
p = http2Protocol
} else {
p = protocol
}
return &api.BaseEntryDetails{
Id: entry.Id,
Protocol: p,
Url: entry.Url,
RequestSenderIp: entry.RequestSenderIp,
Service: entry.Service,
Path: entry.Path,
Summary: entry.Summary,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
SourceIp: entry.SourceIp,
DestinationIp: entry.DestinationIp,
SourcePort: entry.SourcePort,
DestinationPort: entry.DestinationPort,
IsOutgoing: entry.IsOutgoing,
Latency: entry.ElapsedTime,
Id: entry.Id,
Protocol: entry.Protocol,
Path: entry.Path,
Summary: entry.Summary,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
Source: entry.Source,
Destination: entry.Destination,
IsOutgoing: entry.IsOutgoing,
Latency: entry.ElapsedTime,
Rules: api.ApplicableRules{
Latency: 0,
Status: false,
@@ -257,9 +295,9 @@ func representRequest(request map[string]interface{}) (repRequest []interface{})
Selector: `request.method`,
},
{
Name: "URL",
Value: request["url"].(string),
Selector: `request.url`,
Name: "Target URI",
Value: request["targetUri"].(string),
Selector: `request.targetUri`,
},
{
Name: "Path",
@@ -401,12 +439,7 @@ func representResponse(response map[string]interface{}) (repResponse []interface
return
}
func (d dissecting) Represent(protoIn api.Protocol, request map[string]interface{}, response map[string]interface{}) (protoOut api.Protocol, object []byte, bodySize int64, err error) {
if protoIn.Version == "2.0" {
protoOut = http2Protocol
} else {
protoOut = protocol
}
func (d dissecting) Represent(request map[string]interface{}, response map[string]interface{}) (object []byte, bodySize int64, err error) {
representation := make(map[string]interface{}, 0)
repRequest := representRequest(request)
repResponse, bodySize := representResponse(response)
@@ -418,9 +451,9 @@ func (d dissecting) Represent(protoIn api.Protocol, request map[string]interface
func (d dissecting) Macros() map[string]string {
return map[string]string{
`http`: fmt.Sprintf(`proto.abbr == "%s"`, protocol.Abbreviation),
`grpc`: fmt.Sprintf(`proto.abbr == "%s" and proto.version == "%s"`, protocol.Abbreviation, http2Protocol.Version),
`http2`: fmt.Sprintf(`proto.abbr == "%s" and proto.version == "%s"`, protocol.Abbreviation, http2Protocol.Version),
`http`: fmt.Sprintf(`proto.name == "%s" and proto.version == "%s"`, protocol.Name, protocol.Version),
`http2`: fmt.Sprintf(`proto.name == "%s" and proto.version == "%s"`, protocol.Name, http2Protocol.Version),
`grpc`: fmt.Sprintf(`proto.name == "%s" and proto.version == "%s" and proto.macro == "%s"`, protocol.Name, grpcProtocol.Version, grpcProtocol.Macro),
}
}
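Assuming the descriptor values shown earlier in this file (Name "http" for all three protocols, Version "2.0" for the HTTP/2 and gRPC descriptors, Macro "grpc" for the gRPC one) and a Version of "1.1" on the HTTP/1.1 descriptor, which is not visible in this hunk, these macros expand roughly to:

http  -> proto.name == "http" and proto.version == "1.1"
http2 -> proto.name == "http" and proto.version == "2.0"
grpc  -> proto.name == "http" and proto.version == "2.0" and proto.macro == "grpc"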

View File

@@ -92,6 +92,6 @@ func splitIdent(ident string) []string {
}
func genKey(split []string) string {
key := fmt.Sprintf("%s:%s->%s:%s,%s", split[0], split[2], split[1], split[3], split[4])
key := fmt.Sprintf("%s:%s->%s:%s,%s%s", split[0], split[2], split[1], split[3], split[4], split[5])
return key
}

View File

@@ -16,6 +16,7 @@ import (
)
const maskedFieldPlaceholderValue = "[REDACTED]"
const userAgent = "user-agent"
//these values MUST be all lower case and contain no `-` or `_` characters
var personallyIdentifiableDataFields = []string{"token", "authorization", "authentication", "cookie", "userid", "password",
@@ -32,7 +33,7 @@ func IsIgnoredUserAgent(item *api.OutputChannelItem, options *api.TrafficFilteri
request := item.Pair.Request.Payload.(api.HTTPPayload).Data.(*http.Request)
for headerKey, headerValues := range request.Header {
if strings.ToLower(headerKey) == "user-agent" {
if strings.ToLower(headerKey) == userAgent {
for _, userAgent := range options.IgnoredUserAgents {
for _, headerValue := range headerValues {
if strings.Contains(strings.ToLower(headerValue), strings.ToLower(userAgent)) {
@@ -89,6 +90,10 @@ func filterResponseBody(response *http.Response, options *api.TrafficFilteringOp
func filterHeaders(headers *http.Header) {
for key, _ := range *headers {
if strings.ToLower(key) == userAgent {
continue
}
if strings.ToLower(key) == "cookie" {
headers.Del(key)
} else if isFieldNameSensitive(key) {

View File

@@ -37,7 +37,7 @@ func (d dissecting) Register(extension *api.Extension) {
}
func (d dissecting) Ping() {
log.Printf("pong %s\n", _protocol.Name)
log.Printf("pong %s", _protocol.Name)
}
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
@@ -65,12 +65,6 @@ func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, co
func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string, resolvedDestination string) *api.MizuEntry {
request := item.Pair.Request.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
service := "kafka"
if resolvedDestination != "" {
service = resolvedDestination
} else if resolvedSource != "" {
service = resolvedSource
}
apiKey := ApiKey(reqDetails["apiKey"].(float64))
summary := ""
@@ -149,6 +143,9 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
request["url"] = summary
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
if elapsedTime < 0 {
elapsedTime = 0
}
return &api.MizuEntry{
Protocol: _protocol,
Source: &api.TCP{
@@ -161,45 +158,31 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
IP: item.ConnectionInfo.ServerIP,
Port: item.ConnectionInfo.ServerPort,
},
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: item.Pair.Response.Payload.(map[string]interface{})["details"].(map[string]interface{}),
Url: fmt.Sprintf("%s%s", service, summary),
Method: apiNames[apiKey],
Status: 0,
RequestSenderIp: item.ConnectionInfo.ClientIP,
Service: service,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: elapsedTime,
Summary: summary,
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
SourceIp: item.ConnectionInfo.ClientIP,
DestinationIp: item.ConnectionInfo.ServerIP,
SourcePort: item.ConnectionInfo.ClientPort,
DestinationPort: item.ConnectionInfo.ServerPort,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: item.Pair.Response.Payload.(map[string]interface{})["details"].(map[string]interface{}),
Method: apiNames[apiKey],
Status: 0,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: elapsedTime,
Summary: summary,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
}
}
func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
return &api.BaseEntryDetails{
Id: entry.Id,
Protocol: _protocol,
Url: entry.Url,
RequestSenderIp: entry.RequestSenderIp,
Service: entry.Service,
Summary: entry.Summary,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
SourceIp: entry.SourceIp,
DestinationIp: entry.DestinationIp,
SourcePort: entry.SourcePort,
DestinationPort: entry.DestinationPort,
IsOutgoing: entry.IsOutgoing,
Latency: entry.ElapsedTime,
Id: entry.Id,
Protocol: _protocol,
Summary: entry.Summary,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
Source: entry.Source,
Destination: entry.Destination,
IsOutgoing: entry.IsOutgoing,
Latency: entry.ElapsedTime,
Rules: api.ApplicableRules{
Latency: 0,
Status: false,
@@ -207,8 +190,7 @@ func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
}
}
func (d dissecting) Represent(protoIn api.Protocol, request map[string]interface{}, response map[string]interface{}) (protoOut api.Protocol, object []byte, bodySize int64, err error) {
protoOut = _protocol
func (d dissecting) Represent(request map[string]interface{}, response map[string]interface{}) (object []byte, bodySize int64, err error) {
bodySize = 0
representation := make(map[string]interface{}, 0)
@@ -255,7 +237,7 @@ func (d dissecting) Represent(protoIn api.Protocol, request map[string]interface
func (d dissecting) Macros() map[string]string {
return map[string]string{
`kafka`: fmt.Sprintf(`proto.abbr == "%s"`, _protocol.Abbreviation),
`kafka`: fmt.Sprintf(`proto.name == "%s"`, _protocol.Name),
}
}

View File

@@ -5,6 +5,7 @@ import (
"encoding/json"
"fmt"
"log"
"time"
"github.com/up9inc/mizu/tap/api"
)
@@ -35,7 +36,7 @@ func (d dissecting) Register(extension *api.Extension) {
}
func (d dissecting) Ping() {
log.Printf("pong %s\n", protocol.Name)
log.Printf("pong %s", protocol.Name)
}
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
@@ -64,13 +65,6 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
reqDetails := request["details"].(map[string]interface{})
resDetails := response["details"].(map[string]interface{})
service := "redis"
if resolvedDestination != "" {
service = resolvedDestination
} else if resolvedSource != "" {
service = resolvedSource
}
method := ""
if reqDetails["command"] != nil {
method = reqDetails["command"].(string)
@@ -82,6 +76,10 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
}
request["url"] = summary
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
if elapsedTime < 0 {
elapsedTime = 0
}
return &api.MizuEntry{
Protocol: protocol,
Source: &api.TCP{
@@ -94,46 +92,32 @@ func (d dissecting) Analyze(item *api.OutputChannelItem, resolvedSource string,
IP: item.ConnectionInfo.ServerIP,
Port: item.ConnectionInfo.ServerPort,
},
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: resDetails,
Url: fmt.Sprintf("%s%s", service, summary),
Method: method,
Status: 0,
RequestSenderIp: item.ConnectionInfo.ClientIP,
Service: service,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: 0,
Summary: summary,
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
SourceIp: item.ConnectionInfo.ClientIP,
DestinationIp: item.ConnectionInfo.ServerIP,
SourcePort: item.ConnectionInfo.ClientPort,
DestinationPort: item.ConnectionInfo.ServerPort,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
Outgoing: item.ConnectionInfo.IsOutgoing,
Request: reqDetails,
Response: resDetails,
Method: method,
Status: 0,
Timestamp: item.Timestamp,
StartTime: item.Pair.Request.CaptureTime,
ElapsedTime: elapsedTime,
Summary: summary,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
}
}
func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
return &api.BaseEntryDetails{
Id: entry.Id,
Protocol: protocol,
Url: entry.Url,
RequestSenderIp: entry.RequestSenderIp,
Service: entry.Service,
Summary: entry.Summary,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
SourceIp: entry.SourceIp,
DestinationIp: entry.DestinationIp,
SourcePort: entry.SourcePort,
DestinationPort: entry.DestinationPort,
IsOutgoing: entry.IsOutgoing,
Latency: entry.ElapsedTime,
Id: entry.Id,
Protocol: protocol,
Summary: entry.Summary,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
Source: entry.Source,
Destination: entry.Destination,
IsOutgoing: entry.IsOutgoing,
Latency: entry.ElapsedTime,
Rules: api.ApplicableRules{
Latency: 0,
Status: false,
@@ -141,8 +125,7 @@ func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
}
}
func (d dissecting) Represent(protoIn api.Protocol, request map[string]interface{}, response map[string]interface{}) (protoOut api.Protocol, object []byte, bodySize int64, err error) {
protoOut = protocol
func (d dissecting) Represent(request map[string]interface{}, response map[string]interface{}) (object []byte, bodySize int64, err error) {
bodySize = 0
representation := make(map[string]interface{}, 0)
repRequest := representGeneric(request, `request.`)
@@ -155,7 +138,7 @@ func (d dissecting) Represent(protoIn api.Protocol, request map[string]interface
func (d dissecting) Macros() map[string]string {
return map[string]string{
`redis`: fmt.Sprintf(`proto.abbr == "%s"`, protocol.Abbreviation),
`redis`: fmt.Sprintf(`proto.name == "%s"`, protocol.Name),
}
}

View File

@@ -313,7 +313,7 @@ func (p *RedisProtocol) Read() (packet *RedisPacket, err error) {
packet.Value = fmt.Sprintf("%s]", packet.Value)
}
default:
msg := fmt.Sprintf("Unrecognized element in Redis array: %v\n", reflect.TypeOf(array[0]))
msg := fmt.Sprintf("Unrecognized element in Redis array: %v", reflect.TypeOf(array[0]))
err = errors.New(msg)
return
}
@@ -333,7 +333,7 @@ func (p *RedisProtocol) Read() (packet *RedisPacket, err error) {
case int64:
packet.Value = fmt.Sprintf("%d", x.(int64))
default:
msg := fmt.Sprintf("Unrecognized Redis data type: %v\n", reflect.TypeOf(x))
msg := fmt.Sprintf("Unrecognized Redis data type: %v", reflect.TypeOf(x))
err = errors.New(msg)
return
}

View File

@@ -50,14 +50,15 @@ var tstype = flag.String("timestamp_type", "", "Type of timestamps to use")
var promisc = flag.Bool("promisc", true, "Set promiscuous mode")
var staleTimeoutSeconds = flag.Int("staletimout", 120, "Max time in seconds to keep connections which don't transmit data")
var pids = flag.String("pids", "", "A comma separated list of PIDs to capture their network namespaces")
var istio = flag.Bool("istio", false, "Record decrypted traffic if the cluster is configured with Istio and mTLS")
var memprofile = flag.String("memprofile", "", "Write memory profile")
type TapOpts struct {
HostMode bool
HostMode bool
FilterAuthorities []string
}
var hostMode bool // global
var extensions []*api.Extension // global
var filteringOptions *api.TrafficFilteringOptions // global
@@ -80,15 +81,18 @@ func inArrayString(arr []string, valueToCheck string) bool {
}
func StartPassiveTapper(opts *TapOpts, outputItems chan *api.OutputChannelItem, extensionsRef []*api.Extension, options *api.TrafficFilteringOptions) {
hostMode = opts.HostMode
extensions = extensionsRef
filteringOptions = options
if opts.FilterAuthorities == nil {
opts.FilterAuthorities = []string{}
}
if GetMemoryProfilingEnabled() {
diagnose.StartMemoryProfiler(os.Getenv(MemoryProfilingDumpPath), os.Getenv(MemoryProfilingTimeIntervalSeconds))
}
go startPassiveTapper(outputItems)
go startPassiveTapper(opts, outputItems)
}
func printPeriodicStats(cleaner *Cleaner) {
@@ -131,7 +135,7 @@ func printPeriodicStats(cleaner *Cleaner) {
}
}
func initializePacketSources() (*source.PacketSourceManager, error) {
func initializePacketSources(opts *TapOpts) (*source.PacketSourceManager, error) {
var bpffilter string
if len(flag.Args()) > 0 {
bpffilter = strings.Join(flag.Args(), " ")
@@ -146,17 +150,17 @@ func initializePacketSources() (*source.PacketSourceManager, error) {
BpfFilter: bpffilter,
}
return source.NewPacketSourceManager(*procfs, *pids, *fname, *iface, behaviour)
return source.NewPacketSourceManager(*procfs, *pids, *fname, *iface, *istio, opts.FilterAuthorities, behaviour)
}
func startPassiveTapper(outputItems chan *api.OutputChannelItem) {
func startPassiveTapper(opts *TapOpts, outputItems chan *api.OutputChannelItem) {
streamsMap := NewTcpStreamMap()
go streamsMap.closeTimedoutTcpStreamChannels()
diagnose.InitializeErrorsMap(*debug, *verbose, *quiet)
diagnose.InitializeTapperInternalStats()
sources, err := initializePacketSources()
sources, err := initializePacketSources(opts)
if err != nil {
logger.Log.Fatal(err)
@@ -169,7 +173,7 @@ func startPassiveTapper(outputItems chan *api.OutputChannelItem) {
}
packets := make(chan source.TcpPacketInfo)
assembler := NewTcpAssembler(outputItems, streamsMap)
assembler := NewTcpAssembler(outputItems, streamsMap, opts)
diagnose.AppStats.SetStartTime(time.Now())
@@ -193,7 +197,7 @@ func startPassiveTapper(outputItems chan *api.OutputChannelItem) {
}
if err := diagnose.DumpMemoryProfile(*memprofile); err != nil {
logger.Log.Errorf("Error dumping memory profile %v\n", err)
logger.Log.Errorf("Error dumping memory profile %v", err)
}
assembler.waitAndDump()

View File

@@ -18,24 +18,6 @@ const (
TcpStreamChannelTimeoutMsDefaultValue = 10000
)
type globalSettings struct {
filterAuthorities []string
}
var gSettings = &globalSettings{
filterAuthorities: []string{},
}
func SetFilterAuthorities(ipAddresses []string) {
gSettings.filterAuthorities = ipAddresses
}
func GetFilterIPs() []string {
addresses := make([]string, len(gSettings.filterAuthorities))
copy(addresses, gSettings.filterAuthorities)
return addresses
}
func GetMaxBufferedPagesTotal() int {
valueFromEnv, err := strconv.Atoi(os.Getenv(MaxBufferedPagesTotalEnvVarName))
if err != nil {

View File

@@ -0,0 +1,112 @@
package source
import (
"fmt"
"io/ioutil"
"os"
"regexp"
"strings"
"github.com/up9inc/mizu/shared/logger"
)
const envoyBinary = "/envoy"
var numberRegex = regexp.MustCompile("[0-9]+")
func discoverRelevantEnvoyPids(procfs string, clusterIps []string) ([]string, error) {
result := make([]string, 0)
pids, err := ioutil.ReadDir(procfs)
if err != nil {
return result, err
}
logger.Log.Infof("Starting envoy auto discoverer %v %v - scanning %v potential pids",
procfs, clusterIps, len(pids))
for _, pid := range pids {
if !pid.IsDir() {
continue
}
if !numberRegex.MatchString(pid.Name()) {
continue
}
if checkPid(procfs, pid.Name(), clusterIps) {
result = append(result, pid.Name())
}
}
logger.Log.Infof("Found %v relevant envoy processes - %v", len(result), result)
return result, nil
}
func checkPid(procfs string, pid string, clusterIps []string) bool {
execLink := fmt.Sprintf("%v/%v/exe", procfs, pid)
exec, err := os.Readlink(execLink)
if err != nil {
// Debug level on purpose - this can happen for many reasons and we only care
// about it during troubleshooting
logger.Log.Debugf("Unable to read link %v - %v", execLink, err)
return false
}
if !strings.HasSuffix(exec, envoyBinary) {
return false
}
environmentFile := fmt.Sprintf("%v/%v/environ", procfs, pid)
clusterIp, err := readEnvironmentVariable(environmentFile, "INSTANCE_IP")
if err != nil {
return false
}
if clusterIp == "" {
logger.Log.Debugf("Found an envoy process without INSTANCE_IP variable %v\n", pid)
return false
}
logger.Log.Infof("Found envoy pid %v with cluster ip %v", pid, clusterIp)
for _, value := range clusterIps {
if value == clusterIp {
return true
}
}
return false
}
func readEnvironmentVariable(file string, name string) (string, error) {
bytes, err := ioutil.ReadFile(file)
if err != nil {
logger.Log.Warningf("Error reading environment file %v - %v", file, err)
return "", err
}
envs := strings.Split(string(bytes), string([]byte{0}))
for _, env := range envs {
if !strings.Contains(env, "=") {
continue
}
parts := strings.Split(env, "=")
varName := parts[0]
value := parts[1]
if name == varName {
return value, nil
}
}
return "", nil
}
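Two notes on the discoverer above: /proc/<pid>/environ is a NUL-separated list of KEY=VALUE pairs, which is why the parser splits on a zero byte, and INSTANCE_IP is the pod IP that Istio's sidecar injection places in Envoy's environment. A hypothetical invocation (the procfs path and cluster IP are made up):

pids, err := discoverRelevantEnvoyPids("/proc", []string{"10.8.1.17"})
if err != nil {
	logger.Log.Warningf("Unable to discover envoy pids - %v", err)
} else {
	logger.Log.Infof("Tapping envoy processes %v", pids)
}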

View File

@@ -15,26 +15,63 @@ type PacketSourceManager struct {
}
func NewPacketSourceManager(procfs string, pids string, filename string, interfaceName string,
behaviour TcpPacketSourceBehaviour) (*PacketSourceManager, error) {
istio bool, clusterIps []string, behaviour TcpPacketSourceBehaviour) (*PacketSourceManager, error) {
sources := make([]*tcpPacketSource, 0)
hostSource, err := newHostPacketSource(filename, interfaceName, behaviour)
sources, err := createHostSource(sources, filename, interfaceName, behaviour)
if err != nil {
return nil, err
}
sources = append(sources, hostSource)
if pids != "" {
netnsSources := newNetnsPacketSources(procfs, pids, interfaceName, behaviour)
sources = append(sources, netnsSources...)
}
sources = createSourcesFromPids(sources, procfs, pids, interfaceName, behaviour)
sources = createSourcesFromEnvoy(sources, istio, procfs, clusterIps, interfaceName, behaviour)
return &PacketSourceManager{
sources: sources,
}, nil
}
func createHostSource(sources []*tcpPacketSource, filename string, interfaceName string,
behaviour TcpPacketSourceBehaviour) ([]*tcpPacketSource, error) {
hostSource, err := newHostPacketSource(filename, interfaceName, behaviour)
if err != nil {
return sources, err
}
return append(sources, hostSource), nil
}
func createSourcesFromPids(sources []*tcpPacketSource, procfs string, pids string,
interfaceName string, behaviour TcpPacketSourceBehaviour) []*tcpPacketSource {
if pids == "" {
return sources
}
netnsSources := newNetnsPacketSources(procfs, strings.Split(pids, ","), interfaceName, behaviour)
sources = append(sources, netnsSources...)
return sources
}
func createSourcesFromEnvoy(sources []*tcpPacketSource, istio bool, procfs string, clusterIps []string,
interfaceName string, behaviour TcpPacketSourceBehaviour) []*tcpPacketSource {
if !istio {
return sources
}
envoyPids, err := discoverRelevantEnvoyPids(procfs, clusterIps)
if err != nil {
logger.Log.Warningf("Unable to discover envoy pids - %v", err)
return sources
}
netnsSources := newNetnsPacketSources(procfs, envoyPids, interfaceName, behaviour)
sources = append(sources, netnsSources...)
return sources
}
func newHostPacketSource(filename string, interfaceName string,
behaviour TcpPacketSourceBehaviour) (*tcpPacketSource, error) {
var name string
@@ -54,11 +91,11 @@ func newHostPacketSource(filename string, interfaceName string,
return source, nil
}
func newNetnsPacketSources(procfs string, pids string, interfaceName string,
func newNetnsPacketSources(procfs string, pids []string, interfaceName string,
behaviour TcpPacketSourceBehaviour) []*tcpPacketSource {
result := make([]*tcpPacketSource, 0)
for _, pidstr := range strings.Split(pids, ",") {
for _, pidstr := range pids {
pid, err := strconv.Atoi(pidstr)
if err != nil {
@@ -100,9 +137,9 @@ func newNetnsPacketSource(pid int, nsh netns.NsHandle, interfaceName string,
//
runtime.LockOSThread()
defer runtime.UnlockOSThread()
oldnetns, err := netns.Get()
if err != nil {
logger.Log.Errorf("Unable to get netns of current thread %v", err)
errors <- err

View File

@@ -121,29 +121,27 @@ func (source *tcpPacketSource) readPackets(ipdefrag bool, packets chan<- TcpPack
}
// defrag the IPv4 packet if required
if !ipdefrag {
ip4Layer := packet.Layer(layers.LayerTypeIPv4)
if ip4Layer == nil {
continue
}
ip4 := ip4Layer.(*layers.IPv4)
l := ip4.Length
newip4, err := source.defragger.DefragIPv4(ip4)
if err != nil {
logger.Log.Fatal("Error while de-fragmenting", err)
} else if newip4 == nil {
logger.Log.Debugf("Fragment...")
continue // packet fragment, we don't have whole packet yet.
}
if newip4.Length != l {
diagnose.InternalStats.Ipdefrag++
logger.Log.Debugf("Decoding re-assembled packet: %s", newip4.NextLayerType())
pb, ok := packet.(gopacket.PacketBuilder)
if !ok {
logger.Log.Panic("Not a PacketBuilder")
if ipdefrag {
if ip4Layer := packet.Layer(layers.LayerTypeIPv4); ip4Layer != nil {
ip4 := ip4Layer.(*layers.IPv4)
l := ip4.Length
newip4, err := source.defragger.DefragIPv4(ip4)
if err != nil {
logger.Log.Fatal("Error while de-fragmenting", err)
} else if newip4 == nil {
logger.Log.Debugf("Fragment...")
continue // packet fragment, we don't have whole packet yet.
}
if newip4.Length != l {
diagnose.InternalStats.Ipdefrag++
logger.Log.Debugf("Decoding re-assembled packet: %s", newip4.NextLayerType())
pb, ok := packet.(gopacket.PacketBuilder)
if !ok {
logger.Log.Panic("Not a PacketBuilder")
}
nextDecoder := newip4.NextLayerType()
_ = nextDecoder.Decode(newip4.Payload, pb)
}
nextDecoder := newip4.NextLayerType()
_ = nextDecoder.Decode(newip4.Payload, pb)
}
}

View File

@@ -16,6 +16,8 @@ import (
"github.com/up9inc/mizu/tap/source"
)
const PACKETS_SEEN_LOG_THRESHOLD = 1000
type tcpAssembler struct {
*reassembly.Assembler
streamPool *reassembly.StreamPool
@@ -33,13 +35,13 @@ func (c *context) GetCaptureInfo() gopacket.CaptureInfo {
return c.CaptureInfo
}
func NewTcpAssembler(outputItems chan *api.OutputChannelItem, streamsMap *tcpStreamMap) *tcpAssembler {
func NewTcpAssembler(outputItems chan *api.OutputChannelItem, streamsMap *tcpStreamMap, opts *TapOpts) *tcpAssembler {
var emitter api.Emitter = &api.Emitting{
AppStats: &diagnose.AppStats,
OutputChannel: outputItems,
}
streamFactory := NewTcpStreamFactory(emitter, streamsMap)
streamFactory := NewTcpStreamFactory(emitter, streamsMap, opts)
streamPool := reassembly.NewStreamPool(streamFactory)
assembler := reassembly.NewAssembler(streamPool)
@@ -63,7 +65,11 @@ func (a *tcpAssembler) processPackets(dumpPacket bool, packets <-chan source.Tcp
for packetInfo := range packets {
packetsCount := diagnose.AppStats.IncPacketsCount()
logger.Log.Debugf("PACKET #%d", packetsCount)
if packetsCount % PACKETS_SEEN_LOG_THRESHOLD == 0 {
logger.Log.Debugf("Packets seen: #%d", packetsCount)
}
packet := packetInfo.Packet
data := packet.Data()
diagnose.AppStats.UpdateProcessedBytes(uint64(len(data)))
@@ -78,14 +84,14 @@ func (a *tcpAssembler) processPackets(dumpPacket bool, packets <-chan source.Tcp
if *checksum {
err := tcp.SetNetworkLayerForChecksum(packet.NetworkLayer())
if err != nil {
logger.Log.Fatalf("Failed to set network layer for checksum: %s\n", err)
logger.Log.Fatalf("Failed to set network layer for checksum: %s", err)
}
}
c := context{
CaptureInfo: packet.Metadata().CaptureInfo,
}
diagnose.InternalStats.Totalsz += len(tcp.Payload)
logger.Log.Debugf("%s : %v -> %s : %v", packet.NetworkLayer().NetworkFlow().Src(), tcp.SrcPort, packet.NetworkLayer().NetworkFlow().Dst(), tcp.DstPort)
logger.Log.Debugf("%s:%v -> %s:%v", packet.NetworkLayer().NetworkFlow().Src(), tcp.SrcPort, packet.NetworkLayer().NetworkFlow().Dst(), tcp.DstPort)
a.assemblerMutex.Lock()
a.AssembleWithContext(packet.NetworkLayer().NetworkFlow(), tcp, &c)
a.assemblerMutex.Unlock()

View File

@@ -66,7 +66,7 @@ func (h *tcpReader) Read(p []byte) (int, error) {
clientHello := tlsx.ClientHello{}
err := clientHello.Unmarshall(msg.bytes)
if err == nil {
logger.Log.Debugf("Detected TLS client hello with SNI %s\n", clientHello.SNI)
logger.Log.Debugf("Detected TLS client hello with SNI %s", clientHello.SNI)
// TODO: Throws `panic: runtime error: invalid memory address or nil pointer dereference` error.
// numericPort, _ := strconv.Atoi(h.tcpID.DstPort)
// h.outboundLinkWriter.WriteOutboundLink(h.tcpID.SrcIP, h.tcpID.DstIP, numericPort, clientHello.SNI, TLSProtocol)

View File

@@ -24,6 +24,7 @@ type tcpStreamFactory struct {
Emitter api.Emitter
streamsMap *tcpStreamMap
ownIps []string
opts *TapOpts
}
type tcpStreamWrapper struct {
@@ -31,7 +32,7 @@ type tcpStreamWrapper struct {
createdAt time.Time
}
func NewTcpStreamFactory(emitter api.Emitter, streamsMap *tcpStreamMap) *tcpStreamFactory {
func NewTcpStreamFactory(emitter api.Emitter, streamsMap *tcpStreamMap, opts *TapOpts) *tcpStreamFactory {
var ownIps []string
if localhostIPs, err := getLocalhostIPs(); err != nil {
@@ -47,6 +48,7 @@ func NewTcpStreamFactory(emitter api.Emitter, streamsMap *tcpStreamMap) *tcpStre
Emitter: emitter,
streamsMap: streamsMap,
ownIps: ownIps,
opts: opts,
}
}
@@ -139,17 +141,17 @@ func (factory *tcpStreamFactory) WaitGoRoutines() {
}
func (factory *tcpStreamFactory) getStreamProps(srcIP string, srcPort string, dstIP string, dstPort string) *streamProps {
if hostMode {
if inArrayString(gSettings.filterAuthorities, fmt.Sprintf("%s:%s", dstIP, dstPort)) {
if factory.opts.HostMode {
if inArrayString(factory.opts.FilterAuthorities, fmt.Sprintf("%s:%s", dstIP, dstPort)) {
logger.Log.Debugf("getStreamProps %s", fmt.Sprintf("+ host1 %s:%s", dstIP, dstPort))
return &streamProps{isTapTarget: true, isOutgoing: false}
} else if inArrayString(gSettings.filterAuthorities, dstIP) {
} else if inArrayString(factory.opts.FilterAuthorities, dstIP) {
logger.Log.Debugf("getStreamProps %s", fmt.Sprintf("+ host2 %s", dstIP))
return &streamProps{isTapTarget: true, isOutgoing: false}
} else if inArrayString(gSettings.filterAuthorities, fmt.Sprintf("%s:%s", srcIP, srcPort)) {
} else if inArrayString(factory.opts.FilterAuthorities, fmt.Sprintf("%s:%s", srcIP, srcPort)) {
logger.Log.Debugf("getStreamProps %s", fmt.Sprintf("+ host3 %s:%s", srcIP, srcPort))
return &streamProps{isTapTarget: true, isOutgoing: true}
} else if inArrayString(gSettings.filterAuthorities, srcIP) {
} else if inArrayString(factory.opts.FilterAuthorities, srcIP) {
logger.Log.Debugf("getStreamProps %s", fmt.Sprintf("+ host4 %s", srcIP))
return &streamProps{isTapTarget: true, isOutgoing: true}
}

View File

@@ -46,7 +46,7 @@ func (streamMap *tcpStreamMap) closeTimedoutTcpStreamChannels() {
if !stream.isClosed && time.Now().After(streamWrapper.createdAt.Add(tcpStreamChannelTimeout)) {
stream.Close()
diagnose.AppStats.IncDroppedTcpStreams()
logger.Log.Debugf("Dropped an unidentified TCP stream because of timeout. Total dropped: %d Total Goroutines: %d Timeout (ms): %d\n",
logger.Log.Debugf("Dropped an unidentified TCP stream because of timeout. Total dropped: %d Total Goroutines: %d Timeout (ms): %d",
diagnose.AppStats.DroppedTcpStreams, runtime.NumGoroutine(), tcpStreamChannelTimeout/1000000)
}
} else {

View File

@@ -1 +0,0 @@
tester

View File

@@ -1,12 +0,0 @@
This tester is used to launch the passive-tapper locally without a Docker or Kubernetes environment.
It's good for testing purposes.
# How to run
From the `tap` folder run:
`./tester/launch.sh`
The tester accepts the same arguments as the passive_tapper; run with `--help` to get a complete list of options.
`./tester/launch.sh --help`

View File

@@ -1,10 +0,0 @@
#!/bin/bash
set -e
echo "Building extensions..."
pushd .. && ./devops/build_extensions.sh && popd
go build -o tester tester/tester.go
sudo ./tester/tester "$@"

View File

@@ -1,114 +0,0 @@
package main
import (
"bufio"
"io/ioutil"
"os"
"path"
"plugin"
"sort"
"strings"
"github.com/op/go-logging"
"github.com/go-errors/errors"
"github.com/up9inc/mizu/shared/logger"
"github.com/up9inc/mizu/tap"
tapApi "github.com/up9inc/mizu/tap/api"
)
func loadExtensions() ([]*tapApi.Extension, error) {
extensionsDir := "./extensions"
files, err := ioutil.ReadDir(extensionsDir)
if err != nil {
return nil, errors.Wrap(err, 0)
}
extensions := make([]*tapApi.Extension, 0)
for _, file := range files {
filename := file.Name()
if !strings.HasSuffix(filename, ".so") {
continue
}
logger.Log.Infof("Loading extension: %s\n", filename)
extension := &tapApi.Extension{
Path: path.Join(extensionsDir, filename),
}
plug, err := plugin.Open(extension.Path)
if err != nil {
return nil, errors.Wrap(err, 0)
}
extension.Plug = plug
symDissector, err := plug.Lookup("Dissector")
if err != nil {
return nil, errors.Wrap(err, 0)
}
dissector, ok := symDissector.(tapApi.Dissector)
if !ok {
return nil, errors.Errorf("Symbol Dissector type error: %v %T\n", file, symDissector)
}
dissector.Register(extension)
extension.Dissector = dissector
extensions = append(extensions, extension)
}
sort.Slice(extensions, func(i, j int) bool {
return extensions[i].Protocol.Priority < extensions[j].Protocol.Priority
})
for _, extension := range extensions {
logger.Log.Infof("Extension Properties: %+v\n", extension)
}
return extensions, nil
}
func internalRun() error {
logger.InitLoggerStderrOnly(logging.DEBUG)
opts := tap.TapOpts{
HostMode: false,
}
outputItems := make(chan *tapApi.OutputChannelItem, 1000)
extenssions, err := loadExtensions()
if err != nil {
return err
}
tapOpts := tapApi.TrafficFilteringOptions{}
tap.StartPassiveTapper(&opts, outputItems, extenssions, &tapOpts)
logger.Log.Infof("Tapping, press enter to exit...\n")
reader := bufio.NewReader(os.Stdin)
reader.ReadLine()
return nil
}
func main() {
err := internalRun()
if err != nil {
switch err := err.(type) {
case *errors.Error:
logger.Log.Errorf("Error: %v\n", err.ErrorStack())
default:
logger.Log.Errorf("Error: %v\n", err)
}
os.Exit(1)
}
}

View File

@@ -133,3 +133,6 @@ ignore:
SNYK-JS-WS-1296835:
- '*':
reason: Non given
SNYK-JS-JSONSCHEMA-1920922:
- '*':
reason: Non given

ui/package-lock.json (generated): 11 changed lines
View File

@@ -11080,6 +11080,11 @@
"minimist": "^1.2.5"
}
},
"moment": {
"version": "2.29.1",
"resolved": "https://registry.npmjs.org/moment/-/moment-2.29.1.tgz",
"integrity": "sha512-kHmoybcPV8Sqy59DwNDY3Jefr64lK/by/da0ViFcuA4DH0vQg5Q6Ze5VimxkfQNSC+Mls/Kx53s7TjP1RhFEDQ=="
},
"move-concurrently": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/move-concurrently/-/move-concurrently-1.0.1.tgz",
@@ -13644,9 +13649,9 @@
}
},
"react-scrollable-feed-virtualized": {
"version": "1.4.3",
"resolved": "https://registry.npmjs.org/react-scrollable-feed-virtualized/-/react-scrollable-feed-virtualized-1.4.3.tgz",
"integrity": "sha512-M9WgJKr57jCyWKNCksc3oi+xhtO0YbL9d7Ll8Sdc5ZWOIstNvdNbNX0k4Nq6kXUVaHCJ9qE8omdSI/CxT3MLAQ=="
"version": "1.4.9",
"resolved": "https://registry.npmjs.org/react-scrollable-feed-virtualized/-/react-scrollable-feed-virtualized-1.4.9.tgz",
"integrity": "sha512-YkFkPjdIXDUsaCNYhZ+Blpp3LF+CsJWscwn/0fGSjF5QBKCtPURO9AEUA362Qnjr4S8LF2IjSAOCCFedIEnVNw=="
},
"react-syntax-highlighter": {
"version": "15.4.3",

View File

@@ -16,6 +16,7 @@
"@uiw/react-textarea-code-editor": "^1.4.12",
"axios": "^0.21.1",
"jsonpath": "^1.1.1",
"moment": "^2.29.1",
"node-sass": "^5.0.0",
"numeral": "^2.0.6",
"protobuf-decoder": "^0.1.0",
@@ -23,7 +24,7 @@
"react-copy-to-clipboard": "^5.0.3",
"react-dom": "^17.0.2",
"react-scripts": "4.0.3",
"react-scrollable-feed-virtualized": "^1.4.3",
"react-scrollable-feed-virtualized": "^1.4.9",
"react-syntax-highlighter": "^15.4.3",
"react-toastify": "^8.0.3",
"typescript": "^4.2.4",

View File

@@ -1,52 +1,150 @@
import {EntryItem} from "./EntryListItem/EntryListItem";
import React, {useRef} from "react";
import React, {useCallback, useEffect, useMemo, useRef, useState} from "react";
import styles from './style/EntriesList.module.sass';
import ScrollableFeedVirtualized from "react-scrollable-feed-virtualized";
import Moment from 'moment';
import {EntryItem} from "./EntryListItem/EntryListItem";
import down from "./assets/downImg.svg";
import spinner from './assets/spinner.svg';
import Api from "../helpers/api";
interface EntriesListProps {
entries: any[];
setEntries: (entries: any[]) => void;
focusedEntryId: string;
setFocusedEntryId: (id: string) => void;
listEntryREF: any;
onScrollEvent: (isAtBottom:boolean) => void;
scrollableList: boolean;
ws: any
openWebSocket: any;
setEntries: any;
query: string;
updateQuery: any;
listEntryREF: any;
onSnapBrokenEvent: () => void;
isSnappedToBottom: boolean;
setIsSnappedToBottom: any;
queriedCurrent: number;
setQueriedCurrent: any;
queriedTotal: number;
startTime: number;
noMoreDataTop: boolean;
setNoMoreDataTop: (flag: boolean) => void;
focusedEntryId: string;
setFocusedEntryId: (id: string) => void;
updateQuery: any;
leftOffTop: number;
setLeftOffTop: (leftOffTop: number) => void;
isWebSocketConnectionClosed: boolean;
ws: any;
openWebSocket: (query: string, resetEntries: boolean) => void;
leftOffBottom: number;
}
export const EntriesList: React.FC<EntriesListProps> = ({entries, setEntries, focusedEntryId, setFocusedEntryId, listEntryREF, onScrollEvent, scrollableList, ws, openWebSocket, query, updateQuery, queriedCurrent, queriedTotal, startTime}) => {
const api = new Api();
export const EntriesList: React.FC<EntriesListProps> = ({entries, setEntries, query, listEntryREF, onSnapBrokenEvent, isSnappedToBottom, setIsSnappedToBottom, queriedCurrent, setQueriedCurrent, queriedTotal, startTime, noMoreDataTop, setNoMoreDataTop, focusedEntryId, setFocusedEntryId, updateQuery, leftOffTop, setLeftOffTop, isWebSocketConnectionClosed, ws, openWebSocket, leftOffBottom}) => {
const [loadMoreTop, setLoadMoreTop] = useState(false);
const [isLoadingTop, setIsLoadingTop] = useState(false);
const scrollableRef = useRef(null);
useEffect(() => {
const list = document.getElementById('list').firstElementChild;
list.addEventListener('scroll', (e) => {
const el: any = e.target;
if(el.scrollTop === 0) {
setLoadMoreTop(true);
} else {
setNoMoreDataTop(false);
setLoadMoreTop(false);
}
});
}, [setLoadMoreTop, setNoMoreDataTop]);
const memoizedEntries = useMemo(() => {
return entries;
},[entries]);
const getOldEntries = useCallback(async () => {
setLoadMoreTop(false);
if (leftOffTop === null || leftOffTop <= 0) {
return;
}
setIsLoadingTop(true);
const data = await api.fetchEntries(leftOffTop, -1, query, 100, 3000);
if (!data || !data.meta) {
setNoMoreDataTop(true);
setIsLoadingTop(false);
return;
}
setLeftOffTop(data.meta.leftOff);
let scrollTo: boolean;
if (data.meta.leftOff === 0) {
setNoMoreDataTop(true);
scrollTo = false;
} else {
scrollTo = true;
}
setIsLoadingTop(false);
const newEntries = [...data.data.reverse(), ...entries];
setEntries(newEntries);
setQueriedCurrent(queriedCurrent + data.meta.current);
if (scrollTo) {
scrollableRef.current.scrollToIndex(data.data.length - 1);
}
},[setLoadMoreTop, setIsLoadingTop, entries, setEntries, query, setNoMoreDataTop, leftOffTop, setLeftOffTop, queriedCurrent, setQueriedCurrent]);
useEffect(() => {
if(!isWebSocketConnectionClosed || !loadMoreTop || noMoreDataTop) return;
getOldEntries();
}, [loadMoreTop, noMoreDataTop, getOldEntries, isWebSocketConnectionClosed]);
const scrollbarVisible = scrollableRef.current?.childWrapperRef.current.clientHeight > scrollableRef.current?.wrapperRef.current.clientHeight;
return <>
<div className={styles.list}>
<div id="list" ref={listEntryREF} className={styles.list}>
<ScrollableFeedVirtualized ref={scrollableRef} itemHeight={48} marginTop={10} onScroll={(isAtBottom) => onScrollEvent(isAtBottom)}>
{false /* TODO: why there is a need for something here (not necessarily false)? */}
{entries.map(entry => <EntryItem key={entry.id}
entry={entry}
setFocusedEntryId={setFocusedEntryId}
isSelected={focusedEntryId === entry.id.toString()}
style={{}}
updateQuery={updateQuery}/>)}
{isLoadingTop && <div className={styles.spinnerContainer}>
<img alt="spinner" src={spinner} style={{height: 25}}/>
</div>}
{noMoreDataTop && <div id="noMoreDataTop" className={styles.noMoreDataAvailable}>No more data available</div>}
<ScrollableFeedVirtualized ref={scrollableRef} itemHeight={48} marginTop={10} onSnapBroken={onSnapBrokenEvent}>
{false /* It's because the first child is ignored by ScrollableFeedVirtualized */}
{memoizedEntries.map(entry => <EntryItem
key={`entry-${entry.id}`}
entry={entry}
focusedEntryId={focusedEntryId}
setFocusedEntryId={setFocusedEntryId}
style={{}}
updateQuery={updateQuery}
headingMode={false}
/>)}
</ScrollableFeedVirtualized>
<button type="button"
className={`${styles.btnLive} ${scrollableList ? styles.showButton : styles.hideButton}`}
onClick={(_) => scrollableRef.current.jumpToBottom()}>
title="Fetch old records"
className={`${styles.btnOld} ${!scrollbarVisible && leftOffTop > 0 ? styles.showButton : styles.hideButton}`}
onClick={(_) => {
ws.close();
getOldEntries();
}}>
<img alt="down" src={down} />
</button>
<button type="button"
title="Snap to bottom"
className={`${styles.btnLive} ${isSnappedToBottom && !isWebSocketConnectionClosed ? styles.hideButton : styles.showButton}`}
onClick={(_) => {
if (isWebSocketConnectionClosed) {
if (query) {
openWebSocket(`(${query}) and leftOff(${leftOffBottom})`, false);
} else {
openWebSocket(`leftOff(${leftOffBottom})`, false);
}
}
scrollableRef.current.jumpToBottom();
setIsSnappedToBottom(true);
}}>
<img alt="down" src={down} />
</button>
</div>
<div className={styles.footer}>
<div>Displaying <b>{entries?.length}</b> results (queried <b>{queriedCurrent}</b>/<b>{queriedTotal}</b>)</div>
{startTime !== 0 && <div>Started listening at <span style={{marginRight: 5, fontWeight: 600, fontSize: 13}}>{new Date(startTime).toLocaleString()}</span></div>}
<div>Displaying <b>{entries?.length}</b> results out of <b>{queriedTotal}</b> total</div>
{startTime !== 0 && <div>Started listening at <span style={{marginRight: 5, fontWeight: 600, fontSize: 13}}>{Moment(startTime).utc().format('MM/DD/YYYY, h:mm:ss.SSS A')}</span></div>}
</div>
</div>
</>;

View File

@@ -1,9 +1,9 @@
import React from "react";
import EntryViewer from "./EntryDetailed/EntryViewer";
import {EntryItem} from "./EntryListItem/EntryListItem";
import {makeStyles} from "@material-ui/core";
import Protocol from "./UI/Protocol"
import StatusCode from "./UI/StatusCode";
import {Summary} from "./UI/Summary";
import Queryable from "./UI/Queryable";
const useStyles = makeStyles(() => ({
entryTitle: {
@@ -12,6 +12,7 @@ const useStyles = makeStyles(() => ({
maxHeight: 46,
alignItems: 'center',
marginBottom: 4,
marginLeft: 6,
padding: 2,
paddingBottom: 0
},
@@ -37,45 +38,49 @@ const EntryTitle: React.FC<any> = ({protocol, data, bodySize, elapsedTime, updat
const classes = useStyles();
const response = data.response;
return <div className={classes.entryTitle}>
<Protocol protocol={protocol} horizontal={true} updateQuery={null}/>
<Protocol protocol={protocol} horizontal={true} updateQuery={updateQuery}/>
<div style={{right: "30px", position: "absolute", display: "flex"}}>
{response && <div
className="queryable"
style={{margin: "0 18px", opacity: 0.5}}
onClick={() => {
updateQuery(`response.bodySize == ${bodySize}`)
}}
{response && <Queryable
query={`response.bodySize == ${bodySize}`}
updateQuery={updateQuery}
style={{margin: "0 18px"}}
displayIconOnMouseOver={true}
>
{formatSize(bodySize)}
</div>}
{response && <div
className="queryable"
style={{marginRight: 18, opacity: 0.5}}
onClick={() => {
updateQuery(`elapsedTime >= ${elapsedTime}`)
}}
<div
style={{opacity: 0.5}}
>
{formatSize(bodySize)}
</div>
</Queryable>}
{response && <Queryable
query={`elapsedTime >= ${elapsedTime}`}
updateQuery={updateQuery}
style={{marginRight: 18}}
displayIconOnMouseOver={true}
>
{Math.round(elapsedTime)}ms
</div>}
<div
style={{opacity: 0.5}}
>
{Math.round(elapsedTime)}ms
</div>
</Queryable>}
</div>
</div>;
};
const EntrySummary: React.FC<any> = ({data, updateQuery}) => {
const classes = useStyles();
const entry = data.base;
const response = data.response;
return <div className={classes.entrySummary}>
{response && "status" in response && <div style={{marginRight: 8}}>
<StatusCode statusCode={response.status} updateQuery={updateQuery}/>
</div>}
<div style={{flexGrow: 1, overflow: 'hidden'}}>
<Summary method={data.method} summary={data.summary} updateQuery={updateQuery}/>
</div>
</div>;
return <EntryItem
key={`entry-${entry.id}`}
entry={entry}
focusedEntryId={null}
setFocusedEntryId={null}
style={{}}
updateQuery={updateQuery}
headingMode={true}
/>;
};
export const EntryDetailed: React.FC<EntryDetailedProps> = ({entryData, updateQuery}) => {

View File

@@ -27,7 +27,7 @@
font-weight: 600
font-size: .75rem
line-height: 1.2
margin: .3rem 0
margin-bottom: -2px
.dataKey
color: $blue-gray

View File

@@ -3,6 +3,7 @@ import React, {useState} from "react";
import {SyntaxHighlighter} from "../UI/SyntaxHighlighter/index";
import CollapsibleContainer from "../UI/CollapsibleContainer";
import FancyTextDisplay from "../UI/FancyTextDisplay";
import Queryable from "../UI/Queryable";
import Checkbox from "../UI/Checkbox";
import ProtobufDecoder from "protobuf-decoder";
@@ -15,23 +16,29 @@ interface EntryViewLineProps {
}
const EntryViewLine: React.FC<EntryViewLineProps> = ({label, value, updateQuery, selector, overrideQueryValue}) => {
return (label && value && <tr className={styles.dataLine}>
<td
className={`queryable ${styles.dataKey}`}
onClick={() => {
if (!selector) {
return
} else if (overrideQueryValue) {
updateQuery(`${selector} == ${overrideQueryValue}`)
} else if (typeof(value) === "string") {
updateQuery(`${selector} == "${JSON.stringify(value).slice(1, -1)}"`)
} else {
updateQuery(`${selector} == ${value}`)
}
}}
>
{label}
</td>
let query: string;
if (!selector) {
query = "";
} else if (overrideQueryValue) {
query = `${selector} == ${overrideQueryValue}`;
} else if (typeof(value) == "string") {
query = `${selector} == "${JSON.stringify(value).slice(1, -1)}"`;
} else {
query = `${selector} == ${value}`;
}
return (label && <tr className={styles.dataLine}>
<td className={`${styles.dataKey}`}>
<Queryable
query={query}
updateQuery={updateQuery}
style={{float: "right", height: "18px"}}
iconStyle={{marginRight: "20px"}}
flipped={true}
displayIconOnMouseOver={true}
>
{label}
</Queryable>
</td>
<td>
<FancyTextDisplay
className={styles.dataValue}
@@ -53,9 +60,9 @@ interface EntrySectionCollapsibleTitleProps {
const EntrySectionCollapsibleTitle: React.FC<EntrySectionCollapsibleTitleProps> = ({title, color, isExpanded}) => {
return <div className={styles.title}>
<span className={`${styles.button} ${isExpanded ? styles.expanded : ''}`} style={{backgroundColor: color}}>
<div className={`${styles.button} ${isExpanded ? styles.expanded : ''}`} style={{backgroundColor: color}}>
{isExpanded ? '-' : '+'}
</span>
</div>
<span>{title}</span>
</div>
}
@@ -130,7 +137,7 @@ export const EntryBodySection: React.FC<EntryBodySectionProps> = ({
<table>
<tbody>
<EntryViewLine label={'Mime type'} value={contentType} updateQuery={updateQuery} selector={selector} overrideQueryValue={`r".*"`}/>
<EntryViewLine label={'Encoding'} value={encoding} updateQuery={updateQuery} selector={selector} overrideQueryValue={`r".*"`}/>
{encoding && <EntryViewLine label={'Encoding'} value={encoding} updateQuery={updateQuery} selector={selector} overrideQueryValue={`r".*"`}/>}
</tbody>
</table>

View File

@@ -4,6 +4,7 @@
font-family: "Source Sans Pro", Lucida Grande, Tahoma, sans-serif
height: calc(100% - 70px)
width: 100%
margin-top: 10px
h3,
h4

View File

@@ -19,7 +19,6 @@
.rowSelected
border: 1px $blue-color solid
margin-right: 3px
.ruleSuccessRow
background: #E8FFF1
@@ -46,13 +45,12 @@
.ruleNumberTextSuccess
color: #219653
.service
.resolvedName
text-overflow: ellipsis
overflow: hidden
white-space: nowrap
color: $secondary-font-color
padding-left: 4px
padding-top: 3px
padding-right: 10px
display: flex
font-size: 12px
@@ -62,7 +60,7 @@
color: $secondary-font-color
padding-left: 12px
flex-shrink: 0
width: 145px
width: 185px
text-align: left
.endpointServiceContainer
@@ -70,7 +68,6 @@
flex-direction: column
overflow: hidden
padding-right: 10px
padding-left: 10px
flex-grow: 1
.separatorRight
@@ -84,7 +81,14 @@
padding: 4px
padding-left: 12px
.port
.tcpInfo
font-size: 12px
color: $secondary-font-color
margin: 5px
margin-top: 5px
margin-bottom: 5px
.port
margin-right: 5px
.ip
margin-left: 5px

View File

@@ -1,8 +1,11 @@
import React from "react";
import Moment from 'moment';
import SwapHorizIcon from '@material-ui/icons/SwapHoriz';
import styles from './EntryListItem.module.sass';
import StatusCode, {getClassification, StatusCodeClassification} from "../UI/StatusCode";
import Protocol, {ProtocolInterface} from "../UI/Protocol"
import {Summary} from "../UI/Summary";
import Queryable from "../UI/Queryable";
import ingoingIconSuccess from "../assets/ingoing-traffic-success.svg"
import ingoingIconFailure from "../assets/ingoing-traffic-failure.svg"
import ingoingIconNeutral from "../assets/ingoing-traffic-neutral.svg"
@@ -10,19 +13,21 @@ import outgoingIconSuccess from "../assets/outgoing-traffic-success.svg"
import outgoingIconFailure from "../assets/outgoing-traffic-failure.svg"
import outgoingIconNeutral from "../assets/outgoing-traffic-neutral.svg"
interface TCPInterface {
ip: string
port: string
name: string
}
interface Entry {
protocol: ProtocolInterface,
method?: string,
summary: string,
service: string,
id: number,
statusCode?: number;
url?: string;
timestamp: Date;
sourceIp: string,
sourcePort: string,
destinationIp: string,
destinationPort: string,
src: TCPInterface,
dst: TCPInterface,
isOutgoing?: boolean;
latency: number;
rules: Rules;
@@ -37,13 +42,17 @@ interface Rules {
interface EntryProps {
entry: Entry;
focusedEntryId: string;
setFocusedEntryId: (id: string) => void;
isSelected?: boolean;
style: object;
updateQuery: any;
headingMode: boolean;
}
export const EntryItem: React.FC<EntryProps> = ({entry, setFocusedEntryId, isSelected, style, updateQuery}) => {
export const EntryItem: React.FC<EntryProps> = ({entry, focusedEntryId, setFocusedEntryId, style, updateQuery, headingMode}) => {
const isSelected = focusedEntryId === entry.id.toString();
const classification = getClassification(entry.statusCode)
const numberOfRules = entry.rules.numberOfRules
let ingoingIcon;
@@ -114,40 +123,66 @@ export const EntryItem: React.FC<EntryProps> = ({entry, setFocusedEntryId, isSel
break;
}
const isStatusCodeEnabled = ((entry.protocol.name === "http" && "statusCode" in entry) || entry.statusCode !== 0);
var endpointServiceContainer = "10px";
if (!isStatusCodeEnabled) endpointServiceContainer = "20px";
return <>
<div
id={entry.id.toString()}
id={`entry-${entry.id.toString()}`}
className={`${styles.row}
${isSelected && !rule && !contractEnabled ? styles.rowSelected : additionalRulesProperties}`}
onClick={() => setFocusedEntryId(entry.id.toString())}
onClick={() => {
if (!setFocusedEntryId) return;
setFocusedEntryId(entry.id.toString());
}}
style={{
border: isSelected ? `1px ${entry.protocol.backgroundColor} solid` : "1px transparent solid",
position: "absolute",
position: !headingMode ? "absolute" : "unset",
top: style['top'],
marginTop: style['marginTop'],
width: "calc(100% - 25px)",
marginTop: !headingMode ? style['marginTop'] : "10px",
width: !headingMode ? "calc(100% - 25px)" : "calc(100% - 18px)",
}}
>
<Protocol
{!headingMode ? <Protocol
protocol={entry.protocol}
horizontal={false}
updateQuery={updateQuery}
/>
{((entry.protocol.name === "http" && "statusCode" in entry) || entry.statusCode !== 0) && <div>
/> : null}
{isStatusCodeEnabled && <div>
<StatusCode statusCode={entry.statusCode} updateQuery={updateQuery}/>
</div>}
<div className={styles.endpointServiceContainer}>
<div className={styles.endpointServiceContainer} style={{paddingLeft: endpointServiceContainer}}>
<Summary method={entry.method} summary={entry.summary} updateQuery={updateQuery}/>
<div className={styles.service}>
<span
title="Service Name"
className="queryable"
onClick={() => {
updateQuery(`service == "${entry.service}"`)
}}
<div className={styles.resolvedName}>
<Queryable
query={`src.name == "${entry.src.name}"`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
flipped={true}
style={{marginTop: "-4px", overflow: "visible"}}
iconStyle={!headingMode ? {marginTop: "4px", left: "68px", position: "absolute"} : {marginTop: "4px", left: "calc(50vw + 41px)", position: "absolute"}}
>
{entry.service}
</span>
<span
title="Source Name"
>
{entry.src.name ? entry.src.name : "[Unresolved]"}
</span>
</Queryable>
<SwapHorizIcon style={{color: entry.protocol.backgroundColor, marginTop: "-2px"}}></SwapHorizIcon>
<Queryable
query={`dst.name == "${entry.dst.name}"`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
style={{marginTop: "-4px"}}
iconStyle={{marginTop: "4px", marginLeft: "-2px"}}
>
<span
title="Destination Name"
>
{entry.dst.name ? entry.dst.name : "[Unresolved]"}
</span>
</Queryable>
</div>
</div>
{
@@ -165,54 +200,109 @@ export const EntryItem: React.FC<EntryProps> = ({entry, setFocusedEntryId, isSel
: ""
}
<div className={styles.separatorRight}>
<span
className={`queryable ${styles.port}`}
title="Source Port"
onClick={() => {
updateQuery(`src.port == "${entry.sourcePort}"`)
}}
<Queryable
query={`src.ip == "${entry.src.ip}"`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
flipped={true}
iconStyle={{marginRight: "16px"}}
>
{entry.sourcePort}
</span>
<span
className={`${styles.tcpInfo} ${styles.ip}`}
title="Source IP"
>
{entry.src.ip}
</span>
</Queryable>
<span className={`${styles.tcpInfo}`} style={{marginTop: "18px"}}>:</span>
<Queryable
query={`src.port == "${entry.src.port}"`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
flipped={true}
iconStyle={{marginTop: "28px"}}
>
<span
className={`${styles.tcpInfo} ${styles.port}`}
title="Source Port"
>
{entry.src.port}
</span>
</Queryable>
{entry.isOutgoing ?
<img
src={outgoingIcon}
alt="Ingoing traffic"
title="Ingoing"
onClick={() => {
updateQuery(`outgoing == true`)
}}
/>
<Queryable
query={`outgoing == true`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
flipped={true}
iconStyle={{marginTop: "28px"}}
>
<img
src={outgoingIcon}
alt="Ingoing traffic"
title="Ingoing"
/>
</Queryable>
:
<img
src={ingoingIcon}
alt="Outgoing traffic"
title="Outgoing"
onClick={() => {
updateQuery(`outgoing == false`)
}}
/>
<Queryable
query={`outgoing == false`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
flipped={true}
iconStyle={{marginTop: "28px"}}
>
<img
src={ingoingIcon}
alt="Outgoing traffic"
title="Outgoing"
/>
</Queryable>
}
<span
className={`queryable ${styles.port}`}
title="Destination Port"
onClick={() => {
updateQuery(`dst.port == "${entry.destinationPort}"`)
}}
<Queryable
query={`dst.ip == "${entry.dst.ip}"`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
flipped={false}
iconStyle={{marginTop: "28px"}}
>
{entry.destinationPort}
</span>
<span
className={`${styles.tcpInfo} ${styles.ip}`}
title="Destination IP"
>
{entry.dst.ip}
</span>
</Queryable>
<span className={`${styles.tcpInfo}`} style={{marginTop: "18px"}}>:</span>
<Queryable
query={`dst.port == "${entry.dst.port}"`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
flipped={false}
>
<span
className={`${styles.tcpInfo} ${styles.port}`}
title="Destination Port"
>
{entry.dst.port}
</span>
</Queryable>
</div>
<div className={styles.timestamp}>
<span
title="Timestamp"
className="queryable"
onClick={() => {
updateQuery(`timestamp >= datetime("${new Date(+entry.timestamp)?.toLocaleString("en-US", {timeZone: 'UTC' })}")`)
}}
<Queryable
query={`timestamp >= datetime("${Moment(+entry.timestamp)?.utc().format('MM/DD/YYYY, h:mm:ss.SSS A')}")`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
flipped={false}
>
{new Date(+entry.timestamp)?.toLocaleString("en-US")}
</span>
<span
title="Timestamp"
>
{Moment(+entry.timestamp)?.utc().format('MM/DD/YYYY, h:mm:ss.SSS A')}
</span>
</Queryable>
</div>
</div>
</>

View File

@@ -13,7 +13,7 @@ interface FiltersProps {
setQuery: any
backgroundColor: string
ws: any
openWebSocket: (query: string) => void;
openWebSocket: (query: string, resetEntries: boolean) => void;
}
export const Filters: React.FC<FiltersProps> = ({query, setQuery, backgroundColor, ws, openWebSocket}) => {
@@ -33,7 +33,7 @@ interface QueryFormProps {
setQuery: any
backgroundColor: string
ws: any
openWebSocket: (query: string) => void;
openWebSocket: (query: string, resetEntries: boolean) => void;
}
const style = {
@@ -63,8 +63,12 @@ export const QueryForm: React.FC<QueryFormProps> = ({query, setQuery, background
}
const handleSubmit = (e) => {
ws.close()
openWebSocket(query)
ws.close();
if (query) {
openWebSocket(`(${query}) and leftOff(-1)`, true);
} else {
openWebSocket(`leftOff(-1)`, true);
}
e.preventDefault();
}
@@ -210,7 +214,7 @@ export const QueryForm: React.FC<QueryFormProps> = ({query, setQuery, background
<SyntaxHighlighter
isWrapped={false}
showLineNumbers={false}
code={`timestamp < datetime("10/28/2021, 9:13:02 PM")`}
code={`timestamp < datetime("10/28/2021, 9:13:02.905 PM")`}
language="python"
/>
</Grid>
@@ -226,7 +230,7 @@ export const QueryForm: React.FC<QueryFormProps> = ({query, setQuery, background
language="python"
/>
<Typography id="modal-modal-description">
By clicking the UI elements in both left-pane and right-pane, you can automatically select a field and update the query:
By clicking the plus icon that appears beside the queryable UI elements on hover, in both the left pane and the right pane, you can automatically select a field and update the query:
</Typography>
<img
src={filterUIExample1}
@@ -235,12 +239,12 @@ export const QueryForm: React.FC<QueryFormProps> = ({query, setQuery, background
title="Clicking to UI elements (left-pane)"
/>
<Typography id="modal-modal-description">
Such that; clicking this in left-pane, would append the query below:
Such that clicking this icon in the left pane would append the query below:
</Typography>
<SyntaxHighlighter
isWrapped={false}
showLineNumbers={false}
code={`and service == "http://carts.sock-shop"`}
code={`and dst.name == "carts.sock-shop"`}
language="python"
/>
<Typography id="modal-modal-description">
@@ -301,7 +305,7 @@ export const QueryForm: React.FC<QueryFormProps> = ({query, setQuery, background
<SyntaxHighlighter
isWrapped={false}
showLineNumbers={false}
code={`timestamp >= datetime("10/19/2021, 6:29:02 PM")`}
code={`timestamp >= datetime("10/19/2021, 6:29:02.593 PM")`}
language="python"
/>
<Typography id="modal-modal-description">

View File

@@ -20,7 +20,7 @@ const useLayoutStyles = makeStyles(() => ({
padding: "12px 24px",
borderRadius: 4,
marginTop: 15,
background: variables.headerBackgoundColor,
background: variables.headerBackgroundColor,
},
viewer: {
@@ -54,52 +54,78 @@ export const TrafficPage: React.FC<TrafficPageProps> = ({setAnalyzeStatus, onTLS
const [selectedEntryData, setSelectedEntryData] = useState(null);
const [connection, setConnection] = useState(ConnectionStatus.Closed);
const [noMoreDataTop, setNoMoreDataTop] = useState(false);
const [tappingStatus, setTappingStatus] = useState(null);
const [disableScrollList, setDisableScrollList] = useState(false);
const [isSnappedToBottom, setIsSnappedToBottom] = useState(true);
const [query, setQueryDefault] = useState("");
const [query, setQuery] = useState("");
const [queryBackgroundColor, setQueryBackgroundColor] = useState("#f5f5f5");
const [addition, updateQuery] = useState("");
const [queriedCurrent, setQueriedCurrent] = useState(0);
const [queriedTotal, setQueriedTotal] = useState(0);
const [leftOffBottom, setLeftOffBottom] = useState(0);
const [leftOffTop, setLeftOffTop] = useState(null);
const [startTime, setStartTime] = useState(0);
const setQuery = async (query) => {
if (!query) {
setQueryBackgroundColor("#f5f5f5")
} else {
const data = await api.validateQuery(query);
if (data.valid) {
setQueryBackgroundColor("#d2fad2")
useEffect(() => {
(async function() {
if (!query) {
setQueryBackgroundColor("#f5f5f5")
} else {
setQueryBackgroundColor("#fad6dc")
const data = await api.validateQuery(query);
if (!data) {
return;
}
if (data.valid) {
setQueryBackgroundColor("#d2fad2");
} else {
setQueryBackgroundColor("#fad6dc");
}
}
}
setQueryDefault(query)
}
})();
}, [query]);
const updateQuery = (addition) => {
useEffect(() => {
if (query) {
setQuery(`${query} and ${addition}`)
setQuery(`${query} and ${addition}`);
} else {
setQuery(addition)
setQuery(addition);
}
}
// eslint-disable-next-line
}, [addition]);
const ws = useRef(null);
const listEntry = useRef(null);
const openWebSocket = (query) => {
setEntries([])
const openWebSocket = (query: string, resetEntries: boolean) => {
if (resetEntries) {
setFocusedEntryId(null);
setEntries([]);
setQueriedCurrent(0);
setLeftOffTop(null);
setNoMoreDataTop(false);
}
ws.current = new WebSocket(MizuWebsocketURL);
ws.current.onopen = () => {
ws.current.send(query)
setConnection(ConnectionStatus.Connected);
ws.current.send(query);
}
ws.current.onclose = () => {
setConnection(ConnectionStatus.Closed);
}
ws.current.onerror = (event) => {
console.error("WebSocket error:", event);
if (query) {
openWebSocket(`(${query}) and leftOff(${leftOffBottom})`, false);
} else {
openWebSocket(`leftOff(${leftOffBottom})`, false);
}
}
ws.current.onclose = () => setConnection(ConnectionStatus.Closed);
}
if (ws.current) {
@@ -108,15 +134,15 @@ export const TrafficPage: React.FC<TrafficPageProps> = ({setAnalyzeStatus, onTLS
const message = JSON.parse(e.data);
switch (message.messageType) {
case "entry":
const entry = message.data
const entry = message.data;
if (!focusedEntryId) setFocusedEntryId(entry.id.toString())
let newEntries = [...entries];
setEntries([...newEntries, entry])
if(listEntry.current) {
if(isScrollable(listEntry.current.firstChild)) {
setDisableScrollList(true)
}
const newEntries = [...entries, entry];
if (newEntries.length === 10001) {
setLeftOffTop(newEntries[0].entry.id);
newEntries.shift();
setNoMoreDataTop(false);
}
setEntries(newEntries);
break
case "status":
setTappingStatus(message.tappingStatus);
@@ -140,8 +166,12 @@ export const TrafficPage: React.FC<TrafficPageProps> = ({setAnalyzeStatus, onTLS
});
break;
case "queryMetadata":
setQueriedCurrent(message.data.current)
setQueriedTotal(message.data.total)
setQueriedCurrent(queriedCurrent + message.data.current);
setQueriedTotal(message.data.total);
setLeftOffBottom(message.data.leftOff);
if (leftOffTop === null) {
setLeftOffTop(message.data.leftOff - 1);
}
break;
case "startTime":
setStartTime(message.data);
@@ -154,7 +184,7 @@ export const TrafficPage: React.FC<TrafficPageProps> = ({setAnalyzeStatus, onTLS
useEffect(() => {
(async () => {
openWebSocket("rlimit(100)");
openWebSocket("leftOff(-1)", true);
try{
const tapStatusResponse = await api.tapStatus();
setTappingStatus(tapStatusResponse);
@@ -176,17 +206,33 @@ export const TrafficPage: React.FC<TrafficPageProps> = ({setAnalyzeStatus, onTLS
const entryData = await api.getEntry(focusedEntryId);
setSelectedEntryData(entryData);
} catch (error) {
if (error.response) {
toast[error.response.data.type](`Entry[${focusedEntryId}]: ${error.response.data.msg}`, {
position: "bottom-right",
theme: "colored",
autoClose: error.response.data.autoClose,
hideProgressBar: false,
closeOnClick: true,
pauseOnHover: true,
draggable: true,
progress: undefined,
});
}
console.error(error);
}
})()
}, [focusedEntryId])
})();
// eslint-disable-next-line
}, [focusedEntryId]);
const toggleConnection = () => {
if (connection === ConnectionStatus.Connected) {
ws.current.close();
} else {
openWebSocket(query);
setConnection(ConnectionStatus.Connected);
if (query) {
openWebSocket(`(${query}) and leftOff(${leftOffBottom})`, false);
} else {
openWebSocket(`leftOff(${leftOffBottom})`, false);
}
}
}
@@ -209,21 +255,20 @@ export const TrafficPage: React.FC<TrafficPageProps> = ({setAnalyzeStatus, onTLS
}
}
const onScrollEvent = (isAtBottom) => {
isAtBottom ? setDisableScrollList(false) : setDisableScrollList(true)
const onSnapBrokenEvent = () => {
setIsSnappedToBottom(false);
if (connection === ConnectionStatus.Connected) {
ws.current.close();
}
}
const isScrollable = (element) => {
return element.scrollHeight > element.clientHeight;
};
return (
<div className="TrafficPage">
<div className="TrafficPageHeader">
<img className="playPauseIcon" style={{visibility: connection === ConnectionStatus.Connected ? "visible" : "hidden"}} alt="pause"
src={pauseIcon} onClick={toggleConnection}/>
<img className="playPauseIcon" style={{position: "absolute", visibility: connection === ConnectionStatus.Connected ? "hidden" : "visible"}} alt="play"
src={playIcon} onClick={toggleConnection}/>
src={playIcon} onClick={toggleConnection}/>
<div className="connectionText">
{getConnectionTitle()}
<div className={"indicatorContainer " + getConnectionStatusClass(true)}>
@@ -244,18 +289,26 @@ export const TrafficPage: React.FC<TrafficPageProps> = ({setAnalyzeStatus, onTLS
<EntriesList
entries={entries}
setEntries={setEntries}
focusedEntryId={focusedEntryId}
setFocusedEntryId={setFocusedEntryId}
listEntryREF={listEntry}
onScrollEvent={onScrollEvent}
scrollableList={disableScrollList}
ws={ws.current}
openWebSocket={openWebSocket}
query={query}
updateQuery={updateQuery}
listEntryREF={listEntry}
onSnapBrokenEvent={onSnapBrokenEvent}
isSnappedToBottom={isSnappedToBottom}
setIsSnappedToBottom={setIsSnappedToBottom}
queriedCurrent={queriedCurrent}
setQueriedCurrent={setQueriedCurrent}
queriedTotal={queriedTotal}
startTime={startTime}
noMoreDataTop={noMoreDataTop}
setNoMoreDataTop={setNoMoreDataTop}
focusedEntryId={focusedEntryId}
setFocusedEntryId={setFocusedEntryId}
updateQuery={updateQuery}
leftOffTop={leftOffTop}
setLeftOffTop={setLeftOffTop}
isWebSocketConnectionClosed={connection === ConnectionStatus.Closed}
ws={ws.current}
openWebSocket={openWebSocket}
leftOffBottom={leftOffBottom}
/>
</div>
</div>

View File

@@ -17,7 +17,7 @@ interface Props {
const FancyTextDisplay: React.FC<Props> = ({text, className, isPossibleToCopy = true, applyTextEllipsis = true, flipped = false, useTooltip= false, displayIconOnMouseOver = false, buttonOnly = false}) => {
const [showCopiedNotification, setCopied] = useState(false);
const [showTooltip, setShowTooltip] = useState(false);
const displayText = text || '';
text = String(text);
const onCopy = () => {
setCopied(true)
@@ -33,12 +33,12 @@ const FancyTextDisplay: React.FC<Props> = ({text, className, isPossibleToCopy =
return () => clearTimeout(timer);
}, [showCopiedNotification]);
const textElement = <span className={'FancyTextDisplay-Text'}>{displayText}</span>;
const textElement = <span className={'FancyTextDisplay-Text'}>{text}</span>;
const copyButton = isPossibleToCopy && displayText ? <CopyToClipboard text={displayText} onCopy={onCopy}>
const copyButton = isPossibleToCopy && text ? <CopyToClipboard text={text} onCopy={onCopy}>
<span
className={`FancyTextDisplay-Icon`}
title={`Copy "${displayText}" value to clipboard`}
title={`Copy "${text}" value to clipboard`}
>
<img src={duplicateImg} alt="Duplicate full value"/>
{showCopiedNotification && <span className={'FancyTextDisplay-CopyNotifier'}>Copied</span>}
@@ -48,14 +48,14 @@ const FancyTextDisplay: React.FC<Props> = ({text, className, isPossibleToCopy =
return (
<p
className={`FancyTextDisplay-Container ${className ? className : ''} ${displayIconOnMouseOver ? 'displayIconOnMouseOver ' : ''} ${applyTextEllipsis ? ' FancyTextDisplay-ContainerEllipsis' : ''}`}
title={displayText.toString()}
title={text}
onMouseOver={ e => setShowTooltip(true)}
onMouseLeave={ e => setShowTooltip(false)}
>
{!buttonOnly && flipped && textElement}
{copyButton}
{!buttonOnly && !flipped && textElement}
{useTooltip && showTooltip && <span className={'FancyTextDisplay-CopyNotifier'}>{displayText}</span>}
{useTooltip && showTooltip && <span className={'FancyTextDisplay-CopyNotifier'}>{text}</span>}
</p>
);
};

View File

@@ -1,5 +1,6 @@
import React from "react";
import styles from './style/Protocol.module.sass';
import Queryable from "./Queryable";
export interface ProtocolInterface {
name: string
@@ -22,34 +23,46 @@ interface ProtocolProps {
const Protocol: React.FC<ProtocolProps> = ({protocol, horizontal, updateQuery}) => {
if (horizontal) {
return <a target="_blank" rel="noopener noreferrer" href={protocol.referenceLink}>
return <Queryable
query={protocol.macro}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
>
<a target="_blank" rel="noopener noreferrer" href={protocol.referenceLink}>
<span
className={`${styles.base} ${styles.horizontal}`}
style={{
backgroundColor: protocol.backgroundColor,
color: protocol.foregroundColor,
fontSize: 13,
}}
title={protocol.abbr}
>
{protocol.longName}
</span>
</a>
</Queryable>
} else {
return <Queryable
query={protocol.macro}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
flipped={false}
iconStyle={{marginTop: "52px", marginRight: "10px", zIndex: 1000}}
>
<span
className={`${styles.base} ${styles.horizontal}`}
className={`${styles.base} ${styles.vertical}`}
style={{
backgroundColor: protocol.backgroundColor,
color: protocol.foregroundColor,
fontSize: 13,
fontSize: protocol.fontSize,
marginRight: "-20px",
}}
title={protocol.abbr}
title={protocol.longName}
>
{protocol.longName}
{protocol.abbr}
</span>
</a>
} else {
return <span
className={`${styles.base} ${styles.vertical}`}
style={{
backgroundColor: protocol.backgroundColor,
color: protocol.foregroundColor,
fontSize: protocol.fontSize,
}}
title={protocol.longName}
onClick={() => {
updateQuery(protocol.macro)
}}
>
{protocol.abbr}
</span>
</Queryable>
}
};

View File

@@ -0,0 +1,62 @@
import React, { useEffect, useState } from 'react';
import { CopyToClipboard } from 'react-copy-to-clipboard';
import AddCircleIcon from '@material-ui/icons/AddCircle';
import './style/Queryable.sass';
interface Props {
query: string,
updateQuery: any,
style?: object,
iconStyle?: object,
className?: string,
useTooltip?: boolean,
displayIconOnMouseOver?: boolean,
flipped?: boolean,
}
const Queryable: React.FC<Props> = ({query, updateQuery, style, iconStyle, className, useTooltip= true, displayIconOnMouseOver = false, flipped = false, children}) => {
const [showAddedNotification, setAdded] = useState(false);
const [showTooltip, setShowTooltip] = useState(false);
const onCopy = () => {
setAdded(true)
};
useEffect(() => {
let timer;
if (showAddedNotification) {
updateQuery(query);
timer = setTimeout(() => {
setAdded(false);
}, 1000);
}
return () => clearTimeout(timer);
}, [showAddedNotification, query, updateQuery]);
const addButton = query ? <CopyToClipboard text={query} onCopy={onCopy}>
<span
className={`Queryable-Icon`}
title={`Add "${query}" to the filter`}
style={iconStyle}
>
<AddCircleIcon fontSize="small" color="inherit"></AddCircleIcon>
{showAddedNotification && <span className={'Queryable-AddNotifier'}>Added</span>}
</span>
</CopyToClipboard> : null;
return (
<div
className={`Queryable-Container displayIconOnMouseOver ${className ? className : ''} ${displayIconOnMouseOver ? 'displayIconOnMouseOver ' : ''}`}
style={style}
onMouseOver={ e => setShowTooltip(true)}
onMouseLeave={ e => setShowTooltip(false)}
>
{flipped && addButton}
{children}
{!flipped && addButton}
{useTooltip && showTooltip && <span className={'Queryable-Tooltip'}>{query}</span>}
</div>
);
};
export default Queryable;

View File

@@ -1,5 +1,6 @@
import React from "react";
import styles from './style/StatusCode.module.sass';
import Queryable from "./Queryable";
export enum StatusCodeClassification {
SUCCESS = "success",
@@ -16,15 +17,20 @@ const StatusCode: React.FC<EntryProps> = ({statusCode, updateQuery}) => {
const classification = getClassification(statusCode)
return <span
title="Status Code"
className={`queryable ${styles[classification]} ${styles.base}`}
onClick={() => {
updateQuery(`response.status == ${statusCode}`)
}}
return <Queryable
query={`response.status == ${statusCode}`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
flipped={true}
iconStyle={{marginTop: "40px", paddingLeft: "10px"}}
>
{statusCode}
</span>
<span
title="Status Code"
className={`${styles[classification]} ${styles.base}`}
>
{statusCode}
</span>
</Queryable>
};
export function getClassification(statusCode: number): string {

View File

@@ -1,6 +1,7 @@
import miscStyles from "./style/misc.module.sass";
import React from "react";
import styles from './style/Summary.module.sass';
import Queryable from "./Queryable";
interface SummaryProps {
method: string
@@ -9,24 +10,29 @@ interface SummaryProps {
}
export const Summary: React.FC<SummaryProps> = ({method, summary, updateQuery}) => {
return <div className={styles.container}>
{method && <span
title="Method"
className={`queryable ${miscStyles.protocol} ${miscStyles.method}`}
onClick={() => {
updateQuery(`method == "${method}"`)
}}
{method && <Queryable
query={`method == "${method}"`}
className={`${miscStyles.protocol} ${miscStyles.method}`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
style={{whiteSpace: "nowrap"}}
>
{method}
</span>}
{summary && <div
title="Summary"
className={`queryable ${styles.summary}`}
onClick={() => {
updateQuery(`summary == "${summary}"`)
}}
<span>
{method}
</span>
</Queryable>}
{summary && <Queryable
query={`summary == "${summary}"`}
updateQuery={updateQuery}
displayIconOnMouseOver={true}
>
{summary}
</div>}
<div
className={`${styles.summary}`}
>
{summary}
</div>
</Queryable>}
</div>
};

View File

@@ -0,0 +1,48 @@
.Queryable-Container
display: flex
align-items: center
&.displayIconOnMouseOver
.Queryable-Icon
opacity: 0
width: 0px
pointer-events: none
&:hover
.Queryable-Icon
opacity: 1
pointer-events: all
.Queryable-Icon
height: 22px
width: 22px
cursor: pointer
color: #27AE60
&:hover
background-color: rgba(255, 255, 255, 0.06)
border-radius: 4px
color: #1E884B
.Queryable-AddNotifier
background-color: #1E884B
font-weight: normal
padding: 2px 5px
border-radius: 4px
position: absolute
transform: translate(0, 10%)
color: white
z-index: 1000
font-size: 11px
.Queryable-Tooltip
background-color: #1E884B
font-weight: normal
padding: 2px 5px
border-radius: 4px
position: absolute
transform: translate(0, -80%)
color: white
z-index: 1000
font-size: 11px

View File

@@ -3,6 +3,4 @@
align-items: center
.summary
text-overflow: ellipsis
overflow: hidden
white-space: nowrap

View File

@@ -5,20 +5,20 @@
border: solid 1px $secondary-font-color
margin-left: 4px
padding: 2px 5px
text-transform: uppercase
font-family: "Source Sans Pro", sans-serif
font-size: 11px
font-weight: bold
&.method
margin-right: 10px
height: 12px
&.filterPlate
border-color: #bcc6dd20
color: #a0b2ff
font-size: 10px
.noSelect
.noSelect
-webkit-touch-callout: none
-webkit-user-select: none
-khtml-user-select: none

Binary file not shown (image changed: 40 KiB before, 46 KiB after).

Some files were not shown because too many files have changed in this diff.