Compare commits

...

38 Commits

Author SHA1 Message Date
M. Mert Yıldıran
d4436d9f15 Turn table and body strings to constants and move them to extension API (#262) 2021-09-05 06:44:16 +03:00
M. Mert Yıldıran
4e0ff74944 Fix body size, receive (elapsed time) and timestamps (#258)
* Fix the HTTP body size (it's not applicable to AMQP and Kafka)

* Fix the elapsed time

* Change JSON fields from snake_case to camelCase
2021-09-04 17:15:39 +03:00
M. Mert Yıldıran
366c1d0c6c Refactor Mizu, define an extension API and add new protocols: AMQP, Kafka (#224)
* Separate HTTP related code into `extensions/http` as a Go plugin

* Move `extensions` folder into `tap` folder

* Move HTTP files into `tap/extensions/lib` for now

* Replace `orcaman/concurrent-map` with `sync.Map`

* Remove `grpc_assembler.go`

* Remove `github.com/up9inc/mizu/tap/extensions/http/lib`

* Add a build script to automatically build extensions from a known path and load them

* Start to define the extension API

* Implement the `run()` function for the TCP stream

* Add support of defining multiple ports to the extension API

* Set the extension name inside the extension

* Declare the `Dissect` function in the extension API

* Dissect HTTP request from inside the HTTP extension

* Distinguish between outbound and inbound ports

* Dissect HTTP response from inside the HTTP extension

* Bring back the HTTP request-response pair matcher

* Return a `*api.RequestResponsePair` from the dissection

* Bring back the gRPC-HTTP/2 parser

* Fix the issues in `handleHTTP1ClientStream` and `handleHTTP1ServerStream`

* Call a function pointer to emit dissected data back to the `tap` package

* Roee's changes: trying to make the agent work with the "api" object (still not working)

* Fix a small mistake introduced by the merge conflicts

* Fix the issues that are introduced by the merge conflict

* Add `Emitter` interface to the API and send `OutputChannelItem`(s) to `OutputChannel`

* Fix the `HTTP1` handlers

* Set `ConnectionInfo` in HTTP handlers

* Fix the `Dockerfile` to build the extensions

* remove some unwanted code

* no message

* Re-enable `getStreamProps` function

* Migrate back from `gopacket/tcpassembly` to `gopacket/reassembly`

* Introduce `HTTPPayload` struct and `HTTPPayloader` interface to `MarshalJSON()` all the data structures that are returned by the HTTP protocol

* Read `socketHarOutChannel` instead of `filteredHarChannel`

* Connect `OutputChannelItem` to the WebSocket, which finally makes the web UI work again

* Add `.env.example` to React app

* Marshal and unmarshal `*http.Request`, `*http.Response` pairs

* Move `loadExtensions` into `main.go` and map extensions into `extensionsMap`

* Add `Summarize()` method to the `Dissector` interface

* Add `Analyze` method to the `Dissector` interface and `MizuEntry` to the extension API

* Add `Protocol` struct and make it effect the UI

* Refactor `BaseEntryDetails` struct and display the source and destination ports in the UI

* Display the protocol name inside the details layout

* Add `Represent` method to the `Dissector` interface and manipulate the UI through this method

* Make the protocol color affect the details layout color and write protocol abbreviation vertically

* Remove everything HTTP related from the `tap` package and make the extension system fully functional

* Fix the TypeScript warnings

* Bring the AMQP-related files into the `amqp` directory

* Add `--nodefrag` flag to the tapper and bring in the main AMQP code

* Implement the AMQP `BasicPublish` and fix some issues in the UI when the response payload is missing

* Implement `representBasicPublish` method

* Fix several minor issues

* Implement the AMQP `BasicDeliver`

* Implement the AMQP `QueueDeclare`

* Implement the AMQP `ExchangeDeclare`

* Implement the AMQP `ConnectionStart`

* Implement the AMQP `ConnectionClose`

* Implement the AMQP `QueueBind`

* Implement the AMQP `BasicConsume`

* Fix an issue in `ConnectionStart`

* Fix a linter error

* Bring the Kafka-related files into the `kafka` directory

* Fix the build errors in Kafka Go files

* Implement `Dissect` method of Kafka and adapt request-response pair matcher to asynchronous client-server stream

* Do the "Is reversed?" checked inside `getStreamProps` and fix an issue in Kafka `Dissect` method

* Implement `Analyze`, `Summarize` methods of Kafka

* Implement the representations for Kafka `Metadata`, `RequestHeader` and `ResponseHeader`

* Refactor the AMQP and Kafka implementations to create the summary string only inside the `Analyze` method

* Implement the representations for Kafka `ApiVersions`

* Implement the representations for Kafka `Produce`

* Implement the representations for Kafka `Fetch`

* Implement the representations for Kafka `ListOffsets`, `CreateTopics` and `DeleteTopics`

* Fix the encoding of AMQP `BasicPublish` and `BasicDeliver` body

* Remove the unnecessary logging

* Remove more logging

* Introduce `Version` field to `Protocol` struct for dynamically switching the HTTP protocol to HTTP/2

* Fix the issues in analysis and representation of HTTP/2 (gRPC) protocol

* Fix the issues in summary section of details layout for HTTP/2 (gRPC) protocol

* Fix the read errors that freezes the sniffer in HTTP and Kafka

* Fix the issues in HTTP POST data

* Fix one more issue in HTTP POST data

* Fix an infinite loop in Kafka

* Fix another freezing issue in Kafka

* Revert "UI Infra - Support multiple entry types + refactoring (#211)"

This reverts commit f74a52d4dc.

* Fix more issues that are introduced by the merge

* Fix the status code in the summary section

* Add the cleaner again (why was it removed?) and add a TODO on the extension loop

* fix dockerfile (remove deleting .env file) - it is found in dockerignore and fails to build if the file not exists

* Fix GetEntries (the "/entries" endpoint) to work with "tapApi.BaseEntryDetail" (moved from shared)

* Fix an issue in the UI summary section

* Refactor the protocol payload structs

* Fix a log message in the passive tapper

* Adapt `APP_PORTS` environment variable to the new extension system and change its format to `APP_PORTS='{"http": ["8001"]}' `

* Revert "fix dockerfile (remove deleting .env file) - it is found in dockerignore and fails to build if the file not exists"

This reverts commit 4f514ae1f4.

* Bring in the necessary changes from f74a52d4dc

* Open the API server URL in the web browser as soon as Mizu is ready

* Make the TCP reader consist of a single Goroutine (instead of two) and try to dissect in both client and server mode by rewinding

* Swap `TcpID` without overwriting it

* Sort extensions by priority

* Try to dissect with looping through all the extensions

* Fix the `getStreamProps` function (it should be passed from the CLI as it was before)

* Turn TCP reader back into two Goroutines (client and server)

* typo

* Learn `isClient` from the TCP stream

* Set `viewer` style `overflow: "auto"`

* Fix the memory leaks in AMQP and Kafka dissectors

* Revert some of the changes in be7c65eb6d

* Remove `allExtensionPorts` since it's no longer needed

* Remove `APP_PORTS` since it's no longer needed

* Fix all of the minor issues in the React code

* Check Kafka header size and fail-fast

* Break the dissectors loop upon a successful dissection

* Don't break the dissector loop. Protocols might collide

* Improve the HTTP request-response counter (still not perfect)

* Make the HTTP request-response counter perfect

* Revert "Revert some of the changes in be7c65eb6d3fb657a059707da3ca559937e59739"

This reverts commit 08e7d786d8.

* Bring back `filterItems` and `isHealthCheckByUserAgent` functions

* Remove some development artifacts

* Remove unused and commented-out lines that are no longer relevant

* Fix the performance in TCP stream factory. Make it create two `tcpReader`(s) per extension

* Change a log to debug

* Make `*api.CounterPair` a field of `tcpReader`

* Set `isTapTarget` to always `true` again since `filterAuthorities` implementation has problems

* Remove a variable that's only used for logging even though not introduced by this branch

* Bring back the `NumberOfRules` field of `ApplicableRules` struct

* Remove the unused `NewEntry` function

* Move `k8sResolver == nil` check to a more appropriate place

* Default `healthChecksUserAgentHeaders` should be an empty array (like the default config value)

* remove spam console.log

* Fix the Rules button causing the app to crash (the service was accessed via an incorrect property)

* Ignore all .env* files in docker build.

* Better caching in dockerfile: only copy go.mod before go mod download.

* Check for errors while loading an extension

* Add a comment about why `Protocol` is not a pointer

* Bring back the call to `deleteOlderThan`

* Remove the `nil` check

* Reduce the maximum allowed AMQP message from 128MB to 1MB

* Fix an error that only occurs when a Kafka broker is initiating

* Revert the change in b2abd7b990

* Fix the service name resolution in all protocols

* Remove the `anydirection` flag and fix the issue in `filterAuthorities`

* Pass `sync.Map` by reference to `deleteOlderThan` method

* Fix the packet capture issue in standalone mode that's introduced by the removal of `anydirection`

* Temporarily resolve the memory exhaustion in AMQP

* Fix a nil pointer dereference error

* Fix the CLI build error

* Fix a memory leak that's identified by `pprof`

Co-authored-by: Roee Gadot <roee.gadot@up9.com>
Co-authored-by: Nimrod Gilboa Markevich <nimrod@up9.com>
2021-09-02 14:34:06 +03:00
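
For orientation, a minimal sketch of the extension API this commit describes is shown below. The names (Protocol, Dissector, Dissect, Analyze, Summarize, Represent, Emitter, OutputChannelItem, TcpID, MizuEntry) come from the commit message itself, but every field and signature here is an assumption made for illustration only; the actual definitions live under tap/api.

package api

import "bufio"

// Protocol describes a dissector to the UI. Version was added so HTTP can be
// switched dynamically to HTTP/2 (gRPC), and the color drives the details
// layout. All fields below are assumed for illustration.
type Protocol struct {
	Name            string   `json:"name"`
	Version         string   `json:"version"`
	Abbreviation    string   `json:"abbreviation"`
	BackgroundColor string   `json:"backgroundColor"`
	Ports           []string `json:"ports"`
	Priority        int      `json:"priority"`
}

// TcpID identifies one direction of a TCP stream.
type TcpID struct {
	SrcIP, DstIP     string
	SrcPort, DstPort string
}

// OutputChannelItem is what a dissector emits back to the tap package.
type OutputChannelItem struct {
	Protocol Protocol
	Pair     interface{} // e.g. a request-response pair
}

// MizuEntry is the stored entry produced by Analyze.
type MizuEntry struct {
	Protocol Protocol
	Summary  string
	Payload  []byte
}

// Emitter pushes dissected items onto the shared OutputChannel.
type Emitter interface {
	Emit(item *OutputChannelItem)
}

// Dissector is implemented by each protocol extension (http, amqp, kafka).
type Dissector interface {
	// Dissect reads one side of the TCP stream (client or server) and emits
	// request-response pairs through the Emitter.
	Dissect(b *bufio.Reader, isClient bool, tcpID *TcpID, emitter Emitter) error
	// Analyze turns an emitted item into a MizuEntry, including its summary.
	Analyze(item *OutputChannelItem) *MizuEntry
	// Summarize returns the short string shown in the entries list.
	Summarize(entry *MizuEntry) string
	// Represent produces the representation rendered in the details layout.
	Represent(entry *MizuEntry) ([]byte, error)
}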
RoyUP9
17fa163ee3 added proxy logs, added events logs (#254) 2021-09-01 15:30:37 +03:00
Neim Elezi
3644fdb533 Feature/tra 3533 ssl connection pop up (#223)
* pop-up message for HTTPS domains is modified

* scroll added on hover of the TLS pop-up

* domains that were for testing are removed

* height of the pop-up is decreased

* condition for return is changed
2021-09-01 13:39:02 +03:00
gadotroee
ab7c4e72c6 no message (#253) 2021-08-31 15:27:13 +03:00
RoyUP9
e25e7925b6 fixed version blocking (#251) 2021-08-30 15:11:14 +03:00
RoyUP9
80237c8090 fixed error on invalid config path (#250) 2021-08-30 11:43:44 +03:00
Igor Gov
a310953f05 Fixing call to analysis (#248) 2021-08-26 15:55:05 +03:00
RoyUP9
a9e92b60f5 added custom config path option (#247) 2021-08-26 13:50:41 +03:00
RoyUP9
35e40cd230 added tap acceptance tests, fixed duplicate namespace problem (#244) 2021-08-26 09:56:18 +03:00
Igor Gov
2575ad722a Introducing API server provider (#243) 2021-08-22 11:41:38 +03:00
RoyUP9
afd5757315 added tapper count route and wait time for tappers in test (#226) 2021-08-22 11:38:19 +03:00
Alon Girmonsky
dba8b1f215 some changes in the read me (#241)
Change prerequisites to permissions and kubeconfig. These are more FYIs, as Mizu requires very few prerequisites.
Change the description to match getmizu.io
2021-08-20 12:39:52 +03:00
Igor Gov
6dd0ef1268 Adding user friendly message in view command before sleeping (#239) 2021-08-19 12:22:18 +03:00
Alex Haiut
83cfaed1a3 updated readme for release (#237) 2021-08-19 11:47:02 +03:00
Igor Gov
41cb9ee12e run acceptance tests for the latest code (and cancel all other jobs) (#238) 2021-08-19 11:40:37 +03:00
RoyUP9
667f0dc87d fixed namespace restricted validation (#235) 2021-08-19 11:33:48 +03:00
Igor Gov
a34c2fc0dc Adding version check to all commands execution (#236) 2021-08-19 11:33:20 +03:00
Nimrod Gilboa Markevich
7a31263e4a Reduce spam - print TLS detected as DEBUG level (#234) 2021-08-19 11:17:59 +03:00
RoyUP9
7f9fd82c0e fixed panic when using invalid kube config path (#231) 2021-08-19 10:59:31 +03:00
Nimrod Gilboa Markevich
a37d1f4aeb Fixed: Stopped redacting JSON after encountering nil values (#233) 2021-08-19 10:59:13 +03:00
gadotroee
acdbdedd5d Add concurrency to mizu publish action (#232) 2021-08-19 10:31:55 +03:00
Igor Gov
a9b5eba9d4 Fix: View command fail sporadically (#228) 2021-08-19 09:44:43 +03:00
RoyUP9
80201224c6 telemetry machine id (#230) 2021-08-19 09:44:23 +03:00
Selton Fiuza
e6e7d8d58b Fix TRA-3590 TRA-3589 (#229) 2021-08-18 22:28:13 +03:00
RoyUP9
bf27e94003 fixed version check, removed duplicate kube config, fix flags warning, fixed log of invalid config (#227) 2021-08-18 18:10:47 +03:00
Igor Gov
2ae0a2400d PR validation should be triggered just by PR (#225) 2021-08-18 12:51:24 +03:00
RoyUP9
db1f4458c5 Introducing acceptance test (#222) 2021-08-18 10:22:45 +03:00
Nimrod Gilboa Markevich
5d5c11c37c Add to periodic stats print in tapper (#220) 2021-08-16 14:51:01 +03:00
RoyUP9
b4f3b2c540 fixed test coverage (#218) 2021-08-15 14:22:49 +03:00
RoyUP9
a427534605 tests refactor (#216) 2021-08-15 12:30:34 +03:00
RoyUP9
1d6ca9d392 codecov yml for tests threshold (#214) 2021-08-15 12:19:00 +03:00
lirazyehezkel
f74a52d4dc UI Infra - Support multiple entry types + refactoring (#211)
* no message

* change local api path

* generic entry list item + rename files and vars

* entry detailed generic

* fix api file

* clean warnings

* switch

* empty lines

* fix scroll to end feature

Co-authored-by: Roee Gadot <roee.gadot@up9.com>
2021-08-15 12:09:56 +03:00
Neim Elezi
6d2e9af5d7 Feature/tra 3475 scroll to end (#206)
* configuration changed

* testing scroll with button

* back to scroll button feature is done

* scroll to the end of entries feature is done

* config of docker image is reverted back

* path of docker image is changed in configStruct.go
2021-08-15 10:58:16 +03:00
Igor Gov
e4ff4a0745 Run CI checks in parallel (#210) 2021-08-12 18:04:57 +03:00
RoyUP9
f9677dbaa1 added resources to config (#208) 2021-08-12 16:33:32 +03:00
RoyUP9
0afab6c068 added set hierarchy, removed allowed set flags (#205) 2021-08-12 16:01:33 +03:00
172 changed files with 17335 additions and 2460 deletions


@@ -2,7 +2,7 @@
.dockerignore
.editorconfig
.gitignore
.env.*
**/.env*
Dockerfile
Makefile
LICENSE

.github/workflows/acceptance_tests.yml (new file, +32)

@@ -0,0 +1,32 @@
name: acceptance tests
on:
pull_request:
branches:
- 'main'
push:
branches:
- 'develop'
concurrency:
group: mizu-acceptance-tests-${{ github.ref }}
cancel-in-progress: true
jobs:
run-acceptance-tests:
name: Run acceptance tests
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '^1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Setup acceptance test
run: source ./acceptanceTests/setup.sh
- name: Test
run: make acceptance-test


@@ -1,23 +1,24 @@
name: test
name: PR validation
on:
pull_request:
branches:
- 'develop'
- 'main'
push:
branches:
- 'develop'
- 'main'
concurrency:
group: mizu-pr-validation-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
name: Build
build-cli:
name: Build CLI
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '^1.16'
- run: go version
- name: Check out code into the Go module directory
uses: actions/checkout@v2
@@ -25,15 +26,21 @@ jobs:
- name: Build CLI
run: make cli
build-agent:
name: Build Agent
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '^1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- shell: bash
run: |
sudo apt-get install libpcap-dev
- name: Build Agent
run: make agent
- name: Test
run: make test
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2


@@ -1,9 +1,15 @@
name: publish
on:
push:
branches:
- 'develop'
- 'main'
concurrency:
group: mizu-publish-${{ github.ref }}
cancel-in-progress: true
jobs:
docker:
runs-on: ubuntu-latest
@@ -78,4 +84,3 @@ jobs:
tag: ${{ steps.versioning.outputs.version }}
prerelease: ${{ github.ref != 'refs/heads/main' }}
bodyFile: 'cli/bin/README.md'

.github/workflows/tests_validation.yml (new file, +56)

@@ -0,0 +1,56 @@
name: tests validation
on:
pull_request:
branches:
- 'develop'
- 'main'
push:
branches:
- 'develop'
- 'main'
concurrency:
group: mizu-tests-validation-${{ github.ref }}
cancel-in-progress: true
jobs:
run-tests-cli:
name: Run CLI tests
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '^1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Test
run: make test-cli
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2
run-tests-agent:
name: Run Agent tests
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '^1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- shell: bash
run: |
sudo apt-get install libpcap-dev
- name: Test
run: make test-agent
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2

.gitignore (+7)

@@ -19,3 +19,10 @@ build
# Mac OS
.DS_Store
.vscode/
# Ignore the scripts that are created for development
*dev.*
# Environment variables
.env


@@ -11,7 +11,7 @@ FROM golang:1.16-alpine AS builder
# Set necessary environment variables needed for our image.
ENV CGO_ENABLED=1 GOOS=linux GOARCH=amd64
RUN apk add libpcap-dev gcc g++ make
RUN apk add libpcap-dev gcc g++ make bash
# Move to agent working directory (/agent-build).
WORKDIR /app/agent-build
@@ -19,6 +19,7 @@ WORKDIR /app/agent-build
COPY agent/go.mod agent/go.sum ./
COPY shared/go.mod shared/go.mod ../shared/
COPY tap/go.mod tap/go.mod ../tap/
COPY tap/api/go.* ../tap/api/
RUN go mod download
# cheap trick to make the build faster (As long as go.mod wasn't changes)
RUN go list -f '{{.Path}}@{{.Version}}' -m all | sed 1d | grep -e 'go-cache' -e 'sqlite' | xargs go get
@@ -38,6 +39,8 @@ RUN go build -ldflags="-s -w \
-X 'mizuserver/pkg/version.BuildTimestamp=${BUILD_TIMESTAMP}' \
-X 'mizuserver/pkg/version.SemVer=${SEM_VER}'" -o mizuagent .
COPY build_extensions.sh ..
RUN cd .. && /bin/bash build_extensions.sh
FROM alpine:3.13.5
@@ -46,6 +49,7 @@ WORKDIR /app
# Copy binary and config files from /build to root folder of scratch container.
COPY --from=builder ["/app/agent-build/mizuagent", "."]
COPY --from=builder ["/app/agent/build/extensions", "extensions"]
COPY --from=site-build ["/app/ui-build/build", "site"]
# gin-gonic runs in debug mode without this
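
The Dockerfile above builds the protocol extensions with build_extensions.sh and copies the resulting extensions directory into the final image. As a rough, hypothetical sketch (the commit mentions a loadExtensions function and an extensionsMap in the agent's main.go, but that code is not part of this diff), loading Go-plugin extensions from such a directory could look like the following; the exported symbol name "Dissector" is an assumption.

package main

import (
	"log"
	"os"
	"path/filepath"
	"plugin"
)

// loadExtensions opens every .so file in dir as a Go plugin and keeps the
// symbol each extension is assumed to export. This is a sketch, not the
// agent's actual loader.
func loadExtensions(dir string) (map[string]plugin.Symbol, error) {
	extensionsMap := make(map[string]plugin.Symbol)

	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for _, entry := range entries {
		if filepath.Ext(entry.Name()) != ".so" {
			continue // Go plugins are built as shared objects
		}
		p, err := plugin.Open(filepath.Join(dir, entry.Name()))
		if err != nil {
			return nil, err
		}
		// Assumed convention: each extension exports a "Dissector" symbol.
		sym, err := p.Lookup("Dissector")
		if err != nil {
			return nil, err
		}
		name := entry.Name()[:len(entry.Name())-len(".so")]
		extensionsMap[name] = sym
		log.Printf("loaded extension %q", name)
	}
	return extensionsMap, nil
}

func main() {
	if _, err := loadExtensions("./extensions"); err != nil {
		log.Fatalf("failed to load extensions: %v", err)
	}
}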


@@ -23,14 +23,18 @@ export SEM_VER?=0.0.0
ui: ## Build UI.
@(cd ui; npm i ; npm run build; )
@ls -l ui/build
@ls -l ui/build
cli: ## Build CLI.
@echo "building cli"; cd cli && $(MAKE) build
build-cli-ci: ## Build CLI for CI.
@echo "building cli for ci"; cd cli && $(MAKE) build GIT_BRANCH=ci SUFFIX=ci
agent: ## Build agent.
@(echo "building mizu agent .." )
@(cd agent; go build -o build/mizuagent main.go)
${MAKE} extensions
@ls -l agent/build
docker: ## Build and publish agent docker image.
@@ -42,6 +46,10 @@ push-docker: ## Build and publish agent docker image.
@echo "publishing Docker image .. "
./build-push-featurebranch.sh
build-docker-ci: ## Build agent docker image for CI.
@echo "building docker image for ci"
./build-agent-ci.sh
push-cli: ## Build and publish CLI.
@echo "publishing CLI .. "
@cd cli; $(MAKE) build-all
@@ -50,7 +58,6 @@ push-cli: ## Build and publish CLI.
gsutil cp -r ./cli/bin/* gs://${BUCKET_PATH}/
gsutil setmeta -r -h "Cache-Control:public, max-age=30" gs://${BUCKET_PATH}/\*
clean: clean-ui clean-agent clean-cli clean-docker ## Clean all build artifacts.
clean-ui: ## Clean UI.
@@ -65,6 +72,14 @@ clean-cli: ## Clean CLI.
clean-docker:
@(echo "DOCKER cleanup - NOT IMPLEMENTED YET " )
test: ## Run tests.
extensions:
./build_extensions.sh
test-cli:
@echo "running cli tests"; cd cli && $(MAKE) test
test-agent:
@echo "running agent tests"; cd agent && $(MAKE) test
acceptance-test:
@echo "running acceptance tests"; cd acceptanceTests && $(MAKE) test


@@ -2,7 +2,9 @@
# The API Traffic Viewer for Kubernetes
A simple-yet-powerful API traffic viewer for Kubernetes to help you troubleshoot and debug your microservices. Think TCPDump and Chrome Dev Tools combined
A simple-yet-powerful API traffic viewer for Kubernetes enabling you to view all API communication between microservices to help your debug and troubleshoot regressions.
Think TCPDump and Chrome Dev Tools combined.
![Simple UI](assets/mizu-ui.png)
@@ -38,8 +40,10 @@ SHA256 checksums are available on the [Releases](https://github.com/up9inc/mizu/
### Development (unstable) Build
Pick one from the [Releases](https://github.com/up9inc/mizu/releases) page
## Prerequisites
1. Set `KUBECONFIG` environment variable to your Kubernetes configuration. If this is not set, Mizu assumes that configuration is at `${HOME}/.kube/config`
## Kubeconfig & Permissions
While `mizu` most often works out of the box, you can influence its behavior:
1. [OPTIONAL] Set `KUBECONFIG` environment variable to your Kubernetes configuration. If this is not set, Mizu assumes that configuration is at `${HOME}/.kube/config`
2. `mizu` assumes user running the command has permissions to create resources (such as pods, services, namespaces) on your Kubernetes cluster (no worries - `mizu` resources are cleaned up upon termination)
For detailed list of k8s permissions see [PERMISSIONS](PERMISSIONS.md) document
@@ -155,3 +159,15 @@ Such validation may test response for specific JSON fields, headers, etc.
Please see [API RULES](docs/POLICY_RULES.md) page for more details and syntax.
## How to Run local UI
- run from mizu/agent `go run main.go --hars-read --hars-dir <folder>`
- copy Har files into the folder from last command
- change `MizuWebsocketURL` and `apiURL` in `api.js` file
- run from mizu/ui - `npm run start`
- open browser on `localhost:3000`

acceptanceTests/Makefile (new file, +2)

@@ -0,0 +1,2 @@
test: ## Run acceptance tests.
@go test ./... -timeout 1h


@@ -0,0 +1,283 @@
package acceptanceTests
import (
"fmt"
"gopkg.in/yaml.v3"
"io/ioutil"
"os"
"os/exec"
"testing"
)
type tapConfig struct {
GuiPort uint16 `yaml:"gui-port"`
}
type configStruct struct {
Tap tapConfig `yaml:"tap"`
}
func TestConfigRegenerate(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
configPath, configPathErr := getConfigPath()
if configPathErr != nil {
t.Errorf("failed to get config path, err: %v", cliPathErr)
return
}
configCmdArgs := getDefaultConfigCommandArgs()
configCmdArgs = append(configCmdArgs, "-r")
configCmd := exec.Command(cliPath, configCmdArgs...)
t.Logf("running command: %v", configCmd.String())
t.Cleanup(func() {
if err := os.Remove(configPath); err != nil {
t.Logf("failed to delete config file, err: %v", err)
}
})
if err := configCmd.Start(); err != nil {
t.Errorf("failed to start config command, err: %v", err)
return
}
if err := configCmd.Wait(); err != nil {
t.Errorf("failed to wait config command, err: %v", err)
return
}
_, readFileErr := ioutil.ReadFile(configPath)
if readFileErr != nil {
t.Errorf("failed to read config file, err: %v", readFileErr)
return
}
}
func TestConfigGuiPort(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
tests := []uint16{8898}
for _, guiPort := range tests {
t.Run(fmt.Sprintf("%d", guiPort), func(t *testing.T) {
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
configPath, configPathErr := getConfigPath()
if configPathErr != nil {
t.Errorf("failed to get config path, err: %v", cliPathErr)
return
}
config := configStruct{}
config.Tap.GuiPort = guiPort
configBytes, marshalErr := yaml.Marshal(config)
if marshalErr != nil {
t.Errorf("failed to marshal config, err: %v", marshalErr)
return
}
if writeErr := ioutil.WriteFile(configPath, configBytes, 0644); writeErr != nil {
t.Errorf("failed to write config to file, err: %v", writeErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
if err := os.Remove(configPath); err != nil {
t.Logf("failed to delete config file, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(guiPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
})
}
}
func TestConfigSetGuiPort(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
tests := []struct {
ConfigFileGuiPort uint16
SetGuiPort uint16
}{
{ConfigFileGuiPort: 8898, SetGuiPort: 8897},
}
for _, guiPortStruct := range tests {
t.Run(fmt.Sprintf("%d", guiPortStruct.SetGuiPort), func(t *testing.T) {
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
configPath, configPathErr := getConfigPath()
if configPathErr != nil {
t.Errorf("failed to get config path, err: %v", cliPathErr)
return
}
config := configStruct{}
config.Tap.GuiPort = guiPortStruct.ConfigFileGuiPort
configBytes, marshalErr := yaml.Marshal(config)
if marshalErr != nil {
t.Errorf("failed to marshal config, err: %v", marshalErr)
return
}
if writeErr := ioutil.WriteFile(configPath, configBytes, 0644); writeErr != nil {
t.Errorf("failed to write config to file, err: %v", writeErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--set", fmt.Sprintf("tap.gui-port=%v", guiPortStruct.SetGuiPort))
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
if err := os.Remove(configPath); err != nil {
t.Logf("failed to delete config file, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(guiPortStruct.SetGuiPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
})
}
}
func TestConfigFlagGuiPort(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
tests := []struct {
ConfigFileGuiPort uint16
FlagGuiPort uint16
}{
{ConfigFileGuiPort: 8898, FlagGuiPort: 8896},
}
for _, guiPortStruct := range tests {
t.Run(fmt.Sprintf("%d", guiPortStruct.FlagGuiPort), func(t *testing.T) {
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
configPath, configPathErr := getConfigPath()
if configPathErr != nil {
t.Errorf("failed to get config path, err: %v", cliPathErr)
return
}
config := configStruct{}
config.Tap.GuiPort = guiPortStruct.ConfigFileGuiPort
configBytes, marshalErr := yaml.Marshal(config)
if marshalErr != nil {
t.Errorf("failed to marshal config, err: %v", marshalErr)
return
}
if writeErr := ioutil.WriteFile(configPath, configBytes, 0644); writeErr != nil {
t.Errorf("failed to write config to file, err: %v", writeErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "-p", fmt.Sprintf("%v", guiPortStruct.FlagGuiPort))
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
if err := os.Remove(configPath); err != nil {
t.Logf("failed to delete config file, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(guiPortStruct.FlagGuiPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
})
}
}

acceptanceTests/go.mod (new file, +5)

@@ -0,0 +1,5 @@
module github.com/up9inc/mizu/tests
go 1.16
require gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b

acceptanceTests/go.sum (new file, +4)

@@ -0,0 +1,4 @@
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

acceptanceTests/setup.sh (new file, +55)

@@ -0,0 +1,55 @@
#!/bin/bash
PREFIX=$HOME/local/bin
VERSION=v1.22.0
echo "Attempting to install minikube and assorted tools to $PREFIX"
if ! [ -x "$(command -v kubectl)" ]; then
echo "Installing kubectl version $VERSION"
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$VERSION/bin/linux/amd64/kubectl"
chmod +x kubectl
mv kubectl "$PREFIX"
else
echo "kubetcl is already installed"
fi
if ! [ -x "$(command -v minikube)" ]; then
echo "Installing minikube version $VERSION"
curl -Lo minikube https://storage.googleapis.com/minikube/releases/$VERSION/minikube-linux-amd64
chmod +x minikube
mv minikube "$PREFIX"
else
echo "minikube is already installed"
fi
echo "Starting minikube..."
minikube start
echo "Creating mizu tests namespaces"
kubectl create namespace mizu-tests
kubectl create namespace mizu-tests2
echo "Creating httpbin deployments"
kubectl create deployment httpbin --image=kennethreitz/httpbin -n mizu-tests
kubectl create deployment httpbin2 --image=kennethreitz/httpbin -n mizu-tests
kubectl create deployment httpbin --image=kennethreitz/httpbin -n mizu-tests2
echo "Creating httpbin services"
kubectl expose deployment httpbin --type=NodePort --port=80 -n mizu-tests
kubectl expose deployment httpbin2 --type=NodePort --port=80 -n mizu-tests
kubectl expose deployment httpbin --type=NodePort --port=80 -n mizu-tests2
echo "Starting proxy"
kubectl proxy --port=8080 &
echo "Setting minikube docker env"
eval $(minikube docker-env)
echo "Build agent image"
make build-docker-ci
echo "Build cli"
make build-cli-ci

acceptanceTests/tap_test.go (new file, +765)

@@ -0,0 +1,765 @@
package acceptanceTests
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"os/exec"
"strings"
"testing"
"time"
)
func TestTapAndFetch(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
tests := []int{50}
for _, entriesCount := range tests {
t.Run(fmt.Sprintf("%d", entriesCount), func(t *testing.T) {
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
proxyUrl := getProxyUrl(defaultNamespaceName, defaultServiceName)
for i := 0; i < entriesCount; i++ {
if _, requestErr := executeHttpGetRequest(fmt.Sprintf("%v/get", proxyUrl)); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
}
entriesCheckFunc := func() error {
timestamp := time.Now().UnixNano() / int64(time.Millisecond)
entriesUrl := fmt.Sprintf("%v/api/entries?limit=%v&operator=lt&timestamp=%v", apiServerUrl, entriesCount, timestamp)
requestResult, requestErr := executeHttpGetRequest(entriesUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entries, err: %v", requestErr)
}
entries := requestResult.([]interface{})
if len(entries) == 0 {
return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
}
entry := entries[0].(map[string]interface{})
entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, entry["id"])
requestResult, requestErr = executeHttpGetRequest(entryUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entry, err: %v", requestErr)
}
if requestResult == nil {
return fmt.Errorf("unexpected nil entry result")
}
return nil
}
if err := retriesExecute(shortRetriesCount, entriesCheckFunc); err != nil {
t.Errorf("%v", err)
return
}
fetchCmdArgs := getDefaultFetchCommandArgs()
fetchCmd := exec.Command(cliPath, fetchCmdArgs...)
t.Logf("running command: %v", fetchCmd.String())
if err := fetchCmd.Start(); err != nil {
t.Errorf("failed to start fetch command, err: %v", err)
return
}
harCheckFunc := func() error {
harBytes, readFileErr := ioutil.ReadFile("./unknown_source.har")
if readFileErr != nil {
return fmt.Errorf("failed to read har file, err: %v", readFileErr)
}
harEntries, err := getEntriesFromHarBytes(harBytes)
if err != nil {
return fmt.Errorf("failed to get entries from har, err: %v", err)
}
if len(harEntries) == 0 {
return fmt.Errorf("unexpected har entries result - Expected more than 0 entries")
}
return nil
}
if err := retriesExecute(shortRetriesCount, harCheckFunc); err != nil {
t.Errorf("%v", err)
return
}
})
}
}
func TestTapGuiPort(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
tests := []uint16{8898}
for _, guiPort := range tests {
t.Run(fmt.Sprintf("%d", guiPort), func(t *testing.T) {
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "-p", fmt.Sprintf("%d", guiPort))
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(guiPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
})
}
}
func TestTapAllNamespaces(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
expectedPods := []struct{
Name string
Namespace string
}{
{Name: "httpbin", Namespace: "mizu-tests"},
{Name: "httpbin", Namespace: "mizu-tests2"},
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapCmdArgs = append(tapCmdArgs, "-A")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
podsUrl := fmt.Sprintf("%v/api/tapStatus", apiServerUrl)
requestResult, requestErr := executeHttpGetRequest(podsUrl)
if requestErr != nil {
t.Errorf("failed to get tap status, err: %v", requestErr)
return
}
pods, err := getPods(requestResult)
if err != nil {
t.Errorf("failed to get pods, err: %v", err)
return
}
for _, expectedPod := range expectedPods {
podFound := false
for _, pod := range pods {
podNamespace := pod["namespace"].(string)
podName := pod["name"].(string)
if expectedPod.Namespace == podNamespace && strings.Contains(podName, expectedPod.Name) {
podFound = true
break
}
}
if !podFound {
t.Errorf("unexpected result - expected pod not found, pod namespace: %v, pod name: %v", expectedPod.Namespace, expectedPod.Name)
return
}
}
}
func TestTapMultipleNamespaces(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
expectedPods := []struct{
Name string
Namespace string
}{
{Name: "httpbin", Namespace: "mizu-tests"},
{Name: "httpbin2", Namespace: "mizu-tests"},
{Name: "httpbin", Namespace: "mizu-tests2"},
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
var namespacesCmd []string
for _, expectedPod := range expectedPods {
namespacesCmd = append(namespacesCmd, "-n", expectedPod.Namespace)
}
tapCmdArgs = append(tapCmdArgs, namespacesCmd...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
podsUrl := fmt.Sprintf("%v/api/tapStatus", apiServerUrl)
requestResult, requestErr := executeHttpGetRequest(podsUrl)
if requestErr != nil {
t.Errorf("failed to get tap status, err: %v", requestErr)
return
}
pods, err := getPods(requestResult)
if err != nil {
t.Errorf("failed to get pods, err: %v", err)
return
}
if len(expectedPods) != len(pods) {
t.Errorf("unexpected result - expected pods length: %v, actual pods length: %v", len(expectedPods), len(pods))
return
}
for _, expectedPod := range expectedPods {
podFound := false
for _, pod := range pods {
podNamespace := pod["namespace"].(string)
podName := pod["name"].(string)
if expectedPod.Namespace == podNamespace && strings.Contains(podName, expectedPod.Name) {
podFound = true
break
}
}
if !podFound {
t.Errorf("unexpected result - expected pod not found, pod namespace: %v, pod name: %v", expectedPod.Namespace, expectedPod.Name)
return
}
}
}
func TestTapRegex(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
regexPodName := "httpbin2"
expectedPods := []struct{
Name string
Namespace string
}{
{Name: regexPodName, Namespace: "mizu-tests"},
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgsWithRegex(regexPodName)
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
podsUrl := fmt.Sprintf("%v/api/tapStatus", apiServerUrl)
requestResult, requestErr := executeHttpGetRequest(podsUrl)
if requestErr != nil {
t.Errorf("failed to get tap status, err: %v", requestErr)
return
}
pods, err := getPods(requestResult)
if err != nil {
t.Errorf("failed to get pods, err: %v", err)
return
}
if len(expectedPods) != len(pods) {
t.Errorf("unexpected result - expected pods length: %v, actual pods length: %v", len(expectedPods), len(pods))
return
}
for _, expectedPod := range expectedPods {
podFound := false
for _, pod := range pods {
podNamespace := pod["namespace"].(string)
podName := pod["name"].(string)
if expectedPod.Namespace == podNamespace && strings.Contains(podName, expectedPod.Name) {
podFound = true
break
}
}
if !podFound {
t.Errorf("unexpected result - expected pod not found, pod namespace: %v, pod name: %v", expectedPod.Namespace, expectedPod.Name)
return
}
}
}
func TestTapDryRun(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--dry-run")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
resultChannel := make(chan string, 1)
go func() {
if err := tapCmd.Wait(); err != nil {
resultChannel <- "fail"
return
}
resultChannel <- "success"
}()
go func() {
time.Sleep(shortRetriesCount * time.Second)
resultChannel <- "fail"
}()
testResult := <- resultChannel
if testResult != "success" {
t.Errorf("unexpected result - dry run cmd not done")
}
}
func TestTapRedact(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
proxyUrl := getProxyUrl(defaultNamespaceName, defaultServiceName)
requestBody := map[string]string{"User": "Mizu"}
for i := 0; i < defaultEntriesCount; i++ {
if _, requestErr := executeHttpPostRequest(fmt.Sprintf("%v/post", proxyUrl), requestBody); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
}
redactCheckFunc := func() error {
timestamp := time.Now().UnixNano() / int64(time.Millisecond)
entriesUrl := fmt.Sprintf("%v/api/entries?limit=%v&operator=lt&timestamp=%v", apiServerUrl, defaultEntriesCount, timestamp)
requestResult, requestErr := executeHttpGetRequest(entriesUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entries, err: %v", requestErr)
}
entries := requestResult.([]interface{})
if len(entries) == 0 {
return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
}
firstEntry := entries[0].(map[string]interface{})
entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, firstEntry["id"])
requestResult, requestErr = executeHttpGetRequest(entryUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entry, err: %v", requestErr)
}
entry := requestResult.(map[string]interface{})["entry"].(map[string]interface{})
entryRequest := entry["request"].(map[string]interface{})
headers := entryRequest["headers"].([]interface{})
for _, headerInterface := range headers {
header := headerInterface.(map[string]interface{})
if header["name"].(string) != "User-Agent" {
continue
}
userAgent := header["value"].(string)
if userAgent != "[REDACTED]" {
return fmt.Errorf("unexpected result - user agent is not redacted")
}
}
data := entryRequest["postData"].(map[string]interface{})
textDataStr := data["text"].(string)
var textData map[string]string
if parseErr := json.Unmarshal([]byte(textDataStr), &textData); parseErr != nil {
return fmt.Errorf("failed to parse text data, err: %v", parseErr)
}
if textData["User"] != "[REDACTED]" {
return fmt.Errorf("unexpected result - user in body is not redacted")
}
return nil
}
if err := retriesExecute(shortRetriesCount, redactCheckFunc); err != nil {
t.Errorf("%v", err)
return
}
}
func TestTapNoRedact(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--no-redact")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
proxyUrl := getProxyUrl(defaultNamespaceName, defaultServiceName)
requestBody := map[string]string{"User": "Mizu"}
for i := 0; i < defaultEntriesCount; i++ {
if _, requestErr := executeHttpPostRequest(fmt.Sprintf("%v/post", proxyUrl), requestBody); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
}
redactCheckFunc := func() error {
timestamp := time.Now().UnixNano() / int64(time.Millisecond)
entriesUrl := fmt.Sprintf("%v/api/entries?limit=%v&operator=lt&timestamp=%v", apiServerUrl, defaultEntriesCount, timestamp)
requestResult, requestErr := executeHttpGetRequest(entriesUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entries, err: %v", requestErr)
}
entries := requestResult.([]interface{})
if len(entries) == 0 {
return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
}
firstEntry := entries[0].(map[string]interface{})
entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, firstEntry["id"])
requestResult, requestErr = executeHttpGetRequest(entryUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entry, err: %v", requestErr)
}
entry := requestResult.(map[string]interface{})["entry"].(map[string]interface{})
entryRequest := entry["request"].(map[string]interface{})
headers := entryRequest["headers"].([]interface{})
for _, headerInterface := range headers {
header := headerInterface.(map[string]interface{})
if header["name"].(string) != "User-Agent" {
continue
}
userAgent := header["value"].(string)
if userAgent == "[REDACTED]" {
return fmt.Errorf("unexpected result - user agent is redacted")
}
}
data := entryRequest["postData"].(map[string]interface{})
textDataStr := data["text"].(string)
var textData map[string]string
if parseErr := json.Unmarshal([]byte(textDataStr), &textData); parseErr != nil {
return fmt.Errorf("failed to parse text data, err: %v", parseErr)
}
if textData["User"] == "[REDACTED]" {
return fmt.Errorf("unexpected result - user in body is redacted")
}
return nil
}
if err := retriesExecute(shortRetriesCount, redactCheckFunc); err != nil {
t.Errorf("%v", err)
return
}
}
func TestTapRegexMasking(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "-r", "Mizu")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
proxyUrl := getProxyUrl(defaultNamespaceName, defaultServiceName)
for i := 0; i < defaultEntriesCount; i++ {
response, requestErr := http.Post(fmt.Sprintf("%v/post", proxyUrl), "text/plain", bytes.NewBufferString("Mizu"))
if _, requestErr = executeHttpRequest(response, requestErr); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
}
redactCheckFunc := func() error {
timestamp := time.Now().UnixNano() / int64(time.Millisecond)
entriesUrl := fmt.Sprintf("%v/api/entries?limit=%v&operator=lt&timestamp=%v", apiServerUrl, defaultEntriesCount, timestamp)
requestResult, requestErr := executeHttpGetRequest(entriesUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entries, err: %v", requestErr)
}
entries := requestResult.([]interface{})
if len(entries) == 0 {
return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
}
firstEntry := entries[0].(map[string]interface{})
entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, firstEntry["id"])
requestResult, requestErr = executeHttpGetRequest(entryUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entry, err: %v", requestErr)
}
entry := requestResult.(map[string]interface{})["entry"].(map[string]interface{})
entryRequest := entry["request"].(map[string]interface{})
data := entryRequest["postData"].(map[string]interface{})
textData := data["text"].(string)
if textData != "[REDACTED]" {
return fmt.Errorf("unexpected result - body is not redacted")
}
return nil
}
if err := retriesExecute(shortRetriesCount, redactCheckFunc); err != nil {
t.Errorf("%v", err)
return
}
}


@@ -0,0 +1,205 @@
package acceptanceTests
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"os"
"os/exec"
"path"
"syscall"
"time"
)
const (
longRetriesCount = 100
shortRetriesCount = 10
defaultApiServerPort = 8899
defaultNamespaceName = "mizu-tests"
defaultServiceName = "httpbin"
defaultEntriesCount = 50
)
func getCliPath() (string, error) {
dir, filePathErr := os.Getwd()
if filePathErr != nil {
return "", filePathErr
}
cliPath := path.Join(dir, "../cli/bin/mizu_ci")
return cliPath, nil
}
func getConfigPath() (string, error) {
home, homeDirErr := os.UserHomeDir()
if homeDirErr != nil {
return "", homeDirErr
}
return path.Join(home, ".mizu", "config.yaml"), nil
}
func getProxyUrl(namespace string, service string) string {
return fmt.Sprintf("http://localhost:8080/api/v1/namespaces/%v/services/%v/proxy", namespace, service)
}
func getApiServerUrl(port uint16) string {
return fmt.Sprintf("http://localhost:%v/mizu", port)
}
func getDefaultCommandArgs() []string {
setFlag := "--set"
telemetry := "telemetry=false"
agentImage := "agent-image=gcr.io/up9-docker-hub/mizu/ci:0.0.0"
imagePullPolicy := "image-pull-policy=Never"
return []string{setFlag, telemetry, setFlag, agentImage, setFlag, imagePullPolicy}
}
func getDefaultTapCommandArgs() []string {
tapCommand := "tap"
defaultCmdArgs := getDefaultCommandArgs()
return append([]string{tapCommand}, defaultCmdArgs...)
}
func getDefaultTapCommandArgsWithRegex(regex string) []string {
tapCommand := "tap"
defaultCmdArgs := getDefaultCommandArgs()
return append([]string{tapCommand, regex}, defaultCmdArgs...)
}
func getDefaultTapNamespace() []string {
return []string{"-n", "mizu-tests"}
}
func getDefaultFetchCommandArgs() []string {
fetchCommand := "fetch"
defaultCmdArgs := getDefaultCommandArgs()
return append([]string{fetchCommand}, defaultCmdArgs...)
}
func getDefaultConfigCommandArgs() []string {
configCommand := "config"
defaultCmdArgs := getDefaultCommandArgs()
return append([]string{configCommand}, defaultCmdArgs...)
}
func retriesExecute(retriesCount int, executeFunc func() error) error {
var lastError error
for i := 0; i < retriesCount; i++ {
if err := executeFunc(); err != nil {
lastError = err
time.Sleep(1 * time.Second)
continue
}
return nil
}
return fmt.Errorf("reached max retries count, retries count: %v, last err: %v", retriesCount, lastError)
}
func waitTapPodsReady(apiServerUrl string) error {
resolvingUrl := fmt.Sprintf("%v/status/tappersCount", apiServerUrl)
tapPodsReadyFunc := func() error {
requestResult, requestErr := executeHttpGetRequest(resolvingUrl)
if requestErr != nil {
return requestErr
}
tappersCount := requestResult.(float64)
if tappersCount == 0 {
return fmt.Errorf("no tappers running")
}
return nil
}
return retriesExecute(longRetriesCount, tapPodsReadyFunc)
}
func jsonBytesToInterface(jsonBytes []byte) (interface{}, error) {
var result interface{}
if parseErr := json.Unmarshal(jsonBytes, &result); parseErr != nil {
return nil, parseErr
}
return result, nil
}
func executeHttpRequest(response *http.Response, requestErr error) (interface{}, error) {
if requestErr != nil {
return nil, requestErr
} else if response.StatusCode != 200 {
return nil, fmt.Errorf("invalid status code %v", response.StatusCode)
}
defer func() { response.Body.Close() }()
data, readErr := ioutil.ReadAll(response.Body)
if readErr != nil {
return nil, readErr
}
return jsonBytesToInterface(data)
}
func executeHttpGetRequest(url string) (interface{}, error) {
response, requestErr := http.Get(url)
return executeHttpRequest(response, requestErr)
}
func executeHttpPostRequest(url string, body interface{}) (interface{}, error) {
requestBody, jsonErr := json.Marshal(body)
if jsonErr != nil {
return nil, jsonErr
}
response, requestErr := http.Post(url, "application/json", bytes.NewBuffer(requestBody))
return executeHttpRequest(response, requestErr)
}
func cleanupCommand(cmd *exec.Cmd) error {
if err := cmd.Process.Signal(syscall.SIGQUIT); err != nil {
return err
}
if err := cmd.Wait(); err != nil {
return err
}
return nil
}
func getEntriesFromHarBytes(harBytes []byte) ([]interface{}, error) {
harInterface, convertErr := jsonBytesToInterface(harBytes)
if convertErr != nil {
return nil, convertErr
}
har := harInterface.(map[string]interface{})
harLog := har["log"].(map[string]interface{})
harEntries := harLog["entries"].([]interface{})
return harEntries, nil
}
func getPods(tapStatusInterface interface{}) ([]map[string]interface{}, error) {
tapStatus := tapStatusInterface.(map[string]interface{})
podsInterface := tapStatus["pods"].([]interface{})
var pods []map[string]interface{}
for _, podInterface := range podsInterface {
pods = append(pods, podInterface.(map[string]interface{}))
}
return pods, nil
}


@@ -1,2 +1,2 @@
test: ## Run agent tests.
@go test ./... -race -coverprofile=coverage.out -covermode=atomic
@go test ./... -coverpkg=./... -race -coverprofile=coverage.out -covermode=atomic


@@ -3,7 +3,6 @@ module mizuserver
go 1.16
require (
github.com/beevik/etree v1.1.0
github.com/djherbis/atime v1.0.0
github.com/fsnotify/fsnotify v1.4.9
github.com/gin-contrib/static v0.0.1
@@ -18,8 +17,9 @@ require (
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7
github.com/up9inc/mizu/shared v0.0.0
github.com/up9inc/mizu/tap v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0
go.mongodb.org/mongo-driver v1.5.1
go.mongodb.org/mongo-driver v1.7.1
gorm.io/driver/sqlite v1.1.4
gorm.io/gorm v1.21.8
k8s.io/api v0.21.0
@@ -30,3 +30,5 @@ require (
replace github.com/up9inc/mizu/shared v0.0.0 => ../shared
replace github.com/up9inc/mizu/tap v0.0.0 => ../tap
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../tap/api


@@ -42,9 +42,6 @@ github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb0
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs=
github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A=
github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4 h1:NJOOlc6ZJjix0A1rAU+nxruZtR8KboG1848yqpIUo4M=
github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4/go.mod h1:DQPxZS994Ld1Y8uwnJT+dRL04XPD0cElP/pHH/zEBHM=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
@@ -101,7 +98,6 @@ github.com/go-playground/validator/v10 v10.2.0/go.mod h1:uOYAAleCW8F/7oMFd6aG0GO
github.com/go-playground/validator/v10 v10.4.1/go.mod h1:nlOn6nFhuKACm19sB/8EGNn9GlaMV7XkbRSipzJ0Ii4=
github.com/go-playground/validator/v10 v10.5.0 h1:X9rflw/KmpACwT8zdrm1upefpvdy6ur8d1kWyq6sg3E=
github.com/go-playground/validator/v10 v10.5.0/go.mod h1:xm76BBt941f7yWdGnI2DVPFFg1UK3YY04qifoXU3lOk=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gobuffalo/attrs v0.0.0-20190224210810-a9411de4debd/go.mod h1:4duuawTqi2wkkpB4ePgWMaai6/Kc6WEz83bhFwpHzj0=
github.com/gobuffalo/depgen v0.0.0-20190329151759-d478694a28d3/go.mod h1:3STtPUQYuzV0gBVOY3vy6CfMm/ljR4pABfrTeHNLHUY=
@@ -194,8 +190,6 @@ github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkr
github.com/jinzhu/now v1.1.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jinzhu/now v1.1.2 h1:eVKgfIdy9b6zbWBMgFpfDPoAMifwSZagU9HmEU6zgiI=
github.com/jinzhu/now v1.1.2/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
@@ -292,8 +286,8 @@ github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0/go.mod h1:/LWChgwKmv
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.mongodb.org/mongo-driver v1.5.1 h1:9nOVLGDfOaZ9R0tBumx/BcuqkbFpyTCU2r/Po7A2azI=
go.mongodb.org/mongo-driver v1.5.1/go.mod h1:gRXCHX4Jo7J0IJ1oDQyUxF7jfy19UfxniMS4xxMmUqw=
go.mongodb.org/mongo-driver v1.7.1 h1:jwqTeEM3x6L9xDXrCxN0Hbg7vdGfPBOTIkr0+/LYZDA=
go.mongodb.org/mongo-driver v1.7.1/go.mod h1:Q4oFMbo1+MSNqICAdYMlC/zSTrwCogR4R8NzkI+yfU8=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@@ -362,9 +356,8 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7 h1:OgUuv8lsRpBibGNbSizVwKWlysjaNzmC9gYMhPVfqFM=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210421230115-4e50805a0758 h1:aEpZnXcAmXkd6AvLb2OPt+EN1Zu/8Ne3pCqPjja5PXY=
golang.org/x/net v0.0.0-20210421230115-4e50805a0758/go.mod h1:72T/g9IO56b78aLF+1Kcs5dz7/ng1VjMUvfKvpfy+jM=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -410,9 +403,8 @@ golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073 h1:8qxJSnu+7dRq6upnbntrmriWByIakBuct5OM/MdQC1M=
golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe h1:WdX7u8s3yOigWAhHEaDl8r9G+4XwFQEQFtBMYyN+kXQ=
golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
@@ -423,9 +415,8 @@ golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=

View File

@@ -4,21 +4,29 @@ import (
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"log"
"mizuserver/pkg/api"
"mizuserver/pkg/controllers"
"mizuserver/pkg/models"
"mizuserver/pkg/routes"
"mizuserver/pkg/utils"
"net/http"
"os"
"os/signal"
"path"
"path/filepath"
"plugin"
"sort"
"strings"
"github.com/gin-contrib/static"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
"github.com/romana/rlog"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"
"mizuserver/pkg/api"
"mizuserver/pkg/models"
"mizuserver/pkg/routes"
"mizuserver/pkg/sensitiveDataFiltering"
"mizuserver/pkg/utils"
"net/http"
"os"
"os/signal"
"strings"
tapApi "github.com/up9inc/mizu/tap/api"
)
var tapperMode = flag.Bool("tap", false, "Run in tapper mode without API")
@@ -26,25 +34,32 @@ var apiServerMode = flag.Bool("api-server", false, "Run in API server mode with
var standaloneMode = flag.Bool("standalone", false, "Run in standalone tapper and API mode")
var apiServerAddress = flag.String("api-server-address", "", "Address of mizu API server")
var namespace = flag.String("namespace", "", "Resolve IPs if they belong to resources in this namespace (default is all)")
var harsReaderMode = flag.Bool("hars-read", false, "Run in hars-read mode")
var harsDir = flag.String("hars-dir", "", "Directory to read hars from")
var extensions []*tapApi.Extension // global
var extensionsMap map[string]*tapApi.Extension // global
func main() {
flag.Parse()
loadExtensions()
hostMode := os.Getenv(shared.HostModeEnvVar) == "1"
tapOpts := &tap.TapOpts{HostMode: hostMode}
if !*tapperMode && !*apiServerMode && !*standaloneMode {
panic("One of the flags --tap, --api or --standalone must be provided")
if !*tapperMode && !*apiServerMode && !*standaloneMode && !*harsReaderMode {
panic("One of the flags --tap, --api or --standalone or --hars-read must be provided")
}
if *standaloneMode {
api.StartResolving(*namespace)
harOutputChannel, outboundLinkOutputChannel := tap.StartPassiveTapper(tapOpts)
filteredHarChannel := make(chan *tap.OutputChannelItem)
outputItemsChannel := make(chan *tapApi.OutputChannelItem)
filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
tap.StartPassiveTapper(tapOpts, outputItemsChannel, extensions)
go filterHarItems(harOutputChannel, filteredHarChannel, getTrafficFilteringOptions())
go api.StartReadingEntries(filteredHarChannel, nil)
go api.StartReadingOutbound(outboundLinkOutputChannel)
go filterItems(outputItemsChannel, filteredOutputItemsChannel, getTrafficFilteringOptions())
go api.StartReadingEntries(filteredOutputItemsChannel, nil, extensionsMap)
// go api.StartReadingOutbound(outboundLinkOutputChannel)
hostApi(nil)
} else if *tapperMode {
@@ -58,25 +73,33 @@ func main() {
rlog.Infof("Filtering for the following authorities: %v", tap.GetFilterIPs())
}
harOutputChannel, outboundLinkOutputChannel := tap.StartPassiveTapper(tapOpts)
// harOutputChannel, outboundLinkOutputChannel := tap.StartPassiveTapper(tapOpts)
filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
tap.StartPassiveTapper(tapOpts, filteredOutputItemsChannel, extensions)
socketConnection, err := shared.ConnectToSocketServer(*apiServerAddress, shared.DEFAULT_SOCKET_RETRIES, shared.DEFAULT_SOCKET_RETRY_SLEEP_TIME, false)
if err != nil {
panic(fmt.Sprintf("Error connecting to socket server at %s %v", *apiServerAddress, err))
}
go pipeTapChannelToSocket(socketConnection, harOutputChannel)
go pipeOutboundLinksChannelToSocket(socketConnection, outboundLinkOutputChannel)
go pipeTapChannelToSocket(socketConnection, filteredOutputItemsChannel)
// go pipeOutboundLinksChannelToSocket(socketConnection, outboundLinkOutputChannel)
} else if *apiServerMode {
api.StartResolving(*namespace)
socketHarOutChannel := make(chan *tap.OutputChannelItem, 1000)
filteredHarChannel := make(chan *tap.OutputChannelItem)
outputItemsChannel := make(chan *tapApi.OutputChannelItem)
filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
go filterHarItems(socketHarOutChannel, filteredHarChannel, getTrafficFilteringOptions())
go api.StartReadingEntries(filteredHarChannel, nil)
go filterItems(outputItemsChannel, filteredOutputItemsChannel, getTrafficFilteringOptions())
go api.StartReadingEntries(filteredOutputItemsChannel, nil, extensionsMap)
hostApi(socketHarOutChannel)
hostApi(outputItemsChannel)
} else if *harsReaderMode {
outputItemsChannel := make(chan *tapApi.OutputChannelItem, 1000)
filteredHarChannel := make(chan *tapApi.OutputChannelItem)
go filterItems(outputItemsChannel, filteredHarChannel, getTrafficFilteringOptions())
go api.StartReadingEntries(filteredHarChannel, harsDir, extensionsMap)
hostApi(nil)
}
signalChan := make(chan os.Signal, 1)
@@ -86,7 +109,50 @@ func main() {
rlog.Info("Exiting")
}
func hostApi(socketHarOutputChannel chan<- *tap.OutputChannelItem) {
func loadExtensions() {
dir, _ := filepath.Abs(filepath.Dir(os.Args[0]))
extensionsDir := path.Join(dir, "./extensions/")
files, err := ioutil.ReadDir(extensionsDir)
if err != nil {
log.Fatal(err)
}
extensions = make([]*tapApi.Extension, len(files))
extensionsMap = make(map[string]*tapApi.Extension)
for i, file := range files {
filename := file.Name()
log.Printf("Loading extension: %s\n", filename)
extension := &tapApi.Extension{
Path: path.Join(extensionsDir, filename),
}
plug, _ := plugin.Open(extension.Path)
extension.Plug = plug
symDissector, err := plug.Lookup("Dissector")
var dissector tapApi.Dissector
var ok bool
dissector, ok = symDissector.(tapApi.Dissector)
if err != nil || !ok {
panic(fmt.Sprintf("Failed to load the extension: %s\n", extension.Path))
}
dissector.Register(extension)
extension.Dissector = dissector
extensions[i] = extension
extensionsMap[extension.Protocol.Name] = extension
}
sort.Slice(extensions, func(i, j int) bool {
return extensions[i].Protocol.Priority < extensions[j].Protocol.Priority
})
for _, extension := range extensions {
log.Printf("Extension Properties: %+v\n", extension)
}
controllers.InitExtensionsMap(extensionsMap)
}
func hostApi(socketHarOutputChannel chan<- *tapApi.OutputChannelItem) {
app := gin.Default()
app.GET("/echo", func(c *gin.Context) {
@@ -94,9 +160,10 @@ func hostApi(socketHarOutputChannel chan<- *tap.OutputChannelItem) {
})
eventHandlers := api.RoutesEventHandlers{
SocketHarOutChannel: socketHarOutputChannel,
SocketOutChannel: socketHarOutputChannel,
}
app.Use(DisableRootStaticCache())
app.Use(static.ServeRoot("/", "./site"))
app.Use(CORSMiddleware()) // This has to be called after the static middleware; it does not work if it's called before
@@ -109,6 +176,17 @@ func hostApi(socketHarOutputChannel chan<- *tap.OutputChannelItem) {
utils.StartServer(app)
}
func DisableRootStaticCache() gin.HandlerFunc {
return func(c *gin.Context) {
if c.Request.RequestURI == "/" {
// Disable cache only for the main static route
c.Writer.Header().Set("Cache-Control", "no-store")
}
c.Next()
}
}
func CORSMiddleware() gin.HandlerFunc {
return func(c *gin.Context) {
c.Writer.Header().Set("Access-Control-Allow-Origin", "*")
@@ -125,20 +203,34 @@ func CORSMiddleware() gin.HandlerFunc {
}
}
func parseEnvVar(env string) map[string][]string {
var mapOfList map[string][]string
val, present := os.LookupEnv(env)
if !present {
return mapOfList
}
err := json.Unmarshal([]byte(val), &mapOfList)
if err != nil {
panic(fmt.Sprintf("env var %s's value of %s is invalid! must be map[string][]string %v", env, mapOfList, err))
}
return mapOfList
}
func getTapTargets() []string {
nodeName := os.Getenv(shared.NodeNameEnvVar)
var tappedAddressesPerNodeDict map[string][]string
err := json.Unmarshal([]byte(os.Getenv(shared.TappedAddressesPerNodeDictEnvVar)), &tappedAddressesPerNodeDict)
if err != nil {
panic(fmt.Sprintf("env var %s's value of %s is invalid! must be map[string][]string %v", shared.TappedAddressesPerNodeDictEnvVar, tappedAddressesPerNodeDict, err))
}
tappedAddressesPerNodeDict := parseEnvVar(shared.TappedAddressesPerNodeDictEnvVar)
return tappedAddressesPerNodeDict[nodeName]
}
func getTrafficFilteringOptions() *shared.TrafficFilteringOptions {
filteringOptionsJson := os.Getenv(shared.MizuFilteringOptionsEnvVar)
if filteringOptionsJson == "" {
return nil
return &shared.TrafficFilteringOptions{
HealthChecksUserAgentHeaders: []string{},
}
}
var filteringOptions shared.TrafficFilteringOptions
err := json.Unmarshal([]byte(filteringOptionsJson), &filteringOptions)
@@ -149,7 +241,7 @@ func getTrafficFilteringOptions() *shared.TrafficFilteringOptions {
return &filteringOptions
}
func filterHarItems(inChannel <-chan *tap.OutputChannelItem, outChannel chan *tap.OutputChannelItem, filterOptions *shared.TrafficFilteringOptions) {
func filterItems(inChannel <-chan *tapApi.OutputChannelItem, outChannel chan *tapApi.OutputChannelItem, filterOptions *shared.TrafficFilteringOptions) {
for message := range inChannel {
if message.ConnectionInfo.IsOutgoing && api.CheckIsServiceIP(message.ConnectionInfo.ServerIP) {
continue
@@ -159,19 +251,27 @@ func filterHarItems(inChannel <-chan *tap.OutputChannelItem, outChannel chan *ta
continue
}
if !filterOptions.DisableRedaction {
sensitiveDataFiltering.FilterSensitiveInfoFromHarRequest(message, filterOptions)
}
// if !filterOptions.DisableRedaction {
// sensitiveDataFiltering.FilterSensitiveInfoFromHarRequest(message, filterOptions)
// }
outChannel <- message
}
}
func isHealthCheckByUserAgent(message *tap.OutputChannelItem, userAgentsToIgnore []string) bool {
for _, header := range message.HarEntry.Request.Headers {
if strings.ToLower(header.Name) == "user-agent" {
func isHealthCheckByUserAgent(item *tapApi.OutputChannelItem, userAgentsToIgnore []string) bool {
if item.Protocol.Name != "http" {
return false
}
request := item.Pair.Request.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
for _, header := range reqDetails["headers"].([]interface{}) {
h := header.(map[string]interface{})
if strings.ToLower(h["name"].(string)) == "user-agent" {
for _, userAgent := range userAgentsToIgnore {
if strings.Contains(strings.ToLower(header.Value), strings.ToLower(userAgent)) {
if strings.Contains(strings.ToLower(h["value"].(string)), strings.ToLower(userAgent)) {
return true
}
}
@@ -181,7 +281,7 @@ func isHealthCheckByUserAgent(message *tap.OutputChannelItem, userAgentsToIgnore
return false
}
func pipeTapChannelToSocket(connection *websocket.Conn, messageDataChannel <-chan *tap.OutputChannelItem) {
func pipeTapChannelToSocket(connection *websocket.Conn, messageDataChannel <-chan *tapApi.OutputChannelItem) {
if connection == nil {
panic("Websocket connection is nil")
}
@@ -197,6 +297,8 @@ func pipeTapChannelToSocket(connection *websocket.Conn, messageDataChannel <-cha
continue
}
// NOTE: This is where the `*tapApi.OutputChannelItem` leaves the code
// and goes into the intermediate WebSocket.
err = connection.WriteMessage(websocket.TextMessage, marshaledData)
if err != nil {
rlog.Infof("error sending message through socket server %s, (%v,%+v)\n", err, err, err)

View File

@@ -5,8 +5,8 @@ import (
"context"
"encoding/json"
"fmt"
"mizuserver/pkg/database"
"mizuserver/pkg/holder"
"mizuserver/pkg/providers"
"net/url"
"os"
"path"
@@ -14,12 +14,13 @@ import (
"strings"
"time"
"go.mongodb.org/mongo-driver/bson/primitive"
"github.com/google/martian/har"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap"
"go.mongodb.org/mongo-driver/bson/primitive"
tapApi "github.com/up9inc/mizu/tap/api"
"mizuserver/pkg/database"
"mizuserver/pkg/models"
"mizuserver/pkg/resolver"
"mizuserver/pkg/utils"
@@ -49,11 +50,11 @@ func StartResolving(namespace string) {
holder.SetResolver(res)
}
func StartReadingEntries(harChannel <-chan *tap.OutputChannelItem, workingDir *string) {
func StartReadingEntries(harChannel <-chan *tapApi.OutputChannelItem, workingDir *string, extensionsMap map[string]*tapApi.Extension) {
if workingDir != nil && *workingDir != "" {
startReadingFiles(*workingDir)
} else {
startReadingChannel(harChannel)
startReadingChannel(harChannel, extensionsMap)
}
}
@@ -87,30 +88,36 @@ func startReadingFiles(workingDir string) {
decErr := json.NewDecoder(bufio.NewReader(file)).Decode(&inputHar)
utils.CheckErr(decErr)
for _, entry := range inputHar.Log.Entries {
time.Sleep(time.Millisecond * 250)
connectionInfo := &tap.ConnectionInfo{
ClientIP: fileInfo.Name(),
ClientPort: "",
ServerIP: "",
ServerPort: "",
IsOutgoing: false,
}
saveHarToDb(entry, connectionInfo)
}
// for _, entry := range inputHar.Log.Entries {
// time.Sleep(time.Millisecond * 250)
// // connectionInfo := &tap.ConnectionInfo{
// // ClientIP: fileInfo.Name(),
// // ClientPort: "",
// // ServerIP: "",
// // ServerPort: "",
// // IsOutgoing: false,
// // }
// // saveHarToDb(entry, connectionInfo)
// }
rmErr := os.Remove(inputFilePath)
utils.CheckErr(rmErr)
}
}
func startReadingChannel(outputItems <-chan *tap.OutputChannelItem) {
func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extensionsMap map[string]*tapApi.Extension) {
if outputItems == nil {
panic("Channel of captured messages is nil")
}
for item := range outputItems {
providers.EntryAdded()
saveHarToDb(item.HarEntry, item.ConnectionInfo)
extension := extensionsMap[item.Protocol.Name]
resolvedSource, resolvedDestination := resolveIP(item.ConnectionInfo)
mizuEntry := extension.Dissector.Analyze(item, primitive.NewObjectID().Hex(), resolvedSource, resolvedDestination)
baseEntry := extension.Dissector.Summarize(mizuEntry)
mizuEntry.EstimatedSizeBytes = getEstimatedEntrySizeBytes(mizuEntry)
database.CreateEntry(mizuEntry)
baseEntryBytes, _ := models.CreateBaseEntryWebSocketMessage(baseEntry)
BroadcastToBrowserClients(baseEntryBytes)
}
}
@@ -121,14 +128,7 @@ func StartReadingOutbound(outboundLinkChannel <-chan *tap.OutboundLink) {
}
}
func saveHarToDb(entry *har.Entry, connectionInfo *tap.ConnectionInfo) {
entryBytes, _ := json.Marshal(entry)
serviceName, urlPath := getServiceNameFromUrl(entry.Request.URL)
entryId := primitive.NewObjectID().Hex()
var (
resolvedSource string
resolvedDestination string
)
func resolveIP(connectionInfo *tapApi.ConnectionInfo) (resolvedSource string, resolvedDestination string) {
if k8sResolver != nil {
unresolvedSource := connectionInfo.ClientIP
resolvedSource = k8sResolver.Resolve(unresolvedSource)
@@ -147,32 +147,7 @@ func saveHarToDb(entry *har.Entry, connectionInfo *tap.ConnectionInfo) {
}
}
}
mizuEntry := models.MizuEntry{
EntryId: entryId,
Entry: string(entryBytes), // simple way to store it and not convert to bytes
Service: serviceName,
Url: entry.Request.URL,
Path: urlPath,
Method: entry.Request.Method,
Status: entry.Response.Status,
RequestSenderIp: connectionInfo.ClientIP,
Timestamp: entry.StartedDateTime.UnixNano() / int64(time.Millisecond),
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
IsOutgoing: connectionInfo.IsOutgoing,
}
mizuEntry.EstimatedSizeBytes = getEstimatedEntrySizeBytes(mizuEntry)
database.CreateEntry(&mizuEntry)
baseEntry := models.BaseEntryDetails{}
if err := models.GetEntry(&mizuEntry, &baseEntry); err != nil {
return
}
baseEntry.Rules = models.RunValidationRulesState(*entry, serviceName)
baseEntry.Latency = entry.Timings.Receive
baseEntryBytes, _ := models.CreateBaseEntryWebSocketMessage(&baseEntry)
BroadcastToBrowserClients(baseEntryBytes)
return resolvedSource, resolvedDestination
}
func getServiceNameFromUrl(inputUrl string) (string, string) {
@@ -182,11 +157,14 @@ func getServiceNameFromUrl(inputUrl string) (string, string) {
}
func CheckIsServiceIP(address string) bool {
if k8sResolver == nil {
return false
}
return k8sResolver.CheckIsServiceIP(address)
}
// gives a rough estimate of the size this will take up in the db, good enough for maintaining db size limit accurately
func getEstimatedEntrySizeBytes(mizuEntry models.MizuEntry) int {
func getEstimatedEntrySizeBytes(mizuEntry *tapApi.MizuEntry) int {
sizeBytes := len(mizuEntry.Entry)
sizeBytes += len(mizuEntry.EntryId)
sizeBytes += len(mizuEntry.Service)
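
startReadingChannel above replaces the old HAR-specific saveHarToDb flow with a protocol-agnostic pipeline: look up the extension by protocol name, let its dissector analyze the item into a storable entry, persist it and broadcast a summary. A condensed, self-contained sketch of that flow with reduced stand-in types (the real ones live in tap/api and the database package):

```go
package main

import "fmt"

type channelItem struct {
	Protocol string
	Payload  string
}

type dbEntry struct {
	EntryId string
	Entry   string
}

// Hypothetical slice of the dissector API; the real interface also carries methods such
// as Register, Dissect and Represent.
type dissector interface {
	Analyze(item *channelItem, entryId string) *dbEntry
	Summarize(entry *dbEntry) string
}

type echoDissector struct{}

func (echoDissector) Analyze(item *channelItem, entryId string) *dbEntry {
	return &dbEntry{EntryId: entryId, Entry: item.Payload}
}

func (echoDissector) Summarize(entry *dbEntry) string {
	return fmt.Sprintf("%s (%d bytes)", entry.EntryId, len(entry.Entry))
}

func consume(items <-chan *channelItem, dissectors map[string]dissector) {
	for item := range items {
		d, ok := dissectors[item.Protocol] // extensionsMap lookup in the real code
		if !ok {
			continue
		}
		entry := d.Analyze(item, "entry-1")           // protocol-specific parsing into a storable entry
		fmt.Println("store:", entry.EntryId)          // database.CreateEntry in the real code
		fmt.Println("broadcast:", d.Summarize(entry)) // BroadcastToBrowserClients in the real code
	}
}

func main() {
	items := make(chan *channelItem, 1)
	items <- &channelItem{Protocol: "http", Payload: "GET / HTTP/1.1"}
	close(items)
	consume(items, map[string]dissector{"http": echoDissector{}})
}
```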

View File

@@ -8,9 +8,10 @@ import (
"mizuserver/pkg/up9"
"sync"
tapApi "github.com/up9inc/mizu/tap/api"
"github.com/romana/rlog"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"
)
var browserClientSocketUUIDs = make([]int, 0)
@@ -18,7 +19,7 @@ var socketListLock = sync.Mutex{}
type RoutesEventHandlers struct {
EventHandlers
SocketHarOutChannel chan<- *tap.OutputChannelItem
SocketOutChannel chan<- *tapApi.OutputChannelItem
}
func init() {
@@ -28,6 +29,7 @@ func init() {
func (h *RoutesEventHandlers) WebSocketConnect(socketId int, isTapper bool) {
if isTapper {
rlog.Infof("Websocket event - Tapper connected, socket ID: %d", socketId)
providers.TapperAdded()
} else {
rlog.Infof("Websocket event - Browser socket connected, socket ID: %d", socketId)
socketListLock.Lock()
@@ -39,6 +41,7 @@ func (h *RoutesEventHandlers) WebSocketConnect(socketId int, isTapper bool) {
func (h *RoutesEventHandlers) WebSocketDisconnect(socketId int, isTapper bool) {
if isTapper {
rlog.Infof("Websocket event - Tapper disconnected, socket ID: %d", socketId)
providers.TapperRemoved()
} else {
rlog.Infof("Websocket event - Browser socket disconnected, socket ID: %d", socketId)
socketListLock.Lock()
@@ -71,7 +74,8 @@ func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
if err != nil {
rlog.Infof("Could not unmarshal message of message type %s %v\n", socketMessageBase.MessageType, err)
} else {
h.SocketHarOutChannel <- tappedEntryMessage.Data
// NOTE: This is where the message comes back from the intermediate WebSocket to code.
h.SocketOutChannel <- tappedEntryMessage.Data
}
case shared.WebSocketMessageTypeUpdateStatus:
var statusMessage shared.WebSocketStatusMessage

View File

@@ -16,8 +16,16 @@ import (
"github.com/gin-gonic/gin"
"github.com/google/martian/har"
"github.com/romana/rlog"
tapApi "github.com/up9inc/mizu/tap/api"
)
var extensionsMap map[string]*tapApi.Extension // global
func InitExtensionsMap(ref map[string]*tapApi.Extension) {
extensionsMap = ref
}
func GetEntries(c *gin.Context) {
entriesFilter := &models.EntriesFilter{}
@@ -31,7 +39,7 @@ func GetEntries(c *gin.Context) {
order := database.OperatorToOrderMapping[entriesFilter.Operator]
operatorSymbol := database.OperatorToSymbolMapping[entriesFilter.Operator]
var entries []models.MizuEntry
var entries []tapApi.MizuEntry
database.GetEntriesTable().
Order(fmt.Sprintf("timestamp %s", order)).
Where(fmt.Sprintf("timestamp %s %v", operatorSymbol, entriesFilter.Timestamp)).
@@ -44,9 +52,9 @@ func GetEntries(c *gin.Context) {
utils.ReverseSlice(entries)
}
baseEntries := make([]models.BaseEntryDetails, 0)
baseEntries := make([]tapApi.BaseEntryDetails, 0)
for _, data := range entries {
harEntry := models.BaseEntryDetails{}
harEntry := tapApi.BaseEntryDetails{}
if err := models.GetEntry(&data, &harEntry); err != nil {
continue
}
@@ -80,7 +88,7 @@ func GetHARs(c *gin.Context) {
timestampTo = entriesFilter.To
}
var entries []models.MizuEntry
var entries []tapApi.MizuEntry
database.GetEntriesTable().
Where(fmt.Sprintf("timestamp BETWEEN %v AND %v", timestampFrom, timestampTo)).
Order(fmt.Sprintf("timestamp %s", order)).
@@ -207,7 +215,7 @@ func GetFullEntries(c *gin.Context) {
}
func GetEntry(c *gin.Context) {
var entryData models.MizuEntry
var entryData tapApi.MizuEntry
database.GetEntriesTable().
Where(map[string]string{"entryId": c.Param("entryId")}).
First(&entryData)
@@ -219,20 +227,29 @@ func GetEntry(c *gin.Context) {
"msg": "Can't get entry details",
})
}
fullEntryWithPolicy := models.FullEntryWithPolicy{}
if err := models.GetEntry(&entryData, &fullEntryWithPolicy); err != nil {
c.JSON(http.StatusInternalServerError, map[string]interface{}{
"error": true,
"msg": "Can't get entry details",
})
}
c.JSON(http.StatusOK, fullEntryWithPolicy)
// FIXME: Fix the part below
// fullEntryWithPolicy := models.FullEntryWithPolicy{}
// if err := models.GetEntry(&entryData, &fullEntryWithPolicy); err != nil {
// c.JSON(http.StatusInternalServerError, map[string]interface{}{
// "error": true,
// "msg": "Can't get entry details",
// })
// }
extension := extensionsMap[entryData.ProtocolName]
protocol, representation, bodySize, _ := extension.Dissector.Represent(&entryData)
c.JSON(http.StatusOK, tapApi.MizuEntryWrapper{
Protocol: protocol,
Representation: string(representation),
BodySize: bodySize,
Data: entryData,
})
}
func DeleteAllEntries(c *gin.Context) {
database.GetEntriesTable().
Where("1 = 1").
Delete(&models.MizuEntry{})
Delete(&tapApi.MizuEntry{})
c.JSON(http.StatusOK, map[string]string{
"msg": "Success",

View File

@@ -30,3 +30,7 @@ func PostTappedPods(c *gin.Context) {
api.BroadcastToBrowserClients(jsonBytes)
}
}
func GetTappersCount(c *gin.Context) {
c.JSON(http.StatusOK, providers.TappersCount)
}

View File

@@ -2,16 +2,18 @@ package database
import (
"fmt"
"mizuserver/pkg/utils"
"time"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
"gorm.io/gorm/logger"
"mizuserver/pkg/models"
"mizuserver/pkg/utils"
"time"
tapApi "github.com/up9inc/mizu/tap/api"
)
const (
DBPath = "./entries.db"
DBPath = "./entries.db"
OrderDesc = "desc"
OrderAsc = "asc"
LT = "lt"
@@ -19,8 +21,8 @@ const (
)
var (
DB *gorm.DB
IsDBLocked = false
DB *gorm.DB
IsDBLocked = false
OperatorToSymbolMapping = map[string]string{
LT: "<",
GT: ">",
@@ -40,7 +42,7 @@ func GetEntriesTable() *gorm.DB {
return DB.Table("mizu_entries")
}
func CreateEntry(entry *models.MizuEntry) {
func CreateEntry(entry *tapApi.MizuEntry) {
if IsDBLocked {
return
}
@@ -51,14 +53,13 @@ func initDataBase(databasePath string) *gorm.DB {
temp, _ := gorm.Open(sqlite.Open(databasePath), &gorm.Config{
Logger: &utils.TruncatingLogger{LogLevel: logger.Warn, SlowThreshold: 500 * time.Millisecond},
})
_ = temp.AutoMigrate(&models.MizuEntry{}) // this will ensure table is created
_ = temp.AutoMigrate(&tapApi.MizuEntry{}) // this will ensure table is created
return temp
}
func GetEntriesFromDb(timestampFrom int64, timestampTo int64) []models.MizuEntry {
func GetEntriesFromDb(timestampFrom int64, timestampTo int64) []tapApi.MizuEntry {
order := OrderDesc
var entries []models.MizuEntry
var entries []tapApi.MizuEntry
GetEntriesTable().
Where(fmt.Sprintf("timestamp BETWEEN %v AND %v", timestampFrom, timestampTo)).
Order(fmt.Sprintf("timestamp %s", order)).
@@ -70,4 +71,3 @@ func GetEntriesFromDb(timestampFrom int64, timestampTo int64) []models.MizuEntry
}
return entries
}
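
GetEntriesFromDb above builds its range condition with `fmt.Sprintf`. As a defensive variation (not what this diff does), the same query can use GORM's `?` placeholders; the table and column names below mirror the diff, everything else is illustrative:

```go
package main

import (
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

// Reduced stand-in for the entry model; only the columns used here are declared.
type MizuEntry struct {
	EntryId   string `gorm:"column:entryId"`
	Timestamp int64  `gorm:"column:timestamp"`
}

func entriesBetween(db *gorm.DB, from, to int64) ([]MizuEntry, error) {
	var entries []MizuEntry
	err := db.Table("mizu_entries").
		Where("timestamp BETWEEN ? AND ?", from, to). // placeholders instead of string interpolation
		Order("timestamp desc").
		Find(&entries).Error
	return entries, err
}

func main() {
	db, err := gorm.Open(sqlite.Open("./entries.db"), &gorm.Config{})
	if err != nil {
		panic(err)
	}
	_, _ = entriesBetween(db, 0, 1<<62)
}
```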

View File

@@ -1,16 +1,17 @@
package database
import (
"log"
"os"
"strconv"
"time"
"github.com/fsnotify/fsnotify"
"github.com/romana/rlog"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/debounce"
"github.com/up9inc/mizu/shared/units"
"log"
"mizuserver/pkg/models"
"os"
"strconv"
"time"
tapApi "github.com/up9inc/mizu/tap/api"
)
const percentageOfMaxSizeBytesToPrune = 15
@@ -99,7 +100,7 @@ func pruneOldEntries(currentFileSize int64) {
if bytesToBeRemoved >= amountOfBytesToTrim {
break
}
var entry models.MizuEntry
var entry tapApi.MizuEntry
err = DB.ScanRows(rows, &entry)
if err != nil {
rlog.Errorf("Error scanning db row: %v", err)
@@ -111,7 +112,7 @@ func pruneOldEntries(currentFileSize int64) {
}
if len(entryIdsToRemove) > 0 {
GetEntriesTable().Where(entryIdsToRemove).Delete(models.MizuEntry{})
GetEntriesTable().Where(entryIdsToRemove).Delete(tapApi.MizuEntry{})
// VACUUM causes sqlite to shrink the db file after rows have been deleted; the db file will not shrink without it
DB.Exec("VACUUM")
rlog.Errorf("Removed %d rows and cleared %s", len(entryIdsToRemove), units.BytesToHumanReadable(bytesToBeRemoved))

View File

@@ -3,64 +3,22 @@ package models
import (
"encoding/json"
tapApi "github.com/up9inc/mizu/tap/api"
"mizuserver/pkg/rules"
"mizuserver/pkg/utils"
"time"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"
)
type DataUnmarshaler interface {
UnmarshalData(*MizuEntry) error
}
func GetEntry(r *MizuEntry, v DataUnmarshaler) error {
func GetEntry(r *tapApi.MizuEntry, v tapApi.DataUnmarshaler) error {
return v.UnmarshalData(r)
}
type MizuEntry struct {
ID uint `gorm:"primarykey"`
CreatedAt time.Time
UpdatedAt time.Time
Entry string `json:"entry,omitempty" gorm:"column:entry"`
EntryId string `json:"entryId" gorm:"column:entryId"`
Url string `json:"url" gorm:"column:url"`
Method string `json:"method" gorm:"column:method"`
Status int `json:"status" gorm:"column:status"`
RequestSenderIp string `json:"requestSenderIp" gorm:"column:requestSenderIp"`
Service string `json:"service" gorm:"column:service"`
Timestamp int64 `json:"timestamp" gorm:"column:timestamp"`
Path string `json:"path" gorm:"column:path"`
ResolvedSource string `json:"resolvedSource,omitempty" gorm:"column:resolvedSource"`
ResolvedDestination string `json:"resolvedDestination,omitempty" gorm:"column:resolvedDestination"`
IsOutgoing bool `json:"isOutgoing,omitempty" gorm:"column:isOutgoing"`
EstimatedSizeBytes int `json:"-" gorm:"column:estimatedSizeBytes"`
}
type BaseEntryDetails struct {
Id string `json:"id,omitempty"`
Url string `json:"url,omitempty"`
RequestSenderIp string `json:"requestSenderIp,omitempty"`
Service string `json:"service,omitempty"`
Path string `json:"path,omitempty"`
StatusCode int `json:"statusCode,omitempty"`
Method string `json:"method,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency,omitempty"`
Rules ApplicableRules `json:"rules,omitempty"`
}
type ApplicableRules struct {
Latency int64 `json:"latency,omitempty"`
Status bool `json:"status,omitempty"`
NumberOfRules int `json:"numberOfRules,omitempty"`
}
func NewApplicableRules(status bool, latency int64, number int) ApplicableRules {
ar := ApplicableRules{}
func NewApplicableRules(status bool, latency int64, number int) tapApi.ApplicableRules {
ar := tapApi.ApplicableRules{}
ar.Status = status
ar.Latency = latency
ar.NumberOfRules = number
@@ -75,26 +33,7 @@ type FullEntryDetailsExtra struct {
har.Entry
}
func (bed *BaseEntryDetails) UnmarshalData(entry *MizuEntry) error {
entryUrl := entry.Url
service := entry.Service
if entry.ResolvedDestination != "" {
entryUrl = utils.SetHostname(entryUrl, entry.ResolvedDestination)
service = utils.SetHostname(service, entry.ResolvedDestination)
}
bed.Id = entry.EntryId
bed.Url = entryUrl
bed.Service = service
bed.Path = entry.Path
bed.StatusCode = entry.Status
bed.Method = entry.Method
bed.Timestamp = entry.Timestamp
bed.RequestSenderIp = entry.RequestSenderIp
bed.IsOutgoing = entry.IsOutgoing
return nil
}
func (fed *FullEntryDetails) UnmarshalData(entry *MizuEntry) error {
func (fed *FullEntryDetails) UnmarshalData(entry *tapApi.MizuEntry) error {
if err := json.Unmarshal([]byte(entry.Entry), &fed.Entry); err != nil {
return err
}
@@ -105,7 +44,7 @@ func (fed *FullEntryDetails) UnmarshalData(entry *MizuEntry) error {
return nil
}
func (fedex *FullEntryDetailsExtra) UnmarshalData(entry *MizuEntry) error {
func (fedex *FullEntryDetailsExtra) UnmarshalData(entry *tapApi.MizuEntry) error {
if err := json.Unmarshal([]byte(entry.Entry), &fedex.Entry); err != nil {
return err
}
@@ -138,12 +77,12 @@ type HarFetchRequestQuery struct {
type WebSocketEntryMessage struct {
*shared.WebSocketMessageMetadata
Data *BaseEntryDetails `json:"data,omitempty"`
Data *tapApi.BaseEntryDetails `json:"data,omitempty"`
}
type WebSocketTappedEntryMessage struct {
*shared.WebSocketMessageMetadata
Data *tap.OutputChannelItem
Data *tapApi.OutputChannelItem
}
type WebsocketOutboundLinkMessage struct {
@@ -151,7 +90,7 @@ type WebsocketOutboundLinkMessage struct {
Data *tap.OutboundLink
}
func CreateBaseEntryWebSocketMessage(base *BaseEntryDetails) ([]byte, error) {
func CreateBaseEntryWebSocketMessage(base *tapApi.BaseEntryDetails) ([]byte, error) {
message := &WebSocketEntryMessage{
WebSocketMessageMetadata: &shared.WebSocketMessageMetadata{
MessageType: shared.WebSocketMessageTypeEntry,
@@ -161,7 +100,7 @@ func CreateBaseEntryWebSocketMessage(base *BaseEntryDetails) ([]byte, error) {
return json.Marshal(message)
}
func CreateWebsocketTappedEntryMessage(base *tap.OutputChannelItem) ([]byte, error) {
func CreateWebsocketTappedEntryMessage(base *tapApi.OutputChannelItem) ([]byte, error) {
message := &WebSocketTappedEntryMessage{
WebSocketMessageMetadata: &shared.WebSocketMessageMetadata{
MessageType: shared.WebSocketMessageTypeTappedEntry,
@@ -207,7 +146,7 @@ type FullEntryWithPolicy struct {
Service string `json:"service"`
}
func (fewp *FullEntryWithPolicy) UnmarshalData(entry *MizuEntry) error {
func (fewp *FullEntryWithPolicy) UnmarshalData(entry *tapApi.MizuEntry) error {
if err := json.Unmarshal([]byte(entry.Entry), &fewp.Entry); err != nil {
return err
}
@@ -218,7 +157,7 @@ func (fewp *FullEntryWithPolicy) UnmarshalData(entry *MizuEntry) error {
return nil
}
func RunValidationRulesState(harEntry har.Entry, service string) ApplicableRules {
func RunValidationRulesState(harEntry har.Entry, service string) tapApi.ApplicableRules {
numberOfRules, resultPolicyToSend := rules.MatchRequestPolicy(harEntry, service)
statusPolicyToSend, latency, numberOfRules := rules.PassedValidationRules(resultPolicyToSend, numberOfRules)
ar := NewApplicableRules(statusPolicyToSend, latency, numberOfRules)

View File

@@ -18,9 +18,7 @@ func TestEntryAddedCount(t *testing.T) {
tests := []int{1, 5, 10, 100, 500, 1000}
for _, entriesCount := range tests {
t.Run(fmt.Sprintf("EntriesCount%v", entriesCount), func(t *testing.T) {
t.Cleanup(providers.ResetGeneralStats)
t.Run(fmt.Sprintf("%d", entriesCount), func(t *testing.T) {
for i := 0; i < entriesCount; i++ {
providers.EntryAdded()
}
@@ -30,6 +28,8 @@ func TestEntryAddedCount(t *testing.T) {
if entriesStats.EntriesCount != entriesCount {
t.Errorf("unexpected result - expected: %v, actual: %v", entriesCount, entriesStats.EntriesCount)
}
t.Cleanup(providers.ResetGeneralStats)
})
}
}

View File

@@ -4,14 +4,18 @@ import (
"github.com/patrickmn/go-cache"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"
"sync"
"time"
)
const tlsLinkRetainmentTime = time.Minute * 15
var (
TapStatus shared.TapStatus
TappersCount int
TapStatus shared.TapStatus
RecentTLSLinks = cache.New(tlsLinkRetainmentTime, tlsLinkRetainmentTime)
tappersCountLock = sync.Mutex{}
)
func GetAllRecentTLSAddresses() []string {
@@ -26,3 +30,15 @@ func GetAllRecentTLSAddresses() []string {
return recentTLSLinks
}
func TapperAdded() {
tappersCountLock.Lock()
TappersCount++
tappersCountLock.Unlock()
}
func TapperRemoved() {
tappersCountLock.Lock()
TappersCount--
tappersCountLock.Unlock()
}
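
The tapper counter above guards a plain int with a `sync.Mutex`. Purely for comparison — this is a swapped-in alternative, not what the diff does — the same counter can be kept with `sync/atomic`, trading the lock for an `int64`:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var tappersCount int64

func TapperAdded()   { atomic.AddInt64(&tappersCount, 1) }
func TapperRemoved() { atomic.AddInt64(&tappersCount, -1) }

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); TapperAdded() }()
	}
	wg.Wait()
	fmt.Println(atomic.LoadInt64(&tappersCount)) // 100
}
```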

View File

@@ -4,10 +4,11 @@ import (
"context"
"errors"
"fmt"
"github.com/romana/rlog"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
"github.com/orcaman/concurrent-map"
cmap "github.com/orcaman/concurrent-map"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch"

View File

@@ -9,4 +9,6 @@ func StatusRoutes(ginApp *gin.Engine) {
routeGroup := ginApp.Group("/status")
routeGroup.POST("/tappedPods", controllers.PostTappedPods)
routeGroup.GET("/tappersCount", controllers.GetTappersCount)
}

View File

@@ -1,198 +0,0 @@
package sensitiveDataFiltering
import (
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"github.com/up9inc/mizu/tap"
"net/url"
"strings"
"github.com/beevik/etree"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared"
)
func FilterSensitiveInfoFromHarRequest(harOutputItem *tap.OutputChannelItem, options *shared.TrafficFilteringOptions) {
harOutputItem.HarEntry.Request.Headers = filterHarHeaders(harOutputItem.HarEntry.Request.Headers)
harOutputItem.HarEntry.Response.Headers = filterHarHeaders(harOutputItem.HarEntry.Response.Headers)
harOutputItem.HarEntry.Request.Cookies = make([]har.Cookie, 0, 0)
harOutputItem.HarEntry.Response.Cookies = make([]har.Cookie, 0, 0)
harOutputItem.HarEntry.Request.URL = filterUrl(harOutputItem.HarEntry.Request.URL)
for i, queryString := range harOutputItem.HarEntry.Request.QueryString {
if isFieldNameSensitive(queryString.Name) {
harOutputItem.HarEntry.Request.QueryString[i].Value = maskedFieldPlaceholderValue
}
}
if harOutputItem.HarEntry.Request.PostData != nil {
requestContentType := getContentTypeHeaderValue(harOutputItem.HarEntry.Request.Headers)
filteredRequestBody, err := filterHttpBody([]byte(harOutputItem.HarEntry.Request.PostData.Text), requestContentType, options)
if err == nil {
harOutputItem.HarEntry.Request.PostData.Text = string(filteredRequestBody)
}
}
if harOutputItem.HarEntry.Response.Content != nil {
responseContentType := getContentTypeHeaderValue(harOutputItem.HarEntry.Response.Headers)
filteredResponseBody, err := filterHttpBody(harOutputItem.HarEntry.Response.Content.Text, responseContentType, options)
if err == nil {
harOutputItem.HarEntry.Response.Content.Text = filteredResponseBody
}
}
}
func filterHarHeaders(headers []har.Header) []har.Header {
newHeaders := make([]har.Header, 0)
for i, header := range headers {
if strings.ToLower(header.Name) == "cookie" {
continue
} else if isFieldNameSensitive(header.Name) {
newHeaders = append(newHeaders, har.Header{Name: header.Name, Value: maskedFieldPlaceholderValue})
headers[i].Value = maskedFieldPlaceholderValue
} else {
newHeaders = append(newHeaders, header)
}
}
return newHeaders
}
func getContentTypeHeaderValue(headers []har.Header) string {
for _, header := range headers {
if strings.ToLower(header.Name) == "content-type" {
return header.Value
}
}
return ""
}
func isFieldNameSensitive(fieldName string) bool {
name := strings.ToLower(fieldName)
name = strings.ReplaceAll(name, "_", "")
name = strings.ReplaceAll(name, "-", "")
name = strings.ReplaceAll(name, " ", "")
for _, sensitiveField := range personallyIdentifiableDataFields {
if strings.Contains(name, sensitiveField) {
return true
}
}
return false
}
func filterHttpBody(bytes []byte, contentType string, options *shared.TrafficFilteringOptions) ([]byte, error) {
mimeType := strings.Split(contentType, ";")[0]
switch strings.ToLower(mimeType) {
case "application/json":
return filterJsonBody(bytes)
case "text/html":
fallthrough
case "application/xhtml+xml":
fallthrough
case "text/xml":
fallthrough
case "application/xml":
return filterXmlEtree(bytes)
case "text/plain":
if options != nil && options.PlainTextMaskingRegexes != nil {
return filterPlainText(bytes, options), nil
}
}
return bytes, nil
}
func filterPlainText(bytes []byte, options *shared.TrafficFilteringOptions) []byte {
for _, regex := range options.PlainTextMaskingRegexes {
bytes = regex.ReplaceAll(bytes, []byte(maskedFieldPlaceholderValue))
}
return bytes
}
func filterXmlEtree(bytes []byte) ([]byte, error) {
if !IsValidXML(bytes) {
return nil, errors.New("Invalid XML")
}
xmlDoc := etree.NewDocument()
err := xmlDoc.ReadFromBytes(bytes)
if err != nil {
return nil, err
} else {
filterXmlElement(xmlDoc.Root())
}
return xmlDoc.WriteToBytes()
}
func IsValidXML(data []byte) bool {
return xml.Unmarshal(data, new(interface{})) == nil
}
func filterXmlElement(element *etree.Element) {
for i, attribute := range element.Attr {
if isFieldNameSensitive(attribute.Key) {
element.Attr[i].Value = maskedFieldPlaceholderValue
}
}
if element.ChildElements() == nil || len(element.ChildElements()) == 0 {
if isFieldNameSensitive(element.Tag) {
element.SetText(maskedFieldPlaceholderValue)
}
} else {
for _, element := range element.ChildElements() {
filterXmlElement(element)
}
}
}
func filterJsonBody(bytes []byte) ([]byte, error) {
var bodyJsonMap map[string] interface{}
err := json.Unmarshal(bytes ,&bodyJsonMap)
if err != nil {
return nil, err
}
filterJsonMap(bodyJsonMap)
return json.Marshal(bodyJsonMap)
}
func filterJsonMap(jsonMap map[string] interface{}) {
for key, value := range jsonMap {
if value == nil {
return
}
nestedMap, isNested := value.(map[string] interface{})
if isNested {
filterJsonMap(nestedMap)
} else {
if isFieldNameSensitive(key) {
jsonMap[key] = maskedFieldPlaceholderValue
}
}
}
}
// receives string representing url, returns string url without sensitive query param values (http://service/api?userId=bob&password=123&type=login -> http://service/api?userId=[REDACTED]&password=[REDACTED]&type=login)
func filterUrl(originalUrl string) string {
parsedUrl, err := url.Parse(originalUrl)
if err != nil {
return fmt.Sprintf("http://%s", maskedFieldPlaceholderValue)
} else {
if len(parsedUrl.RawQuery) > 0 {
newQueryArgs := make([]string, 0)
for urlQueryParamName, urlQueryParamValues := range parsedUrl.Query() {
newValues := urlQueryParamValues
if isFieldNameSensitive(urlQueryParamName) {
newValues = []string {maskedFieldPlaceholderValue}
}
for _, paramValue := range newValues {
newQueryArgs = append(newQueryArgs, fmt.Sprintf("%s=%s", urlQueryParamName, paramValue))
}
}
parsedUrl.RawQuery = strings.Join(newQueryArgs, "&")
}
return parsedUrl.String()
}
}
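
The removed package masked values whose field names, after normalization, matched a list of personally identifiable fields; the same redaction is now commented out in `filterItems` earlier in this diff. Below is a self-contained sketch of the name normalization and the query-parameter masking described in the comment above; the PII list here is a tiny illustrative subset:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

const maskedFieldPlaceholderValue = "[REDACTED]"

// Illustrative subset of the sensitive-field list.
var personallyIdentifiableDataFields = []string{"password", "token", "userid"}

// isFieldNameSensitive lower-cases the name and strips "_", "-" and spaces before matching.
func isFieldNameSensitive(fieldName string) bool {
	name := strings.ToLower(fieldName)
	name = strings.NewReplacer("_", "", "-", "", " ", "").Replace(name)
	for _, sensitiveField := range personallyIdentifiableDataFields {
		if strings.Contains(name, sensitiveField) {
			return true
		}
	}
	return false
}

// filterUrl masks the values of sensitive query parameters, leaving the rest intact.
func filterUrl(originalUrl string) string {
	parsedUrl, err := url.Parse(originalUrl)
	if err != nil {
		return fmt.Sprintf("http://%s", maskedFieldPlaceholderValue)
	}
	if len(parsedUrl.RawQuery) > 0 {
		newQueryArgs := make([]string, 0)
		for name, values := range parsedUrl.Query() {
			for _, value := range values {
				if isFieldNameSensitive(name) {
					value = maskedFieldPlaceholderValue
				}
				newQueryArgs = append(newQueryArgs, fmt.Sprintf("%s=%s", name, value))
			}
		}
		parsedUrl.RawQuery = strings.Join(newQueryArgs, "&")
	}
	return parsedUrl.String()
}

func main() {
	fmt.Println(filterUrl("http://service/api?userId=bob&password=123&type=login"))
	// e.g. http://service/api?password=[REDACTED]&type=login&userId=[REDACTED] (map order varies)
}
```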

15
build-agent-ci.sh Executable file
View File

@@ -0,0 +1,15 @@
#!/bin/bash
set -e
GCP_PROJECT=up9-docker-hub
REPOSITORY=gcr.io/$GCP_PROJECT
SERVER_NAME=mizu
GIT_BRANCH=ci
DOCKER_REPO=$REPOSITORY/$SERVER_NAME/$GIT_BRANCH
SEM_VER=${SEM_VER=0.0.0}
DOCKER_TAGGED_BUILD="$DOCKER_REPO:$SEM_VER"
echo "building $DOCKER_TAGGED_BUILD"
docker build -t ${DOCKER_TAGGED_BUILD} --build-arg SEM_VER=${SEM_VER} --build-arg BUILD_TIMESTAMP=${BUILD_TIMESTAMP} --build-arg GIT_BRANCH=${GIT_BRANCH} --build-arg COMMIT_HASH=${COMMIT_HASH} .

View File

@@ -1,12 +1,14 @@
#!/bin/bash
set -e
SERVER_NAME=mizu
GCP_PROJECT=up9-docker-hub
REPOSITORY=gcr.io/$GCP_PROJECT
SERVER_NAME=mizu
GIT_BRANCH=$(git branch | grep \* | cut -d ' ' -f2 | tr '[:upper:]' '[:lower:]')
SEM_VER=${SEM_VER=0.0.0}
DOCKER_REPO=$REPOSITORY/$SERVER_NAME/$GIT_BRANCH
SEM_VER=${SEM_VER=0.0.0}
DOCKER_TAGGED_BUILDS=("$DOCKER_REPO:latest" "$DOCKER_REPO:$SEM_VER")
if [ "$GIT_BRANCH" = 'develop' -o "$GIT_BRANCH" = 'master' -o "$GIT_BRANCH" = 'main' ]
@@ -21,6 +23,6 @@ docker build $DOCKER_TAGS_ARGS --build-arg SEM_VER=${SEM_VER} --build-arg BUILD_
for DOCKER_TAG in "${DOCKER_TAGGED_BUILDS[@]}"
do
echo pushing "$DOCKER_TAG"
docker push "$DOCKER_TAG"
echo pushing "$DOCKER_TAG"
docker push "$DOCKER_TAG"
done

12
build_extensions.sh Executable file
View File

@@ -0,0 +1,12 @@
#!/bin/bash
for f in tap/extensions/*; do
if [ -d "$f" ]; then
extension=$(basename $f) && \
cd tap/extensions/${extension} && \
go build -buildmode=plugin -o ../${extension}.so . && \
cd ../../.. && \
mkdir -p agent/build/extensions && \
cp tap/extensions/${extension}.so agent/build/extensions
fi
done

View File

@@ -24,7 +24,7 @@ build: ## Build mizu CLI binary (select platform via GOOS / GOARCH env variables
build-all: ## Build for all supported platforms.
@echo "Compiling for every OS and Platform"
@mkdir -p bin && echo "SHA256 checksums available for compiled binaries \n\nRun \`shasum -a 256 -c mizu_OS_ARCH.sha256\` to verify\n\n" > bin/README.md
@mkdir -p bin && sed s/_SEM_VER_/$(SEM_VER)/g README.md.TEMPLATE > bin/README.md
@$(MAKE) build GOOS=darwin GOARCH=amd64
@$(MAKE) build GOOS=linux GOARCH=amd64
@# $(MAKE) build GOOS=darwin GOARCH=arm64
@@ -41,4 +41,4 @@ clean: ## Clean all build artifacts.
rm -rf ./bin/*
test: ## Run cli tests.
@go test ./... -race -coverprofile=coverage.out -covermode=atomic
@go test ./... -coverpkg=./... -race -coverprofile=coverage.out -covermode=atomic

20
cli/README.md.TEMPLATE Normal file
View File

@@ -0,0 +1,20 @@
# Mizu release _SEM_VER_
Download Mizu for your platform
**Mac** (on Intel chip)
```
curl -Lo mizu https://github.com/up9inc/mizu/releases/download/_SEM_VER_/mizu_darwin_amd64 && chmod 755 mizu
```
**Linux**
```
curl -Lo mizu https://github.com/up9inc/mizu/releases/download/_SEM_VER_/mizu_linux_amd64 && chmod 755 mizu
```
### Checksums
SHA256 checksums available for compiled binaries.
Run `shasum -a 256 -c mizu_OS_ARCH.sha256` to verify.

178
cli/apiserver/provider.go Normal file
View File

@@ -0,0 +1,178 @@
package apiserver
import (
"archive/zip"
"bytes"
"encoding/json"
"fmt"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
"io/ioutil"
core "k8s.io/api/core/v1"
"net/http"
"net/url"
"time"
)
type apiServerProvider struct {
url string
isReady bool
retries int
}
var Provider = apiServerProvider{retries: config.GetIntEnvConfig(config.ApiServerRetries, 20)}
func (provider *apiServerProvider) InitAndTestConnection(url string) error {
healthUrl := fmt.Sprintf("%s/", url)
retriesLeft := provider.retries
for retriesLeft > 0 {
if response, err := http.Get(healthUrl); err != nil {
logger.Log.Debugf("[ERROR] failed connecting to api server %v", err)
} else if response.StatusCode != 200 {
responseBody := ""
data, readErr := ioutil.ReadAll(response.Body)
if readErr == nil {
responseBody = string(data)
}
logger.Log.Debugf("can't connect to api server yet, response status code: %v, body: %v", response.StatusCode, responseBody)
response.Body.Close()
} else {
logger.Log.Debugf("connection test to api server passed successfully")
break
}
retriesLeft -= 1
time.Sleep(time.Second)
}
if retriesLeft == 0 {
provider.isReady = false
return fmt.Errorf("couldn't reach the api server after %v retries", provider.retries)
}
provider.url = url
provider.isReady = true
return nil
}
func (provider *apiServerProvider) ReportTappedPods(pods []core.Pod) error {
if !provider.isReady {
return fmt.Errorf("trying to reach api server when not initialized yet")
}
tappedPodsUrl := fmt.Sprintf("%s/status/tappedPods", provider.url)
podInfos := make([]shared.PodInfo, 0)
for _, pod := range pods {
podInfos = append(podInfos, shared.PodInfo{Name: pod.Name, Namespace: pod.Namespace})
}
tapStatus := shared.TapStatus{Pods: podInfos}
if jsonValue, err := json.Marshal(tapStatus); err != nil {
return fmt.Errorf("failed Marshal the tapped pods %w", err)
} else {
if response, err := http.Post(tappedPodsUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {
return fmt.Errorf("failed sending to API server the tapped pods %w", err)
} else if response.StatusCode != 200 {
return fmt.Errorf("failed sending to API server the tapped pods, response status code %v", response.StatusCode)
} else {
logger.Log.Debugf("Reported to server API about %d taped pods successfully", len(podInfos))
return nil
}
}
}
func (provider *apiServerProvider) RequestAnalysis(analysisDestination string, sleepIntervalSec int) error {
if !provider.isReady {
return fmt.Errorf("trying to reach api server when not initialized yet")
}
urlPath := fmt.Sprintf("%s/api/uploadEntries?dest=%s&interval=%v", provider.url, url.QueryEscape(analysisDestination), sleepIntervalSec)
u, parseErr := url.ParseRequestURI(urlPath)
if parseErr != nil {
logger.Log.Fatal("Failed parsing the URL (consider changing the analysis dest URL), err: %v", parseErr)
}
logger.Log.Debugf("Analysis url %v", u.String())
if response, requestErr := http.Get(u.String()); requestErr != nil {
return fmt.Errorf("failed to notify agent for analysis, err: %w", requestErr)
} else if response.StatusCode != 200 {
return fmt.Errorf("failed to notify agent for analysis, status code: %v", response.StatusCode)
} else {
logger.Log.Infof(uiUtils.Purple, "Traffic is uploading to UP9 for further analysis")
return nil
}
}
func (provider *apiServerProvider) GetGeneralStats() (map[string]interface{}, error) {
if !provider.isReady {
return nil, fmt.Errorf("trying to reach api server when not initialized yet")
}
generalStatsUrl := fmt.Sprintf("%s/api/generalStats", provider.url)
response, requestErr := http.Get(generalStatsUrl)
if requestErr != nil {
return nil, fmt.Errorf("failed to get general stats for telemetry, err: %w", requestErr)
} else if response.StatusCode != 200 {
return nil, fmt.Errorf("failed to get general stats for telemetry, status code: %v", response.StatusCode)
}
defer func() { _ = response.Body.Close() }()
data, readErr := ioutil.ReadAll(response.Body)
if readErr != nil {
return nil, fmt.Errorf("failed to read general stats for telemetry, err: %v", readErr)
}
var generalStats map[string]interface{}
if parseErr := json.Unmarshal(data, &generalStats); parseErr != nil {
return nil, fmt.Errorf("failed to parse general stats for telemetry, err: %v", parseErr)
}
return generalStats, nil
}
func (provider *apiServerProvider) GetHars(fromTimestamp int, toTimestamp int) (*zip.Reader, error) {
if !provider.isReady {
return nil, fmt.Errorf("trying to reach api server when not initialized yet")
}
resp, err := http.Get(fmt.Sprintf("%s/api/har?from=%v&to=%v", provider.url, fromTimestamp, toTimestamp))
if err != nil {
return nil, fmt.Errorf("failed getting har from api server %w", err)
}
defer func() { _ = resp.Body.Close() }()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed reading hars %w", err)
}
zipReader, err := zip.NewReader(bytes.NewReader(body), int64(len(body)))
if err != nil {
return nil, fmt.Errorf("failed craeting zip reader %w", err)
}
return zipReader, nil
}
func (provider *apiServerProvider) GetVersion() (string, error) {
if !provider.isReady {
return "", fmt.Errorf("trying to reach api server when not initialized yet")
}
versionUrl, _ := url.Parse(fmt.Sprintf("%s/metadata/version", provider.url))
req := &http.Request{
Method: http.MethodGet,
URL: versionUrl,
}
statusResp, err := http.DefaultClient.Do(req)
if err != nil {
return "", err
}
defer statusResp.Body.Close()
versionResponse := &shared.VersionResponse{}
if err := json.NewDecoder(statusResp.Body).Decode(&versionResponse); err != nil {
return "", err
}
return versionResponse.SemVer, nil
}
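
A hedged usage sketch of the new `apiserver.Provider`: a command initializes the connection once (running the retrying health check above) and then issues typed calls. It assumes the file is compiled inside the cli module; the URL literal is illustrative — in the CLI it comes from `GetApiServerUrl()`:

```go
package main

import (
	"fmt"
	"log"

	"github.com/up9inc/mizu/cli/apiserver"
)

func main() {
	// In the real CLI the URL is the kubectl proxy address returned by GetApiServerUrl().
	if err := apiserver.Provider.InitAndTestConnection("http://localhost:8899/mizu"); err != nil {
		log.Fatalf("API server is not reachable: %v", err)
	}
	version, err := apiserver.Provider.GetVersion()
	if err != nil {
		log.Fatalf("failed reading server version: %v", err)
	}
	fmt.Println("API server version:", version)
}
```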

70
cli/cmd/common.go Normal file
View File

@@ -0,0 +1,70 @@
package cmd
import (
"context"
"fmt"
"log"
"os"
"os/exec"
"os/signal"
"runtime"
"syscall"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/errormessage"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/uiUtils"
)
func GetApiServerUrl() string {
return fmt.Sprintf("http://%s", kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Tap.GuiPort))
}
func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
err := kubernetes.StartProxy(kubernetesProvider, config.Config.Tap.GuiPort, config.Config.MizuResourcesNamespace, mizu.ApiServerPodName)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error occured while running k8s proxy %v\n"+
"Try setting different port by using --%s", errormessage.FormatError(err), configStructs.GuiPortTapName))
cancel()
}
logger.Log.Debugf("proxy ended")
}
func waitForFinish(ctx context.Context, cancel context.CancelFunc) {
logger.Log.Debugf("waiting for finish...")
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)
// block until ctx cancel is called or termination signal is received
select {
case <-ctx.Done():
logger.Log.Debugf("ctx done")
break
case <-sigChan:
logger.Log.Debugf("Got termination signal, canceling execution...")
cancel()
}
}
func openBrowser(url string) {
var err error
switch runtime.GOOS {
case "linux":
err = exec.Command("xdg-open", url).Start()
case "windows":
err = exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
case "darwin":
err = exec.Command("open", url).Start()
default:
err = fmt.Errorf("unsupported platform")
}
if err != nil {
log.Fatal(err)
}
}
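
waitForFinish above blocks until either the context is cancelled or a termination signal arrives. A self-contained sketch of the same select-on-signal pattern, with a goroutine standing in for the real work:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func waitForFinish(ctx context.Context, cancel context.CancelFunc) {
	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)

	// Block until ctx cancel is called or a termination signal is received.
	select {
	case <-ctx.Done():
		fmt.Println("context cancelled")
	case <-sigChan:
		fmt.Println("got termination signal, cancelling")
		cancel()
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { time.Sleep(100 * time.Millisecond); cancel() }() // simulate work finishing
	waitForFinish(ctx, cancel)
}
```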

View File

@@ -2,34 +2,34 @@ package cmd
import (
"fmt"
"github.com/creasty/defaults"
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/cli/uiUtils"
"io/ioutil"
)
var regenerateFile bool
var configCmd = &cobra.Command{
Use: "config",
Short: "Generate config with default values",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("config", config.Config)
go telemetry.ReportRun("config", config.Config.Config)
template, err := config.GetConfigWithDefaults()
if err != nil {
logger.Log.Errorf("Failed generating config with defaults %v", err)
return nil
}
if regenerateFile {
if config.Config.Config.Regenerate {
data := []byte(template)
if err := ioutil.WriteFile(config.GetConfigFilePath(), data, 0644); err != nil {
if err := ioutil.WriteFile(config.Config.ConfigFilePath, data, 0644); err != nil {
logger.Log.Errorf("Failed writing config %v", err)
return nil
}
logger.Log.Infof(fmt.Sprintf("Template File written to %s", fmt.Sprintf(uiUtils.Purple, config.GetConfigFilePath())))
logger.Log.Infof(fmt.Sprintf("Template File written to %s", fmt.Sprintf(uiUtils.Purple, config.Config.ConfigFilePath)))
} else {
logger.Log.Debugf("Writing template config.\n%v", template)
fmt.Printf("%v", template)
@@ -40,5 +40,9 @@ var configCmd = &cobra.Command{
func init() {
rootCmd.AddCommand(configCmd)
configCmd.Flags().BoolVarP(&regenerateFile, "regenerate", "r", false, fmt.Sprintf("Regenerate the config file with default values %s", config.GetConfigFilePath()))
defaultConfig := config.ConfigStruct{}
defaults.Set(&defaultConfig)
configCmd.Flags().BoolP(configStructs.RegenerateConfigName, "r", defaultConfig.Config.Regenerate, fmt.Sprintf("Regenerate the config file with default values to path %s or to chosen path using --%s", defaultConfig.ConfigFilePath, config.ConfigFilePathCommandName))
}

View File

@@ -3,10 +3,13 @@ package cmd
import (
"github.com/creasty/defaults"
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu/version"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/cli/uiUtils"
)
var fetchCmd = &cobra.Command{
@@ -15,7 +18,12 @@ var fetchCmd = &cobra.Command{
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("fetch", config.Config.Fetch)
if isCompatible, err := version.CheckVersionCompatibility(config.Config.Fetch.GuiPort); err != nil {
if err := apiserver.Provider.InitAndTestConnection(GetApiServerUrl()); err != nil {
logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, make sure one running")
return nil
}
if isCompatible, err := version.CheckVersionCompatibility(); err != nil {
return err
} else if !isCompatible {
return nil

View File

@@ -1,96 +1,25 @@
package cmd
import (
"archive/zip"
"bytes"
"fmt"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"io"
"io/ioutil"
"log"
"net/http"
"os"
"path/filepath"
"strings"
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/uiUtils"
)
func RunMizuFetch() {
mizuProxiedUrl := kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Fetch.GuiPort)
resp, err := http.Get(fmt.Sprintf("http://%s/api/har?from=%v&to=%v", mizuProxiedUrl, config.Config.Fetch.FromTimestamp, config.Config.Fetch.ToTimestamp))
if err != nil {
log.Fatal(err)
if err := apiserver.Provider.InitAndTestConnection(GetApiServerUrl()); err != nil {
logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, check logs")
}
defer func() { _ = resp.Body.Close() }()
body, err := ioutil.ReadAll(resp.Body)
zipReader, err := apiserver.Provider.GetHars(config.Config.Fetch.FromTimestamp, config.Config.Fetch.ToTimestamp)
if err != nil {
log.Fatal(err)
logger.Log.Errorf("Failed fetch data from API server %v", err)
return
}
zipReader, err := zip.NewReader(bytes.NewReader(body), int64(len(body)))
if err != nil {
log.Fatal(err)
if err := fsUtils.Unzip(zipReader, config.Config.Fetch.Directory); err != nil {
logger.Log.Debugf("[ERROR] failed unzip %v", err)
}
_ = Unzip(zipReader, config.Config.Fetch.Directory)
}
func Unzip(reader *zip.Reader, dest string) error {
dest, _ = filepath.Abs(dest)
_ = os.MkdirAll(dest, os.ModePerm)
// Closure to address file descriptors issue with all the deferred .Close() methods
extractAndWriteFile := func(f *zip.File) error {
rc, err := f.Open()
if err != nil {
return err
}
defer func() {
if err := rc.Close(); err != nil {
panic(err)
}
}()
path := filepath.Join(dest, f.Name)
// Check for ZipSlip (Directory traversal)
if !strings.HasPrefix(path, filepath.Clean(dest)+string(os.PathSeparator)) {
return fmt.Errorf("illegal file path: %s", path)
}
if f.FileInfo().IsDir() {
_ = os.MkdirAll(path, f.Mode())
} else {
_ = os.MkdirAll(filepath.Dir(path), f.Mode())
logger.Log.Infof("writing HAR file [ %v ]", path)
f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
if err != nil {
return err
}
defer func() {
if err := f.Close(); err != nil {
panic(err)
}
logger.Log.Info(" done")
}()
_, err = io.Copy(f, rc)
if err != nil {
return err
}
}
return nil
}
for _, f := range reader.File {
err := extractAndWriteFile(f)
if err != nil {
return err
}
}
return nil
}
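The Unzip helper removed above (the new code calls fsUtils.Unzip instead) guards against ZipSlip by requiring every extracted path to stay under the destination directory. A compact standalone sketch of just that check — an illustration, not the moved fsUtils code:

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "strings"
)

// safeJoin joins an archive entry name onto dest and rejects paths that would
// escape dest via ".." segments (the "ZipSlip" directory traversal issue).
func safeJoin(dest, name string) (string, error) {
    dest, err := filepath.Abs(dest)
    if err != nil {
        return "", err
    }
    path := filepath.Join(dest, name)
    if !strings.HasPrefix(path, filepath.Clean(dest)+string(os.PathSeparator)) {
        return "", fmt.Errorf("illegal file path: %s", path)
    }
    return path, nil
}

func main() {
    if _, err := safeJoin("/tmp/out", "../../etc/passwd"); err != nil {
        fmt.Println(err) // rejected: would escape /tmp/out
    }
    if p, err := safeJoin("/tmp/out", "har/entry1.har"); err == nil {
        fmt.Println("ok:", p)
    }
}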

View File

@@ -2,42 +2,38 @@ package cmd
import (
"context"
"github.com/creasty/defaults"
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/errormessage"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/telemetry"
"os"
"path"
)
var filePath string
var logsCmd = &cobra.Command{
Use: "logs",
Short: "Create a zip file with logs for Github issue or troubleshoot",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("logs", config.Config)
go telemetry.ReportRun("logs", config.Config.Logs)
kubernetesProvider, err := kubernetes.NewProvider(config.Config.View.KubeConfigPath)
kubernetesProvider, err := kubernetes.NewProvider(config.Config.KubeConfigPath())
if err != nil {
logger.Log.Error(err)
return nil
}
ctx, _ := context.WithCancel(context.Background())
if filePath == "" {
pwd, err := os.Getwd()
if err != nil {
logger.Log.Errorf("Failed to get PWD, %v (try using `mizu logs -f <full path dest zip file>)`", err)
return nil
}
filePath = path.Join(pwd, "mizu_logs.zip")
if validationErr := config.Config.Logs.Validate(); validationErr != nil {
return errormessage.FormatError(validationErr)
}
logger.Log.Debugf("Using file path %s", filePath)
if err := fsUtils.DumpLogs(kubernetesProvider, ctx, filePath); err != nil {
logger.Log.Errorf("Failed dump logs %v", err)
logger.Log.Debugf("Using file path %s", config.Config.Logs.FilePath())
if dumpLogsErr := fsUtils.DumpLogs(kubernetesProvider, ctx, config.Config.Logs.FilePath()); dumpLogsErr != nil {
logger.Log.Errorf("Failed dump logs %v", dumpLogsErr)
}
return nil
@@ -46,5 +42,9 @@ var logsCmd = &cobra.Command{
func init() {
rootCmd.AddCommand(logsCmd)
logsCmd.Flags().StringVarP(&filePath, "file", "f", "", "Path for zip file (default current <pwd>\\mizu_logs.zip)")
defaultLogsConfig := configStructs.LogsConfig{}
defaults.Set(&defaultLogsConfig)
logsCmd.Flags().StringP(configStructs.FileLogsName, "f", defaultLogsConfig.FileStr, "Path for zip file (default current <pwd>\\mizu_logs.zip)")
}
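The new init() here (and the equivalents in the config and view commands) takes each flag's default from a defaults-populated struct rather than a literal, so flag help text and YAML config defaults stay in sync. A minimal sketch of that pattern with a hypothetical file flag, assuming creasty/defaults and spf13/cobra:

package main

import (
    "fmt"

    "github.com/creasty/defaults"
    "github.com/spf13/cobra"
)

// exampleLogsConfig mirrors the idea of LogsConfig: the defaults-populated
// struct is the single source of truth for the flag default.
type exampleLogsConfig struct {
    File string `yaml:"file" default:""`
}

func main() {
    defaultLogs := exampleLogsConfig{}
    if err := defaults.Set(&defaultLogs); err != nil {
        panic(err)
    }

    cmd := &cobra.Command{
        Use: "logs",
        RunE: func(cmd *cobra.Command, args []string) error {
            file, _ := cmd.Flags().GetString("file")
            fmt.Println("zip file path:", file)
            return nil
        },
    }
    // The default comes from the struct, not from a literal in the flag definition.
    cmd.Flags().StringP("file", "f", defaultLogs.File, "Path for zip file")
    _ = cmd.Execute()
}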

View File

@@ -2,11 +2,15 @@ package cmd
import (
"fmt"
"github.com/creasty/defaults"
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/mizu/version"
"github.com/up9inc/mizu/cli/uiUtils"
"time"
)
var rootCmd = &cobra.Command{
@@ -15,24 +19,42 @@ var rootCmd = &cobra.Command{
Long: `A web traffic viewer for kubernetes
Further info is available at https://github.com/up9inc/mizu`,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if err := fsUtils.EnsureDir(mizu.GetMizuFolderPath()); err != nil {
logger.Log.Errorf("Failed to use mizu folder, %v", err)
}
logger.InitLogger()
if err := config.InitConfig(cmd); err != nil {
logger.Log.Fatal(err)
}
return nil
},
}
func init() {
defaultConfig := config.ConfigStruct{}
defaults.Set(&defaultConfig)
rootCmd.PersistentFlags().StringSlice(config.SetCommandName, []string{}, fmt.Sprintf("Override values using --%s", config.SetCommandName))
rootCmd.PersistentFlags().String(config.ConfigFilePathCommandName, defaultConfig.ConfigFilePath, fmt.Sprintf("Override config file path using --%s", config.ConfigFilePathCommandName))
}
func printNewVersionIfNeeded(versionChan chan string) {
select {
case versionMsg := <-versionChan:
if versionMsg != "" {
logger.Log.Infof(uiUtils.Yellow, versionMsg)
}
case <-time.After(2 * time.Second):
}
}
// Execute adds all child commands to the root command and sets flags appropriately.
// This is called by main.main(). It only needs to happen once to the tapCmd.
func Execute() {
if err := fsUtils.EnsureDir(mizu.GetMizuFolderPath()); err != nil {
logger.Log.Errorf("Failed to use mizu folder, %v", err)
}
logger.InitLogger()
versionChan := make(chan string)
defer printNewVersionIfNeeded(versionChan)
go version.CheckNewerVersion(versionChan)
cobra.CheckErr(rootCmd.Execute())
}
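Execute now runs the newer-version check in a goroutine and only reports it on exit, waiting at most two seconds so a slow lookup never blocks the command. A standalone sketch of that channel-plus-time.After pattern; checkNewerVersion below is a stub standing in for version.CheckNewerVersion:

package main

import (
    "fmt"
    "time"
)

// checkNewerVersion stands in for version.CheckNewerVersion: it reports a
// message (possibly empty) on the channel whenever it finishes.
func checkNewerVersion(out chan<- string) {
    time.Sleep(500 * time.Millisecond) // pretend to call the GitHub releases API
    out <- "a newer version is available"
}

func printNewVersionIfNeeded(versionChan chan string) {
    select {
    case msg := <-versionChan:
        if msg != "" {
            fmt.Println(msg)
        }
    case <-time.After(2 * time.Second): // give up quietly if the check is slow
    }
}

func main() {
    versionChan := make(chan string)
    defer printNewVersionIfNeeded(versionChan) // runs after the command finishes
    go checkNewerVersion(versionChan)

    fmt.Println("running the actual command...")
}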

View File

@@ -33,10 +33,6 @@ Supported protocols are HTTP and gRPC.`,
return errors.New("unexpected number of arguments")
}
if err := config.Config.Validate(); err != nil {
return errormessage.FormatError(err)
}
if err := config.Config.Tap.Validate(); err != nil {
return errormessage.FormatError(err)
}

View File

@@ -1,30 +1,23 @@
package cmd
import (
"bytes"
"context"
"encoding/json"
"fmt"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/mizu/goUtils"
"github.com/up9inc/mizu/cli/mizu/version"
"github.com/up9inc/mizu/cli/telemetry"
"net/http"
"net/url"
"os"
"os/signal"
"path"
"regexp"
"strings"
"syscall"
"time"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/errormessage"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/mizu/goUtils"
"github.com/up9inc/mizu/cli/telemetry"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/debounce"
@@ -61,8 +54,7 @@ func RunMizuTap() {
return
}
}
kubernetesProvider, err := kubernetes.NewProvider(config.Config.KubeConfigPath)
kubernetesProvider, err := kubernetes.NewProvider(config.Config.KubeConfigPath())
if err != nil {
logger.Log.Error(err)
return
@@ -73,13 +65,21 @@ func RunMizuTap() {
targetNamespaces := getNamespaces(kubernetesProvider)
if config.Config.IsNsRestrictedMode() {
if len(targetNamespaces) != 1 || !mizu.Contains(targetNamespaces, config.Config.MizuResourcesNamespace) {
logger.Log.Errorf("Not supported mode. Mizu can't resolve IPs in other namespaces when running in namespace restricted mode.\n"+
"You can use the same namespace for --%s and --%s", configStructs.NamespacesTapName, config.MizuResourcesNamespaceConfigName)
return
}
}
var namespacesStr string
if !mizu.Contains(targetNamespaces, mizu.K8sAllNamespaces) {
namespacesStr = fmt.Sprintf("namespaces \"%s\"", strings.Join(targetNamespaces, "\", \""))
} else {
namespacesStr = "all namespaces"
}
version.CheckNewerVersion()
logger.Log.Infof("Tapping pods in %s", namespacesStr)
if err, _ := updateCurrentlyTappedPods(kubernetesProvider, ctx, targetNamespaces); err != nil {
@@ -101,13 +101,13 @@ func RunMizuTap() {
nodeToTappedPodIPMap := getNodeHostToTappedPodIpsMap(state.currentlyTappedPods)
defer cleanUpMizu(kubernetesProvider)
defer finishMizuExecution(kubernetesProvider)
if err := createMizuResources(ctx, kubernetesProvider, nodeToTappedPodIPMap, mizuApiFilteringOptions, mizuValidationRules); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error creating resources: %v", errormessage.FormatError(err)))
return
}
go goUtils.HandleExcWrapper(createProxyToApiServerPod, ctx, kubernetesProvider, cancel)
go goUtils.HandleExcWrapper(watchApiServerPod, ctx, kubernetesProvider, cancel)
go goUtils.HandleExcWrapper(watchPodsForTapping, ctx, kubernetesProvider, targetNamespaces, cancel)
//block until exit signal or error
@@ -181,6 +181,8 @@ func createMizuApiServer(ctx context.Context, kubernetesProvider *kubernetes.Pro
IsNamespaceRestricted: config.Config.IsNsRestrictedMode(),
MizuApiFilteringOptions: mizuApiFilteringOptions,
MaxEntriesDBSizeBytes: config.Config.Tap.MaxEntriesDBSizeBytes(),
Resources: config.Config.Tap.ApiServerResources,
ImagePullPolicy: config.Config.ImagePullPolicy(),
}
_, err = kubernetesProvider.CreateMizuApiServerPod(ctx, opts)
if err != nil {
@@ -236,7 +238,8 @@ func updateMizuTappers(ctx context.Context, kubernetesProvider *kubernetes.Provi
fmt.Sprintf("%s.%s.svc.cluster.local", state.apiServerService.Name, state.apiServerService.Namespace),
nodeToTappedPodIPMap,
serviceAccountName,
config.Config.Tap.TapOutgoing(),
config.Config.Tap.TapperResources,
config.Config.ImagePullPolicy(),
); err != nil {
return err
}
@@ -250,23 +253,26 @@ func updateMizuTappers(ctx context.Context, kubernetesProvider *kubernetes.Provi
return nil
}
func cleanUpMizu(kubernetesProvider *kubernetes.Provider) {
telemetry.ReportAPICalls(config.Config.Tap.GuiPort)
cleanUpMizuResources(kubernetesProvider)
}
func cleanUpMizuResources(kubernetesProvider *kubernetes.Provider) {
func finishMizuExecution(kubernetesProvider *kubernetes.Provider) {
telemetry.ReportAPICalls()
removalCtx, cancel := context.WithTimeout(context.Background(), cleanupTimeout)
defer cancel()
dumpLogsIfNeeded(kubernetesProvider, removalCtx)
cleanUpMizuResources(kubernetesProvider, removalCtx, cancel)
}
if config.Config.DumpLogs {
mizuDir := mizu.GetMizuFolderPath()
filePath = path.Join(mizuDir, fmt.Sprintf("mizu_logs_%s.zip", time.Now().Format("2006_01_02__15_04_05")))
if err := fsUtils.DumpLogs(kubernetesProvider, removalCtx, filePath); err != nil {
logger.Log.Errorf("Failed dump logs %v", err)
}
func dumpLogsIfNeeded(kubernetesProvider *kubernetes.Provider, removalCtx context.Context) {
if !config.Config.DumpLogs {
return
}
mizuDir := mizu.GetMizuFolderPath()
filePath := path.Join(mizuDir, fmt.Sprintf("mizu_logs_%s.zip", time.Now().Format("2006_01_02__15_04_05")))
if err := fsUtils.DumpLogs(kubernetesProvider, removalCtx, filePath); err != nil {
logger.Log.Errorf("Failed dump logs %v", err)
}
}
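finishMizuExecution builds a fresh context bounded by cleanupTimeout, so log dumping and resource removal still run, time-limited, even after the main context is cancelled. A rough standalone sketch of that shape; the timeout value and the stubbed steps below are placeholders, not the CLI's:

package main

import (
    "context"
    "fmt"
    "time"
)

const cleanupTimeout = 1 * time.Second // placeholder; the CLI defines its own value

func dumpLogs(ctx context.Context) { fmt.Println("dumping logs...") }

func removeResources(ctx context.Context) {
    select {
    case <-time.After(2 * time.Second): // pretend removal is slow
        fmt.Println("resources removed")
    case <-ctx.Done():
        fmt.Println("gave up on removal:", ctx.Err())
    }
}

// finish mirrors the idea of finishMizuExecution: a fresh, bounded context is
// used for teardown so it is independent of the (already cancelled) run context.
func finish() {
    removalCtx, cancel := context.WithTimeout(context.Background(), cleanupTimeout)
    defer cancel()
    dumpLogs(removalCtx)
    removeResources(removalCtx)
}

func main() { finish() }
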
func cleanUpMizuResources(kubernetesProvider *kubernetes.Provider, removalCtx context.Context, cancel context.CancelFunc) {
logger.Log.Infof("\nRemoving mizu resources\n")
if !config.Config.IsNsRestrictedMode() {
@@ -331,7 +337,7 @@ func waitUntilNamespaceDeleted(ctx context.Context, cancel context.CancelFunc, k
if err := kubernetesProvider.WaitUtilNamespaceDeleted(ctx, config.Config.MizuResourcesNamespace); err != nil {
switch {
case ctx.Err() == context.Canceled:
// Do nothing. User interrupted the wait.
logger.Log.Debugf("Do nothing. User interrupted the wait")
case err == wait.ErrWaitTimeout:
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Timeout while removing Namespace %s", config.Config.MizuResourcesNamespace))
default:
@@ -340,29 +346,6 @@ func waitUntilNamespaceDeleted(ctx context.Context, cancel context.CancelFunc, k
}
}
func reportTappedPods() {
mizuProxiedUrl := kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Tap.GuiPort)
tappedPodsUrl := fmt.Sprintf("http://%s/status/tappedPods", mizuProxiedUrl)
podInfos := make([]shared.PodInfo, 0)
for _, pod := range state.currentlyTappedPods {
podInfos = append(podInfos, shared.PodInfo{Name: pod.Name, Namespace: pod.Namespace})
}
tapStatus := shared.TapStatus{Pods: podInfos}
if jsonValue, err := json.Marshal(tapStatus); err != nil {
logger.Log.Debugf("[ERROR] failed Marshal the tapped pods %v", err)
} else {
if response, err := http.Post(tappedPodsUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {
logger.Log.Debugf("[ERROR] failed sending to API server the tapped pods %v", err)
} else if response.StatusCode != 200 {
logger.Log.Debugf("[ERROR] failed sending to API server the tapped pods, response status code %v", response.StatusCode)
} else {
logger.Log.Debugf("Reported to server API about %d taped pods successfully", len(podInfos))
}
}
}
func watchPodsForTapping(ctx context.Context, kubernetesProvider *kubernetes.Provider, targetNamespaces []string, cancel context.CancelFunc) {
added, modified, removed, errorChan := kubernetes.FilteredWatch(ctx, kubernetesProvider, targetNamespaces, config.Config.Tap.PodRegex())
@@ -378,7 +361,9 @@ func watchPodsForTapping(ctx context.Context, kubernetesProvider *kubernetes.Pro
return
}
reportTappedPods()
if err := apiserver.Provider.ReportTappedPods(state.currentlyTappedPods); err != nil {
logger.Log.Debugf("[Error] failed update tapped pods %v", err)
}
nodeToTappedPodIPMap := getNodeHostToTappedPodIpsMap(state.currentlyTappedPods)
if err != nil {
@@ -485,7 +470,7 @@ func getMissingPods(pods1 []core.Pod, pods2 []core.Pod) []core.Pod {
return missingPods
}
func createProxyToApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
func watchApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s$", mizu.ApiServerPodName))
added, modified, removed, errorChan := kubernetes.FilteredWatch(ctx, kubernetesProvider, []string{config.Config.MizuResourcesNamespace}, podExactRegex)
isPodReady := false
@@ -511,10 +496,19 @@ func createProxyToApiServerPod(ctx context.Context, kubernetesProvider *kubernet
if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
isPodReady = true
go startProxyReportErrorIfAny(kubernetesProvider, cancel)
logger.Log.Infof("Mizu is available at http://%s\n", kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Tap.GuiPort))
time.Sleep(time.Second * 5) // Waiting to be sure the proxy is ready
requestForAnalysis()
reportTappedPods()
url := GetApiServerUrl()
if err := apiserver.Provider.InitAndTestConnection(url); err != nil {
logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, check logs")
cancel()
break
}
logger.Log.Infof("Mizu is available at %s\n", url)
openBrowser(url)
requestForAnalysisIfNeeded()
if err := apiserver.Provider.ReportTappedPods(state.currentlyTappedPods); err != nil {
logger.Log.Debugf("[Error] failed update tapped pods %v", err)
}
}
case <-timeAfter:
if !isPodReady {
@@ -528,34 +522,12 @@ func createProxyToApiServerPod(ctx context.Context, kubernetesProvider *kubernet
}
}
func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
err := kubernetes.StartProxy(kubernetesProvider, config.Config.Tap.GuiPort, config.Config.MizuResourcesNamespace, mizu.ApiServerPodName)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error occured while running k8s proxy %v\n"+
"Try setting different port by using --%s", errormessage.FormatError(err), configStructs.GuiPortTapName))
cancel()
}
}
func requestForAnalysis() {
func requestForAnalysisIfNeeded() {
if !config.Config.Tap.Analysis {
return
}
mizuProxiedUrl := kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Tap.GuiPort)
urlPath := fmt.Sprintf("http://%s/api/uploadEntries?dest=%s&interval=%v", mizuProxiedUrl, url.QueryEscape(config.Config.Tap.AnalysisDestination), config.Config.Tap.SleepIntervalSec)
u, parseErr := url.ParseRequestURI(urlPath)
if parseErr != nil {
logger.Log.Fatal("Failed parsing the URL (consider changing the analysis dest URL), err: %v", parseErr)
}
logger.Log.Debugf("Sending get request to %v", u.String())
if response, requestErr := http.Get(u.String()); requestErr != nil {
logger.Log.Errorf("Failed to notify agent for analysis, err: %v", requestErr)
} else if response.StatusCode != 200 {
logger.Log.Errorf("Failed to notify agent for analysis, status code: %v", response.StatusCode)
} else {
logger.Log.Infof(uiUtils.Purple, "Traffic is uploading to UP9 for further analysis")
if err := apiserver.Provider.RequestAnalysis(config.Config.Tap.AnalysisDestination, config.Config.Tap.SleepIntervalSec); err != nil {
logger.Log.Debugf("[Error] failed requesting for analysis %v", err)
}
}
@@ -587,24 +559,11 @@ func getNodeHostToTappedPodIpsMap(tappedPods []core.Pod) map[string][]string {
return nodeToTappedPodIPMap
}
func waitForFinish(ctx context.Context, cancel context.CancelFunc) {
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)
// block until ctx cancel is called or termination signal is received
select {
case <-ctx.Done():
break
case <-sigChan:
cancel()
}
}
func getNamespaces(kubernetesProvider *kubernetes.Provider) []string {
if config.Config.Tap.AllNamespaces {
return []string{mizu.K8sAllNamespaces}
} else if len(config.Config.Tap.Namespaces) > 0 {
return config.Config.Tap.Namespaces
return mizu.Unique(config.Config.Tap.Namespaces)
} else {
return []string{kubernetesProvider.CurrentNamespace()}
}
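getNamespaces now routes --namespaces values through mizu.Unique, whose implementation is not shown in this excerpt. Presumably it de-duplicates the slice while preserving order; a plausible standalone sketch under that assumption (not the repository's code):

package main

import "fmt"

// unique returns the input strings with duplicates removed, preserving the
// first-occurrence order (what a helper like mizu.Unique would likely do).
func unique(values []string) []string {
    seen := make(map[string]bool, len(values))
    result := make([]string, 0, len(values))
    for _, v := range values {
        if !seen[v] {
            seen[v] = true
            result = append(result, v)
        }
    }
    return result
}

func main() {
    fmt.Println(unique([]string{"sock-shop", "default", "sock-shop"})) // [sock-shop default]
}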

View File

@@ -25,5 +25,4 @@ func init() {
defaults.Set(&defaultViewConfig)
viewCmd.Flags().Uint16P(configStructs.GuiPortViewName, "p", defaultViewConfig.GuiPort, "Provide a custom port for the web interface webserver")
viewCmd.Flags().StringP(configStructs.KubeConfigPathViewName, "k", defaultViewConfig.KubeConfigPath, "Path to kube-config file")
}

View File

@@ -3,16 +3,19 @@ package cmd
import (
"context"
"fmt"
"net/http"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/mizu/version"
"net/http"
"github.com/up9inc/mizu/cli/uiUtils"
)
func runMizuView() {
kubernetesProvider, err := kubernetes.NewProvider(config.Config.View.KubeConfigPath)
kubernetesProvider, err := kubernetes.NewProvider(config.Config.KubeConfigPath())
if err != nil {
logger.Log.Error(err)
return
@@ -33,18 +36,24 @@ func runMizuView() {
return
}
mizuProxiedUrl := kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.View.GuiPort)
_, err = http.Get(fmt.Sprintf("http://%s/", mizuProxiedUrl))
if err == nil {
url := GetApiServerUrl()
response, err := http.Get(fmt.Sprintf("%s/", url))
if err == nil && response.StatusCode == 200 {
logger.Log.Infof("Found a running service %s and open port %d", mizu.ApiServerPodName, config.Config.View.GuiPort)
return
}
logger.Log.Debugf("Found service %s, creating k8s proxy", mizu.ApiServerPodName)
logger.Log.Infof("Establishing connection to k8s cluster...")
go startProxyReportErrorIfAny(kubernetesProvider, cancel)
logger.Log.Infof("Mizu is available at http://%s\n", kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.View.GuiPort))
if isCompatible, err := version.CheckVersionCompatibility(config.Config.View.GuiPort); err != nil {
if err := apiserver.Provider.InitAndTestConnection(GetApiServerUrl()); err != nil {
logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, check logs")
return
}
logger.Log.Infof("Mizu is available at %s\n", url)
openBrowser(url)
if isCompatible, err := version.CheckVersionCompatibility(); err != nil {
logger.Log.Errorf("Failed to check versions compatibility %v", err)
cancel()
return

View File

@@ -3,12 +3,10 @@ package config
import (
"errors"
"fmt"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"io/ioutil"
"os"
"path"
"reflect"
"strconv"
"strings"
@@ -27,38 +25,25 @@ const (
ReadonlyTag = "readonly"
)
var allowedSetFlags = []string{
AgentImageConfigName,
MizuResourcesNamespaceConfigName,
TelemetryConfigName,
DumpLogsConfigName,
KubeConfigPathName,
configStructs.AnalysisDestinationTapName,
configStructs.SleepIntervalSecTapName,
configStructs.IgnoredUserAgentsTapName,
}
var Config = ConfigStruct{}
func (config *ConfigStruct) Validate() error {
if config.IsNsRestrictedMode() {
if config.Tap.AllNamespaces || len(config.Tap.Namespaces) != 1 || config.Tap.Namespaces[0] != config.MizuResourcesNamespace {
return fmt.Errorf("Not supported mode. Mizu can't resolve IPs in other namespaces when running in namespace restricted mode.\n"+
"You can use the same namespace for --%s and --%s", configStructs.NamespacesTapName, MizuResourcesNamespaceConfigName)
}
}
return nil
}
var (
Config = ConfigStruct{}
cmdName string
)
func InitConfig(cmd *cobra.Command) error {
cmdName = cmd.Name()
if err := defaults.Set(&Config); err != nil {
return err
}
if err := mergeConfigFile(); err != nil {
return fmt.Errorf("invalid config %w\n"+
"you can regenerate the file using `mizu config -r` or just remove it %v", err, GetConfigFilePath())
configFilePathFlag := cmd.Flags().Lookup(ConfigFilePathCommandName)
configFilePath := configFilePathFlag.Value.String()
if err := mergeConfigFile(configFilePath); err != nil {
if configFilePathFlag.Changed || !os.IsNotExist(err) {
return fmt.Errorf("invalid config, %w\n"+
"you can regenerate the file by removing it (%v) and using `mizu config -r`", err, configFilePath)
}
}
cmd.Flags().Visit(initFlag)
@@ -81,14 +66,10 @@ func GetConfigWithDefaults() (string, error) {
return uiUtils.PrettyYaml(defaultConf)
}
func GetConfigFilePath() string {
return path.Join(mizu.GetMizuFolderPath(), "config.yaml")
}
func mergeConfigFile() error {
reader, openErr := os.Open(GetConfigFilePath())
func mergeConfigFile(configFilePath string) error {
reader, openErr := os.Open(configFilePath)
if openErr != nil {
return nil
return openErr
}
buf, readErr := ioutil.ReadAll(reader)
@@ -105,121 +86,140 @@ func mergeConfigFile() error {
}
func initFlag(f *pflag.Flag) {
configElem := reflect.ValueOf(&Config).Elem()
configElemValue := reflect.ValueOf(&Config).Elem()
var flagPath []string
if mizu.Contains([]string{ConfigFilePathCommandName}, f.Name) {
flagPath = []string{f.Name}
} else {
flagPath = []string{cmdName, f.Name}
}
sliceValue, isSliceValue := f.Value.(pflag.SliceValue)
if !isSliceValue {
mergeFlagValue(configElem, f.Name, f.Value.String())
if err := mergeFlagValue(configElemValue, flagPath, strings.Join(flagPath, "."), f.Value.String()); err != nil {
logger.Log.Warningf(uiUtils.Warning, err)
}
return
}
if f.Name == SetCommandName {
mergeSetFlag(configElem, sliceValue.GetSlice())
if err := mergeSetFlag(configElemValue, sliceValue.GetSlice()); err != nil {
logger.Log.Warningf(uiUtils.Warning, err)
}
return
}
mergeFlagValues(configElem, f.Name, sliceValue.GetSlice())
if err := mergeFlagValues(configElemValue, flagPath, strings.Join(flagPath, "."), sliceValue.GetSlice()); err != nil {
logger.Log.Warningf(uiUtils.Warning, err)
}
}
func mergeSetFlag(configElem reflect.Value, setValues []string) {
func mergeSetFlag(configElemValue reflect.Value, setValues []string) error {
var setErrors []string
setMap := map[string][]string{}
for _, setValue := range setValues {
if !strings.Contains(setValue, Separator) {
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("Ignoring set argument %s (set argument format: <flag name>=<flag value>)", setValue))
setErrors = append(setErrors, fmt.Sprintf("Ignoring set argument %s (set argument format: <flag name>=<flag value>)", setValue))
continue
}
split := strings.SplitN(setValue, Separator, 2)
if len(split) != 2 {
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("Ignoring set argument %s (set argument format: <flag name>=<flag value>)", setValue))
continue
}
argumentKey, argumentValue := split[0], split[1]
setMap[argumentKey] = append(setMap[argumentKey], argumentValue)
}
for argumentKey, argumentValues := range setMap {
if !mizu.Contains(allowedSetFlags, argumentKey) {
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("Ignoring set argument name \"%s\", flag name must be one of the following: \"%s\"", argumentKey, strings.Join(allowedSetFlags, "\", \"")))
continue
}
flagPath := strings.Split(argumentKey, ".")
if len(argumentValues) > 1 {
mergeFlagValues(configElem, argumentKey, argumentValues)
if err := mergeFlagValues(configElemValue, flagPath, argumentKey, argumentValues); err != nil {
setErrors = append(setErrors, fmt.Sprintf("%v", err))
}
} else {
mergeFlagValue(configElem, argumentKey, argumentValues[0])
if err := mergeFlagValue(configElemValue, flagPath, argumentKey, argumentValues[0]); err != nil {
setErrors = append(setErrors, fmt.Sprintf("%v", err))
}
}
}
if len(setErrors) > 0 {
return fmt.Errorf(strings.Join(setErrors, "\n"))
}
return nil
}
func mergeFlagValue(currentElem reflect.Value, flagKey string, flagValue string) {
for i := 0; i < currentElem.NumField(); i++ {
currentField := currentElem.Type().Field(i)
currentFieldByName := currentElem.FieldByName(currentField.Name)
currentFieldKind := currentField.Type.Kind()
if currentFieldKind == reflect.Struct {
mergeFlagValue(currentFieldByName, flagKey, flagValue)
continue
}
if getFieldNameByTag(currentField) != flagKey {
continue
}
func mergeFlagValue(configElemValue reflect.Value, flagPath []string, fullFlagName string, flagValue string) error {
mergeFunction := func(flagName string, currentFieldStruct reflect.StructField, currentFieldElemValue reflect.Value, currentElemValue reflect.Value) error {
currentFieldKind := currentFieldStruct.Type.Kind()
if currentFieldKind == reflect.Slice {
mergeFlagValues(currentElem, flagKey, []string{flagValue})
return
return mergeFlagValues(currentElemValue, []string{flagName}, fullFlagName, []string{flagValue})
}
parsedValue, err := getParsedValue(currentFieldKind, flagValue)
if err != nil {
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("Invalid value %s for flag name %s, expected %s", flagValue, flagKey, currentFieldKind))
return
return fmt.Errorf("invalid value %s for flag name %s, expected %s", flagValue, flagName, currentFieldKind)
}
currentFieldByName.Set(parsedValue)
currentFieldElemValue.Set(parsedValue)
return nil
}
return mergeFlag(configElemValue, flagPath, fullFlagName, mergeFunction)
}
func mergeFlagValues(currentElem reflect.Value, flagKey string, flagValues []string) {
for i := 0; i < currentElem.NumField(); i++ {
currentField := currentElem.Type().Field(i)
currentFieldByName := currentElem.FieldByName(currentField.Name)
currentFieldKind := currentField.Type.Kind()
if currentFieldKind == reflect.Struct {
mergeFlagValues(currentFieldByName, flagKey, flagValues)
continue
}
if getFieldNameByTag(currentField) != flagKey {
continue
}
func mergeFlagValues(configElemValue reflect.Value, flagPath []string, fullFlagName string, flagValues []string) error {
mergeFunction := func(flagName string, currentFieldStruct reflect.StructField, currentFieldElemValue reflect.Value, currentElemValue reflect.Value) error {
currentFieldKind := currentFieldStruct.Type.Kind()
if currentFieldKind != reflect.Slice {
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("Invalid values %s for flag name %s, expected %s", strings.Join(flagValues, ","), flagKey, currentFieldKind))
return
return fmt.Errorf("invalid values %s for flag name %s, expected %s", strings.Join(flagValues, ","), flagName, currentFieldKind)
}
flagValueKind := currentField.Type.Elem().Kind()
flagValueKind := currentFieldStruct.Type.Elem().Kind()
parsedValues := reflect.MakeSlice(reflect.SliceOf(currentField.Type.Elem()), 0, 0)
parsedValues := reflect.MakeSlice(reflect.SliceOf(currentFieldStruct.Type.Elem()), 0, 0)
for _, flagValue := range flagValues {
parsedValue, err := getParsedValue(flagValueKind, flagValue)
if err != nil {
logger.Log.Warningf(uiUtils.Warning, fmt.Sprintf("Invalid value %s for flag name %s, expected %s", flagValue, flagKey, flagValueKind))
return
return fmt.Errorf("invalid value %s for flag name %s, expected %s", flagValue, flagName, flagValueKind)
}
parsedValues = reflect.Append(parsedValues, parsedValue)
}
currentFieldByName.Set(parsedValues)
currentFieldElemValue.Set(parsedValues)
return nil
}
return mergeFlag(configElemValue, flagPath, fullFlagName, mergeFunction)
}
func mergeFlag(currentElemValue reflect.Value, currentFlagPath []string, fullFlagName string, mergeFunction func(flagName string, currentFieldStruct reflect.StructField, currentFieldElemValue reflect.Value, currentElemValue reflect.Value) error) error {
if len(currentFlagPath) == 0 {
return fmt.Errorf("flag \"%s\" not found", fullFlagName)
}
for i := 0; i < currentElemValue.NumField(); i++ {
currentFieldStruct := currentElemValue.Type().Field(i)
currentFieldElemValue := currentElemValue.FieldByName(currentFieldStruct.Name)
if currentFieldStruct.Type.Kind() == reflect.Struct && getFieldNameByTag(currentFieldStruct) == currentFlagPath[0] {
return mergeFlag(currentFieldElemValue, currentFlagPath[1:], fullFlagName, mergeFunction)
}
if len(currentFlagPath) > 1 || getFieldNameByTag(currentFieldStruct) != currentFlagPath[0] {
continue
}
return mergeFunction(currentFlagPath[0], currentFieldStruct, currentFieldElemValue, currentElemValue)
}
return fmt.Errorf("flag \"%s\" not found", fullFlagName)
}
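mergeFlag resolves a dotted flag path (for example tap.gui-port from --set) by walking nested struct fields and comparing each segment against getFieldNameByTag, whose body is cut off just below. It presumably returns the field's yaml tag name without options such as omitempty; a standalone sketch under that assumption:

package main

import (
    "fmt"
    "reflect"
    "strings"
)

// getFieldNameByTag is assumed to expose a struct field under its yaml tag
// name, ignoring tag options like ",omitempty".
func getFieldNameByTag(field reflect.StructField) string {
    return strings.Split(field.Tag.Get("yaml"), ",")[0]
}

type tapConfig struct {
    GuiPort uint16 `yaml:"gui-port" default:"8899"`
}

type rootConfig struct {
    Tap tapConfig `yaml:"tap"`
}

func main() {
    tapField, _ := reflect.TypeOf(rootConfig{}).FieldByName("Tap")
    portField, _ := reflect.TypeOf(tapConfig{}).FieldByName("GuiPort")
    // A --set value like "tap.gui-port=9000" would be resolved segment by
    // segment against these tag names.
    fmt.Println(getFieldNameByTag(tapField), getFieldNameByTag(portField)) // tap gui-port
}
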
func getFieldNameByTag(field reflect.StructField) string {

View File

@@ -4,14 +4,16 @@ import (
"fmt"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/mizu"
v1 "k8s.io/api/core/v1"
"k8s.io/client-go/util/homedir"
"os"
"path"
"path/filepath"
)
const (
AgentImageConfigName = "agent-image"
MizuResourcesNamespaceConfigName = "mizu-resources-namespace"
TelemetryConfigName = "telemetry"
DumpLogsConfigName = "dump-logs"
KubeConfigPathName = "kube-config-path"
ConfigFilePathCommandName = "config-path"
)
type ConfigStruct struct {
@@ -19,17 +21,40 @@ type ConfigStruct struct {
Fetch configStructs.FetchConfig `yaml:"fetch"`
Version configStructs.VersionConfig `yaml:"version"`
View configStructs.ViewConfig `yaml:"view"`
Logs configStructs.LogsConfig `yaml:"logs"`
Config configStructs.ConfigConfig `yaml:"config,omitempty"`
AgentImage string `yaml:"agent-image,omitempty" readonly:""`
ImagePullPolicyStr string `yaml:"image-pull-policy" default:"Always"`
MizuResourcesNamespace string `yaml:"mizu-resources-namespace" default:"mizu"`
Telemetry bool `yaml:"telemetry" default:"true"`
DumpLogs bool `yaml:"dump-logs" default:"false"`
KubeConfigPath string `yaml:"kube-config-path" default:""`
KubeConfigPathStr string `yaml:"kube-config-path"`
ConfigFilePath string `yaml:"config-path,omitempty" readonly:""`
}
func (config *ConfigStruct) SetDefaults() {
config.AgentImage = fmt.Sprintf("gcr.io/up9-docker-hub/mizu/%s:%s", mizu.Branch, mizu.SemVer)
config.ConfigFilePath = path.Join(mizu.GetMizuFolderPath(), "config.yaml")
}
func (config *ConfigStruct) ImagePullPolicy() v1.PullPolicy {
return v1.PullPolicy(config.ImagePullPolicyStr)
}
func (config *ConfigStruct) IsNsRestrictedMode() bool {
return config.MizuResourcesNamespace != "mizu" // Notice "mizu" string must match the default MizuResourcesNamespace
}
func (config *ConfigStruct) KubeConfigPath() string {
if config.KubeConfigPathStr != "" {
return config.KubeConfigPathStr
}
envKubeConfigPath := os.Getenv("KUBECONFIG")
if envKubeConfigPath != "" {
return envKubeConfigPath
}
home := homedir.HomeDir()
return filepath.Join(home, ".kube", "config")
}
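The new KubeConfigPath() accessor resolves the path in a fixed order: explicit config value, then the KUBECONFIG environment variable, then ~/.kube/config. A small runnable illustration of that precedence; it re-implements the same logic in a standalone helper and uses os.UserHomeDir instead of client-go's homedir package:

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// resolveKubeConfigPath mirrors ConfigStruct.KubeConfigPath: explicit value
// first, then $KUBECONFIG, then the conventional ~/.kube/config.
func resolveKubeConfigPath(explicit string) string {
    if explicit != "" {
        return explicit
    }
    if env := os.Getenv("KUBECONFIG"); env != "" {
        return env
    }
    home, _ := os.UserHomeDir()
    return filepath.Join(home, ".kube", "config")
}

func main() {
    os.Setenv("KUBECONFIG", "/tmp/alt-kubeconfig")
    fmt.Println(resolveKubeConfigPath("/etc/custom/kubeconfig")) // explicit value wins
    fmt.Println(resolveKubeConfigPath(""))                       // falls back to $KUBECONFIG
    os.Unsetenv("KUBECONFIG")
    fmt.Println(resolveKubeConfigPath("")) // falls back to ~/.kube/config
}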

View File

@@ -0,0 +1,9 @@
package configStructs
const (
RegenerateConfigName = "regenerate"
)
type ConfigConfig struct {
Regenerate bool `yaml:"regenerate,omitempty" default:"false" readonly:""`
}

View File

@@ -0,0 +1,35 @@
package configStructs
import (
"fmt"
"os"
"path"
)
const (
FileLogsName = "file"
)
type LogsConfig struct {
FileStr string `yaml:"file"`
}
func (config *LogsConfig) Validate() error {
if config.FileStr == "" {
_, err := os.Getwd()
if err != nil {
return fmt.Errorf("failed to get PWD, %v (try using `mizu logs -f <full path dest zip file>)`", err)
}
}
return nil
}
func (config *LogsConfig) FilePath() string {
if config.FileStr == "" {
pwd, _ := os.Getwd()
return path.Join(pwd, "mizu_logs.zip")
}
return config.FileStr
}

View File

@@ -10,15 +10,12 @@ import (
)
const (
AnalysisDestinationTapName = "dest"
SleepIntervalSecTapName = "upload-interval"
GuiPortTapName = "gui-port"
NamespacesTapName = "namespaces"
AnalysisTapName = "analysis"
AllNamespacesTapName = "all-namespaces"
PlainTextFilterRegexesTapName = "regex-masking"
DisableRedactionTapName = "no-redact"
IgnoredUserAgentsTapName = "ignored-user-agents"
HumanMaxEntriesDBSizeTapName = "max-entries-db-size"
DirectionTapName = "direction"
DryRunTapName = "dry-run"
@@ -26,20 +23,29 @@ const (
)
type TapConfig struct {
AnalysisDestination string `yaml:"dest" default:"up9.app"`
SleepIntervalSec int `yaml:"upload-interval" default:"10"`
PodRegexStr string `yaml:"regex" default:".*"`
GuiPort uint16 `yaml:"gui-port" default:"8899"`
Namespaces []string `yaml:"namespaces"`
Analysis bool `yaml:"analysis" default:"false"`
AllNamespaces bool `yaml:"all-namespaces" default:"false"`
PlainTextFilterRegexes []string `yaml:"regex-masking"`
HealthChecksUserAgentHeaders []string `yaml:"ignored-user-agents"`
DisableRedaction bool `yaml:"no-redact" default:"false"`
HumanMaxEntriesDBSize string `yaml:"max-entries-db-size" default:"200MB"`
Direction string `yaml:"direction" default:"in"`
DryRun bool `yaml:"dry-run" default:"false"`
EnforcePolicyFile string `yaml:"test-rules"`
AnalysisDestination string `yaml:"dest" default:"up9.app"`
SleepIntervalSec int `yaml:"upload-interval" default:"10"`
PodRegexStr string `yaml:"regex" default:".*"`
GuiPort uint16 `yaml:"gui-port" default:"8899"`
Namespaces []string `yaml:"namespaces"`
Analysis bool `yaml:"analysis" default:"false"`
AllNamespaces bool `yaml:"all-namespaces" default:"false"`
PlainTextFilterRegexes []string `yaml:"regex-masking"`
HealthChecksUserAgentHeaders []string `yaml:"ignored-user-agents"`
DisableRedaction bool `yaml:"no-redact" default:"false"`
HumanMaxEntriesDBSize string `yaml:"max-entries-db-size" default:"200MB"`
Direction string `yaml:"direction" default:"in"`
DryRun bool `yaml:"dry-run" default:"false"`
EnforcePolicyFile string `yaml:"test-rules"`
ApiServerResources Resources `yaml:"api-server-resources"`
TapperResources Resources `yaml:"tapper-resources"`
}
type Resources struct {
CpuLimit string `yaml:"cpu-limit" default:"750m"`
MemoryLimit string `yaml:"memory-limit" default:"1Gi"`
CpuRequests string `yaml:"cpu-requests" default:"50m"`
MemoryRequests string `yaml:"memory-requests" default:"50Mi"`
}
func (config *TapConfig) PodRegex() *regexp.Regexp {
@@ -47,15 +53,6 @@ func (config *TapConfig) PodRegex() *regexp.Regexp {
return podRegex
}
func (config *TapConfig) TapOutgoing() bool {
directionLowerCase := strings.ToLower(config.Direction)
if directionLowerCase == "any" {
return true
}
return false
}
func (config *TapConfig) MaxEntriesDBSizeBytes() int64 {
maxEntriesDBSizeBytes, _ := units.HumanReadableToBytes(config.HumanMaxEntriesDBSize)
return maxEntriesDBSizeBytes

View File

@@ -1,11 +1,9 @@
package configStructs
const (
GuiPortViewName = "gui-port"
KubeConfigPathViewName = "kube-config"
GuiPortViewName = "gui-port"
)
type ViewConfig struct {
GuiPort uint16 `yaml:"gui-port" default:"8899"`
KubeConfigPath string `yaml:"kube-config"`
GuiPort uint16 `yaml:"gui-port" default:"8899"`
}

View File

@@ -0,0 +1,385 @@
package config
import (
"fmt"
"reflect"
"testing"
)
type ConfigMock struct {
SectionMock SectionMock `yaml:"section"`
Test string `yaml:"test"`
StringField string `yaml:"string-field"`
IntField int `yaml:"int-field"`
BoolField bool `yaml:"bool-field"`
UintField uint `yaml:"uint-field"`
StringSliceField []string `yaml:"string-slice-field"`
IntSliceField []int `yaml:"int-slice-field"`
BoolSliceField []bool `yaml:"bool-slice-field"`
UintSliceField []uint `yaml:"uint-slice-field"`
}
type SectionMock struct {
Test string `yaml:"test"`
}
type FieldSetValues struct {
SetValues []string
FieldName string
FieldValue interface{}
}
func TestMergeSetFlagNoSeparator(t *testing.T) {
tests := []struct {
Name string
SetValues []string
}{
{Name: "empty value", SetValues: []string{""}},
{Name: "single char", SetValues: []string{"t"}},
{Name: "combine empty value and single char", SetValues: []string{"", "t"}},
{Name: "two values without separator", SetValues: []string{"test", "test:true"}},
{Name: "four values without separator", SetValues: []string{"test", "test:true", "testing!", "true"}},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
configMock := ConfigMock{}
configMockElemValue := reflect.ValueOf(&configMock).Elem()
err := mergeSetFlag(configMockElemValue, test.SetValues)
if err == nil {
t.Errorf("unexpected unhandled error - SetValues: %v", test.SetValues)
return
}
for i := 0; i < configMockElemValue.NumField(); i++ {
currentField := configMockElemValue.Type().Field(i)
currentFieldByName := configMockElemValue.FieldByName(currentField.Name)
if !currentFieldByName.IsZero() {
t.Errorf("unexpected value with not default value - SetValues: %v", test.SetValues)
}
}
})
}
}
func TestMergeSetFlagInvalidFlagName(t *testing.T) {
tests := []struct {
Name string
SetValues []string
}{
{Name: "invalid flag name", SetValues: []string{"invalid_flag=true"}},
{Name: "invalid flag name inside section struct", SetValues: []string{"section.invalid_flag=test"}},
{Name: "flag name is a struct", SetValues: []string{"section=test"}},
{Name: "empty flag name", SetValues: []string{"=true"}},
{Name: "four tests combined", SetValues: []string{"invalid_flag=true", "config.invalid_flag=test", "section=test", "=true"}},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
configMock := ConfigMock{}
configMockElemValue := reflect.ValueOf(&configMock).Elem()
err := mergeSetFlag(configMockElemValue, test.SetValues)
if err == nil {
t.Errorf("unexpected unhandled error - SetValues: %v", test.SetValues)
return
}
for i := 0; i < configMockElemValue.NumField(); i++ {
currentField := configMockElemValue.Type().Field(i)
currentFieldByName := configMockElemValue.FieldByName(currentField.Name)
if !currentFieldByName.IsZero() {
t.Errorf("unexpected case - SetValues: %v", test.SetValues)
}
}
})
}
}
func TestMergeSetFlagInvalidFlagValue(t *testing.T) {
tests := []struct {
Name string
SetValues []string
}{
{Name: "bool value to int field", SetValues: []string{"int-field=true"}},
{Name: "int value to bool field", SetValues: []string{"bool-field:5"}},
{Name: "int value to uint field", SetValues: []string{"uint-field=-1"}},
{Name: "bool value to int slice field", SetValues: []string{"int-slice-field=true"}},
{Name: "int value to bool slice field", SetValues: []string{"bool-slice-field=5"}},
{Name: "int value to uint slice field", SetValues: []string{"uint-slice-field=-1"}},
{Name: "int slice value to int field", SetValues: []string{"int-field=6", "int-field=66"}},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
configMock := ConfigMock{}
configMockElemValue := reflect.ValueOf(&configMock).Elem()
err := mergeSetFlag(configMockElemValue, test.SetValues)
if err == nil {
t.Errorf("unexpected unhandled error - SetValues: %v", test.SetValues)
return
}
for i := 0; i < configMockElemValue.NumField(); i++ {
currentField := configMockElemValue.Type().Field(i)
currentFieldByName := configMockElemValue.FieldByName(currentField.Name)
if !currentFieldByName.IsZero() {
t.Errorf("unexpected case - SetValues: %v", test.SetValues)
}
}
})
}
}
func TestMergeSetFlagNotSliceValues(t *testing.T) {
tests := []struct {
Name string
FieldsSetValues []FieldSetValues
}{
{Name: "string field", FieldsSetValues: []FieldSetValues{{SetValues: []string{"string-field=test"}, FieldName: "StringField", FieldValue: "test"}}},
{Name: "int field", FieldsSetValues: []FieldSetValues{{SetValues: []string{"int-field=6"}, FieldName: "IntField", FieldValue: 6}}},
{Name: "bool field", FieldsSetValues: []FieldSetValues{{SetValues: []string{"bool-field=true"}, FieldName: "BoolField", FieldValue: true}}},
{Name: "uint field", FieldsSetValues: []FieldSetValues{{SetValues: []string{"uint-field=6"}, FieldName: "UintField", FieldValue: uint(6)}}},
{Name: "four fields combined", FieldsSetValues: []FieldSetValues {
{SetValues: []string{"string-field=test"}, FieldName: "StringField", FieldValue: "test"},
{SetValues: []string{"int-field=6"}, FieldName: "IntField", FieldValue: 6},
{SetValues: []string{"bool-field=true"}, FieldName: "BoolField", FieldValue: true},
{SetValues: []string{"uint-field=6"}, FieldName: "UintField", FieldValue: uint(6)},
}},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
configMock := ConfigMock{}
configMockElemValue := reflect.ValueOf(&configMock).Elem()
var setValues []string
for _, fieldSetValues := range test.FieldsSetValues {
setValues = append(setValues, fieldSetValues.SetValues...)
}
err := mergeSetFlag(configMockElemValue, setValues)
if err != nil {
t.Errorf("unexpected error result - err: %v", err)
return
}
for _, fieldSetValues := range test.FieldsSetValues {
fieldValue := configMockElemValue.FieldByName(fieldSetValues.FieldName).Interface()
if fieldValue != fieldSetValues.FieldValue {
t.Errorf("unexpected result - expected: %v, actual: %v", fieldSetValues.FieldValue, fieldValue)
}
}
})
}
}
func TestMergeSetFlagSliceValues(t *testing.T) {
tests := []struct {
Name string
FieldsSetValues []FieldSetValues
}{
{Name: "string slice field single value", FieldsSetValues: []FieldSetValues{{SetValues: []string{"string-slice-field=test"}, FieldName: "StringSliceField", FieldValue: []string{"test"}}}},
{Name: "int slice field single value", FieldsSetValues: []FieldSetValues{{SetValues: []string{"int-slice-field=6"}, FieldName: "IntSliceField", FieldValue: []int{6}}}},
{Name: "bool slice field single value", FieldsSetValues: []FieldSetValues{{SetValues: []string{"bool-slice-field=true"}, FieldName: "BoolSliceField", FieldValue: []bool{true}}}},
{Name: "uint slice field single value", FieldsSetValues: []FieldSetValues{{SetValues: []string{"uint-slice-field=6"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6)}}}},
{Name: "four single value fields combined", FieldsSetValues: []FieldSetValues{
{SetValues: []string{"string-slice-field=test"}, FieldName: "StringSliceField", FieldValue: []string{"test"}},
{SetValues: []string{"int-slice-field=6"}, FieldName: "IntSliceField", FieldValue: []int{6}},
{SetValues: []string{"bool-slice-field=true"}, FieldName: "BoolSliceField", FieldValue: []bool{true}},
{SetValues: []string{"uint-slice-field=6"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6)}},
}},
{Name: "string slice field two values", FieldsSetValues: []FieldSetValues{{SetValues: []string{"string-slice-field=test", "string-slice-field=test2"}, FieldName: "StringSliceField", FieldValue: []string{"test", "test2"}}}},
{Name: "int slice field two values", FieldsSetValues: []FieldSetValues{{SetValues: []string{"int-slice-field=6", "int-slice-field=66"}, FieldName: "IntSliceField", FieldValue: []int{6, 66}}}},
{Name: "bool slice field two values", FieldsSetValues: []FieldSetValues{{SetValues: []string{"bool-slice-field=true", "bool-slice-field=false"}, FieldName: "BoolSliceField", FieldValue: []bool{true, false}}}},
{Name: "uint slice field two values", FieldsSetValues: []FieldSetValues{{SetValues: []string{"uint-slice-field=6", "uint-slice-field=66"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6), uint(66)}}}},
{Name: "four two values fields combined", FieldsSetValues: []FieldSetValues{
{SetValues: []string{"string-slice-field=test", "string-slice-field=test2"}, FieldName: "StringSliceField", FieldValue: []string{"test", "test2"}},
{SetValues: []string{"int-slice-field=6", "int-slice-field=66"}, FieldName: "IntSliceField", FieldValue: []int{6, 66}},
{SetValues: []string{"bool-slice-field=true", "bool-slice-field=false"}, FieldName: "BoolSliceField", FieldValue: []bool{true, false}},
{SetValues: []string{"uint-slice-field=6", "uint-slice-field=66"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6), uint(66)}},
}},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
configMock := ConfigMock{}
configMockElemValue := reflect.ValueOf(&configMock).Elem()
var setValues []string
for _, fieldSetValues := range test.FieldsSetValues {
setValues = append(setValues, fieldSetValues.SetValues...)
}
err := mergeSetFlag(configMockElemValue, setValues)
if err != nil {
t.Errorf("unexpected error result - err: %v", err)
return
}
for _, fieldSetValues := range test.FieldsSetValues {
fieldValue := configMockElemValue.FieldByName(fieldSetValues.FieldName).Interface()
if !reflect.DeepEqual(fieldValue, fieldSetValues.FieldValue) {
t.Errorf("unexpected result - expected: %v, actual: %v", fieldSetValues.FieldValue, fieldValue)
}
}
})
}
}
func TestMergeSetFlagMixValues(t *testing.T) {
tests := []struct {
Name string
FieldsSetValues []FieldSetValues
}{
{Name: "single value all fields", FieldsSetValues: []FieldSetValues{
{SetValues: []string{"string-slice-field=test"}, FieldName: "StringSliceField", FieldValue: []string{"test"}},
{SetValues: []string{"int-slice-field=6"}, FieldName: "IntSliceField", FieldValue: []int{6}},
{SetValues: []string{"bool-slice-field=true"}, FieldName: "BoolSliceField", FieldValue: []bool{true}},
{SetValues: []string{"uint-slice-field=6"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6)}},
{SetValues: []string{"string-field=test"}, FieldName: "StringField", FieldValue: "test"},
{SetValues: []string{"int-field=6"}, FieldName: "IntField", FieldValue: 6},
{SetValues: []string{"bool-field=true"}, FieldName: "BoolField", FieldValue: true},
{SetValues: []string{"uint-field=6"}, FieldName: "UintField", FieldValue: uint(6)},
}},
{Name: "two values slice fields and single value fields", FieldsSetValues: []FieldSetValues{
{SetValues: []string{"string-slice-field=test", "string-slice-field=test2"}, FieldName: "StringSliceField", FieldValue: []string{"test", "test2"}},
{SetValues: []string{"int-slice-field=6", "int-slice-field=66"}, FieldName: "IntSliceField", FieldValue: []int{6, 66}},
{SetValues: []string{"bool-slice-field=true", "bool-slice-field=false"}, FieldName: "BoolSliceField", FieldValue: []bool{true, false}},
{SetValues: []string{"uint-slice-field=6", "uint-slice-field=66"}, FieldName: "UintSliceField", FieldValue: []uint{uint(6), uint(66)}},
{SetValues: []string{"string-field=test"}, FieldName: "StringField", FieldValue: "test"},
{SetValues: []string{"int-field=6"}, FieldName: "IntField", FieldValue: 6},
{SetValues: []string{"bool-field=true"}, FieldName: "BoolField", FieldValue: true},
{SetValues: []string{"uint-field=6"}, FieldName: "UintField", FieldValue: uint(6)},
}},
}
for _, test := range tests {
t.Run(test.Name, func(t *testing.T) {
configMock := ConfigMock{}
configMockElemValue := reflect.ValueOf(&configMock).Elem()
var setValues []string
for _, fieldSetValues := range test.FieldsSetValues {
setValues = append(setValues, fieldSetValues.SetValues...)
}
err := mergeSetFlag(configMockElemValue, setValues)
if err != nil {
t.Errorf("unexpected error result - err: %v", err)
return
}
for _, fieldSetValues := range test.FieldsSetValues {
fieldValue := configMockElemValue.FieldByName(fieldSetValues.FieldName).Interface()
if !reflect.DeepEqual(fieldValue, fieldSetValues.FieldValue) {
t.Errorf("unexpected result - expected: %v, actual: %v", fieldSetValues.FieldValue, fieldValue)
}
}
})
}
}
func TestGetParsedValueValidValue(t *testing.T) {
tests := []struct {
StringValue string
Kind reflect.Kind
ActualValue interface{}
}{
{StringValue: "test", Kind: reflect.String, ActualValue: "test"},
{StringValue: "123", Kind: reflect.String, ActualValue: "123"},
{StringValue: "true", Kind: reflect.Bool, ActualValue: true},
{StringValue: "false", Kind: reflect.Bool, ActualValue: false},
{StringValue: "6", Kind: reflect.Int, ActualValue: 6},
{StringValue: "-6", Kind: reflect.Int, ActualValue: -6},
{StringValue: "6", Kind: reflect.Int8, ActualValue: int8(6)},
{StringValue: "-6", Kind: reflect.Int8, ActualValue: int8(-6)},
{StringValue: "6", Kind: reflect.Int16, ActualValue: int16(6)},
{StringValue: "-6", Kind: reflect.Int16, ActualValue: int16(-6)},
{StringValue: "6", Kind: reflect.Int32, ActualValue: int32(6)},
{StringValue: "-6", Kind: reflect.Int32, ActualValue: int32(-6)},
{StringValue: "6", Kind: reflect.Int64, ActualValue: int64(6)},
{StringValue: "-6", Kind: reflect.Int64, ActualValue: int64(-6)},
{StringValue: "6", Kind: reflect.Uint, ActualValue: uint(6)},
{StringValue: "66", Kind: reflect.Uint, ActualValue: uint(66)},
{StringValue: "6", Kind: reflect.Uint8, ActualValue: uint8(6)},
{StringValue: "66", Kind: reflect.Uint8, ActualValue: uint8(66)},
{StringValue: "6", Kind: reflect.Uint16, ActualValue: uint16(6)},
{StringValue: "66", Kind: reflect.Uint16, ActualValue: uint16(66)},
{StringValue: "6", Kind: reflect.Uint32, ActualValue: uint32(6)},
{StringValue: "66", Kind: reflect.Uint32, ActualValue: uint32(66)},
{StringValue: "6", Kind: reflect.Uint64, ActualValue: uint64(6)},
{StringValue: "66", Kind: reflect.Uint64, ActualValue: uint64(66)},
}
for _, test := range tests {
t.Run(fmt.Sprintf("%v %v", test.Kind, test.StringValue), func(t *testing.T) {
parsedValue, err := getParsedValue(test.Kind, test.StringValue)
if err != nil {
t.Errorf("unexpected error result - err: %v", err)
return
}
if parsedValue.Interface() != test.ActualValue {
t.Errorf("unexpected result - expected: %v, actual: %v", test.ActualValue, parsedValue)
}
})
}
}
func TestGetParsedValueInvalidValue(t *testing.T) {
tests := []struct {
StringValue string
Kind reflect.Kind
}{
{StringValue: "test", Kind: reflect.Bool},
{StringValue: "123", Kind: reflect.Bool},
{StringValue: "test", Kind: reflect.Int},
{StringValue: "true", Kind: reflect.Int},
{StringValue: "test", Kind: reflect.Int8},
{StringValue: "true", Kind: reflect.Int8},
{StringValue: "test", Kind: reflect.Int16},
{StringValue: "true", Kind: reflect.Int16},
{StringValue: "test", Kind: reflect.Int32},
{StringValue: "true", Kind: reflect.Int32},
{StringValue: "test", Kind: reflect.Int64},
{StringValue: "true", Kind: reflect.Int64},
{StringValue: "test", Kind: reflect.Uint},
{StringValue: "-6", Kind: reflect.Uint},
{StringValue: "test", Kind: reflect.Uint8},
{StringValue: "-6", Kind: reflect.Uint8},
{StringValue: "test", Kind: reflect.Uint16},
{StringValue: "-6", Kind: reflect.Uint16},
{StringValue: "test", Kind: reflect.Uint32},
{StringValue: "-6", Kind: reflect.Uint32},
{StringValue: "test", Kind: reflect.Uint64},
{StringValue: "-6", Kind: reflect.Uint64},
}
for _, test := range tests {
t.Run(fmt.Sprintf("%v %v", test.Kind, test.StringValue), func(t *testing.T) {
parsedValue, err := getParsedValue(test.Kind, test.StringValue)
if err == nil {
t.Errorf("unexpected unhandled error - stringValue: %v, Kind: %v", test.StringValue, test.Kind)
return
}
if parsedValue != reflect.ValueOf(nil) {
t.Errorf("unexpected parsed value - parsedValue: %v", parsedValue)
}
})
}
}
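The tests above pin down getParsedValue's contract: turn a string into a reflect.Value of the requested kind, and return the zero reflect.Value plus an error on any parse failure. The real implementation is not part of this excerpt; a plausible sketch under those assumptions, trimmed to a few of the kinds the tests exercise:

package main

import (
    "fmt"
    "reflect"
    "strconv"
)

// getParsedValue converts stringValue into a reflect.Value of the given kind;
// on failure it returns the zero reflect.Value and an error, matching the
// behaviour the tests above expect. Only String, Bool, Int and Uint are
// handled here; the full version would cover the sized int/uint kinds too.
func getParsedValue(kind reflect.Kind, stringValue string) (reflect.Value, error) {
    switch kind {
    case reflect.String:
        return reflect.ValueOf(stringValue), nil
    case reflect.Bool:
        b, err := strconv.ParseBool(stringValue)
        if err != nil {
            return reflect.Value{}, fmt.Errorf("invalid value %s", stringValue)
        }
        return reflect.ValueOf(b), nil
    case reflect.Int:
        i, err := strconv.ParseInt(stringValue, 10, 64)
        if err != nil {
            return reflect.Value{}, fmt.Errorf("invalid value %s", stringValue)
        }
        return reflect.ValueOf(int(i)), nil
    case reflect.Uint:
        u, err := strconv.ParseUint(stringValue, 10, 64)
        if err != nil {
            return reflect.Value{}, fmt.Errorf("invalid value %s", stringValue)
        }
        return reflect.ValueOf(uint(u)), nil
    }
    return reflect.Value{}, fmt.Errorf("unhandled kind %s", kind)
}

func main() {
    v, _ := getParsedValue(reflect.Uint, "6")
    fmt.Println(v.Interface() == uint(6)) // true
    _, err := getParsedValue(reflect.Uint, "-6")
    fmt.Println(err != nil) // true
}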

View File

@@ -1,6 +1,7 @@
package config_test
import (
"fmt"
"github.com/up9inc/mizu/cli/config"
"reflect"
"strings"
@@ -13,11 +14,14 @@ func TestConfigWriteIgnoresReadonlyFields(t *testing.T) {
configElem := reflect.ValueOf(&config.ConfigStruct{}).Elem()
getFieldsWithReadonlyTag(configElem, &readonlyFields)
config, _ := config.GetConfigWithDefaults()
configWithDefaults, _ := config.GetConfigWithDefaults()
for _, readonlyField := range readonlyFields {
if strings.Contains(config, readonlyField) {
t.Errorf("unexpected result - readonly field: %v, config: %v", readonlyField, config)
}
t.Run(readonlyField, func(t *testing.T) {
readonlyFieldToCheck := fmt.Sprintf("\n%s:", readonlyField)
if strings.Contains(configWithDefaults, readonlyFieldToCheck) {
t.Errorf("unexpected result - readonly field: %v, config: %v", readonlyField, configWithDefaults)
}
})
}
}

cli/config/envConfig.go (new file, 24 lines)
View File

@@ -0,0 +1,24 @@
package config
import (
"os"
"strconv"
)
const (
ApiServerRetries = "API_SERVER_RETRIES"
)
func GetIntEnvConfig(key string, defaultValue int) int {
value := os.Getenv(key)
if value == "" {
return defaultValue
}
intValue, err := strconv.Atoi(value)
if err != nil {
return defaultValue
}
return intValue
}
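The new envConfig.go gives the CLI an integer environment override with a safe fallback; judging by the API_SERVER_RETRIES constant, it presumably bounds the API-server connection test. A small standalone usage sketch (the default of 20 below is illustrative, not the CLI's actual value):

package main

import (
    "fmt"
    "os"
    "strconv"
)

const apiServerRetriesEnv = "API_SERVER_RETRIES"

// getIntEnvConfig mirrors config.GetIntEnvConfig: use the env value when it is
// a valid integer, otherwise fall back to the default.
func getIntEnvConfig(key string, defaultValue int) int {
    value := os.Getenv(key)
    if value == "" {
        return defaultValue
    }
    intValue, err := strconv.Atoi(value)
    if err != nil {
        return defaultValue
    }
    return intValue
}

func main() {
    retries := getIntEnvConfig(apiServerRetriesEnv, 20) // 20 is an illustrative default
    fmt.Printf("the API server connection test would be retried up to %d times\n", retries)
}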

View File

@@ -4,6 +4,7 @@ go 1.16
require (
github.com/creasty/defaults v1.5.1
github.com/denisbrodbeck/machineid v1.0.1
github.com/google/go-github/v37 v37.0.0
github.com/gorilla/websocket v1.4.2
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7

View File

@@ -88,6 +88,8 @@ github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/daviddengcn/go-colortext v0.0.0-20160507010035-511bcaf42ccd/go.mod h1:dv4zxwHi5C/8AeI+4gX4dCWOIvNi7I6JCSX0HvlKPgE=
github.com/denisbrodbeck/machineid v1.0.1 h1:geKr9qtkB876mXguW2X6TU4ZynleN6ezuMSRhl4D7AQ=
github.com/denisbrodbeck/machineid v1.0.1/go.mod h1:dJUwb7PTidGDeYyUBmXZ2GphQBbjJCrnectwCyxcUSI=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
@@ -412,7 +414,6 @@ github.com/vektah/gqlparser v1.1.2/go.mod h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca/go.mod h1:ce1O1j6UtZfjr22oyGxGLbauSBp2YVXpARAosm7dHBg=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0/go.mod h1:/LWChgwKmvncFJFHJ7Gvn9wZArjbV5/FppcK2fKk/tI=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=

View File

@@ -7,15 +7,17 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/up9inc/mizu/cli/logger"
"os"
"path/filepath"
"regexp"
"strconv"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"
"io"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/shared"
"io"
core "k8s.io/api/core/v1"
rbac "k8s.io/api/rbac/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
@@ -37,7 +39,6 @@ import (
"k8s.io/client-go/tools/clientcmd"
_ "k8s.io/client-go/tools/portforward"
watchtools "k8s.io/client-go/tools/watch"
"k8s.io/client-go/util/homedir"
)
type Provider struct {
@@ -56,13 +57,23 @@ func NewProvider(kubeConfigPath string) (*Provider, error) {
restClientConfig, err := kubernetesConfig.ClientConfig()
if err != nil {
if clientcmd.IsEmptyConfig(err) {
return nil, fmt.Errorf("Couldn't find the kube config file, or file is empty. Try adding '--kube-config=<path to kube config file>'\n")
return nil, fmt.Errorf("couldn't find the kube config file, or file is empty (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
if clientcmd.IsConfigurationInvalid(err) {
return nil, fmt.Errorf("Invalid kube config file. Try using a different config with '--kube-config=<path to kube config file>'\n")
return nil, fmt.Errorf("invalid kube config file (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
return nil, fmt.Errorf("error while using kube config (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
clientSet, err := getClientSet(restClientConfig)
if err != nil {
return nil, fmt.Errorf("error while using kube config (%s)\n"+
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
clientSet := getClientSet(restClientConfig)
return &Provider{
clientSet: clientSet,
@@ -141,6 +152,8 @@ type ApiServerOptions struct {
IsNamespaceRestricted bool
MizuApiFilteringOptions *shared.TrafficFilteringOptions
MaxEntriesDBSizeBytes int64
Resources configStructs.Resources
ImagePullPolicy core.PullPolicy
}
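ApiServerOptions (and the tapper daemon set further down) now carries a configStructs.Resources block instead of hard-coded quantities, and the provider parses each field with resource.ParseQuantity. A trimmed standalone sketch of turning such a block into a core.ResourceRequirements, assuming the same k8s.io/api and apimachinery modules the provider already imports — an illustration, not the provider's actual code:

package main

import (
    "fmt"

    core "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// resources mirrors configStructs.Resources.
type resources struct {
    CpuLimit       string
    MemoryLimit    string
    CpuRequests    string
    MemoryRequests string
}

// toRequirements parses the human-readable quantities ("750m", "1Gi", ...)
// into the structure a container spec expects; any bad value surfaces as an error.
func toRequirements(r resources) (core.ResourceRequirements, error) {
    cpuLimit, err := resource.ParseQuantity(r.CpuLimit)
    if err != nil {
        return core.ResourceRequirements{}, fmt.Errorf("invalid cpu limit: %w", err)
    }
    memLimit, err := resource.ParseQuantity(r.MemoryLimit)
    if err != nil {
        return core.ResourceRequirements{}, fmt.Errorf("invalid memory limit: %w", err)
    }
    cpuReq, err := resource.ParseQuantity(r.CpuRequests)
    if err != nil {
        return core.ResourceRequirements{}, fmt.Errorf("invalid cpu request: %w", err)
    }
    memReq, err := resource.ParseQuantity(r.MemoryRequests)
    if err != nil {
        return core.ResourceRequirements{}, fmt.Errorf("invalid memory request: %w", err)
    }
    return core.ResourceRequirements{
        Limits:   core.ResourceList{core.ResourceCPU: cpuLimit, core.ResourceMemory: memLimit},
        Requests: core.ResourceList{core.ResourceCPU: cpuReq, core.ResourceMemory: memReq},
    }, nil
}

func main() {
    req, err := toRequirements(resources{CpuLimit: "750m", MemoryLimit: "1Gi", CpuRequests: "50m", MemoryRequests: "50Mi"})
    if err != nil {
        panic(err)
    }
    fmt.Println(req.Limits.Cpu().String(), req.Limits.Memory().String()) // 750m 1Gi
}
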
func (provider *Provider) CreateMizuApiServerPod(ctx context.Context, opts *ApiServerOptions) (*core.Pod, error) {
@@ -153,19 +166,19 @@ func (provider *Provider) CreateMizuApiServerPod(ctx context.Context, opts *ApiS
configMapOptional := true
configMapVolumeName.Optional = &configMapOptional
cpuLimit, err := resource.ParseQuantity("750m")
cpuLimit, err := resource.ParseQuantity(opts.Resources.CpuLimit)
if err != nil {
return nil, errors.New(fmt.Sprintf("invalid cpu limit for %s container", opts.PodName))
}
memLimit, err := resource.ParseQuantity("512Mi")
memLimit, err := resource.ParseQuantity(opts.Resources.MemoryLimit)
if err != nil {
return nil, errors.New(fmt.Sprintf("invalid memory limit for %s container", opts.PodName))
}
cpuRequests, err := resource.ParseQuantity("50m")
cpuRequests, err := resource.ParseQuantity(opts.Resources.CpuRequests)
if err != nil {
return nil, errors.New(fmt.Sprintf("invalid cpu request for %s container", opts.PodName))
}
memRequests, err := resource.ParseQuantity("50Mi")
memRequests, err := resource.ParseQuantity(opts.Resources.MemoryRequests)
if err != nil {
return nil, errors.New(fmt.Sprintf("invalid memory request for %s container", opts.PodName))
}
@@ -186,7 +199,7 @@ func (provider *Provider) CreateMizuApiServerPod(ctx context.Context, opts *ApiS
{
Name: opts.PodName,
Image: opts.PodImage,
ImagePullPolicy: core.PullAlways,
ImagePullPolicy: opts.ImagePullPolicy,
VolumeMounts: []core.VolumeMount{
{
Name: mizu.ConfigMapName,
@@ -562,11 +575,11 @@ func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string,
return nil
}
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeToTappedPodIPMap map[string][]string, serviceAccountName string, tapOutgoing bool) error {
logger.Log.Debugf("Applying %d tapper deamonsets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeToTappedPodIPMap), namespace, daemonSetName, podImage, tapperPodName)
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeToTappedPodIPMap map[string][]string, serviceAccountName string, resources configStructs.Resources, imagePullPolicy core.PullPolicy) error {
logger.Log.Debugf("Applying %d tapper daemon sets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeToTappedPodIPMap), namespace, daemonSetName, podImage, tapperPodName)
if len(nodeToTappedPodIPMap) == 0 {
return fmt.Errorf("Daemon set %s must tap at least 1 pod", daemonSetName)
return fmt.Errorf("daemon set %s must tap at least 1 pod", daemonSetName)
}
nodeToTappedPodIPMapJsonStr, err := json.Marshal(nodeToTappedPodIPMap)
@@ -579,15 +592,13 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
"-i", "any",
"--tap",
"--api-server-address", fmt.Sprintf("ws://%s/wsTapper", apiServerPodIp),
}
if tapOutgoing {
mizuCmd = append(mizuCmd, "--anydirection")
"--nodefrag",
}
agentContainer := applyconfcore.Container()
agentContainer.WithName(tapperPodName)
agentContainer.WithImage(podImage)
agentContainer.WithImagePullPolicy(core.PullAlways)
agentContainer.WithImagePullPolicy(imagePullPolicy)
agentContainer.WithSecurityContext(applyconfcore.SecurityContext().WithPrivileged(true))
agentContainer.WithCommand(mizuCmd...)
agentContainer.WithEnv(
@@ -601,19 +612,19 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
),
),
)
cpuLimit, err := resource.ParseQuantity("500m")
cpuLimit, err := resource.ParseQuantity(resources.CpuLimit)
if err != nil {
return errors.New(fmt.Sprintf("invalid cpu limit for %s container", tapperPodName))
}
memLimit, err := resource.ParseQuantity("1Gi")
memLimit, err := resource.ParseQuantity(resources.MemoryLimit)
if err != nil {
return errors.New(fmt.Sprintf("invalid memory limit for %s container", tapperPodName))
}
cpuRequests, err := resource.ParseQuantity("50m")
cpuRequests, err := resource.ParseQuantity(resources.CpuRequests)
if err != nil {
return errors.New(fmt.Sprintf("invalid cpu request for %s container", tapperPodName))
}
memRequests, err := resource.ParseQuantity("50Mi")
memRequests, err := resource.ParseQuantity(resources.MemoryRequests)
if err != nil {
return errors.New(fmt.Sprintf("invalid memory request for %s container", tapperPodName))
}
@@ -728,24 +739,26 @@ func (provider *Provider) GetPodLogs(namespace string, podName string, ctx conte
return str, nil
}
func getClientSet(config *restclient.Config) *kubernetes.Clientset {
func (provider *Provider) GetNamespaceEvents(namespace string, ctx context.Context) (string, error) {
eventsOpts := metav1.ListOptions{}
eventList, err := provider.clientSet.CoreV1().Events(namespace).List(ctx, eventsOpts)
if err != nil {
return "", fmt.Errorf("error getting events on ns: %s, %w", namespace, err)
}
return eventList.String(), nil
}
func getClientSet(config *restclient.Config) (*kubernetes.Clientset, error) {
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
return nil, err
}
return clientSet
return clientSet, nil
}
func loadKubernetesConfiguration(kubeConfigPath string) clientcmd.ClientConfig {
if kubeConfigPath == "" {
kubeConfigPath = os.Getenv("KUBECONFIG")
}
if kubeConfigPath == "" {
home := homedir.HomeDir()
kubeConfigPath = filepath.Join(home, ".kube", "config")
}
logger.Log.Debugf("Using kube config %s", kubeConfigPath)
configPathList := filepath.SplitList(kubeConfigPath)
configLoadingRules := &clientcmd.ClientConfigLoadingRules{}
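These hunks replace the hard-coded API-server resource limits and image pull policy with values taken from the CLI config. A minimal caller-side sketch (values are illustrative and other fields such as IsNamespaceRestricted and MizuApiFilteringOptions are omitted; the actual call site is outside this diff):
opts := &kubernetes.ApiServerOptions{
	PodName:               "mizu-api-server",    // illustrative
	PodImage:              "up9inc/mizu:latest", // illustrative
	MaxEntriesDBSizeBytes: 200 * 1000 * 1000,
	Resources: configStructs.Resources{
		CpuLimit:       "750m", // each string is validated with resource.ParseQuantity above
		MemoryLimit:    "512Mi",
		CpuRequests:    "50m",
		MemoryRequests: "50Mi",
	},
	ImagePullPolicy: core.PullAlways, // any valid core.PullPolicy
}
pod, err := provider.CreateMizuApiServerPod(ctx, opts)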

View File

@@ -39,6 +39,7 @@ func StartProxy(kubernetesProvider *Provider, mizuPort uint16, mizuNamespace str
server := http.Server{
Handler: mux,
}
return server.Serve(l)
}

View File

@@ -39,22 +39,39 @@ func DumpLogs(provider *kubernetes.Provider, ctx context.Context, filePath strin
} else {
logger.Log.Debugf("Successfully read log length %d for pod: %s.%s", len(logs), pod.Namespace, pod.Name)
}
if err := AddStrToZip(zipWriter, logs, fmt.Sprintf("%s.%s.log", pod.Namespace, pod.Name)); err != nil {
logger.Log.Errorf("Failed write logs, %v", err)
} else {
logger.Log.Debugf("Successfully added log length %d from pod: %s.%s", len(logs), pod.Namespace, pod.Name)
}
}
if err := AddFileToZip(zipWriter, config.GetConfigFilePath()); err != nil {
events, err := provider.GetNamespaceEvents(config.Config.MizuResourcesNamespace, ctx)
if err != nil {
logger.Log.Debugf("Failed to get k8b events, %v", err)
} else {
logger.Log.Debugf("Successfully read events for k8b namespace: %s", config.Config.MizuResourcesNamespace)
}
if err := AddStrToZip(zipWriter, events, fmt.Sprintf("%s_events.log", config.Config.MizuResourcesNamespace)); err != nil {
logger.Log.Debugf("Failed write logs, %v", err)
} else {
logger.Log.Debugf("Successfully added events for k8b namespace: %s", config.Config.MizuResourcesNamespace)
}
if err := AddFileToZip(zipWriter, config.Config.ConfigFilePath); err != nil {
logger.Log.Debugf("Failed write file, %v", err)
} else {
logger.Log.Debugf("Successfully added file %s", config.GetConfigFilePath())
logger.Log.Debugf("Successfully added file %s", config.Config.ConfigFilePath)
}
if err := AddFileToZip(zipWriter, logger.GetLogFilePath()); err != nil {
logger.Log.Debugf("Failed write file, %v", err)
} else {
logger.Log.Debugf("Successfully added file %s", logger.GetLogFilePath())
}
logger.Log.Infof("You can find the zip file with all logs in %s\n", filePath)
return nil
}
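With the new GetNamespaceEvents call, the dump now bundles pod logs, namespace events, the config file and the CLI log into a single archive. A rough caller-side sketch, assuming DumpLogs lives in the same fsUtils package as the zip helpers below (path and error handling are illustrative):
if err := fsUtils.DumpLogs(kubernetesProvider, ctx, "./mizu_logs.zip"); err != nil {
	logger.Log.Errorf("Failed to dump logs, %v", err)
}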

View File

@@ -3,9 +3,11 @@ package fsUtils
import (
"archive/zip"
"fmt"
"github.com/up9inc/mizu/cli/logger"
"io"
"os"
"path/filepath"
"strings"
)
func AddFileToZip(zipWriter *zip.Writer, filename string) error {
@@ -53,3 +55,60 @@ func AddStrToZip(writer *zip.Writer, logs string, fileName string) error {
}
return nil
}
func Unzip(reader *zip.Reader, dest string) error {
dest, _ = filepath.Abs(dest)
_ = os.MkdirAll(dest, os.ModePerm)
// Closure to address the file descriptor exhaustion issue caused by all the deferred .Close() methods
extractAndWriteFile := func(f *zip.File) error {
rc, err := f.Open()
if err != nil {
return err
}
defer func() {
if err := rc.Close(); err != nil {
panic(err)
}
}()
path := filepath.Join(dest, f.Name)
// Check for ZipSlip (Directory traversal)
if !strings.HasPrefix(path, filepath.Clean(dest)+string(os.PathSeparator)) {
return fmt.Errorf("illegal file path: %s", path)
}
if f.FileInfo().IsDir() {
_ = os.MkdirAll(path, f.Mode())
} else {
_ = os.MkdirAll(filepath.Dir(path), f.Mode())
logger.Log.Infof("writing HAR file [ %v ]", path)
f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
if err != nil {
return err
}
defer func() {
if err := f.Close(); err != nil {
panic(err)
}
logger.Log.Info(" done")
}()
_, err = io.Copy(f, rc)
if err != nil {
return err
}
}
return nil
}
for _, f := range reader.File {
err := extractAndWriteFile(f)
if err != nil {
return err
}
}
return nil
}
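Unzip extracts a downloaded archive (HAR files, judging by the log line) while guarding against ZipSlip path traversal. A minimal usage sketch, assuming the package is imported as fsUtils:
zipFile, err := zip.OpenReader("./entries.zip") // illustrative path
if err != nil {
	return err
}
defer zipFile.Close()
if err := fsUtils.Unzip(&zipFile.Reader, "./extracted"); err != nil {
	return err
}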

View File

@@ -9,3 +9,17 @@ func Contains(slice []string, containsValue string) bool {
return false
}
func Unique(slice []string) []string {
keys := make(map[string]bool)
var list []string
for _, entry := range slice {
if _, value := keys[entry]; !value {
keys[entry] = true
list = append(list, entry)
}
}
return list
}

View File

@@ -1,82 +1,130 @@
package mizu_test
import (
"fmt"
"github.com/up9inc/mizu/cli/mizu"
"reflect"
"testing"
)
func TestContainsExists(t *testing.T) {
tests := []struct {
slice []string
containsValue string
expected bool
Slice []string
ContainsValue string
Expected bool
}{
{slice: []string{"apple", "orange", "banana", "grapes"}, containsValue: "apple", expected: true},
{slice: []string{"apple", "orange", "banana", "grapes"}, containsValue: "orange", expected: true},
{slice: []string{"apple", "orange", "banana", "grapes"}, containsValue: "banana", expected: true},
{slice: []string{"apple", "orange", "banana", "grapes"}, containsValue: "grapes", expected: true},
{Slice: []string{"apple", "orange", "banana", "grapes"}, ContainsValue: "apple", Expected: true},
{Slice: []string{"apple", "orange", "banana", "grapes"}, ContainsValue: "orange", Expected: true},
{Slice: []string{"apple", "orange", "banana", "grapes"}, ContainsValue: "banana", Expected: true},
{Slice: []string{"apple", "orange", "banana", "grapes"}, ContainsValue: "grapes", Expected: true},
}
for _, test := range tests {
actual := mizu.Contains(test.slice, test.containsValue)
if actual != test.expected {
t.Errorf("unexpected result - expected: %v, actual: %v", test.expected, actual)
}
t.Run(test.ContainsValue, func(t *testing.T) {
actual := mizu.Contains(test.Slice, test.ContainsValue)
if actual != test.Expected {
t.Errorf("unexpected result - Expected: %v, actual: %v", test.Expected, actual)
}
})
}
}
func TestContainsNotExists(t *testing.T) {
tests := []struct {
slice []string
containsValue string
expected bool
Slice []string
ContainsValue string
Expected bool
}{
{slice: []string{"apple", "orange", "banana", "grapes"}, containsValue: "cat", expected: false},
{slice: []string{"apple", "orange", "banana", "grapes"}, containsValue: "dog", expected: false},
{slice: []string{"apple", "orange", "banana", "grapes"}, containsValue: "apples", expected: false},
{slice: []string{"apple", "orange", "banana", "grapes"}, containsValue: "rapes", expected: false},
{Slice: []string{"apple", "orange", "banana", "grapes"}, ContainsValue: "cat", Expected: false},
{Slice: []string{"apple", "orange", "banana", "grapes"}, ContainsValue: "dog", Expected: false},
{Slice: []string{"apple", "orange", "banana", "grapes"}, ContainsValue: "apples", Expected: false},
{Slice: []string{"apple", "orange", "banana", "grapes"}, ContainsValue: "rapes", Expected: false},
}
for _, test := range tests {
actual := mizu.Contains(test.slice, test.containsValue)
if actual != test.expected {
t.Errorf("unexpected result - expected: %v, actual: %v", test.expected, actual)
}
t.Run(test.ContainsValue, func(t *testing.T) {
actual := mizu.Contains(test.Slice, test.ContainsValue)
if actual != test.Expected {
t.Errorf("unexpected result - Expected: %v, actual: %v", test.Expected, actual)
}
})
}
}
func TestContainsEmptySlice(t *testing.T) {
tests := []struct {
slice []string
containsValue string
expected bool
Slice []string
ContainsValue string
Expected bool
}{
{slice: []string{}, containsValue: "cat", expected: false},
{slice: []string{}, containsValue: "dog", expected: false},
{Slice: []string{}, ContainsValue: "cat", Expected: false},
{Slice: []string{}, ContainsValue: "dog", Expected: false},
}
for _, test := range tests {
actual := mizu.Contains(test.slice, test.containsValue)
if actual != test.expected {
t.Errorf("unexpected result - expected: %v, actual: %v", test.expected, actual)
}
t.Run(test.ContainsValue, func(t *testing.T) {
actual := mizu.Contains(test.Slice, test.ContainsValue)
if actual != test.Expected {
t.Errorf("unexpected result - Expected: %v, actual: %v", test.Expected, actual)
}
})
}
}
func TestContainsNilSlice(t *testing.T) {
tests := []struct {
slice []string
containsValue string
expected bool
Slice []string
ContainsValue string
Expected bool
}{
{slice: nil, containsValue: "cat", expected: false},
{slice: nil, containsValue: "dog", expected: false},
{Slice: nil, ContainsValue: "cat", Expected: false},
{Slice: nil, ContainsValue: "dog", Expected: false},
}
for _, test := range tests {
actual := mizu.Contains(test.slice, test.containsValue)
if actual != test.expected {
t.Errorf("unexpected result - expected: %v, actual: %v", test.expected, actual)
}
t.Run(test.ContainsValue, func(t *testing.T) {
actual := mizu.Contains(test.Slice, test.ContainsValue)
if actual != test.Expected {
t.Errorf("unexpected result - Expected: %v, actual: %v", test.Expected, actual)
}
})
}
}
func TestUniqueNoDuplicateValues(t *testing.T) {
tests := []struct {
Slice []string
Expected []string
}{
{Slice: []string{"apple", "orange", "banana", "grapes"}, Expected: []string{"apple", "orange", "banana", "grapes"}},
{Slice: []string{"dog", "cat", "mouse"}, Expected: []string{"dog", "cat", "mouse"}},
}
for index, test := range tests {
t.Run(fmt.Sprintf("%v", index), func(t *testing.T) {
actual := mizu.Unique(test.Slice)
if !reflect.DeepEqual(test.Expected, actual) {
t.Errorf("unexpected result - Expected: %v, actual: %v", test.Expected, actual)
}
})
}
}
func TestUniqueDuplicateValues(t *testing.T) {
tests := []struct {
Slice []string
Expected []string
}{
{Slice: []string{"apple", "apple", "orange", "orange", "banana", "banana", "grapes", "grapes"}, Expected: []string{"apple", "orange", "banana", "grapes"}},
{Slice: []string{"dog", "cat", "cat", "mouse"}, Expected: []string{"dog", "cat", "mouse"}},
}
for index, test := range tests {
t.Run(fmt.Sprintf("%v", index), func(t *testing.T) {
actual := mizu.Unique(test.Slice)
if !reflect.DeepEqual(test.Expected, actual) {
t.Errorf("unexpected result - Expected: %v, actual: %v", test.Expected, actual)
}
})
}
}

View File

@@ -2,43 +2,21 @@ package version
import (
"context"
"encoding/json"
"fmt"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"io/ioutil"
"net/http"
"net/url"
"time"
"github.com/google/go-github/v37/github"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/semver"
)
func getApiVersion(port uint16) (string, error) {
versionUrl, _ := url.Parse(fmt.Sprintf("http://localhost:%d/mizu/metadata/version", port))
req := &http.Request{
Method: http.MethodGet,
URL: versionUrl,
}
statusResp, err := http.DefaultClient.Do(req)
if err != nil {
return "", err
}
defer statusResp.Body.Close()
versionResponse := &shared.VersionResponse{}
if err := json.NewDecoder(statusResp.Body).Decode(&versionResponse); err != nil {
return "", err
}
return versionResponse.SemVer, nil
}
func CheckVersionCompatibility(port uint16) (bool, error) {
apiSemVer, err := getApiVersion(port)
func CheckVersionCompatibility() (bool, error) {
apiSemVer, err := apiserver.Provider.GetVersion()
if err != nil {
return false, err
}
@@ -52,13 +30,14 @@ func CheckVersionCompatibility(port uint16) (bool, error) {
return false, nil
}
func CheckNewerVersion() {
func CheckNewerVersion(versionChan chan string) {
logger.Log.Debugf("Checking for newer version...")
start := time.Now()
client := github.NewClient(nil)
latestRelease, _, err := client.Repositories.GetLatestRelease(context.Background(), "up9inc", "mizu")
if err != nil {
logger.Log.Debugf("[ERROR] Failed to get latest release")
versionChan <- ""
return
}
@@ -71,12 +50,14 @@ func CheckNewerVersion() {
}
if versionFileUrl == "" {
logger.Log.Debugf("[ERROR] Version file not found in the latest release")
versionChan <- ""
return
}
res, err := http.Get(versionFileUrl)
if err != nil {
logger.Log.Debugf("[ERROR] Failed to get the version file %v", err)
versionChan <- ""
return
}
@@ -84,12 +65,19 @@ func CheckNewerVersion() {
res.Body.Close()
if err != nil {
logger.Log.Debugf("[ERROR] Failed to read the version file -> %v", err)
versionChan <- ""
return
}
gitHubVersion := string(data)
gitHubVersion = gitHubVersion[:len(gitHubVersion)-1]
logger.Log.Debugf("Finished version validation, took %v", time.Since(start))
if mizu.SemVer < gitHubVersion {
logger.Log.Infof(uiUtils.Yellow, fmt.Sprintf("Update available! %v -> %v (%v)", mizu.SemVer, gitHubVersion, *latestRelease.HTMLURL))
gitHubVersionSemVer := semver.SemVersion(gitHubVersion)
currentSemVer := semver.SemVersion(mizu.SemVer)
logger.Log.Debugf("Finished version validation, github version %v, current version %v, took %v", gitHubVersion, currentSemVer, time.Since(start))
if gitHubVersionSemVer.GreaterThan(currentSemVer) {
versionChan <- fmt.Sprintf("Update available! %v -> %v (%v)", mizu.SemVer, gitHubVersion, *latestRelease.HTMLURL)
} else {
versionChan <- ""
}
}
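CheckNewerVersion now reports through a channel instead of logging directly, so the caller can run the check concurrently and decide later whether to surface the message. A minimal sketch of such a caller (not part of this diff):
versionChan := make(chan string)
go version.CheckNewerVersion(versionChan)
// ... continue with command startup ...
if msg := <-versionChan; msg != "" {
	logger.Log.Infof(uiUtils.Yellow, msg)
}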

View File

@@ -4,11 +4,11 @@ import (
"bytes"
"encoding/json"
"fmt"
"github.com/denisbrodbeck/machineid"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"io/ioutil"
"net/http"
)
@@ -34,35 +34,15 @@ func ReportRun(cmd string, args interface{}) {
logger.Log.Debugf("successfully reported telemetry for cmd %v", cmd)
}
func ReportAPICalls(mizuPort uint16) {
func ReportAPICalls() {
if !shouldRunTelemetry() {
logger.Log.Debugf("not reporting telemetry")
return
}
mizuProxiedUrl := kubernetes.GetMizuApiServerProxiedHostAndPath(mizuPort)
generalStatsUrl := fmt.Sprintf("http://%s/api/generalStats", mizuProxiedUrl)
response, requestErr := http.Get(generalStatsUrl)
if requestErr != nil {
logger.Log.Debugf("ERROR: failed to get general stats for telemetry, err: %v", requestErr)
return
} else if response.StatusCode != 200 {
logger.Log.Debugf("ERROR: failed to get general stats for telemetry, status code: %v", response.StatusCode)
return
}
defer func() { _ = response.Body.Close() }()
data, readErr := ioutil.ReadAll(response.Body)
if readErr != nil {
logger.Log.Debugf("ERROR: failed to read general stats for telemetry, err: %v", readErr)
return
}
var generalStats map[string]interface{}
if parseErr := json.Unmarshal(data, &generalStats); parseErr != nil {
logger.Log.Debugf("ERROR: failed to parse general stats for telemetry, err: %v", parseErr)
generalStats, err := apiserver.Provider.GetGeneralStats()
if err != nil {
logger.Log.Debugf("[ERROR] failed get general stats from api server %v", err)
return
}
@@ -99,6 +79,10 @@ func sendTelemetry(telemetryType string, argsMap map[string]interface{}) error {
argsMap["branch"] = mizu.Branch
argsMap["version"] = mizu.SemVer
if machineId, err := machineid.ProtectedID("mizu"); err == nil {
argsMap["machineId"] = machineId
}
jsonValue, _ := json.Marshal(argsMap)
if resp, err := http.Post(telemetryUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {

8
codecov.yml Normal file
View File

@@ -0,0 +1,8 @@
coverage:
status:
project:
default:
threshold: 1%
patch:
default:
enabled: no

View File

@@ -5,7 +5,7 @@ import (
"io/ioutil"
"strings"
yaml "gopkg.in/yaml.v3"
"gopkg.in/yaml.v3"
)
type WebSocketMessageType string

View File

@@ -26,3 +26,23 @@ func (v SemVersion) Patch() string {
_, _, patch := v.Breakdown()
return patch
}
func (v SemVersion) GreaterThan(v2 SemVersion) bool {
if v.Major() > v2.Major() {
return true
} else if v.Major() < v2.Major() {
return false
}
if v.Minor() > v2.Minor() {
return true
} else if v.Minor() < v2.Minor() {
return false
}
if v.Patch() > v2.Patch() {
return true
}
return false
}
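GreaterThan compares the major, minor and patch components in order; since the components come back from Breakdown as strings (see Patch above), each comparison is Go string ordering, so the illustrative values below stick to single-digit components:
current := semver.SemVersion("0.9.0")
latest := semver.SemVersion("1.0.0")
fmt.Println(latest.GreaterThan(current))  // true: major "1" > "0"
fmt.Println(current.GreaterThan(current)) // false: all components equal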

176
tap/api/api.go Normal file
View File

@@ -0,0 +1,176 @@
package api
import (
"bufio"
"plugin"
"sync"
"time"
)
type Protocol struct {
Name string `json:"name"`
LongName string `json:"long_name"`
Abbreviation string `json:"abbreviation"`
Version string `json:"version"`
BackgroundColor string `json:"background_color"`
ForegroundColor string `json:"foreground_color"`
FontSize int8 `json:"font_size"`
ReferenceLink string `json:"reference_link"`
Ports []string `json:"ports"`
Priority uint8 `json:"priority"`
}
type Extension struct {
Protocol Protocol
Path string
Plug *plugin.Plugin
Dissector Dissector
MatcherMap *sync.Map
}
type ConnectionInfo struct {
ClientIP string
ClientPort string
ServerIP string
ServerPort string
IsOutgoing bool
}
type TcpID struct {
SrcIP string
DstIP string
SrcPort string
DstPort string
Ident string
}
type CounterPair struct {
Request uint
Response uint
}
type GenericMessage struct {
IsRequest bool `json:"is_request"`
CaptureTime time.Time `json:"capture_time"`
Payload interface{} `json:"payload"`
}
type RequestResponsePair struct {
Request GenericMessage `json:"request"`
Response GenericMessage `json:"response"`
}
// `Protocol` is modified in the later stages of data propagation. Therefore it's not a pointer.
type OutputChannelItem struct {
Protocol Protocol
Timestamp int64
ConnectionInfo *ConnectionInfo
Pair *RequestResponsePair
}
type SuperTimer struct {
CaptureTime time.Time
}
type Dissector interface {
Register(*Extension)
Ping()
Dissect(b *bufio.Reader, isClient bool, tcpID *TcpID, counterPair *CounterPair, superTimer *SuperTimer, emitter Emitter) error
Analyze(item *OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *MizuEntry
Summarize(entry *MizuEntry) *BaseEntryDetails
Represent(entry *MizuEntry) (protocol Protocol, object []byte, bodySize int64, err error)
}
type Emitting struct {
OutputChannel chan *OutputChannelItem
}
type Emitter interface {
Emit(item *OutputChannelItem)
}
func (e *Emitting) Emit(item *OutputChannelItem) {
e.OutputChannel <- item
}
type MizuEntry struct {
ID uint `gorm:"primarykey"`
CreatedAt time.Time
UpdatedAt time.Time
ProtocolName string `json:"protocol_key" gorm:"column:protocolKey"`
ProtocolVersion string `json:"protocol_version" gorm:"column:protocolVersion"`
Entry string `json:"entry,omitempty" gorm:"column:entry"`
EntryId string `json:"entryId" gorm:"column:entryId"`
Url string `json:"url" gorm:"column:url"`
Method string `json:"method" gorm:"column:method"`
Status int `json:"status" gorm:"column:status"`
RequestSenderIp string `json:"requestSenderIp" gorm:"column:requestSenderIp"`
Service string `json:"service" gorm:"column:service"`
Timestamp int64 `json:"timestamp" gorm:"column:timestamp"`
ElapsedTime int64 `json:"elapsedTime" gorm:"column:elapsedTime"`
Path string `json:"path" gorm:"column:path"`
ResolvedSource string `json:"resolvedSource,omitempty" gorm:"column:resolvedSource"`
ResolvedDestination string `json:"resolvedDestination,omitempty" gorm:"column:resolvedDestination"`
SourceIp string `json:"sourceIp,omitempty" gorm:"column:sourceIp"`
DestinationIp string `json:"destinationIp,omitempty" gorm:"column:destinationIp"`
SourcePort string `json:"sourcePort,omitempty" gorm:"column:sourcePort"`
DestinationPort string `json:"destinationPort,omitempty" gorm:"column:destinationPort"`
IsOutgoing bool `json:"isOutgoing,omitempty" gorm:"column:isOutgoing"`
EstimatedSizeBytes int `json:"-" gorm:"column:estimatedSizeBytes"`
}
type MizuEntryWrapper struct {
Protocol Protocol `json:"protocol"`
Representation string `json:"representation"`
BodySize int64 `json:"bodySize"`
Data MizuEntry `json:"data"`
}
type BaseEntryDetails struct {
Id string `json:"id,omitempty"`
Protocol Protocol `json:"protocol,omitempty"`
Url string `json:"url,omitempty"`
RequestSenderIp string `json:"request_sender_ip,omitempty"`
Service string `json:"service,omitempty"`
Summary string `json:"summary,omitempty"`
StatusCode int `json:"status_code"`
Method string `json:"method,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
SourceIp string `json:"source_ip,omitempty"`
DestinationIp string `json:"destination_ip,omitempty"`
SourcePort string `json:"source_port,omitempty"`
DestinationPort string `json:"destination_port,omitempty"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency,omitempty"`
Rules ApplicableRules `json:"rules,omitempty"`
}
type ApplicableRules struct {
Latency int64 `json:"latency,omitempty"`
Status bool `json:"status,omitempty"`
NumberOfRules int `json:"numberOfRules,omitempty"`
}
type DataUnmarshaler interface {
UnmarshalData(*MizuEntry) error
}
func (bed *BaseEntryDetails) UnmarshalData(entry *MizuEntry) error {
entryUrl := entry.Url
service := entry.Service
bed.Id = entry.EntryId
bed.Url = entryUrl
bed.Service = service
bed.Summary = entry.Path
bed.StatusCode = entry.Status
bed.Method = entry.Method
bed.Timestamp = entry.Timestamp
bed.RequestSenderIp = entry.RequestSenderIp
bed.IsOutgoing = entry.IsOutgoing
return nil
}
const (
TABLE string = "table"
BODY string = "body"
)
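The Extension struct keeps a handle to a loaded Go plugin, and each extension (like the AMQP one further down) exports a package-level Dissector value. A rough sketch of how the host side might load and register one, assuming extensions are compiled with -buildmode=plugin (path is illustrative, error handling trimmed):
plug, err := plugin.Open("./extensions/amqp.so") // illustrative path
if err != nil {
	log.Fatal(err)
}
sym, err := plug.Lookup("Dissector") // the exported `var Dissector dissecting`
if err != nil {
	log.Fatal(err)
}
dissector := sym.(api.Dissector)
extension := &api.Extension{
	Path:       "./extensions/amqp.so",
	Plug:       plug,
	Dissector:  dissector,
	MatcherMap: &sync.Map{},
}
dissector.Register(extension) // lets the extension fill in its Protocol
dissector.Ping()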

3
tap/api/go.mod Normal file
View File

@@ -0,0 +1,3 @@
module github.com/up9inc/mizu/tap/api
go 1.16

View File

@@ -1,11 +1,12 @@
package tap
import (
"github.com/romana/rlog"
"sync"
"time"
"github.com/google/gopacket/reassembly"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap/api"
)
type CleanerStats struct {
@@ -17,7 +18,6 @@ type CleanerStats struct {
type Cleaner struct {
assembler *reassembly.Assembler
assemblerMutex *sync.Mutex
matcher *requestResponseMatcher
cleanPeriod time.Duration
connectionTimeout time.Duration
stats CleanerStats
@@ -32,13 +32,15 @@ func (cl *Cleaner) clean() {
flushed, closed := cl.assembler.FlushCloseOlderThan(startCleanTime.Add(-cl.connectionTimeout))
cl.assemblerMutex.Unlock()
deleted := cl.matcher.deleteOlderThan(startCleanTime.Add(-cl.connectionTimeout))
for _, extension := range extensions {
deleted := deleteOlderThan(extension.MatcherMap, startCleanTime.Add(-cl.connectionTimeout))
cl.stats.deleted += deleted
}
cl.statsMutex.Lock()
rlog.Debugf("Assembler Stats after cleaning %s", cl.assembler.Dump())
cl.stats.flushed += flushed
cl.stats.closed += closed
cl.stats.deleted += deleted
cl.statsMutex.Unlock()
}
@@ -70,3 +72,25 @@ func (cl *Cleaner) dumpStats() CleanerStats {
return stats
}
func deleteOlderThan(matcherMap *sync.Map, t time.Time) int {
numDeleted := 0
if matcherMap == nil {
return numDeleted
}
matcherMap.Range(func(key interface{}, value interface{}) bool {
message, _ := value.(*api.GenericMessage)
// TODO: Investigate the reason why `request` is `nil` in some rare occasions
if message != nil {
if message.CaptureTime.Before(t) {
matcherMap.Delete(key)
numDeleted++
}
}
return true
})
return numDeleted
}

View File

@@ -0,0 +1,9 @@
module github.com/up9inc/mizu/tap/extensions/amqp
go 1.16
require (
github.com/up9inc/mizu/tap/api v0.0.0
)
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../../api

View File

@@ -0,0 +1,664 @@
package main
import (
"encoding/json"
"fmt"
"strconv"
"time"
"github.com/up9inc/mizu/tap/api"
)
var connectionMethodMap = map[int]string{
10: "connection start",
11: "connection start-ok",
20: "connection secure",
21: "connection secure-ok",
30: "connection tune",
31: "connection tune-ok",
40: "connection open",
41: "connection open-ok",
50: "connection close",
51: "connection close-ok",
60: "connection blocked",
61: "connection unblocked",
}
var channelMethodMap = map[int]string{
10: "channel open",
11: "channel open-ok",
20: "channel flow",
21: "channel flow-ok",
40: "channel close",
41: "channel close-ok",
}
var exchangeMethodMap = map[int]string{
10: "exchange declare",
11: "exchange declare-ok",
20: "exchange delete",
21: "exchange delete-ok",
30: "exchange bind",
31: "exchange bind-ok",
40: "exchange unbind",
51: "exchange unbind-ok",
}
var queueMethodMap = map[int]string{
10: "queue declare",
11: "queue declare-ok",
20: "queue bind",
21: "queue bind-ok",
50: "queue unbind",
51: "queue unbind-ok",
30: "queue purge",
31: "queue purge-ok",
40: "queue delete",
41: "queue delete-ok",
}
var basicMethodMap = map[int]string{
10: "basic qos",
11: "basic qos-ok",
20: "basic consume",
21: "basic consume-ok",
30: "basic cancel",
31: "basic cancel-ok",
40: "basic publish",
50: "basic return",
60: "basic deliver",
70: "basic get",
71: "basic get-ok",
72: "basic get-empty",
80: "basic ack",
90: "basic reject",
100: "basic recover-async",
110: "basic recover",
111: "basic recover-ok",
120: "basic nack",
}
var txMethodMap = map[int]string{
10: "tx select",
11: "tx select-ok",
20: "tx commit",
21: "tx commit-ok",
30: "tx rollback",
31: "tx rollback-ok",
}
type AMQPWrapper struct {
Method string `json:"method"`
Url string `json:"url"`
Details interface{} `json:"details"`
}
func emitAMQP(event interface{}, _type string, method string, connectionInfo *api.ConnectionInfo, captureTime time.Time, emitter api.Emitter) {
request := &api.GenericMessage{
IsRequest: true,
CaptureTime: captureTime,
Payload: AMQPPayload{
Data: &AMQPWrapper{
Method: method,
Url: "",
Details: event,
},
},
}
item := &api.OutputChannelItem{
Protocol: protocol,
Timestamp: captureTime.UnixNano() / int64(time.Millisecond),
ConnectionInfo: connectionInfo,
Pair: &api.RequestResponsePair{
Request: *request,
Response: api.GenericMessage{},
},
}
emitter.Emit(item)
}
func representProperties(properties map[string]interface{}, rep []interface{}) ([]interface{}, string, string) {
contentType := ""
contentEncoding := ""
deliveryMode := ""
priority := ""
correlationId := ""
replyTo := ""
expiration := ""
messageId := ""
timestamp := ""
_type := ""
userId := ""
appId := ""
if properties["ContentType"] != nil {
contentType = properties["ContentType"].(string)
}
if properties["ContentEncoding"] != nil {
contentEncoding = properties["ContentEncoding"].(string)
}
if properties["Delivery Mode"] != nil {
deliveryMode = fmt.Sprintf("%g", properties["DeliveryMode"].(float64))
}
if properties["Priority"] != nil {
priority = fmt.Sprintf("%g", properties["Priority"].(float64))
}
if properties["CorrelationId"] != nil {
correlationId = properties["CorrelationId"].(string)
}
if properties["ReplyTo"] != nil {
replyTo = properties["ReplyTo"].(string)
}
if properties["Expiration"] != nil {
expiration = properties["Expiration"].(string)
}
if properties["MessageId"] != nil {
messageId = properties["MessageId"].(string)
}
if properties["Timestamp"] != nil {
timestamp = properties["Timestamp"].(string)
}
if properties["Type"] != nil {
_type = properties["Type"].(string)
}
if properties["UserId"] != nil {
userId = properties["UserId"].(string)
}
if properties["AppId"] != nil {
appId = properties["AppId"].(string)
}
props, _ := json.Marshal([]map[string]string{
{
"name": "Content Type",
"value": contentType,
},
{
"name": "Content Encoding",
"value": contentEncoding,
},
{
"name": "Delivery Mode",
"value": deliveryMode,
},
{
"name": "Priority",
"value": priority,
},
{
"name": "Correlation ID",
"value": correlationId,
},
{
"name": "Reply To",
"value": replyTo,
},
{
"name": "Expiration",
"value": expiration,
},
{
"name": "Message ID",
"value": messageId,
},
{
"name": "Timestamp",
"value": timestamp,
},
{
"name": "Type",
"value": _type,
},
{
"name": "User ID",
"value": userId,
},
{
"name": "App ID",
"value": appId,
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Properties",
"data": string(props),
})
return rep, contentType, contentEncoding
}
func representBasicPublish(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Exchange",
"value": event["Exchange"].(string),
},
{
"name": "Routing Key",
"value": event["RoutingKey"].(string),
},
{
"name": "Mandatory",
"value": strconv.FormatBool(event["Mandatory"].(bool)),
},
{
"name": "Immediate",
"value": strconv.FormatBool(event["Immediate"].(bool)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
properties := event["Properties"].(map[string]interface{})
rep, contentType, _ := representProperties(properties, rep)
if properties["Headers"] != nil {
headers := make([]map[string]string, 0)
for name, value := range properties["Headers"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Headers",
"data": string(headersMarshaled),
})
}
if event["Body"] != nil {
rep = append(rep, map[string]string{
"type": api.BODY,
"title": "Body",
"encoding": "base64",
"mime_type": contentType,
"data": event["Body"].(string),
})
}
return rep
}
func representBasicDeliver(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
consumerTag := ""
deliveryTag := ""
redelivered := ""
if event["ConsumerTag"] != nil {
consumerTag = event["ConsumerTag"].(string)
}
if event["DeliveryTag"] != nil {
deliveryTag = fmt.Sprintf("%g", event["DeliveryTag"].(float64))
}
if event["Redelivered"] != nil {
redelivered = strconv.FormatBool(event["Redelivered"].(bool))
}
details, _ := json.Marshal([]map[string]string{
{
"name": "Consumer Tag",
"value": consumerTag,
},
{
"name": "Delivery Tag",
"value": deliveryTag,
},
{
"name": "Redelivered",
"value": redelivered,
},
{
"name": "Exchange",
"value": event["Exchange"].(string),
},
{
"name": "Routing Key",
"value": event["RoutingKey"].(string),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
properties := event["Properties"].(map[string]interface{})
rep, contentType, _ := representProperties(properties, rep)
if properties["Headers"] != nil {
headers := make([]map[string]string, 0)
for name, value := range properties["Headers"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Headers",
"data": string(headersMarshaled),
})
}
if event["Body"] != nil {
rep = append(rep, map[string]string{
"type": api.BODY,
"title": "Body",
"encoding": "base64",
"mime_type": contentType,
"data": event["Body"].(string),
})
}
return rep
}
func representQueueDeclare(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Queue",
"value": event["Queue"].(string),
},
{
"name": "Passive",
"value": strconv.FormatBool(event["Passive"].(bool)),
},
{
"name": "Durable",
"value": strconv.FormatBool(event["Durable"].(bool)),
},
{
"name": "Exclusive",
"value": strconv.FormatBool(event["Exclusive"].(bool)),
},
{
"name": "Auto Delete",
"value": strconv.FormatBool(event["AutoDelete"].(bool)),
},
{
"name": "NoWait",
"value": strconv.FormatBool(event["NoWait"].(bool)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
if event["Arguments"] != nil {
headers := make([]map[string]string, 0)
for name, value := range event["Arguments"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Arguments",
"data": string(headersMarshaled),
})
}
return rep
}
func representExchangeDeclare(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Exchange",
"value": event["Exchange"].(string),
},
{
"name": "Type",
"value": event["Type"].(string),
},
{
"name": "Passive",
"value": strconv.FormatBool(event["Passive"].(bool)),
},
{
"name": "Durable",
"value": strconv.FormatBool(event["Durable"].(bool)),
},
{
"name": "Auto Delete",
"value": strconv.FormatBool(event["AutoDelete"].(bool)),
},
{
"name": "Internal",
"value": strconv.FormatBool(event["Internal"].(bool)),
},
{
"name": "NoWait",
"value": strconv.FormatBool(event["NoWait"].(bool)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
if event["Arguments"] != nil {
headers := make([]map[string]string, 0)
for name, value := range event["Arguments"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Arguments",
"data": string(headersMarshaled),
})
}
return rep
}
func representConnectionStart(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Version Major",
"value": fmt.Sprintf("%g", event["VersionMajor"].(float64)),
},
{
"name": "Version Minor",
"value": fmt.Sprintf("%g", event["VersionMinor"].(float64)),
},
{
"name": "Mechanisms",
"value": event["Mechanisms"].(string),
},
{
"name": "Locales",
"value": event["Locales"].(string),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
if event["ServerProperties"] != nil {
headers := make([]map[string]string, 0)
for name, value := range event["ServerProperties"].(map[string]interface{}) {
var outcome string
switch value.(type) {
case string:
outcome = value.(string)
break
case map[string]interface{}:
x, _ := json.Marshal(value)
outcome = string(x)
break
default:
panic("Unknown data type for the server property!")
}
headers = append(headers, map[string]string{
"name": name,
"value": outcome,
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Server Properties",
"data": string(headersMarshaled),
})
}
return rep
}
func representConnectionClose(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Reply Code",
"value": fmt.Sprintf("%g", event["ReplyCode"].(float64)),
},
{
"name": "Reply Text",
"value": event["ReplyText"].(string),
},
{
"name": "Class ID",
"value": fmt.Sprintf("%g", event["ClassId"].(float64)),
},
{
"name": "Method ID",
"value": fmt.Sprintf("%g", event["MethodId"].(float64)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
return rep
}
func representQueueBind(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Queue",
"value": event["Queue"].(string),
},
{
"name": "Exchange",
"value": event["Exchange"].(string),
},
{
"name": "RoutingKey",
"value": event["RoutingKey"].(string),
},
{
"name": "NoWait",
"value": strconv.FormatBool(event["NoWait"].(bool)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
if event["Arguments"] != nil {
headers := make([]map[string]string, 0)
for name, value := range event["Arguments"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Arguments",
"data": string(headersMarshaled),
})
}
return rep
}
func representBasicConsume(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Queue",
"value": event["Queue"].(string),
},
{
"name": "Consumer Tag",
"value": event["ConsumerTag"].(string),
},
{
"name": "No Local",
"value": strconv.FormatBool(event["NoLocal"].(bool)),
},
{
"name": "No Ack",
"value": strconv.FormatBool(event["NoAck"].(bool)),
},
{
"name": "Exclusive",
"value": strconv.FormatBool(event["Exclusive"].(bool)),
},
{
"name": "NoWait",
"value": strconv.FormatBool(event["NoWait"].(bool)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
if event["Arguments"] != nil {
headers := make([]map[string]string, 0)
for name, value := range event["Arguments"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Arguments",
"data": string(headersMarshaled),
})
}
return rep
}

344
tap/extensions/amqp/main.go Normal file
View File

@@ -0,0 +1,344 @@
package main
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"strconv"
"github.com/up9inc/mizu/tap/api"
)
var protocol api.Protocol = api.Protocol{
Name: "amqp",
LongName: "Advanced Message Queuing Protocol 0-9-1",
Abbreviation: "AMQP",
Version: "0-9-1",
BackgroundColor: "#ff6600",
ForegroundColor: "#ffffff",
FontSize: 12,
ReferenceLink: "https://www.rabbitmq.com/amqp-0-9-1-reference.html",
Ports: []string{"5671", "5672"},
Priority: 1,
}
func init() {
log.Println("Initializing AMQP extension...")
}
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = protocol
}
func (d dissecting) Ping() {
log.Printf("pong %s\n", protocol.Name)
}
const amqpRequest string = "amqp_request"
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter) error {
r := AmqpReader{b}
var remaining int
var header *HeaderFrame
var body []byte
connectionInfo := &api.ConnectionInfo{
ClientIP: tcpID.SrcIP,
ClientPort: tcpID.SrcPort,
ServerIP: tcpID.DstIP,
ServerPort: tcpID.DstPort,
IsOutgoing: true,
}
eventBasicPublish := &BasicPublish{
Exchange: "",
RoutingKey: "",
Mandatory: false,
Immediate: false,
Body: nil,
Properties: Properties{},
}
eventBasicDeliver := &BasicDeliver{
ConsumerTag: "",
DeliveryTag: 0,
Redelivered: false,
Exchange: "",
RoutingKey: "",
Properties: Properties{},
Body: nil,
}
var lastMethodFrameMessage Message
for {
frame, err := r.ReadFrame()
if err == io.EOF {
// We must read until we see an EOF... very important!
return errors.New("AMQP EOF")
} else if err != nil {
// TODO: Causes ignoring some methods. Return only in case of a certain error. But what?
return err
}
switch f := frame.(type) {
case *HeartbeatFrame:
// drop
case *HeaderFrame:
// start content state
header = f
remaining = int(header.Size)
switch lastMethodFrameMessage.(type) {
case *BasicPublish:
eventBasicPublish.Properties = header.Properties
case *BasicDeliver:
eventBasicDeliver.Properties = header.Properties
default:
}
case *BodyFrame:
// continue until terminated
body = append(body, f.Body...)
remaining -= len(f.Body)
switch lastMethodFrameMessage.(type) {
case *BasicPublish:
eventBasicPublish.Body = f.Body
emitAMQP(*eventBasicPublish, amqpRequest, basicMethodMap[40], connectionInfo, superTimer.CaptureTime, emitter)
case *BasicDeliver:
eventBasicDeliver.Body = f.Body
emitAMQP(*eventBasicDeliver, amqpRequest, basicMethodMap[60], connectionInfo, superTimer.CaptureTime, emitter)
default:
}
case *MethodFrame:
lastMethodFrameMessage = f.Method
switch m := f.Method.(type) {
case *BasicPublish:
eventBasicPublish.Exchange = m.Exchange
eventBasicPublish.RoutingKey = m.RoutingKey
eventBasicPublish.Mandatory = m.Mandatory
eventBasicPublish.Immediate = m.Immediate
case *QueueBind:
eventQueueBind := &QueueBind{
Queue: m.Queue,
Exchange: m.Exchange,
RoutingKey: m.RoutingKey,
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventQueueBind, amqpRequest, queueMethodMap[20], connectionInfo, superTimer.CaptureTime, emitter)
case *BasicConsume:
eventBasicConsume := &BasicConsume{
Queue: m.Queue,
ConsumerTag: m.ConsumerTag,
NoLocal: m.NoLocal,
NoAck: m.NoAck,
Exclusive: m.Exclusive,
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventBasicConsume, amqpRequest, basicMethodMap[20], connectionInfo, superTimer.CaptureTime, emitter)
case *BasicDeliver:
eventBasicDeliver.ConsumerTag = m.ConsumerTag
eventBasicDeliver.DeliveryTag = m.DeliveryTag
eventBasicDeliver.Redelivered = m.Redelivered
eventBasicDeliver.Exchange = m.Exchange
eventBasicDeliver.RoutingKey = m.RoutingKey
case *QueueDeclare:
eventQueueDeclare := &QueueDeclare{
Queue: m.Queue,
Passive: m.Passive,
Durable: m.Durable,
AutoDelete: m.AutoDelete,
Exclusive: m.Exclusive,
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventQueueDeclare, amqpRequest, queueMethodMap[10], connectionInfo, superTimer.CaptureTime, emitter)
case *ExchangeDeclare:
eventExchangeDeclare := &ExchangeDeclare{
Exchange: m.Exchange,
Type: m.Type,
Passive: m.Passive,
Durable: m.Durable,
AutoDelete: m.AutoDelete,
Internal: m.Internal,
NoWait: m.NoWait,
Arguments: m.Arguments,
}
emitAMQP(*eventExchangeDeclare, amqpRequest, exchangeMethodMap[10], connectionInfo, superTimer.CaptureTime, emitter)
case *ConnectionStart:
eventConnectionStart := &ConnectionStart{
VersionMajor: m.VersionMajor,
VersionMinor: m.VersionMinor,
ServerProperties: m.ServerProperties,
Mechanisms: m.Mechanisms,
Locales: m.Locales,
}
emitAMQP(*eventConnectionStart, amqpRequest, connectionMethodMap[10], connectionInfo, superTimer.CaptureTime, emitter)
case *ConnectionClose:
eventConnectionClose := &ConnectionClose{
ReplyCode: m.ReplyCode,
ReplyText: m.ReplyText,
ClassId: m.ClassId,
MethodId: m.MethodId,
}
emitAMQP(*eventConnectionClose, amqpRequest, connectionMethodMap[50], connectionInfo, superTimer.CaptureTime, emitter)
default:
}
default:
// log.Printf("unexpected frame: %+v\n", f)
}
}
}
func (d dissecting) Analyze(item *api.OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *api.MizuEntry {
request := item.Pair.Request.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
service := "amqp"
if resolvedDestination != "" {
service = resolvedDestination
} else if resolvedSource != "" {
service = resolvedSource
}
summary := ""
switch request["method"] {
case basicMethodMap[40]:
summary = reqDetails["Exchange"].(string)
break
case basicMethodMap[60]:
summary = reqDetails["Exchange"].(string)
break
case exchangeMethodMap[10]:
summary = reqDetails["Exchange"].(string)
break
case queueMethodMap[10]:
summary = reqDetails["Queue"].(string)
break
case connectionMethodMap[10]:
summary = fmt.Sprintf(
"%s.%s",
strconv.Itoa(int(reqDetails["VersionMajor"].(float64))),
strconv.Itoa(int(reqDetails["VersionMinor"].(float64))),
)
break
case connectionMethodMap[50]:
summary = reqDetails["ReplyText"].(string)
break
case queueMethodMap[20]:
summary = reqDetails["Queue"].(string)
break
case basicMethodMap[20]:
summary = reqDetails["Queue"].(string)
break
}
request["url"] = summary
entryBytes, _ := json.Marshal(item.Pair)
return &api.MizuEntry{
ProtocolName: protocol.Name,
ProtocolVersion: protocol.Version,
EntryId: entryId,
Entry: string(entryBytes),
Url: fmt.Sprintf("%s%s", service, summary),
Method: request["method"].(string),
Status: 0,
RequestSenderIp: item.ConnectionInfo.ClientIP,
Service: service,
Timestamp: item.Timestamp,
ElapsedTime: 0,
Path: summary,
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
SourceIp: item.ConnectionInfo.ClientIP,
DestinationIp: item.ConnectionInfo.ServerIP,
SourcePort: item.ConnectionInfo.ClientPort,
DestinationPort: item.ConnectionInfo.ServerPort,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
}
}
func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
return &api.BaseEntryDetails{
Id: entry.EntryId,
Protocol: protocol,
Url: entry.Url,
RequestSenderIp: entry.RequestSenderIp,
Service: entry.Service,
Summary: entry.Path,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
SourceIp: entry.SourceIp,
DestinationIp: entry.DestinationIp,
SourcePort: entry.SourcePort,
DestinationPort: entry.DestinationPort,
IsOutgoing: entry.IsOutgoing,
Latency: 0,
Rules: api.ApplicableRules{
Latency: 0,
Status: false,
},
}
}
func (d dissecting) Represent(entry *api.MizuEntry) (p api.Protocol, object []byte, bodySize int64, err error) {
p = protocol
bodySize = 0
var root map[string]interface{}
json.Unmarshal([]byte(entry.Entry), &root)
representation := make(map[string]interface{}, 0)
request := root["request"].(map[string]interface{})["payload"].(map[string]interface{})
var repRequest []interface{}
details := request["details"].(map[string]interface{})
switch request["method"].(string) {
case basicMethodMap[40]:
repRequest = representBasicPublish(details)
break
case basicMethodMap[60]:
repRequest = representBasicDeliver(details)
break
case queueMethodMap[10]:
repRequest = representQueueDeclare(details)
break
case exchangeMethodMap[10]:
repRequest = representExchangeDeclare(details)
break
case connectionMethodMap[10]:
repRequest = representConnectionStart(details)
break
case connectionMethodMap[50]:
repRequest = representConnectionClose(details)
break
case queueMethodMap[20]:
repRequest = representQueueBind(details)
break
case basicMethodMap[20]:
repRequest = representBasicConsume(details)
break
}
representation["request"] = repRequest
object, err = json.Marshal(representation)
return
}
var Dissector dissecting
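Dissect is driven by the tap package per TCP stream direction: it wraps the reassembled stream in a bufio.Reader and keeps reading frames until EOF, emitting one OutputChannelItem per publish, deliver, declare, and so on. A rough sketch of how the tap side might invoke it (everything except the api types is illustrative):
emitter := &api.Emitting{OutputChannel: outputItems} // chan *api.OutputChannelItem
reader := bufio.NewReader(tcpStream)                 // reassembled client-side byte stream
err := extension.Dissector.Dissect(
	reader,
	true, // isClient
	&api.TcpID{SrcIP: "10.0.0.5", DstIP: "10.0.0.9", SrcPort: "51234", DstPort: "5672"},
	&api.CounterPair{},
	&api.SuperTimer{CaptureTime: time.Now()},
	emitter,
)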

460
tap/extensions/amqp/read.go Normal file
View File

@@ -0,0 +1,460 @@
// Copyright (c) 2012, Sean Treadway, SoundCloud Ltd.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Source code and contact info at http://github.com/streadway/amqp
package main
import (
"bytes"
"encoding/binary"
"errors"
"io"
"time"
)
/*
Reads a frame from an input stream and returns an interface that can be cast into
one of the following:
MethodFrame
PropertiesFrame
BodyFrame
HeartbeatFrame
2.3.5 frame Details
All frames consist of a header (7 octets), a payload of arbitrary size, and a
'frame-end' octet that detects malformed frames:
0 1 3 7 size+7 size+8
+------+---------+-------------+ +------------+ +-----------+
| type | channel | size | | payload | | frame-end |
+------+---------+-------------+ +------------+ +-----------+
octet short long size octets octet
To read a frame, we:
1. Read the header and check the frame type and channel.
2. Depending on the frame type, we read the payload and process it.
3. Read the frame end octet.
In realistic implementations where performance is a concern, we would use
“read-ahead buffering” or
“gathering reads” to avoid doing three separate system calls to read a frame.
*/
func (r *AmqpReader) ReadFrame() (frame frame, err error) {
var scratch [7]byte
if _, err = io.ReadFull(r.R, scratch[:7]); err != nil {
return
}
typ := uint8(scratch[0])
channel := binary.BigEndian.Uint16(scratch[1:3])
size := binary.BigEndian.Uint32(scratch[3:7])
if size > 1000000 {
return nil, ErrMaxSize
}
switch typ {
case frameMethod:
if frame, err = r.parseMethodFrame(channel, size); err != nil {
return
}
case frameHeader:
if frame, err = r.parseHeaderFrame(channel, size); err != nil {
return
}
case frameBody:
if frame, err = r.parseBodyFrame(channel, size); err != nil {
return nil, err
}
case frameHeartbeat:
if frame, err = r.parseHeartbeatFrame(channel, size); err != nil {
return
}
default:
return nil, ErrFrame
}
if _, err = io.ReadFull(r.R, scratch[:1]); err != nil {
return nil, err
}
if scratch[0] != frameEnd {
return nil, ErrFrame
}
return
}
func readShortstr(r io.Reader) (v string, err error) {
var length uint8
if err = binary.Read(r, binary.BigEndian, &length); err != nil {
return
}
bytes := make([]byte, length)
if _, err = io.ReadFull(r, bytes); err != nil {
return
}
return string(bytes), nil
}
func readLongstr(r io.Reader) (v string, err error) {
var length uint32
if err = binary.Read(r, binary.BigEndian, &length); err != nil {
return
}
// slices can't be longer than max int32 value
if length > (^uint32(0) >> 1) {
return
}
bytes := make([]byte, length)
if _, err = io.ReadFull(r, bytes); err != nil {
return
}
return string(bytes), nil
}
func readDecimal(r io.Reader) (v Decimal, err error) {
if err = binary.Read(r, binary.BigEndian, &v.Scale); err != nil {
return
}
if err = binary.Read(r, binary.BigEndian, &v.Value); err != nil {
return
}
return
}
func readFloat32(r io.Reader) (v float32, err error) {
if err = binary.Read(r, binary.BigEndian, &v); err != nil {
return
}
return
}
func readFloat64(r io.Reader) (v float64, err error) {
if err = binary.Read(r, binary.BigEndian, &v); err != nil {
return
}
return
}
func readTimestamp(r io.Reader) (v time.Time, err error) {
var sec int64
if err = binary.Read(r, binary.BigEndian, &sec); err != nil {
return
}
return time.Unix(sec, 0), nil
}
/*
'A': []interface{}
'D': Decimal
'F': Table
'I': int32
'S': string
'T': time.Time
'V': nil
'b': byte
'd': float64
'f': float32
'l': int64
's': int16
't': bool
'x': []byte
*/
func readField(r io.Reader) (v interface{}, err error) {
var typ byte
if err = binary.Read(r, binary.BigEndian, &typ); err != nil {
return
}
switch typ {
case 't':
var value uint8
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return (value != 0), nil
case 'b':
var value [1]byte
if _, err = io.ReadFull(r, value[0:1]); err != nil {
return
}
return value[0], nil
case 's':
var value int16
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return value, nil
case 'I':
var value int32
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return value, nil
case 'l':
var value int64
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return value, nil
case 'f':
var value float32
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return value, nil
case 'd':
var value float64
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return value, nil
case 'D':
return readDecimal(r)
case 'S':
return readLongstr(r)
case 'A':
return readArray(r)
case 'T':
return readTimestamp(r)
case 'F':
return readTable(r)
case 'x':
var len int32
if err = binary.Read(r, binary.BigEndian, &len); err != nil {
return nil, err
}
value := make([]byte, len)
if _, err = io.ReadFull(r, value); err != nil {
return nil, err
}
return value, err
case 'V':
return nil, nil
}
return nil, ErrSyntax
}
/*
Field tables are long strings that contain packed name-value pairs. The
name-value pairs are encoded as short string defining the name, and octet
defining the values type and then the value itself. The valid field types for
tables are an extension of the native integer, bit, string, and timestamp
types, and are shown in the grammar. Multi-octet integer fields are always
held in network byte order.
*/
func readTable(r io.Reader) (table Table, err error) {
var nested bytes.Buffer
var str string
if str, err = readLongstr(r); err != nil {
return
}
nested.Write([]byte(str))
table = make(Table)
for nested.Len() > 0 {
var key string
var value interface{}
if key, err = readShortstr(&nested); err != nil {
return
}
if value, err = readField(&nested); err != nil {
return
}
table[key] = value
}
return
}
func readArray(r io.Reader) ([]interface{}, error) {
var (
size uint32
err error
)
if err = binary.Read(r, binary.BigEndian, &size); err != nil {
return nil, err
}
var (
lim = &io.LimitedReader{R: r, N: int64(size)}
arr = []interface{}{}
field interface{}
)
for {
if field, err = readField(lim); err != nil {
if err == io.EOF {
break
}
return nil, err
}
arr = append(arr, field)
}
return arr, nil
}
// Checks if this bit mask matches the flags bitset
func hasProperty(mask uint16, prop int) bool {
return int(mask)&prop > 0
}
func (r *AmqpReader) parseHeaderFrame(channel uint16, size uint32) (frame frame, err error) {
hf := &HeaderFrame{
ChannelId: channel,
}
if err = binary.Read(r.R, binary.BigEndian, &hf.ClassId); err != nil {
return
}
if err = binary.Read(r.R, binary.BigEndian, &hf.weight); err != nil {
return
}
if err = binary.Read(r.R, binary.BigEndian, &hf.Size); err != nil {
return
}
var flags uint16
if err = binary.Read(r.R, binary.BigEndian, &flags); err != nil {
return
}
if hasProperty(flags, flagContentType) {
if hf.Properties.ContentType, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagContentEncoding) {
if hf.Properties.ContentEncoding, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagHeaders) {
if hf.Properties.Headers, err = readTable(r.R); err != nil {
return
}
}
if hasProperty(flags, flagDeliveryMode) {
if err = binary.Read(r.R, binary.BigEndian, &hf.Properties.DeliveryMode); err != nil {
return
}
}
if hasProperty(flags, flagPriority) {
if err = binary.Read(r.R, binary.BigEndian, &hf.Properties.Priority); err != nil {
return
}
}
if hasProperty(flags, flagCorrelationId) {
if hf.Properties.CorrelationId, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagReplyTo) {
if hf.Properties.ReplyTo, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagExpiration) {
if hf.Properties.Expiration, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagMessageId) {
if hf.Properties.MessageId, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagTimestamp) {
if hf.Properties.Timestamp, err = readTimestamp(r.R); err != nil {
return
}
}
if hasProperty(flags, flagType) {
if hf.Properties.Type, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagUserId) {
if hf.Properties.UserId, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagAppId) {
if hf.Properties.AppId, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagReserved1) {
if hf.Properties.reserved1, err = readShortstr(r.R); err != nil {
return
}
}
return hf, nil
}
func (r *AmqpReader) parseBodyFrame(channel uint16, size uint32) (frame frame, err error) {
bf := &BodyFrame{
ChannelId: channel,
Body: make([]byte, size),
}
if _, err = io.ReadFull(r.R, bf.Body); err != nil {
return nil, err
}
return bf, nil
}
var errHeartbeatPayload = errors.New("Heartbeats should not have a payload")
func (r *AmqpReader) parseHeartbeatFrame(channel uint16, size uint32) (frame frame, err error) {
hf := &HeartbeatFrame{
ChannelId: channel,
}
if size > 0 {
return nil, errHeartbeatPayload
}
return hf, nil
}

File diff suppressed because it is too large


@@ -0,0 +1,17 @@
package main
import (
"encoding/json"
)
type AMQPPayload struct {
Data interface{}
}
type AMQPPayloader interface {
MarshalJSON() ([]byte, error)
}
func (h AMQPPayload) MarshalJSON() ([]byte, error) {
return json.Marshal(h.Data)
}


@@ -0,0 +1,431 @@
// Copyright (c) 2012, Sean Treadway, SoundCloud Ltd.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Source code and contact info at http://github.com/streadway/amqp
package main
import (
"fmt"
"io"
"time"
)
// Constants for standard AMQP 0-9-1 exchange types.
const (
ExchangeDirect = "direct"
ExchangeFanout = "fanout"
ExchangeTopic = "topic"
ExchangeHeaders = "headers"
)
var (
// ErrClosed is returned when the channel or connection is not open
ErrClosed = &Error{Code: ChannelError, Reason: "channel/connection is not open"}
// ErrChannelMax is returned when Connection.Channel has been called enough
// times that all channel IDs have been exhausted in the client or the
// server.
ErrChannelMax = &Error{Code: ChannelError, Reason: "channel id space exhausted"}
// ErrSASL is returned from Dial when the authentication mechanism could not
// be negotiated.
ErrSASL = &Error{Code: AccessRefused, Reason: "SASL could not negotiate a shared mechanism"}
// ErrCredentials is returned when the authenticated client is not authorized
// to any vhost.
ErrCredentials = &Error{Code: AccessRefused, Reason: "username or password not allowed"}
// ErrVhost is returned when the authenticated user is not permitted to
// access the requested Vhost.
ErrVhost = &Error{Code: AccessRefused, Reason: "no access to this vhost"}
// ErrSyntax is a hard protocol error, indicating an unsupported protocol,
// implementation or encoding.
ErrSyntax = &Error{Code: SyntaxError, Reason: "invalid field or value inside of a frame"}
// ErrFrame is returned when the protocol frame cannot be read from the
// server, indicating an unsupported protocol or unsupported frame type.
ErrFrame = &Error{Code: FrameError, Reason: "frame could not be parsed"}
// ErrCommandInvalid is returned when the server sends an unexpected response
// to this requested message type. This indicates a bug in this client.
ErrCommandInvalid = &Error{Code: CommandInvalid, Reason: "unexpected command received"}
// ErrUnexpectedFrame is returned when something other than a method or
// heartbeat frame is delivered to the Connection, indicating a bug in the
// client.
ErrUnexpectedFrame = &Error{Code: UnexpectedFrame, Reason: "unexpected frame received"}
// ErrFieldType is returned when writing a message containing a Go type unsupported by AMQP.
ErrFieldType = &Error{Code: SyntaxError, Reason: "unsupported table field type"}
// ErrMaxSize is returned when an AMQP message is bigger than the 1MB limit
ErrMaxSize = &Error{Code: MaxSizeError, Reason: "an AMQP message cannot be bigger than 1MB"}
)
// Error captures the code and reason a channel or connection has been closed
// by the server.
type Error struct {
Code int // constant code from the specification
Reason string // description of the error
Server bool // true when initiated from the server, false when from this library
Recover bool // true when this error can be recovered by retrying later or with different parameters
}
func newError(code uint16, text string) *Error {
return &Error{
Code: int(code),
Reason: text,
Recover: isSoftExceptionCode(int(code)),
Server: true,
}
}
func (e Error) Error() string {
return fmt.Sprintf("Exception (%d) Reason: %q", e.Code, e.Reason)
}
// Used by header frames to capture routing and header information
type Properties struct {
ContentType string // MIME content type
ContentEncoding string // MIME content encoding
Headers Table // Application or header exchange table
DeliveryMode uint8 // queue implementation use - Transient (1) or Persistent (2)
Priority uint8 // queue implementation use - 0 to 9
CorrelationId string // application use - correlation identifier
ReplyTo string // application use - address to reply to (ex: RPC)
Expiration string // implementation use - message expiration spec
MessageId string // application use - message identifier
Timestamp time.Time // application use - message timestamp
Type string // application use - message type name
UserId string // application use - creating user id
AppId string // application use - creating application
reserved1 string // was cluster-id - process for buffer consumption
}
// DeliveryMode. Transient means higher throughput but messages will not be
// restored on broker restart. The delivery mode of publishings is unrelated
// to the durability of the queues they reside on. Transient messages will
// not be restored to durable queues, persistent messages will be restored to
// durable queues and lost on non-durable queues during server restart.
//
// This remains typed as uint8 to match Publishing.DeliveryMode. Other
// delivery modes specific to custom queue implementations are not enumerated
// here.
const (
Transient uint8 = 1
Persistent uint8 = 2
)
// The property flags are an array of bits that indicate the presence or
// absence of each property value in sequence. The bits are ordered from
// high to low - bit 15 indicates the first property.
const (
flagContentType = 0x8000
flagContentEncoding = 0x4000
flagHeaders = 0x2000
flagDeliveryMode = 0x1000
flagPriority = 0x0800
flagCorrelationId = 0x0400
flagReplyTo = 0x0200
flagExpiration = 0x0100
flagMessageId = 0x0080
flagTimestamp = 0x0040
flagType = 0x0020
flagUserId = 0x0010
flagAppId = 0x0008
flagReserved1 = 0x0004
)
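// Illustrative sketch (not part of the original diff): the flags word gates
// which property fields follow in a header frame; hasProperty (defined next to
// the reader above) tests one bit at a time.
func examplePropertyFlags() {
    mask := uint16(flagContentType | flagDeliveryMode)
    fmt.Println(hasProperty(mask, flagContentType)) // true
    fmt.Println(hasProperty(mask, flagPriority))    // false
}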
// Queue captures the current server state of the queue on the server returned
// from Channel.QueueDeclare or Channel.QueueInspect.
type Queue struct {
Name string // server confirmed or generated name
Messages int // count of messages not awaiting acknowledgment
Consumers int // number of consumers receiving deliveries
}
// Publishing captures the client message sent to the server. The fields
// outside of the Headers table included in this struct mirror the underlying
// fields in the content frame. They use native types for convenience and
// efficiency.
type Publishing struct {
// Application or exchange specific fields,
// the headers exchange will inspect this field.
Headers Table
// Properties
ContentType string // MIME content type
ContentEncoding string // MIME content encoding
DeliveryMode uint8 // Transient (0 or 1) or Persistent (2)
Priority uint8 // 0 to 9
CorrelationId string // correlation identifier
ReplyTo string // address to reply to (ex: RPC)
Expiration string // message expiration spec
MessageId string // message identifier
Timestamp time.Time // message timestamp
Type string // message type name
UserId string // creating user id - ex: "guest"
AppId string // creating application id
// The application specific payload of the message
Body []byte
}
// Blocking notifies the server's TCP flow control of the Connection. When a
// server hits a memory or disk alarm it will block all connections until the
// resources are reclaimed. Use NotifyBlock on the Connection to receive these
// events.
type Blocking struct {
Active bool // TCP pushback active/inactive on server
Reason string // Server reason for activation
}
// Confirmation notifies the acknowledgment or negative acknowledgement of a
// publishing identified by its delivery tag. Use NotifyPublish on the Channel
// to consume these events.
type Confirmation struct {
DeliveryTag uint64 // A 1 based counter of publishings from when the channel was put in Confirm mode
Ack bool // True when the server successfully received the publishing
}
// Decimal matches the AMQP decimal type. Scale is the number of decimal
// digits: Scale == 2, Value == 12345 represents 123.45
type Decimal struct {
Scale uint8
Value int32
}
// Table stores user supplied fields of the following types:
//
// bool
// byte
// float32
// float64
// int
// int16
// int32
// int64
// nil
// string
// time.Time
// amqp.Decimal
// amqp.Table
// []byte
// []interface{} - containing above types
//
// Functions taking a table will immediately fail when the table contains a
// value of an unsupported type.
//
// The caller must be specific in which precision of integer it wishes to
// encode.
//
// Use a type assertion when reading values from a table for type conversion.
//
// RabbitMQ expects int32 for integer values.
//
type Table map[string]interface{}
func validateField(f interface{}) error {
switch fv := f.(type) {
case nil, bool, byte, int, int16, int32, int64, float32, float64, string, []byte, Decimal, time.Time:
return nil
case []interface{}:
for _, v := range fv {
if err := validateField(v); err != nil {
return fmt.Errorf("in array %s", err)
}
}
return nil
case Table:
for k, v := range fv {
if err := validateField(v); err != nil {
return fmt.Errorf("table field %q %s", k, err)
}
}
return nil
}
return fmt.Errorf("value %T not supported", f)
}
// Validate returns an error if any Go types in the table are incompatible with AMQP types.
func (t Table) Validate() error {
return validateField(t)
}
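// Illustrative sketch (not part of the original diff): Validate accepts only
// the Go types listed above and reports the offending key otherwise.
func exampleValidate() {
    good := Table{"count": int32(1), "tags": []interface{}{"a", "b"}}
    bad := Table{"ch": make(chan int)}
    fmt.Println(good.Validate()) // <nil>
    fmt.Println(bad.Validate())  // table field "ch" value chan int not supported
}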
// Heap interface for maintaining delivery tags
type tagSet []uint64
func (set tagSet) Len() int { return len(set) }
func (set tagSet) Less(i, j int) bool { return (set)[i] < (set)[j] }
func (set tagSet) Swap(i, j int) { (set)[i], (set)[j] = (set)[j], (set)[i] }
func (set *tagSet) Push(tag interface{}) { *set = append(*set, tag.(uint64)) }
func (set *tagSet) Pop() interface{} {
val := (*set)[len(*set)-1]
*set = (*set)[:len(*set)-1]
return val
}
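// Illustrative sketch (not part of the original diff): tagSet satisfies
// heap.Interface, so delivery tags pop in ascending order. Assumes
// "container/heap" is added to this file's imports.
func exampleTagHeap() {
    tags := &tagSet{}
    heap.Push(tags, uint64(3))
    heap.Push(tags, uint64(1))
    heap.Push(tags, uint64(2))
    fmt.Println(heap.Pop(tags)) // 1 (smallest delivery tag first)
}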
type Message interface {
id() (uint16, uint16)
wait() bool
read(io.Reader) error
write(io.Writer) error
}
type messageWithContent interface {
Message
getContent() (Properties, []byte)
setContent(Properties, []byte)
}
/*
The base interface implemented as:
2.3.5 frame Details
All frames consist of a header (7 octets), a payload of arbitrary size, and a 'frame-end' octet that detects
malformed frames:
0 1 3 7 size+7 size+8
+------+---------+-------------+ +------------+ +-----------+
| type | channel | size | | payload | | frame-end |
+------+---------+-------------+ +------------+ +-----------+
octet short long size octets octet
To read a frame, we:
1. Read the header and check the frame type and channel.
2. Depending on the frame type, we read the payload and process it.
3. Read the frame end octet.
In realistic implementations where performance is a concern, we would use
“read-ahead buffering” or “gathering reads” to avoid doing three separate
system calls to read a frame.
*/
type frame interface {
write(io.Writer) error
channel() uint16
}
type AmqpReader struct {
R io.Reader
}
type writer struct {
w io.Writer
}
// Implements the frame interface for Connection RPC
type protocolHeader struct{}
func (protocolHeader) write(w io.Writer) error {
_, err := w.Write([]byte{'A', 'M', 'Q', 'P', 0, 0, 9, 1})
return err
}
func (protocolHeader) channel() uint16 {
panic("only valid as initial handshake")
}
/*
Method frames carry the high-level protocol commands (which we call "methods").
One method frame carries one command. The method frame payload has this format:
0 2 4
+----------+-----------+-------------- - -
| class-id | method-id | arguments...
+----------+-----------+-------------- - -
short short ...
To process a method frame, we:
1. Read the method frame payload.
2. Unpack it into a structure. A given method always has the same structure,
so we can unpack the method rapidly.
3. Check that the method is allowed in the current context.
4. Check that the method arguments are valid.
5. Execute the method.
Method frame bodies are constructed as a list of AMQP data fields (bits,
integers, strings and string tables). The marshalling code is trivially
generated directly from the protocol specifications, and can be very rapid.
*/
type MethodFrame struct {
ChannelId uint16
ClassId uint16
MethodId uint16
Method Message
}
func (f *MethodFrame) channel() uint16 { return f.ChannelId }
/*
Heartbeating is a technique designed to undo one of TCP/IP's features, namely
its ability to recover from a broken physical connection by closing only after
a quite long time-out. In some scenarios we need to know very rapidly if a
peer is disconnected or not responding for other reasons (e.g. it is looping).
Since heartbeating can be done at a low level, we implement this as a special
type of frame that peers exchange at the transport level, rather than as a
class method.
*/
type HeartbeatFrame struct {
ChannelId uint16
}
func (f *HeartbeatFrame) channel() uint16 { return f.ChannelId }
/*
Certain methods (such as Basic.Publish, Basic.Deliver, etc.) are formally
defined as carrying content. When a peer sends such a method frame, it always
follows it with a content header and zero or more content body frames.
A content header frame has this format:
0 2 4 12 14
+----------+--------+-----------+----------------+------------- - -
| class-id | weight | body size | property flags | property list...
+----------+--------+-----------+----------------+------------- - -
short short long long short remainder...
We place content body in distinct frames (rather than including it in the
method) so that AMQP may support "zero copy" techniques in which content is
never marshalled or encoded. We place the content properties in their own
frame so that recipients can selectively discard contents they do not want to
process.
*/
type HeaderFrame struct {
ChannelId uint16
ClassId uint16
weight uint16
Size uint64
Properties Properties
}
func (f *HeaderFrame) channel() uint16 { return f.ChannelId }
/*
Content is the application data we carry from client-to-client via the AMQP
server. Content is, roughly speaking, a set of properties plus a binary data
part. The set of allowed properties are defined by the Basic class, and these
form the "content header frame". The data can be any size, and MAY be broken
into several (or many) chunks, each forming a "content body frame".
Looking at the frames for a specific channel, as they pass on the wire, we
might see something like this:
[method]
[method] [header] [body] [body]
[method]
...
*/
type BodyFrame struct {
ChannelId uint16
Body []byte
}
func (f *BodyFrame) channel() uint16 { return f.ChannelId }


@@ -0,0 +1,416 @@
// Copyright (c) 2012, Sean Treadway, SoundCloud Ltd.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Source code and contact info at http://github.com/streadway/amqp
package main
import (
"bufio"
"bytes"
"encoding/binary"
"errors"
"io"
"math"
"time"
)
func (w *writer) WriteFrame(frame frame) (err error) {
if err = frame.write(w.w); err != nil {
return
}
if buf, ok := w.w.(*bufio.Writer); ok {
err = buf.Flush()
}
return
}
func (f *MethodFrame) write(w io.Writer) (err error) {
var payload bytes.Buffer
if f.Method == nil {
return errors.New("malformed frame: missing method")
}
class, method := f.Method.id()
if err = binary.Write(&payload, binary.BigEndian, class); err != nil {
return
}
if err = binary.Write(&payload, binary.BigEndian, method); err != nil {
return
}
if err = f.Method.write(&payload); err != nil {
return
}
return writeFrame(w, frameMethod, f.ChannelId, payload.Bytes())
}
// Heartbeat
//
// Payload is empty
func (f *HeartbeatFrame) write(w io.Writer) (err error) {
return writeFrame(w, frameHeartbeat, f.ChannelId, []byte{})
}
// CONTENT HEADER
// 0 2 4 12 14
// +----------+--------+-----------+----------------+------------- - -
// | class-id | weight | body size | property flags | property list...
// +----------+--------+-----------+----------------+------------- - -
// short short long long short remainder...
//
func (f *HeaderFrame) write(w io.Writer) (err error) {
var payload bytes.Buffer
var zeroTime time.Time
if err = binary.Write(&payload, binary.BigEndian, f.ClassId); err != nil {
return
}
if err = binary.Write(&payload, binary.BigEndian, f.weight); err != nil {
return
}
if err = binary.Write(&payload, binary.BigEndian, f.Size); err != nil {
return
}
// First pass will build the mask to be serialized, second pass will serialize
// each of the fields that appear in the mask.
var mask uint16
if len(f.Properties.ContentType) > 0 {
mask = mask | flagContentType
}
if len(f.Properties.ContentEncoding) > 0 {
mask = mask | flagContentEncoding
}
if f.Properties.Headers != nil && len(f.Properties.Headers) > 0 {
mask = mask | flagHeaders
}
if f.Properties.DeliveryMode > 0 {
mask = mask | flagDeliveryMode
}
if f.Properties.Priority > 0 {
mask = mask | flagPriority
}
if len(f.Properties.CorrelationId) > 0 {
mask = mask | flagCorrelationId
}
if len(f.Properties.ReplyTo) > 0 {
mask = mask | flagReplyTo
}
if len(f.Properties.Expiration) > 0 {
mask = mask | flagExpiration
}
if len(f.Properties.MessageId) > 0 {
mask = mask | flagMessageId
}
if f.Properties.Timestamp != zeroTime {
mask = mask | flagTimestamp
}
if len(f.Properties.Type) > 0 {
mask = mask | flagType
}
if len(f.Properties.UserId) > 0 {
mask = mask | flagUserId
}
if len(f.Properties.AppId) > 0 {
mask = mask | flagAppId
}
if err = binary.Write(&payload, binary.BigEndian, mask); err != nil {
return
}
if hasProperty(mask, flagContentType) {
if err = writeShortstr(&payload, f.Properties.ContentType); err != nil {
return
}
}
if hasProperty(mask, flagContentEncoding) {
if err = writeShortstr(&payload, f.Properties.ContentEncoding); err != nil {
return
}
}
if hasProperty(mask, flagHeaders) {
if err = writeTable(&payload, f.Properties.Headers); err != nil {
return
}
}
if hasProperty(mask, flagDeliveryMode) {
if err = binary.Write(&payload, binary.BigEndian, f.Properties.DeliveryMode); err != nil {
return
}
}
if hasProperty(mask, flagPriority) {
if err = binary.Write(&payload, binary.BigEndian, f.Properties.Priority); err != nil {
return
}
}
if hasProperty(mask, flagCorrelationId) {
if err = writeShortstr(&payload, f.Properties.CorrelationId); err != nil {
return
}
}
if hasProperty(mask, flagReplyTo) {
if err = writeShortstr(&payload, f.Properties.ReplyTo); err != nil {
return
}
}
if hasProperty(mask, flagExpiration) {
if err = writeShortstr(&payload, f.Properties.Expiration); err != nil {
return
}
}
if hasProperty(mask, flagMessageId) {
if err = writeShortstr(&payload, f.Properties.MessageId); err != nil {
return
}
}
if hasProperty(mask, flagTimestamp) {
if err = binary.Write(&payload, binary.BigEndian, uint64(f.Properties.Timestamp.Unix())); err != nil {
return
}
}
if hasProperty(mask, flagType) {
if err = writeShortstr(&payload, f.Properties.Type); err != nil {
return
}
}
if hasProperty(mask, flagUserId) {
if err = writeShortstr(&payload, f.Properties.UserId); err != nil {
return
}
}
if hasProperty(mask, flagAppId) {
if err = writeShortstr(&payload, f.Properties.AppId); err != nil {
return
}
}
return writeFrame(w, frameHeader, f.ChannelId, payload.Bytes())
}
// Body
//
// Payload is one byte range from the full body whose size is declared in the
// Header frame
func (f *BodyFrame) write(w io.Writer) (err error) {
return writeFrame(w, frameBody, f.ChannelId, f.Body)
}
func writeFrame(w io.Writer, typ uint8, channel uint16, payload []byte) (err error) {
end := []byte{frameEnd}
size := uint(len(payload))
_, err = w.Write([]byte{
byte(typ),
byte((channel & 0xff00) >> 8),
byte((channel & 0x00ff) >> 0),
byte((size & 0xff000000) >> 24),
byte((size & 0x00ff0000) >> 16),
byte((size & 0x0000ff00) >> 8),
byte((size & 0x000000ff) >> 0),
})
if err != nil {
return
}
if _, err = w.Write(payload); err != nil {
return
}
if _, err = w.Write(end); err != nil {
return
}
return
}
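// Illustrative sketch (not part of the original diff): a heartbeat frame is
// just the 7-octet header plus the frame-end octet. Assumes "fmt" is added to
// this file's imports; frameHeartbeat (8) and frameEnd (206) are the AMQP
// 0-9-1 constants defined elsewhere in this package.
func exampleHeartbeatWire() {
    var buf bytes.Buffer
    _ = (&HeartbeatFrame{ChannelId: 0}).write(&buf)
    fmt.Printf("% x\n", buf.Bytes()) // 08 00 00 00 00 00 00 ce
}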
func writeShortstr(w io.Writer, s string) (err error) {
b := []byte(s)
var length = uint8(len(b))
if err = binary.Write(w, binary.BigEndian, length); err != nil {
return
}
if _, err = w.Write(b[:length]); err != nil {
return
}
return
}
func writeLongstr(w io.Writer, s string) (err error) {
b := []byte(s)
var length = uint32(len(b))
if err = binary.Write(w, binary.BigEndian, length); err != nil {
return
}
if _, err = w.Write(b[:length]); err != nil {
return
}
return
}
/*
'A': []interface{}
'D': Decimal
'F': Table
'I': int32
'S': string
'T': time.Time
'V': nil
'b': byte
'd': float64
'f': float32
'l': int64
's': int16
't': bool
'x': []byte
*/
func writeField(w io.Writer, value interface{}) (err error) {
var buf [9]byte
var enc []byte
switch v := value.(type) {
case bool:
buf[0] = 't'
if v {
buf[1] = byte(1)
} else {
buf[1] = byte(0)
}
enc = buf[:2]
case byte:
buf[0] = 'b'
buf[1] = byte(v)
enc = buf[:2]
case int16:
buf[0] = 's'
binary.BigEndian.PutUint16(buf[1:3], uint16(v))
enc = buf[:3]
case int:
buf[0] = 'I'
binary.BigEndian.PutUint32(buf[1:5], uint32(v))
enc = buf[:5]
case int32:
buf[0] = 'I'
binary.BigEndian.PutUint32(buf[1:5], uint32(v))
enc = buf[:5]
case int64:
buf[0] = 'l'
binary.BigEndian.PutUint64(buf[1:9], uint64(v))
enc = buf[:9]
case float32:
buf[0] = 'f'
binary.BigEndian.PutUint32(buf[1:5], math.Float32bits(v))
enc = buf[:5]
case float64:
buf[0] = 'd'
binary.BigEndian.PutUint64(buf[1:9], math.Float64bits(v))
enc = buf[:9]
case Decimal:
buf[0] = 'D'
buf[1] = byte(v.Scale)
binary.BigEndian.PutUint32(buf[2:6], uint32(v.Value))
enc = buf[:6]
case string:
buf[0] = 'S'
binary.BigEndian.PutUint32(buf[1:5], uint32(len(v)))
enc = append(buf[:5], []byte(v)...)
case []interface{}: // field-array
buf[0] = 'A'
sec := new(bytes.Buffer)
for _, val := range v {
if err = writeField(sec, val); err != nil {
return
}
}
binary.BigEndian.PutUint32(buf[1:5], uint32(sec.Len()))
if _, err = w.Write(buf[:5]); err != nil {
return
}
if _, err = w.Write(sec.Bytes()); err != nil {
return
}
return
case time.Time:
buf[0] = 'T'
binary.BigEndian.PutUint64(buf[1:9], uint64(v.Unix()))
enc = buf[:9]
case Table:
if _, err = w.Write([]byte{'F'}); err != nil {
return
}
return writeTable(w, v)
case []byte:
buf[0] = 'x'
binary.BigEndian.PutUint32(buf[1:5], uint32(len(v)))
if _, err = w.Write(buf[0:5]); err != nil {
return
}
if _, err = w.Write(v); err != nil {
return
}
return
case nil:
buf[0] = 'V'
enc = buf[:1]
default:
return ErrFieldType
}
_, err = w.Write(enc)
return
}
func writeTable(w io.Writer, table Table) (err error) {
var buf bytes.Buffer
for key, val := range table {
if err = writeShortstr(&buf, key); err != nil {
return
}
if err = writeField(&buf, val); err != nil {
return
}
}
return writeLongstr(w, string(buf.Bytes()))
}
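// Illustrative sketch (not part of the original diff): a Go string encodes as
// the 'S' tag, a big-endian uint32 length and the raw bytes. Assumes "fmt" is
// added to this file's imports.
func exampleStringFieldWire() {
    var buf bytes.Buffer
    _ = writeField(&buf, "ok")
    fmt.Printf("% x\n", buf.Bytes()) // 53 00 00 00 02 6f 6b
}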


@@ -0,0 +1,13 @@
module github.com/up9inc/mizu/tap/extensions/http
go 1.16
require (
github.com/google/martian v2.1.0+incompatible
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7
github.com/up9inc/mizu/tap/api v0.0.0
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7
golang.org/x/text v0.3.5 // indirect
)
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../../api


@@ -0,0 +1,12 @@
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7 h1:jkvpcEatpwuMF5O5LVxTnehj6YZ/aEZN4NWD/Xml4pI=
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7/go.mod h1:KTrHyWpO1sevuXPZwyeZc72ddWRFqNSKDFl7uVWKpg0=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7 h1:OgUuv8lsRpBibGNbSizVwKWlysjaNzmC9gYMhPVfqFM=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=


@@ -1,4 +1,4 @@
package tap
package main
import (
"bufio"
@@ -17,17 +17,19 @@ import (
)
const frameHeaderLen = 9
var clientPreface = []byte(http2.ClientPreface)
const initialHeaderTableSize = 4096
const protoHTTP2 = "HTTP/2.0"
const protoMajorHTTP2 = 2
const protoMinorHTTP2 = 0
var maxHTTP2DataLen int = maxHTTP2DataLenDefault // value initialized during init
var maxHTTP2DataLen = 1 * 1024 * 1024 // 1MB
type messageFragment struct {
headers []hpack.HeaderField
data []byte
data []byte
}
type fragmentsByStream map[uint32]*messageFragment
@@ -46,7 +48,7 @@ func (fbs *fragmentsByStream) appendFrame(streamID uint32, frame http2.Frame) {
if existingFragment, ok := (*fbs)[streamID]; ok {
existingDataLen := len(existingFragment.data)
// Never save more than maxHTTP2DataLen bytes
numBytesToAppend := int(math.Min(float64(maxHTTP2DataLen - existingDataLen), float64(newDataLen)))
numBytesToAppend := int(math.Min(float64(maxHTTP2DataLen-existingDataLen), float64(newDataLen)))
existingFragment.data = append(existingFragment.data, frame.Data()[:numBytesToAppend]...)
} else {
@@ -69,19 +71,19 @@ func (fbs *fragmentsByStream) pop(streamID uint32) ([]hpack.HeaderField, []byte)
return headers, data
}
func createGrpcAssembler(b *bufio.Reader) GrpcAssembler {
func createGrpcAssembler(b *bufio.Reader) *GrpcAssembler {
var framerOutput bytes.Buffer
framer := http2.NewFramer(&framerOutput, b)
framer.ReadMetaHeaders = hpack.NewDecoder(initialHeaderTableSize, nil)
return GrpcAssembler{
return &GrpcAssembler{
fragmentsByStream: make(fragmentsByStream),
framer: framer,
framer: framer,
}
}
type GrpcAssembler struct {
fragmentsByStream fragmentsByStream
framer *http2.Framer
framer *http2.Framer
}
func (ga *GrpcAssembler) readMessage() (uint32, interface{}, error) {
@@ -118,26 +120,26 @@ func (ga *GrpcAssembler) readMessage() (uint32, interface{}, error) {
var messageHTTP1 interface{}
if _, ok := headersHTTP1[":method"]; ok {
messageHTTP1 = http.Request{
URL: &url.URL{},
Method: "POST",
Header: headersHTTP1,
Proto: protoHTTP2,
ProtoMajor: protoMajorHTTP2,
ProtoMinor: protoMinorHTTP2,
Body: io.NopCloser(strings.NewReader(dataString)),
URL: &url.URL{},
Method: "POST",
Header: headersHTTP1,
Proto: protoHTTP2,
ProtoMajor: protoMajorHTTP2,
ProtoMinor: protoMinorHTTP2,
Body: io.NopCloser(strings.NewReader(dataString)),
ContentLength: int64(len(dataString)),
}
} else if _, ok := headersHTTP1[":status"]; ok {
messageHTTP1 = http.Response{
Header: headersHTTP1,
Proto: protoHTTP2,
ProtoMajor: protoMajorHTTP2,
ProtoMinor: protoMinorHTTP2,
Body: io.NopCloser(strings.NewReader(dataString)),
Header: headersHTTP1,
Proto: protoHTTP2,
ProtoMajor: protoMajorHTTP2,
ProtoMinor: protoMinorHTTP2,
Body: io.NopCloser(strings.NewReader(dataString)),
ContentLength: int64(len(dataString)),
}
} else {
return 0, nil, errors.New("Failed to assemble stream: neither a request nor a message")
return 0, nil, errors.New("failed to assemble stream: neither a request nor a message")
}
return streamID, messageHTTP1, nil
@@ -225,7 +227,7 @@ func checkClientPreface(b *bufio.Reader) (bool, error) {
func discardClientPreface(b *bufio.Reader) error {
if isClientPrefacePresent, err := checkClientPreface(b); err != nil {
return err
} else if !isClientPrefacePresent{
} else if !isClientPrefacePresent {
return errors.New("discardClientPreface: does not begin with client preface")
}


@@ -0,0 +1,163 @@
package main
import (
"bufio"
"bytes"
"fmt"
"io"
"io/ioutil"
"net/http"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap/api"
)
func handleHTTP2Stream(grpcAssembler *GrpcAssembler, tcpID *api.TcpID, superTimer *api.SuperTimer, emitter api.Emitter) error {
streamID, messageHTTP1, err := grpcAssembler.readMessage()
if err != nil {
return err
}
var item *api.OutputChannelItem
switch messageHTTP1 := messageHTTP1.(type) {
case http.Request:
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
tcpID.DstPort,
streamID,
)
item = reqResMatcher.registerRequest(ident, &messageHTTP1, superTimer.CaptureTime)
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: tcpID.SrcIP,
ClientPort: tcpID.SrcPort,
ServerIP: tcpID.DstIP,
ServerPort: tcpID.DstPort,
IsOutgoing: true,
}
}
case http.Response:
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,
tcpID.SrcPort,
streamID,
)
item = reqResMatcher.registerResponse(ident, &messageHTTP1, superTimer.CaptureTime)
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: tcpID.DstIP,
ClientPort: tcpID.DstPort,
ServerIP: tcpID.SrcIP,
ServerPort: tcpID.SrcPort,
IsOutgoing: false,
}
}
}
if item != nil {
item.Protocol = http2Protocol
emitter.Emit(item)
}
return nil
}
func handleHTTP1ClientStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter) error {
req, err := http.ReadRequest(b)
if err != nil {
// log.Println("Error reading stream:", err)
return err
}
counterPair.Request++
body, err := ioutil.ReadAll(req.Body)
req.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
s := len(body)
if err != nil {
rlog.Debugf("[HTTP-request-body] stream %s Got body err: %s", tcpID.Ident, err)
}
if err := req.Body.Close(); err != nil {
rlog.Debugf("[HTTP-request-body-close] stream %s Failed to close request body: %s", tcpID.Ident, err)
}
encoding := req.Header["Content-Encoding"]
rlog.Tracef(1, "HTTP/1 Request: %s %s %s (Body:%d) -> %s", tcpID.Ident, req.Method, req.URL, s, encoding)
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
tcpID.DstPort,
counterPair.Request,
)
item := reqResMatcher.registerRequest(ident, req, superTimer.CaptureTime)
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: tcpID.SrcIP,
ClientPort: tcpID.SrcPort,
ServerIP: tcpID.DstIP,
ServerPort: tcpID.DstPort,
IsOutgoing: true,
}
emitter.Emit(item)
}
return nil
}
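// Illustrative sketch (not part of the original diff): the client-stream
// handler above builds on http.ReadRequest; this shows the underlying parse
// over a raw request. Assumes "strings" is added to this file's imports.
func exampleReadRequest() {
    raw := "GET /health HTTP/1.1\r\nHost: example.com\r\n\r\n"
    req, err := http.ReadRequest(bufio.NewReader(strings.NewReader(raw)))
    if err != nil {
        return
    }
    fmt.Println(req.Method, req.URL.Path) // GET /health
}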
func handleHTTP1ServerStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter) error {
res, err := http.ReadResponse(b, nil)
if err != nil {
// log.Println("Error reading stream:", err)
return err
}
counterPair.Response++
body, err := ioutil.ReadAll(res.Body)
res.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
s := len(body)
if err != nil {
rlog.Debugf("[HTTP-response-body] HTTP/%s: failed to get body(parsed len:%d): %s", tcpID.Ident, s, err)
}
if err := res.Body.Close(); err != nil {
rlog.Debugf("[HTTP-response-body-close] HTTP/%s: failed to close body(parsed len:%d): %s", tcpID.Ident, s, err)
}
sym := ","
if res.ContentLength > 0 && res.ContentLength != int64(s) {
sym = "!="
}
contentType, ok := res.Header["Content-Type"]
if !ok {
contentType = []string{http.DetectContentType(body)}
}
encoding := res.Header["Content-Encoding"]
rlog.Tracef(1, "HTTP/1 Response: %s %s (%d%s%d%s) -> %s", tcpID.Ident, res.Status, res.ContentLength, sym, s, contentType, encoding)
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,
tcpID.SrcPort,
counterPair.Response,
)
item := reqResMatcher.registerResponse(ident, res, superTimer.CaptureTime)
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: tcpID.DstIP,
ClientPort: tcpID.DstPort,
ServerIP: tcpID.SrcIP,
ServerPort: tcpID.SrcPort,
IsOutgoing: false,
}
emitter.Emit(item)
}
return nil
}

tap/extensions/http/main.go Normal file

@@ -0,0 +1,387 @@
package main
import (
"bufio"
"encoding/json"
"fmt"
"io"
"log"
"net/url"
"time"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap/api"
)
var protocol api.Protocol = api.Protocol{
Name: "http",
LongName: "Hypertext Transfer Protocol -- HTTP/1.1",
Abbreviation: "HTTP",
Version: "1.1",
BackgroundColor: "#205cf5",
ForegroundColor: "#ffffff",
FontSize: 12,
ReferenceLink: "https://datatracker.ietf.org/doc/html/rfc2616",
Ports: []string{"80", "8080", "50051"},
Priority: 0,
}
var http2Protocol api.Protocol = api.Protocol{
Name: "http",
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2) (gRPC)",
Abbreviation: "HTTP/2",
Version: "2.0",
BackgroundColor: "#244c5a",
ForegroundColor: "#ffffff",
FontSize: 11,
ReferenceLink: "https://datatracker.ietf.org/doc/html/rfc7540",
Ports: []string{"80", "8080"},
Priority: 0,
}
const (
TypeHttpRequest = iota
TypeHttpResponse
)
func init() {
log.Println("Initializing HTTP extension...")
}
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = protocol
extension.MatcherMap = reqResMatcher.openMessagesMap
}
func (d dissecting) Ping() {
log.Printf("pong %s\n", protocol.Name)
}
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter) error {
ident := fmt.Sprintf("%s->%s:%s->%s", tcpID.SrcIP, tcpID.DstIP, tcpID.SrcPort, tcpID.DstPort)
isHTTP2, err := checkIsHTTP2Connection(b, isClient)
if err != nil {
rlog.Debugf("[HTTP/2-Prepare-Connection] stream %s Failed to check if client is HTTP/2: %s (%v,%+v)", ident, err, err, err)
// Do something?
}
var grpcAssembler *GrpcAssembler
if isHTTP2 {
err := prepareHTTP2Connection(b, isClient)
if err != nil {
rlog.Debugf("[HTTP/2-Prepare-Connection-After-Check] stream %s error: %s (%v,%+v)", ident, err, err, err)
}
grpcAssembler = createGrpcAssembler(b)
}
success := false
for {
if isHTTP2 {
err = handleHTTP2Stream(grpcAssembler, tcpID, superTimer, emitter)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
rlog.Debugf("[HTTP/2] stream %s error: %s (%v,%+v)", ident, err, err, err)
continue
}
success = true
} else if isClient {
err = handleHTTP1ClientStream(b, tcpID, counterPair, superTimer, emitter)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
rlog.Debugf("[HTTP-request] stream %s Request error: %s (%v,%+v)", ident, err, err, err)
continue
}
success = true
} else {
err = handleHTTP1ServerStream(b, tcpID, counterPair, superTimer, emitter)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
rlog.Debugf("[HTTP-response], stream %s Response error: %s (%v,%+v)", ident, err, err, err)
continue
}
success = true
}
}
if !success {
return err
}
return nil
}
func SetHostname(address, newHostname string) string {
replacedUrl, err := url.Parse(address)
if err != nil {
log.Printf("error replacing hostname to %s in address %s, returning original %v", newHostname, address, err)
return address
}
replacedUrl.Host = newHostname
return replacedUrl.String()
}
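// Illustrative sketch (not part of the original diff): SetHostname swaps only
// the host portion of the service URL, which is how resolved source and
// destination names replace raw pod IPs below.
func exampleSetHostname() {
    fmt.Println(SetHostname("http://10.0.0.7:8080/users?id=1", "user-service"))
    // http://user-service/users?id=1
}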
func (d dissecting) Analyze(item *api.OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *api.MizuEntry {
var host, scheme, authority, path, service string
request := item.Pair.Request.Payload.(map[string]interface{})
response := item.Pair.Response.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
resDetails := response["details"].(map[string]interface{})
for _, header := range reqDetails["headers"].([]interface{}) {
h := header.(map[string]interface{})
if h["name"] == "Host" {
host = h["value"].(string)
}
if h["name"] == ":authority" {
authority = h["value"].(string)
}
if h["name"] == ":scheme" {
scheme = h["value"].(string)
}
if h["name"] == ":path" {
path = h["value"].(string)
}
}
if item.Protocol.Version == "2.0" {
service = fmt.Sprintf("%s://%s", scheme, authority)
} else {
service = fmt.Sprintf("http://%s", host)
path = reqDetails["url"].(string)
}
request["url"] = path
if resolvedDestination != "" {
service = SetHostname(service, resolvedDestination)
} else if resolvedSource != "" {
service = SetHostname(service, resolvedSource)
}
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
entryBytes, _ := json.Marshal(item.Pair)
return &api.MizuEntry{
ProtocolName: protocol.Name,
ProtocolVersion: item.Protocol.Version,
EntryId: entryId,
Entry: string(entryBytes),
Url: fmt.Sprintf("%s%s", service, path),
Method: reqDetails["method"].(string),
Status: int(resDetails["status"].(float64)),
RequestSenderIp: item.ConnectionInfo.ClientIP,
Service: service,
Timestamp: item.Timestamp,
ElapsedTime: elapsedTime,
Path: path,
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
SourceIp: item.ConnectionInfo.ClientIP,
DestinationIp: item.ConnectionInfo.ServerIP,
SourcePort: item.ConnectionInfo.ClientPort,
DestinationPort: item.ConnectionInfo.ServerPort,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
}
}
func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
var p api.Protocol
if entry.ProtocolVersion == "2.0" {
p = http2Protocol
} else {
p = protocol
}
return &api.BaseEntryDetails{
Id: entry.EntryId,
Protocol: p,
Url: entry.Url,
RequestSenderIp: entry.RequestSenderIp,
Service: entry.Service,
Summary: entry.Path,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
SourceIp: entry.SourceIp,
DestinationIp: entry.DestinationIp,
SourcePort: entry.SourcePort,
DestinationPort: entry.DestinationPort,
IsOutgoing: entry.IsOutgoing,
Latency: 0,
Rules: api.ApplicableRules{
Latency: 0,
Status: false,
},
}
}
func representRequest(request map[string]interface{}) (repRequest []interface{}) {
details, _ := json.Marshal([]map[string]string{
{
"name": "Method",
"value": request["method"].(string),
},
{
"name": "URL",
"value": request["url"].(string),
},
{
"name": "Body Size",
"value": fmt.Sprintf("%g bytes", request["bodySize"].(float64)),
},
})
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
headers, _ := json.Marshal(request["headers"].([]interface{}))
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "Headers",
"data": string(headers),
})
cookies, _ := json.Marshal(request["cookies"].([]interface{}))
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "Cookies",
"data": string(cookies),
})
queryString, _ := json.Marshal(request["queryString"].([]interface{}))
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "Query String",
"data": string(queryString),
})
postData, _ := request["postData"].(map[string]interface{})
mimeType, _ := postData["mimeType"]
if mimeType == nil || len(mimeType.(string)) == 0 {
mimeType = "text/html"
}
text, _ := postData["text"]
if text != nil {
repRequest = append(repRequest, map[string]string{
"type": api.BODY,
"title": "POST Data (text/plain)",
"encoding": "",
"mime_type": mimeType.(string),
"data": text.(string),
})
}
if postData["params"] != nil {
params, _ := json.Marshal(postData["params"].([]interface{}))
if len(params) > 0 {
if mimeType == "multipart/form-data" {
multipart, _ := json.Marshal([]map[string]string{
{
"name": "Files",
"value": string(params),
},
})
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "POST Data (multipart/form-data)",
"data": string(multipart),
})
} else {
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "POST Data (application/x-www-form-urlencoded)",
"data": string(params),
})
}
}
}
return
}
func representResponse(response map[string]interface{}) (repResponse []interface{}, bodySize int64) {
repResponse = make([]interface{}, 0)
bodySize = int64(response["bodySize"].(float64))
details, _ := json.Marshal([]map[string]string{
{
"name": "Status",
"value": fmt.Sprintf("%g", response["status"].(float64)),
},
{
"name": "Status Text",
"value": response["statusText"].(string),
},
{
"name": "Body Size",
"value": fmt.Sprintf("%d bytes", bodySize),
},
})
repResponse = append(repResponse, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
headers, _ := json.Marshal(response["headers"].([]interface{}))
repResponse = append(repResponse, map[string]string{
"type": api.TABLE,
"title": "Headers",
"data": string(headers),
})
cookies, _ := json.Marshal(response["cookies"].([]interface{}))
repResponse = append(repResponse, map[string]string{
"type": api.TABLE,
"title": "Cookies",
"data": string(cookies),
})
content, _ := response["content"].(map[string]interface{})
mimeType, _ := content["mimeType"]
if mimeType == nil || len(mimeType.(string)) == 0 {
mimeType = "text/html"
}
encoding, _ := content["encoding"]
text, _ := content["text"]
if text != nil {
repResponse = append(repResponse, map[string]string{
"type": api.BODY,
"title": "Body",
"encoding": encoding.(string),
"mime_type": mimeType.(string),
"data": text.(string),
})
}
return
}
func (d dissecting) Represent(entry *api.MizuEntry) (p api.Protocol, object []byte, bodySize int64, err error) {
if entry.ProtocolVersion == "2.0" {
p = http2Protocol
} else {
p = protocol
}
var root map[string]interface{}
json.Unmarshal([]byte(entry.Entry), &root)
representation := make(map[string]interface{}, 0)
request := root["request"].(map[string]interface{})["payload"].(map[string]interface{})
response := root["response"].(map[string]interface{})["payload"].(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
resDetails := response["details"].(map[string]interface{})
repRequest := representRequest(reqDetails)
repResponse, bodySize := representResponse(resDetails)
representation["request"] = repRequest
representation["response"] = repResponse
object, err = json.Marshal(representation)
return
}
var Dissector dissecting


@@ -0,0 +1,105 @@
package main
import (
"fmt"
"net/http"
"strings"
"sync"
"time"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap/api"
)
var reqResMatcher = createResponseRequestMatcher() // global
// Key is {client_addr}:{client_port}->{dest_addr}:{dest_port}_{incremental_counter}
type requestResponseMatcher struct {
openMessagesMap *sync.Map
}
func createResponseRequestMatcher() requestResponseMatcher {
newMatcher := &requestResponseMatcher{openMessagesMap: &sync.Map{}}
return *newMatcher
}
func (matcher *requestResponseMatcher) registerRequest(ident string, request *http.Request, captureTime time.Time) *api.OutputChannelItem {
split := splitIdent(ident)
key := genKey(split)
requestHTTPMessage := api.GenericMessage{
IsRequest: true,
CaptureTime: captureTime,
Payload: HTTPPayload{
Type: TypeHttpRequest,
Data: request,
},
}
if response, found := matcher.openMessagesMap.LoadAndDelete(key); found {
// Type assertion always succeeds because all of the map's values are of api.GenericMessage type
responseHTTPMessage := response.(*api.GenericMessage)
if responseHTTPMessage.IsRequest {
rlog.Debugf("[Request-Duplicate] Got duplicate request with same identifier")
return nil
}
rlog.Tracef(1, "Matched open Response for %s", key)
return matcher.preparePair(&requestHTTPMessage, responseHTTPMessage)
}
matcher.openMessagesMap.Store(key, &requestHTTPMessage)
rlog.Tracef(1, "Registered open Request for %s", key)
return nil
}
func (matcher *requestResponseMatcher) registerResponse(ident string, response *http.Response, captureTime time.Time) *api.OutputChannelItem {
split := splitIdent(ident)
key := genKey(split)
responseHTTPMessage := api.GenericMessage{
IsRequest: false,
CaptureTime: captureTime,
Payload: HTTPPayload{
Type: TypeHttpResponse,
Data: response,
},
}
if request, found := matcher.openMessagesMap.LoadAndDelete(key); found {
// Type assertion always succeeds because all of the map's values are of api.GenericMessage type
requestHTTPMessage := request.(*api.GenericMessage)
if !requestHTTPMessage.IsRequest {
rlog.Debugf("[Response-Duplicate] Got duplicate response with same identifier")
return nil
}
rlog.Tracef(1, "Matched open Request for %s", key)
return matcher.preparePair(requestHTTPMessage, &responseHTTPMessage)
}
matcher.openMessagesMap.Store(key, &responseHTTPMessage)
rlog.Tracef(1, "Registered open Response for %s", key)
return nil
}
func (matcher *requestResponseMatcher) preparePair(requestHTTPMessage *api.GenericMessage, responseHTTPMessage *api.GenericMessage) *api.OutputChannelItem {
return &api.OutputChannelItem{
Protocol: protocol,
Timestamp: requestHTTPMessage.CaptureTime.UnixNano() / int64(time.Millisecond),
ConnectionInfo: nil,
Pair: &api.RequestResponsePair{
Request: *requestHTTPMessage,
Response: *responseHTTPMessage,
},
}
}
func splitIdent(ident string) []string {
ident = strings.Replace(ident, "->", " ", -1)
return strings.Split(ident, " ")
}
func genKey(split []string) string {
key := fmt.Sprintf("%s:%s->%s:%s,%s", split[0], split[2], split[1], split[3], split[4])
return key
}
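// Illustrative sketch (not part of the original diff): the matcher key
// normalizes both directions of a stream to the same
// {client}:{port}->{server}:{port},{counter} string, so a request and its
// response land on the same map entry.
func exampleMatcherKey() {
    ident := "10.0.0.5->10.0.0.9 51034->80 1"
    fmt.Println(genKey(splitIdent(ident))) // 10.0.0.5:51034->10.0.0.9:80,1
}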


@@ -0,0 +1,55 @@
package main
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/google/martian/har"
"github.com/romana/rlog"
)
type HTTPPayload struct {
Type uint8
Data interface{}
}
type HTTPPayloader interface {
MarshalJSON() ([]byte, error)
}
type HTTPWrapper struct {
Method string `json:"method"`
Url string `json:"url"`
Details interface{} `json:"details"`
}
func (h HTTPPayload) MarshalJSON() ([]byte, error) {
switch h.Type {
case TypeHttpRequest:
harRequest, err := har.NewRequest(h.Data.(*http.Request), true)
if err != nil {
rlog.Debugf("convert-request-to-har", "Failed converting request to HAR %s (%v,%+v)", err, err, err)
return nil, errors.New("Failed converting request to HAR")
}
return json.Marshal(&HTTPWrapper{
Method: harRequest.Method,
Url: "",
Details: harRequest,
})
case TypeHttpResponse:
harResponse, err := har.NewResponse(h.Data.(*http.Response), true)
if err != nil {
rlog.Debugf("convert-response-to-har", "Failed converting response to HAR %s (%v,%+v)", err, err, err)
return nil, errors.New("Failed converting response to HAR")
}
return json.Marshal(&HTTPWrapper{
Method: "",
Url: "",
Details: harResponse,
})
default:
panic(fmt.Sprintf("HTTP payload cannot be marshaled: %d\n", h.Type))
}
}


@@ -0,0 +1,645 @@
package main
import (
"bytes"
"fmt"
"io"
"math"
"sync"
"sync/atomic"
)
// Bytes is an interface implemented by types that represent immutable
// sequences of bytes.
//
// Bytes values are used to abstract the location where record keys and
// values are read from (e.g. in-memory buffers, network sockets, files).
//
// The Close method should be called to release resources held by the object
// when the program is done with it.
//
// Bytes values are generally not safe to use concurrently from multiple
// goroutines.
type Bytes interface {
io.ReadCloser
// Returns the number of bytes remaining to be read from the payload.
Len() int
}
// NewBytes constructs a Bytes value from b.
//
// The returned value references b, it does not make a copy of the backing
// array.
//
// If b is nil, nil is returned to represent a null BYTES value in the kafka
// protocol.
func NewBytes(b []byte) Bytes {
if b == nil {
return nil
}
r := new(bytesReader)
r.Reset(b)
return r
}
// ReadAll is similar to ioutil.ReadAll, but it takes advantage of knowing the
// length of b to minimize the memory footprint.
//
// The function returns a nil slice if b is nil.
// func ReadAll(b Bytes) ([]byte, error) {
// if b == nil {
// return nil, nil
// }
// s := make([]byte, b.Len())
// _, err := io.ReadFull(b, s)
// return s, err
// }
type bytesReader struct{ bytes.Reader }
func (*bytesReader) Close() error { return nil }
type refCount uintptr
func (rc *refCount) ref() { atomic.AddUintptr((*uintptr)(rc), 1) }
func (rc *refCount) unref(onZero func()) {
if atomic.AddUintptr((*uintptr)(rc), ^uintptr(0)) == 0 {
onZero()
}
}
const (
// Size of the memory buffer for a single page. We use a fairly
// large size here (64 KiB) because batches exchanged with kafka
// tend to be multiple kilobytes in size, sometimes hundreds.
// Using large pages amortizes the overhead of the page metadata
// and algorithms to manage the pages.
pageSize = 65536
)
type page struct {
refc refCount
offset int64
length int
buffer *[pageSize]byte
}
func newPage(offset int64) *page {
p, _ := pagePool.Get().(*page)
if p != nil {
p.offset = offset
p.length = 0
p.ref()
} else {
p = &page{
refc: 1,
offset: offset,
buffer: &[pageSize]byte{},
}
}
return p
}
func (p *page) ref() { p.refc.ref() }
func (p *page) unref() { p.refc.unref(func() { pagePool.Put(p) }) }
func (p *page) slice(begin, end int64) []byte {
i, j := begin-p.offset, end-p.offset
if i < 0 {
i = 0
} else if i > pageSize {
i = pageSize
}
if j < 0 {
j = 0
} else if j > pageSize {
j = pageSize
}
if i < j {
return p.buffer[i:j]
}
return nil
}
func (p *page) Cap() int { return pageSize }
func (p *page) Len() int { return p.length }
func (p *page) Size() int64 { return int64(p.length) }
func (p *page) Truncate(n int) {
if n < p.length {
p.length = n
}
}
func (p *page) ReadAt(b []byte, off int64) (int, error) {
if off -= p.offset; off < 0 || off > pageSize {
panic("offset out of range")
}
if off > int64(p.length) {
return 0, nil
}
return copy(b, p.buffer[off:p.length]), nil
}
func (p *page) ReadFrom(r io.Reader) (int64, error) {
n, err := io.ReadFull(r, p.buffer[p.length:])
if err == io.EOF || err == io.ErrUnexpectedEOF {
err = nil
}
p.length += n
return int64(n), err
}
func (p *page) WriteAt(b []byte, off int64) (int, error) {
if off -= p.offset; off < 0 || off > pageSize {
panic("offset out of range")
}
n := copy(p.buffer[off:], b)
if end := int(off) + n; end > p.length {
p.length = end
}
return n, nil
}
func (p *page) Write(b []byte) (int, error) {
return p.WriteAt(b, p.offset+int64(p.length))
}
var (
_ io.ReaderAt = (*page)(nil)
_ io.ReaderFrom = (*page)(nil)
_ io.Writer = (*page)(nil)
_ io.WriterAt = (*page)(nil)
)
type pageBuffer struct {
refc refCount
pages contiguousPages
length int
cursor int
}
func newPageBuffer() *pageBuffer {
b, _ := pageBufferPool.Get().(*pageBuffer)
if b != nil {
b.cursor = 0
b.refc.ref()
} else {
b = &pageBuffer{
refc: 1,
pages: make(contiguousPages, 0, 16),
}
}
return b
}
func (pb *pageBuffer) refTo(ref *pageRef, begin, end int64) {
length := end - begin
if length > math.MaxUint32 {
panic("reference to contiguous buffer pages exceeds the maximum size of 4 GB")
}
ref.pages = append(ref.buffer[:0], pb.pages.slice(begin, end)...)
ref.pages.ref()
ref.offset = begin
ref.length = uint32(length)
}
func (pb *pageBuffer) ref(begin, end int64) *pageRef {
ref := new(pageRef)
pb.refTo(ref, begin, end)
return ref
}
func (pb *pageBuffer) unref() {
pb.refc.unref(func() {
pb.pages.unref()
pb.pages.clear()
pb.pages = pb.pages[:0]
pb.length = 0
pageBufferPool.Put(pb)
})
}
func (pb *pageBuffer) newPage() *page {
return newPage(int64(pb.length))
}
func (pb *pageBuffer) Close() error {
return nil
}
func (pb *pageBuffer) Len() int {
return pb.length - pb.cursor
}
func (pb *pageBuffer) Size() int64 {
return int64(pb.length)
}
func (pb *pageBuffer) Discard(n int) (int, error) {
remain := pb.length - pb.cursor
if remain < n {
n = remain
}
pb.cursor += n
return n, nil
}
func (pb *pageBuffer) Truncate(n int) {
if n < pb.length {
pb.length = n
if n < pb.cursor {
pb.cursor = n
}
for i := range pb.pages {
if p := pb.pages[i]; p.length <= n {
n -= p.length
} else {
if n > 0 {
pb.pages[i].Truncate(n)
i++
}
pb.pages[i:].unref()
pb.pages[i:].clear()
pb.pages = pb.pages[:i]
break
}
}
}
}
func (pb *pageBuffer) Seek(offset int64, whence int) (int64, error) {
c, err := seek(int64(pb.cursor), int64(pb.length), offset, whence)
if err != nil {
return -1, err
}
pb.cursor = int(c)
return c, nil
}
func (pb *pageBuffer) ReadByte() (byte, error) {
b := [1]byte{}
_, err := pb.Read(b[:])
return b[0], err
}
func (pb *pageBuffer) Read(b []byte) (int, error) {
if pb.cursor >= pb.length {
return 0, io.EOF
}
n, err := pb.ReadAt(b, int64(pb.cursor))
pb.cursor += n
return n, err
}
func (pb *pageBuffer) ReadAt(b []byte, off int64) (int, error) {
return pb.pages.ReadAt(b, off)
}
func (pb *pageBuffer) ReadFrom(r io.Reader) (int64, error) {
if len(pb.pages) == 0 {
pb.pages = append(pb.pages, pb.newPage())
}
rn := int64(0)
for {
tail := pb.pages[len(pb.pages)-1]
free := tail.Cap() - tail.Len()
if free == 0 {
tail = pb.newPage()
free = pageSize
pb.pages = append(pb.pages, tail)
}
n, err := tail.ReadFrom(r)
pb.length += int(n)
rn += n
if n < int64(free) {
return rn, err
}
}
}
func (pb *pageBuffer) WriteString(s string) (int, error) {
return pb.Write([]byte(s))
}
func (pb *pageBuffer) Write(b []byte) (int, error) {
wn := len(b)
if wn == 0 {
return 0, nil
}
if len(pb.pages) == 0 {
pb.pages = append(pb.pages, pb.newPage())
}
for len(b) != 0 {
tail := pb.pages[len(pb.pages)-1]
free := tail.Cap() - tail.Len()
if len(b) <= free {
tail.Write(b)
pb.length += len(b)
break
}
tail.Write(b[:free])
b = b[free:]
pb.length += free
pb.pages = append(pb.pages, pb.newPage())
}
return wn, nil
}
func (pb *pageBuffer) WriteAt(b []byte, off int64) (int, error) {
n, err := pb.pages.WriteAt(b, off)
if err != nil {
return n, err
}
if n < len(b) {
pb.Write(b[n:])
}
return len(b), nil
}
func (pb *pageBuffer) WriteTo(w io.Writer) (int64, error) {
var wn int
var err error
pb.pages.scan(int64(pb.cursor), int64(pb.length), func(b []byte) bool {
var n int
n, err = w.Write(b)
wn += n
return err == nil
})
pb.cursor += wn
return int64(wn), err
}
var (
_ io.ReaderAt = (*pageBuffer)(nil)
_ io.ReaderFrom = (*pageBuffer)(nil)
_ io.StringWriter = (*pageBuffer)(nil)
_ io.Writer = (*pageBuffer)(nil)
_ io.WriterAt = (*pageBuffer)(nil)
_ io.WriterTo = (*pageBuffer)(nil)
pagePool sync.Pool
pageBufferPool sync.Pool
)
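// Illustrative sketch (not part of the original diff): pageBuffer behaves like
// an io.ReadWriter backed by pooled 64 KiB pages.
func examplePageBuffer() {
    pb := newPageBuffer()
    defer pb.unref()
    _, _ = pb.WriteString("hello kafka")
    out := make([]byte, 5)
    n, _ := pb.Read(out)
    fmt.Println(n, string(out[:n])) // 5 hello
}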
type contiguousPages []*page
func (pages contiguousPages) ref() {
for _, p := range pages {
p.ref()
}
}
func (pages contiguousPages) unref() {
for _, p := range pages {
p.unref()
}
}
func (pages contiguousPages) clear() {
for i := range pages {
pages[i] = nil
}
}
func (pages contiguousPages) ReadAt(b []byte, off int64) (int, error) {
rn := 0
for _, p := range pages.slice(off, off+int64(len(b))) {
n, _ := p.ReadAt(b, off)
b = b[n:]
rn += n
off += int64(n)
}
return rn, nil
}
func (pages contiguousPages) WriteAt(b []byte, off int64) (int, error) {
wn := 0
for _, p := range pages.slice(off, off+int64(len(b))) {
n, _ := p.WriteAt(b, off)
b = b[n:]
wn += n
off += int64(n)
}
return wn, nil
}
func (pages contiguousPages) slice(begin, end int64) contiguousPages {
i := pages.indexOf(begin)
j := pages.indexOf(end)
if j < len(pages) {
j++
}
return pages[i:j]
}
func (pages contiguousPages) indexOf(offset int64) int {
if len(pages) == 0 {
return 0
}
return int((offset - pages[0].offset) / pageSize)
}
func (pages contiguousPages) scan(begin, end int64, f func([]byte) bool) {
for _, p := range pages.slice(begin, end) {
if !f(p.slice(begin, end)) {
break
}
}
}
var (
_ io.ReaderAt = contiguousPages{}
_ io.WriterAt = contiguousPages{}
)
type pageRef struct {
buffer [2]*page
pages contiguousPages
offset int64
cursor int64
length uint32
once uint32
}
func (ref *pageRef) unref() {
if atomic.CompareAndSwapUint32(&ref.once, 0, 1) {
ref.pages.unref()
ref.pages.clear()
ref.pages = nil
ref.offset = 0
ref.cursor = 0
ref.length = 0
}
}
func (ref *pageRef) Len() int { return int(ref.Size() - ref.cursor) }
func (ref *pageRef) Size() int64 { return int64(ref.length) }
func (ref *pageRef) Close() error { ref.unref(); return nil }
func (ref *pageRef) String() string {
return fmt.Sprintf("[offset=%d cursor=%d length=%d]", ref.offset, ref.cursor, ref.length)
}
func (ref *pageRef) Seek(offset int64, whence int) (int64, error) {
c, err := seek(ref.cursor, int64(ref.length), offset, whence)
if err != nil {
return -1, err
}
ref.cursor = c
return c, nil
}
func (ref *pageRef) ReadByte() (byte, error) {
var c byte
var ok bool
ref.scan(ref.cursor, func(b []byte) bool {
c, ok = b[0], true
return false
})
if ok {
ref.cursor++
} else {
return 0, io.EOF
}
return c, nil
}
func (ref *pageRef) Read(b []byte) (int, error) {
if ref.cursor >= int64(ref.length) {
return 0, io.EOF
}
n, err := ref.ReadAt(b, ref.cursor)
ref.cursor += int64(n)
return n, err
}
func (ref *pageRef) ReadAt(b []byte, off int64) (int, error) {
limit := ref.offset + int64(ref.length)
off += ref.offset
if off >= limit {
return 0, io.EOF
}
if off+int64(len(b)) > limit {
b = b[:limit-off]
}
if len(b) == 0 {
return 0, nil
}
n, err := ref.pages.ReadAt(b, off)
if n == 0 && err == nil {
err = io.EOF
}
return n, err
}
func (ref *pageRef) WriteTo(w io.Writer) (wn int64, err error) {
ref.scan(ref.cursor, func(b []byte) bool {
var n int
n, err = w.Write(b)
wn += int64(n)
return err == nil
})
ref.cursor += wn
return
}
func (ref *pageRef) scan(off int64, f func([]byte) bool) {
begin := ref.offset + off
end := ref.offset + int64(ref.length)
ref.pages.scan(begin, end, f)
}
var (
_ io.Closer = (*pageRef)(nil)
_ io.Seeker = (*pageRef)(nil)
_ io.Reader = (*pageRef)(nil)
_ io.ReaderAt = (*pageRef)(nil)
_ io.WriterTo = (*pageRef)(nil)
)
type pageRefAllocator struct {
refs []pageRef
head int
size int
}
func (a *pageRefAllocator) newPageRef() *pageRef {
if a.head == len(a.refs) {
a.refs = make([]pageRef, a.size)
a.head = 0
}
ref := &a.refs[a.head]
a.head++
return ref
}
func unref(x interface{}) {
if r, _ := x.(interface{ unref() }); r != nil {
r.unref()
}
}
func seek(cursor, limit, offset int64, whence int) (int64, error) {
switch whence {
case io.SeekStart:
// absolute offset
case io.SeekCurrent:
offset = cursor + offset
case io.SeekEnd:
offset = limit - offset
default:
return -1, fmt.Errorf("seek: invalid whence value: %d", whence)
}
if offset < 0 {
offset = 0
}
if offset > limit {
offset = limit
}
return offset, nil
}
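// Worked example (cursor=4, limit=10; values chosen for illustration):
//
//	seek(4, 10, 3, io.SeekStart)   == 3   // absolute position
//	seek(4, 10, 3, io.SeekCurrent) == 7   // cursor + offset
//	seek(4, 10, 2, io.SeekEnd)     == 8   // limit - offset: measured back from the end
//	seek(4, 10, 99, io.SeekStart)  == 10  // out-of-range offsets are clamped, not rejected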
func closeBytes(b Bytes) {
if b != nil {
b.Close()
}
}
func resetBytes(b Bytes) {
if r, _ := b.(interface{ Reset() }); r != nil {
r.Reset()
}
}


@@ -0,0 +1,143 @@
package main
import (
"fmt"
"sort"
"strings"
"text/tabwriter"
)
type Cluster struct {
ClusterID string
Controller int32
Brokers map[int32]Broker
Topics map[string]Topic
}
func (c Cluster) BrokerIDs() []int32 {
brokerIDs := make([]int32, 0, len(c.Brokers))
for id := range c.Brokers {
brokerIDs = append(brokerIDs, id)
}
sort.Slice(brokerIDs, func(i, j int) bool {
return brokerIDs[i] < brokerIDs[j]
})
return brokerIDs
}
func (c Cluster) TopicNames() []string {
topicNames := make([]string, 0, len(c.Topics))
for name := range c.Topics {
topicNames = append(topicNames, name)
}
sort.Strings(topicNames)
return topicNames
}
func (c Cluster) IsZero() bool {
return c.ClusterID == "" && c.Controller == 0 && len(c.Brokers) == 0 && len(c.Topics) == 0
}
func (c Cluster) Format(w fmt.State, _ rune) {
tw := new(tabwriter.Writer)
fmt.Fprintf(w, "CLUSTER: %q\n\n", c.ClusterID)
tw.Init(w, 0, 8, 2, ' ', 0)
fmt.Fprint(tw, " BROKER\tHOST\tPORT\tRACK\tCONTROLLER\n")
for _, id := range c.BrokerIDs() {
broker := c.Brokers[id]
fmt.Fprintf(tw, " %d\t%s\t%d\t%s\t%t\n", broker.ID, broker.Host, broker.Port, broker.Rack, broker.ID == c.Controller)
}
tw.Flush()
fmt.Fprintln(w)
tw.Init(w, 0, 8, 2, ' ', 0)
fmt.Fprint(tw, " TOPIC\tPARTITIONS\tBROKERS\n")
topicNames := c.TopicNames()
brokers := make(map[int32]struct{}, len(c.Brokers))
brokerIDs := make([]int32, 0, len(c.Brokers))
for _, name := range topicNames {
topic := c.Topics[name]
for _, p := range topic.Partitions {
for _, id := range p.Replicas {
brokers[id] = struct{}{}
}
}
for id := range brokers {
brokerIDs = append(brokerIDs, id)
}
fmt.Fprintf(tw, " %s\t%d\t%s\n", topic.Name, len(topic.Partitions), formatBrokerIDs(brokerIDs, -1))
for id := range brokers {
delete(brokers, id)
}
brokerIDs = brokerIDs[:0]
}
tw.Flush()
fmt.Fprintln(w)
if w.Flag('+') {
for _, name := range topicNames {
fmt.Fprintf(w, " TOPIC: %q\n\n", name)
tw.Init(w, 0, 8, 2, ' ', 0)
fmt.Fprint(tw, " PARTITION\tREPLICAS\tISR\tOFFLINE\n")
for _, p := range c.Topics[name].Partitions {
fmt.Fprintf(tw, " %d\t%s\t%s\t%s\n", p.ID,
formatBrokerIDs(p.Replicas, -1),
formatBrokerIDs(p.ISR, p.Leader),
formatBrokerIDs(p.Offline, -1),
)
}
tw.Flush()
fmt.Fprintln(w)
}
}
}
func formatBrokerIDs(brokerIDs []int32, leader int32) string {
if len(brokerIDs) == 0 {
return ""
}
if len(brokerIDs) == 1 {
return itoa(brokerIDs[0])
}
sort.Slice(brokerIDs, func(i, j int) bool {
id1 := brokerIDs[i]
id2 := brokerIDs[j]
if id1 == leader {
return true
}
if id2 == leader {
return false
}
return id1 < id2
})
brokerNames := make([]string, len(brokerIDs))
for i, id := range brokerIDs {
brokerNames[i] = itoa(id)
}
return strings.Join(brokerNames, ",")
}
var (
_ fmt.Formatter = Cluster{}
)
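// A minimal usage sketch for the fmt.Formatter implementation above. The
// Broker and Topic field names are inferred from how Format reads them; the
// struct definitions live elsewhere in this package, so treat the literal
// below as an assumption. "%v" prints the broker and topic tables, "%+v"
// additionally prints the per-topic partition details.
func exampleClusterFormat() {
	c := Cluster{
		ClusterID:  "local",
		Controller: 1,
		Brokers:    map[int32]Broker{1: {ID: 1, Host: "localhost", Port: 9092}},
		Topics:     map[string]Topic{"events": {Name: "events"}},
	}
	fmt.Printf("%+v\n", c)
}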


@@ -0,0 +1,30 @@
package main
import (
"errors"
"github.com/segmentio/kafka-go/compress"
)
type Compression = compress.Compression
type CompressionCodec = compress.Codec
// TODO: this file should probably go away once the internals of the package
// have moved to use the protocol package.
const (
compressionCodecMask = 0x07
)
var (
errUnknownCodec = errors.New("the compression code is invalid or its codec has not been imported")
)
// resolveCodec looks up a codec by Code()
func resolveCodec(code int8) (CompressionCodec, error) {
codec := compress.Compression(code).Codec()
if codec == nil {
return nil, errUnknownCodec
}
return codec, nil
}
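// A small usage sketch: the low bits of a record batch's attributes select
// the codec, so masking with compressionCodecMask and resolving gives the
// codec's name (code 2 is Snappy in the Kafka protocol). Name comes from the
// compress.Codec interface of segmentio/kafka-go.
func exampleCodecName(attributes int8) (string, error) {
	codec, err := resolveCodec(attributes & compressionCodecMask)
	if err != nil {
		return "", err // unknown code, or a codec that was not imported
	}
	return codec.Name(), nil
}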


@@ -0,0 +1,598 @@
package main
import (
"bytes"
"encoding/binary"
"fmt"
"hash/crc32"
"io"
"io/ioutil"
"reflect"
"strings"
"sync"
"sync/atomic"
)
type discarder interface {
Discard(int) (int, error)
}
type decoder struct {
reader io.Reader
remain int
buffer [8]byte
err error
table *crc32.Table
crc32 uint32
}
func (d *decoder) Reset(r io.Reader, n int) {
d.reader = r
d.remain = n
d.buffer = [8]byte{}
d.err = nil
d.table = nil
d.crc32 = 0
}
func (d *decoder) Read(b []byte) (int, error) {
if d.err != nil {
return 0, d.err
}
if d.remain == 0 {
return 0, io.EOF
}
if len(b) > d.remain {
b = b[:d.remain]
}
n, err := d.reader.Read(b)
if n > 0 && d.table != nil {
d.crc32 = crc32.Update(d.crc32, d.table, b[:n])
}
d.remain -= n
return n, err
}
func (d *decoder) ReadByte() (byte, error) {
c := d.readByte()
return c, d.err
}
func (d *decoder) done() bool {
return d.remain == 0 || d.err != nil
}
func (d *decoder) setCRC(table *crc32.Table) {
d.table, d.crc32 = table, 0
}
func (d *decoder) decodeBool(v value) {
v.setBool(d.readBool())
}
func (d *decoder) decodeInt8(v value) {
v.setInt8(d.readInt8())
}
func (d *decoder) decodeInt16(v value) {
v.setInt16(d.readInt16())
}
func (d *decoder) decodeInt32(v value) {
v.setInt32(d.readInt32())
}
func (d *decoder) decodeInt64(v value) {
v.setInt64(d.readInt64())
}
func (d *decoder) decodeString(v value) {
v.setString(d.readString())
}
func (d *decoder) decodeCompactString(v value) {
v.setString(d.readCompactString())
}
func (d *decoder) decodeBytes(v value) {
v.setBytes(d.readBytes())
}
func (d *decoder) decodeCompactBytes(v value) {
v.setBytes(d.readCompactBytes())
}
func (d *decoder) decodeArray(v value, elemType reflect.Type, decodeElem decodeFunc) {
if n := d.readInt32(); n < 0 || n > 65535 {
v.setArray(array{})
} else {
a := makeArray(elemType, int(n))
for i := 0; i < int(n) && d.remain > 0; i++ {
decodeElem(d, a.index(i))
}
v.setArray(a)
}
}
func (d *decoder) decodeCompactArray(v value, elemType reflect.Type, decodeElem decodeFunc) {
if n := d.readUnsignedVarInt(); n < 1 || n > 65535 {
v.setArray(array{})
} else {
a := makeArray(elemType, int(n-1))
for i := 0; i < int(n-1) && d.remain > 0; i++ {
decodeElem(d, a.index(i))
}
v.setArray(a)
}
}
func (d *decoder) decodeRecordV0(v value) {
x := &RecordV0{}
x.Unknown = d.readInt8()
x.Attributes = d.readInt8()
x.TimestampDelta = d.readInt8()
x.OffsetDelta = d.readInt8()
x.KeyLength = int8(d.readVarInt())
key := strings.Builder{}
for i := 0; i < int(x.KeyLength); i++ {
key.WriteString(fmt.Sprintf("%c", d.readInt8()))
}
x.Key = key.String()
x.ValueLen = int8(d.readVarInt())
value := strings.Builder{}
for i := 0; i < int(x.ValueLen); i++ {
value.WriteString(fmt.Sprintf("%c", d.readInt8()))
}
x.Value = value.String()
headerLen := d.readInt8() / 2
headers := make([]RecordHeader, 0)
for i := 0; i < int(headerLen); i++ {
header := &RecordHeader{}
header.HeaderKeyLength = int8(d.readVarInt())
headerKey := strings.Builder{}
for j := 0; j < int(header.HeaderKeyLength); j++ {
headerKey.WriteString(fmt.Sprintf("%c", d.readInt8()))
}
header.HeaderKey = headerKey.String()
header.HeaderValueLength = int8(d.readVarInt())
headerValue := strings.Builder{}
for j := 0; j < int(header.HeaderValueLength); j++ {
headerValue.WriteString(fmt.Sprintf("%c", d.readInt8()))
}
header.Value = headerValue.String()
headers = append(headers, *header)
}
x.Headers = headers
v.val.Set(valueOf(x).val)
}
func (d *decoder) discardAll() {
d.discard(d.remain)
}
func (d *decoder) discard(n int) {
if n > d.remain {
n = d.remain
}
var err error
if r, _ := d.reader.(discarder); r != nil {
n, err = r.Discard(n)
d.remain -= n
} else {
_, err = io.Copy(ioutil.Discard, d)
}
d.setError(err)
}
func (d *decoder) read(n int) []byte {
b := make([]byte, n)
n, err := io.ReadFull(d, b)
b = b[:n]
d.setError(err)
return b
}
func (d *decoder) writeTo(w io.Writer, n int) {
limit := d.remain
if n < limit {
d.remain = n
}
c, err := io.Copy(w, d)
if int(c) < n && err == nil {
err = io.ErrUnexpectedEOF
}
d.remain = limit - int(c)
d.setError(err)
}
func (d *decoder) setError(err error) {
if d.err == nil && err != nil {
d.err = err
d.discardAll()
}
}
func (d *decoder) readFull(b []byte) bool {
n, err := io.ReadFull(d, b)
d.setError(err)
return n == len(b)
}
func (d *decoder) readByte() byte {
if d.readFull(d.buffer[:1]) {
return d.buffer[0]
}
return 0
}
func (d *decoder) readBool() bool {
return d.readByte() != 0
}
func (d *decoder) readInt8() int8 {
if d.readFull(d.buffer[:1]) {
return decodeReadInt8(d.buffer[:1])
}
return 0
}
func (d *decoder) readInt16() int16 {
if d.readFull(d.buffer[:2]) {
return decodeReadInt16(d.buffer[:2])
}
return 0
}
func (d *decoder) readInt32() int32 {
if d.readFull(d.buffer[:4]) {
return decodeReadInt32(d.buffer[:4])
}
return 0
}
func (d *decoder) readInt64() int64 {
if d.readFull(d.buffer[:8]) {
return decodeReadInt64(d.buffer[:8])
}
return 0
}
func (d *decoder) readString() string {
if n := d.readInt16(); n < 0 {
return ""
} else {
return bytesToString(d.read(int(n)))
}
}
func (d *decoder) readVarString() string {
if n := d.readVarInt(); n < 0 {
return ""
} else {
return bytesToString(d.read(int(n)))
}
}
func (d *decoder) readCompactString() string {
if n := d.readUnsignedVarInt(); n < 1 {
return ""
} else {
return bytesToString(d.read(int(n - 1)))
}
}
func (d *decoder) readBytes() []byte {
if n := d.readInt32(); n < 0 {
return nil
} else {
return d.read(int(n))
}
}
func (d *decoder) readBytesTo(w io.Writer) bool {
if n := d.readInt32(); n < 0 {
return false
} else {
d.writeTo(w, int(n))
return d.err == nil
}
}
func (d *decoder) readVarBytes() []byte {
if n := d.readVarInt(); n < 0 {
return nil
} else {
return d.read(int(n))
}
}
func (d *decoder) readVarBytesTo(w io.Writer) bool {
if n := d.readVarInt(); n < 0 {
return false
} else {
d.writeTo(w, int(n))
return d.err == nil
}
}
func (d *decoder) readCompactBytes() []byte {
if n := d.readUnsignedVarInt(); n < 1 {
return nil
} else {
return d.read(int(n - 1))
}
}
func (d *decoder) readCompactBytesTo(w io.Writer) bool {
if n := d.readUnsignedVarInt(); n < 1 {
return false
} else {
d.writeTo(w, int(n-1))
return d.err == nil
}
}
func (d *decoder) readVarInt() int64 {
n := 11 // varints are at most 11 bytes
if n > d.remain {
n = d.remain
}
x := uint64(0)
s := uint(0)
for n > 0 {
b := d.readByte()
if (b & 0x80) == 0 {
x |= uint64(b) << s
return int64(x>>1) ^ -(int64(x) & 1)
}
x |= uint64(b&0x7f) << s
s += 7
n--
}
d.setError(fmt.Errorf("cannot decode varint from input stream"))
return 0
}
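// Worked example of the zig-zag varint decoding above: the bytes 0x96 0x01
// decode to the unsigned value 150 (0x16 | 0x01<<7), and the final step
// int64(x>>1) ^ -(int64(x)&1) maps 150 to 75. Small negatives stay short on
// the wire: the single byte 0x01 decodes to -1.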
func (d *decoder) readUnsignedVarInt() uint64 {
n := 11 // varints are at most 11 bytes
if n > d.remain {
n = d.remain
}
x := uint64(0)
s := uint(0)
for n > 0 {
b := d.readByte()
if (b & 0x80) == 0 {
x |= uint64(b) << s
return x
}
x |= uint64(b&0x7f) << s
s += 7
n--
}
d.setError(fmt.Errorf("cannot decode unsigned varint from input stream"))
return 0
}
type decodeFunc func(*decoder, value)
var (
_ io.Reader = (*decoder)(nil)
_ io.ByteReader = (*decoder)(nil)
readerFrom = reflect.TypeOf((*io.ReaderFrom)(nil)).Elem()
)
func decodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) decodeFunc {
if reflect.PtrTo(typ).Implements(readerFrom) {
return readerDecodeFuncOf(typ)
}
switch typ.Kind() {
case reflect.Bool:
return (*decoder).decodeBool
case reflect.Int8:
return (*decoder).decodeInt8
case reflect.Int16:
return (*decoder).decodeInt16
case reflect.Int32:
return (*decoder).decodeInt32
case reflect.Int64:
return (*decoder).decodeInt64
case reflect.String:
return stringDecodeFuncOf(flexible, tag)
case reflect.Struct:
return structDecodeFuncOf(typ, version, flexible)
case reflect.Slice:
if typ.Elem().Kind() == reflect.Uint8 { // []byte
return bytesDecodeFuncOf(flexible, tag)
}
return arrayDecodeFuncOf(typ, version, flexible, tag)
default:
panic("unsupported type: " + typ.String())
}
}
func stringDecodeFuncOf(flexible bool, tag structTag) decodeFunc {
if flexible {
// In flexible messages, all strings are compact
return (*decoder).decodeCompactString
}
return (*decoder).decodeString
}
func bytesDecodeFuncOf(flexible bool, tag structTag) decodeFunc {
if flexible {
// In flexible messages, all arrays are compact
return (*decoder).decodeCompactBytes
}
return (*decoder).decodeBytes
}
func structDecodeFuncOf(typ reflect.Type, version int16, flexible bool) decodeFunc {
type field struct {
decode decodeFunc
index index
tagID int
}
var fields []field
taggedFields := map[int]*field{}
if typ == reflect.TypeOf(RecordV0{}) {
return (*decoder).decodeRecordV0
}
forEachStructField(typ, func(typ reflect.Type, index index, tag string) {
forEachStructTag(tag, func(tag structTag) bool {
if tag.MinVersion <= version && version <= tag.MaxVersion {
f := field{
decode: decodeFuncOf(typ, version, flexible, tag),
index: index,
tagID: tag.TagID,
}
if tag.TagID < -1 {
// Normal required field
fields = append(fields, f)
} else {
// Optional tagged field (flexible messages only)
taggedFields[tag.TagID] = &f
}
return false
}
return true
})
})
return func(d *decoder, v value) {
for i := range fields {
f := &fields[i]
f.decode(d, v.fieldByIndex(f.index))
}
if flexible {
// See https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields
// for details of tag buffers in "flexible" messages.
n := int(d.readUnsignedVarInt())
for i := 0; i < n; i++ {
tagID := int(d.readUnsignedVarInt())
size := int(d.readUnsignedVarInt())
f, ok := taggedFields[tagID]
if ok {
f.decode(d, v.fieldByIndex(f.index))
} else {
d.read(size)
}
}
}
}
}
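// For reference, the KIP-482 tag buffer decoded above is laid out on the wire
// as an unsigned varint count followed by, for each tagged field, an unsigned
// varint tag ID, an unsigned varint size and `size` bytes of data. Tag IDs
// that this struct does not declare are skipped by the d.read(size) branch.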
func arrayDecodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) decodeFunc {
elemType := typ.Elem()
elemFunc := decodeFuncOf(elemType, version, flexible, tag)
if flexible {
// In flexible messages, all arrays are compact
return func(d *decoder, v value) { d.decodeCompactArray(v, elemType, elemFunc) }
}
return func(d *decoder, v value) { d.decodeArray(v, elemType, elemFunc) }
}
func readerDecodeFuncOf(typ reflect.Type) decodeFunc {
typ = reflect.PtrTo(typ)
return func(d *decoder, v value) {
if d.err == nil {
_, err := v.iface(typ).(io.ReaderFrom).ReadFrom(d)
if err != nil {
d.setError(err)
}
}
}
}
func decodeReadInt8(b []byte) int8 {
return int8(b[0])
}
func decodeReadInt16(b []byte) int16 {
return int16(binary.BigEndian.Uint16(b))
}
func decodeReadInt32(b []byte) int32 {
return int32(binary.BigEndian.Uint32(b))
}
func decodeReadInt64(b []byte) int64 {
return int64(binary.BigEndian.Uint64(b))
}
func Unmarshal(data []byte, version int16, value interface{}) error {
typ := elemTypeOf(value)
cache, _ := unmarshalers.Load().(map[versionedType]decodeFunc)
key := versionedType{typ: typ, version: version}
decode := cache[key]
if decode == nil {
decode = decodeFuncOf(reflect.TypeOf(value).Elem(), version, false, structTag{
MinVersion: -1,
MaxVersion: -1,
TagID: -2,
Compact: true,
Nullable: true,
})
newCache := make(map[versionedType]decodeFunc, len(cache)+1)
newCache[key] = decode
for typ, fun := range cache {
newCache[typ] = fun
}
unmarshalers.Store(newCache)
}
d, _ := decoders.Get().(*decoder)
if d == nil {
d = &decoder{reader: bytes.NewReader(nil)}
}
d.remain = len(data)
r, _ := d.reader.(*bytes.Reader)
r.Reset(data)
defer func() {
r.Reset(nil)
d.Reset(r, 0)
decoders.Put(d)
}()
decode(d, valueOf(value))
return dontExpectEOF(d.err)
}
var (
decoders sync.Pool // *decoder
unmarshalers atomic.Value // map[versionedType]decodeFunc
)


@@ -0,0 +1,50 @@
package main
import "bufio"
func discardN(r *bufio.Reader, sz int, n int) (int, error) {
var err error
if n <= sz {
n, err = r.Discard(n)
} else {
n, err = r.Discard(sz)
if err == nil {
err = errShortRead
}
}
return sz - n, err
}
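// Worked example (sz is the number of bytes left in the message, and the
// reader is assumed to have them buffered): discardN(r, 10, 4) discards 4
// bytes and returns (6, nil), while discardN(r, 10, 12) can only discard the
// remaining 10 bytes and returns (0, errShortRead).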
func discardInt8(r *bufio.Reader, sz int) (int, error) {
return discardN(r, sz, 1)
}
func discardInt16(r *bufio.Reader, sz int) (int, error) {
return discardN(r, sz, 2)
}
func discardInt32(r *bufio.Reader, sz int) (int, error) {
return discardN(r, sz, 4)
}
func discardInt64(r *bufio.Reader, sz int) (int, error) {
return discardN(r, sz, 8)
}
func discardString(r *bufio.Reader, sz int) (int, error) {
return readStringWith(r, sz, func(r *bufio.Reader, sz int, n int) (int, error) {
if n < 0 {
return sz, nil
}
return discardN(r, sz, n)
})
}
func discardBytes(r *bufio.Reader, sz int) (int, error) {
return readBytesWith(r, sz, func(r *bufio.Reader, sz int, n int) (int, error) {
if n < 0 {
return sz, nil
}
return discardN(r, sz, n)
})
}


@@ -0,0 +1,645 @@
package main
import (
"bytes"
"encoding/binary"
"fmt"
"hash/crc32"
"io"
"reflect"
"sync"
"sync/atomic"
)
type encoder struct {
writer io.Writer
err error
table *crc32.Table
crc32 uint32
buffer [32]byte
}
type encoderChecksum struct {
reader io.Reader
encoder *encoder
}
func (e *encoderChecksum) Read(b []byte) (int, error) {
n, err := e.reader.Read(b)
if n > 0 {
e.encoder.update(b[:n])
}
return n, err
}
func (e *encoder) Reset(w io.Writer) {
e.writer = w
e.err = nil
e.table = nil
e.crc32 = 0
e.buffer = [32]byte{}
}
func (e *encoder) ReadFrom(r io.Reader) (int64, error) {
if e.table != nil {
r = &encoderChecksum{
reader: r,
encoder: e,
}
}
return io.Copy(e.writer, r)
}
func (e *encoder) Write(b []byte) (int, error) {
if e.err != nil {
return 0, e.err
}
n, err := e.writer.Write(b)
if n > 0 {
e.update(b[:n])
}
if err != nil {
e.err = err
}
return n, err
}
func (e *encoder) WriteByte(b byte) error {
e.buffer[0] = b
_, err := e.Write(e.buffer[:1])
return err
}
func (e *encoder) WriteString(s string) (int, error) {
// This implementation is an optimization to avoid the heap allocation that
// would occur when converting the string to a []byte to call crc32.Update.
//
// Strings are rarely long in the kafka protocol, so the use of a 32 byte
// buffer is a good compromise between keeping the encoder value small and
// limiting the number of calls to Write.
//
// We introduced this optimization because memory profiles on the benchmarks
// showed that most heap allocations were caused by this code path.
n := 0
for len(s) != 0 {
c := copy(e.buffer[:], s)
w, err := e.Write(e.buffer[:c])
n += w
if err != nil {
return n, err
}
s = s[c:]
}
return n, nil
}
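// For example, a 70 byte string is flushed through the scratch buffer in
// three Write calls of 32, 32 and 6 bytes, keeping the CRC32 state up to
// date without allocating a []byte copy of the string.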
func (e *encoder) setCRC(table *crc32.Table) {
e.table, e.crc32 = table, 0
}
func (e *encoder) update(b []byte) {
if e.table != nil {
e.crc32 = crc32.Update(e.crc32, e.table, b)
}
}
func (e *encoder) encodeBool(v value) {
b := int8(0)
if v.bool() {
b = 1
}
e.writeInt8(b)
}
func (e *encoder) encodeInt8(v value) {
e.writeInt8(v.int8())
}
func (e *encoder) encodeInt16(v value) {
e.writeInt16(v.int16())
}
func (e *encoder) encodeInt32(v value) {
e.writeInt32(v.int32())
}
func (e *encoder) encodeInt64(v value) {
e.writeInt64(v.int64())
}
func (e *encoder) encodeString(v value) {
e.writeString(v.string())
}
func (e *encoder) encodeVarString(v value) {
e.writeVarString(v.string())
}
func (e *encoder) encodeCompactString(v value) {
e.writeCompactString(v.string())
}
func (e *encoder) encodeNullString(v value) {
e.writeNullString(v.string())
}
func (e *encoder) encodeVarNullString(v value) {
e.writeVarNullString(v.string())
}
func (e *encoder) encodeCompactNullString(v value) {
e.writeCompactNullString(v.string())
}
func (e *encoder) encodeBytes(v value) {
e.writeBytes(v.bytes())
}
func (e *encoder) encodeVarBytes(v value) {
e.writeVarBytes(v.bytes())
}
func (e *encoder) encodeCompactBytes(v value) {
e.writeCompactBytes(v.bytes())
}
func (e *encoder) encodeNullBytes(v value) {
e.writeNullBytes(v.bytes())
}
func (e *encoder) encodeVarNullBytes(v value) {
e.writeVarNullBytes(v.bytes())
}
func (e *encoder) encodeCompactNullBytes(v value) {
e.writeCompactNullBytes(v.bytes())
}
func (e *encoder) encodeArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
a := v.array(elemType)
n := a.length()
e.writeInt32(int32(n))
for i := 0; i < n; i++ {
encodeElem(e, a.index(i))
}
}
func (e *encoder) encodeCompactArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
a := v.array(elemType)
n := a.length()
e.writeUnsignedVarInt(uint64(n + 1))
for i := 0; i < n; i++ {
encodeElem(e, a.index(i))
}
}
func (e *encoder) encodeNullArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
a := v.array(elemType)
if a.isNil() {
e.writeInt32(-1)
return
}
n := a.length()
e.writeInt32(int32(n))
for i := 0; i < n; i++ {
encodeElem(e, a.index(i))
}
}
func (e *encoder) encodeCompactNullArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
a := v.array(elemType)
if a.isNil() {
e.writeUnsignedVarInt(0)
return
}
n := a.length()
e.writeUnsignedVarInt(uint64(n + 1))
for i := 0; i < n; i++ {
encodeElem(e, a.index(i))
}
}
func (e *encoder) writeInt8(i int8) {
writeInt8(e.buffer[:1], i)
e.Write(e.buffer[:1])
}
func (e *encoder) writeInt16(i int16) {
writeInt16(e.buffer[:2], i)
e.Write(e.buffer[:2])
}
func (e *encoder) writeInt32(i int32) {
writeInt32(e.buffer[:4], i)
e.Write(e.buffer[:4])
}
func (e *encoder) writeInt64(i int64) {
writeInt64(e.buffer[:8], i)
e.Write(e.buffer[:8])
}
func (e *encoder) writeString(s string) {
e.writeInt16(int16(len(s)))
e.WriteString(s)
}
func (e *encoder) writeVarString(s string) {
e.writeVarInt(int64(len(s)))
e.WriteString(s)
}
func (e *encoder) writeCompactString(s string) {
e.writeUnsignedVarInt(uint64(len(s)) + 1)
e.WriteString(s)
}
func (e *encoder) writeNullString(s string) {
if s == "" {
e.writeInt16(-1)
} else {
e.writeInt16(int16(len(s)))
e.WriteString(s)
}
}
func (e *encoder) writeVarNullString(s string) {
if s == "" {
e.writeVarInt(-1)
} else {
e.writeVarInt(int64(len(s)))
e.WriteString(s)
}
}
func (e *encoder) writeCompactNullString(s string) {
if s == "" {
e.writeUnsignedVarInt(0)
} else {
e.writeUnsignedVarInt(uint64(len(s)) + 1)
e.WriteString(s)
}
}
func (e *encoder) writeBytes(b []byte) {
e.writeInt32(int32(len(b)))
e.Write(b)
}
func (e *encoder) writeVarBytes(b []byte) {
e.writeVarInt(int64(len(b)))
e.Write(b)
}
func (e *encoder) writeCompactBytes(b []byte) {
e.writeUnsignedVarInt(uint64(len(b)) + 1)
e.Write(b)
}
func (e *encoder) writeNullBytes(b []byte) {
if b == nil {
e.writeInt32(-1)
} else {
e.writeInt32(int32(len(b)))
e.Write(b)
}
}
func (e *encoder) writeVarNullBytes(b []byte) {
if b == nil {
e.writeVarInt(-1)
} else {
e.writeVarInt(int64(len(b)))
e.Write(b)
}
}
func (e *encoder) writeCompactNullBytes(b []byte) {
if b == nil {
e.writeUnsignedVarInt(0)
} else {
e.writeUnsignedVarInt(uint64(len(b)) + 1)
e.Write(b)
}
}
func (e *encoder) writeBytesFrom(b Bytes) error {
size := int64(b.Len())
e.writeInt32(int32(size))
n, err := io.Copy(e, b)
if err == nil && n != size {
err = fmt.Errorf("size of bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
}
return err
}
func (e *encoder) writeNullBytesFrom(b Bytes) error {
if b == nil {
e.writeInt32(-1)
return nil
} else {
size := int64(b.Len())
e.writeInt32(int32(size))
n, err := io.Copy(e, b)
if err == nil && n != size {
err = fmt.Errorf("size of nullable bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
}
return err
}
}
func (e *encoder) writeVarNullBytesFrom(b Bytes) error {
if b == nil {
e.writeVarInt(-1)
return nil
} else {
size := int64(b.Len())
e.writeVarInt(size)
n, err := io.Copy(e, b)
if err == nil && n != size {
err = fmt.Errorf("size of nullable bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
}
return err
}
}
func (e *encoder) writeCompactNullBytesFrom(b Bytes) error {
if b == nil {
e.writeUnsignedVarInt(0)
return nil
} else {
size := int64(b.Len())
e.writeUnsignedVarInt(uint64(size + 1))
n, err := io.Copy(e, b)
if err == nil && n != size {
err = fmt.Errorf("size of compact nullable bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
}
return err
}
}
func (e *encoder) writeVarInt(i int64) {
e.writeUnsignedVarInt(uint64((i << 1) ^ (i >> 63)))
}
func (e *encoder) writeUnsignedVarInt(i uint64) {
b := e.buffer[:]
n := 0
for i >= 0x80 && n < len(b) {
b[n] = byte(i) | 0x80
i >>= 7
n++
}
if n < len(b) {
b[n] = byte(i)
n++
}
e.Write(b[:n])
}
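// Worked example of the zig-zag encoding above: writeVarInt(75) maps 75 to
// the unsigned value 150 via (i<<1)^(i>>63) and emits the bytes 0x96 0x01,
// mirroring the decoder; writeVarInt(-1) maps to 1 and emits the single
// byte 0x01.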
type encodeFunc func(*encoder, value)
var (
_ io.ReaderFrom = (*encoder)(nil)
_ io.Writer = (*encoder)(nil)
_ io.ByteWriter = (*encoder)(nil)
_ io.StringWriter = (*encoder)(nil)
writerTo = reflect.TypeOf((*io.WriterTo)(nil)).Elem()
)
func encodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) encodeFunc {
if reflect.PtrTo(typ).Implements(writerTo) {
return writerEncodeFuncOf(typ)
}
switch typ.Kind() {
case reflect.Bool:
return (*encoder).encodeBool
case reflect.Int8:
return (*encoder).encodeInt8
case reflect.Int16:
return (*encoder).encodeInt16
case reflect.Int32:
return (*encoder).encodeInt32
case reflect.Int64:
return (*encoder).encodeInt64
case reflect.String:
return stringEncodeFuncOf(flexible, tag)
case reflect.Struct:
return structEncodeFuncOf(typ, version, flexible)
case reflect.Slice:
if typ.Elem().Kind() == reflect.Uint8 { // []byte
return bytesEncodeFuncOf(flexible, tag)
}
return arrayEncodeFuncOf(typ, version, flexible, tag)
default:
panic("unsupported type: " + typ.String())
}
}
func stringEncodeFuncOf(flexible bool, tag structTag) encodeFunc {
switch {
case flexible && tag.Nullable:
// In flexible messages, all strings are compact
return (*encoder).encodeCompactNullString
case flexible:
// In flexible messages, all strings are compact
return (*encoder).encodeCompactString
case tag.Nullable:
return (*encoder).encodeNullString
default:
return (*encoder).encodeString
}
}
func bytesEncodeFuncOf(flexible bool, tag structTag) encodeFunc {
switch {
case flexible && tag.Nullable:
// In flexible messages, all arrays are compact
return (*encoder).encodeCompactNullBytes
case flexible:
// In flexible messages, all arrays are compact
return (*encoder).encodeCompactBytes
case tag.Nullable:
return (*encoder).encodeNullBytes
default:
return (*encoder).encodeBytes
}
}
func structEncodeFuncOf(typ reflect.Type, version int16, flexible bool) encodeFunc {
type field struct {
encode encodeFunc
index index
tagID int
}
var fields []field
var taggedFields []field
forEachStructField(typ, func(typ reflect.Type, index index, tag string) {
if typ.Size() != 0 { // skip struct{}
forEachStructTag(tag, func(tag structTag) bool {
if tag.MinVersion <= version && version <= tag.MaxVersion {
f := field{
encode: encodeFuncOf(typ, version, flexible, tag),
index: index,
tagID: tag.TagID,
}
if tag.TagID < -1 {
// Normal required field
fields = append(fields, f)
} else {
// Optional tagged field (flexible messages only)
taggedFields = append(taggedFields, f)
}
return false
}
return true
})
}
})
return func(e *encoder, v value) {
for i := range fields {
f := &fields[i]
f.encode(e, v.fieldByIndex(f.index))
}
if flexible {
// See https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields
// for details of tag buffers in "flexible" messages.
e.writeUnsignedVarInt(uint64(len(taggedFields)))
for i := range taggedFields {
f := &taggedFields[i]
e.writeUnsignedVarInt(uint64(f.tagID))
buf := &bytes.Buffer{}
se := &encoder{writer: buf}
f.encode(se, v.fieldByIndex(f.index))
e.writeUnsignedVarInt(uint64(buf.Len()))
e.Write(buf.Bytes())
}
}
}
}
func arrayEncodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) encodeFunc {
elemType := typ.Elem()
elemFunc := encodeFuncOf(elemType, version, flexible, tag)
switch {
case flexible && tag.Nullable:
// In flexible messages, all arrays are compact
return func(e *encoder, v value) { e.encodeCompactNullArray(v, elemType, elemFunc) }
case flexible:
// In flexible messages, all arrays are compact
return func(e *encoder, v value) { e.encodeCompactArray(v, elemType, elemFunc) }
case tag.Nullable:
return func(e *encoder, v value) { e.encodeNullArray(v, elemType, elemFunc) }
default:
return func(e *encoder, v value) { e.encodeArray(v, elemType, elemFunc) }
}
}
func writerEncodeFuncOf(typ reflect.Type) encodeFunc {
typ = reflect.PtrTo(typ)
return func(e *encoder, v value) {
// Optimization to write directly into the buffer when the encoder
// does not need to compute a crc32 checksum.
w := io.Writer(e)
if e.table == nil {
w = e.writer
}
_, err := v.iface(typ).(io.WriterTo).WriteTo(w)
if err != nil {
e.err = err
}
}
}
func writeInt8(b []byte, i int8) {
b[0] = byte(i)
}
func writeInt16(b []byte, i int16) {
binary.BigEndian.PutUint16(b, uint16(i))
}
func writeInt32(b []byte, i int32) {
binary.BigEndian.PutUint32(b, uint32(i))
}
func writeInt64(b []byte, i int64) {
binary.BigEndian.PutUint64(b, uint64(i))
}
func Marshal(version int16, value interface{}) ([]byte, error) {
typ := typeOf(value)
cache, _ := marshalers.Load().(map[versionedType]encodeFunc)
key := versionedType{typ: typ, version: version}
encode := cache[key]
if encode == nil {
encode = encodeFuncOf(reflect.TypeOf(value), version, false, structTag{
MinVersion: -1,
MaxVersion: -1,
TagID: -2,
Compact: true,
Nullable: true,
})
newCache := make(map[versionedType]encodeFunc, len(cache)+1)
newCache[key] = encode
for typ, fun := range cache {
newCache[typ] = fun
}
marshalers.Store(newCache)
}
e, _ := encoders.Get().(*encoder)
if e == nil {
e = &encoder{writer: new(bytes.Buffer)}
}
b, _ := e.writer.(*bytes.Buffer)
defer func() {
b.Reset()
e.Reset(b)
encoders.Put(e)
}()
encode(e, nonAddressableValueOf(value))
if e.err != nil {
return nil, e.err
}
buf := b.Bytes()
out := make([]byte, len(buf))
copy(out, buf)
return out, nil
}
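// A round-trip sketch for Marshal and Unmarshal. The `kafka:"min=vN,max=vN"`
// field tag syntax is an assumption here (forEachStructTag, which parses the
// tags, lives elsewhere in this package), but the call shapes follow from the
// two functions: Marshal takes a value, Unmarshal takes a pointer.
type exampleHeader struct {
	CorrelationID int32  `kafka:"min=v0,max=v2"`
	ClientID      string `kafka:"min=v0,max=v2"`
}

func exampleRoundTrip() error {
	in := exampleHeader{CorrelationID: 42, ClientID: "mizu"}
	data, err := Marshal(2, in)
	if err != nil {
		return err
	}
	var out exampleHeader
	return Unmarshal(data, 2, &out)
}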
type versionedType struct {
typ _type
version int16
}
var (
encoders sync.Pool // *encoder
marshalers atomic.Value // map[versionedType]encodeFunc
)


@@ -0,0 +1,91 @@
package main
import (
"fmt"
)
// Error represents client-side protocol errors.
type Error string
func (e Error) Error() string { return string(e) }
func Errorf(msg string, args ...interface{}) Error {
return Error(fmt.Sprintf(msg, args...))
}
const (
// ErrNoTopic is returned when a request needs to be sent to a specific
// topic, but the client did not find it in the cluster metadata.
ErrNoTopic Error = "topic not found"
// ErrNoPartition is returned when a request needs to be sent to a specific
// partition, but the client did not find it in the cluster metadata.
ErrNoPartition Error = "topic partition not found"
// ErrNoLeader is returned when a request needs to be sent to a partition
// leader, but the client could not determine what the leader was at this
// time.
ErrNoLeader Error = "topic partition has no leader"
// ErrNoRecord is returned when attempting to write a message containing an
// empty record set (which kafka forbids).
//
// We handle this case client-side because kafka will close the connection
// that it received an empty produce request on, causing all concurrent
// requests to be aborted.
ErrNoRecord Error = "record set contains no records"
// ErrNoReset is returned by ResetRecordReader when the record reader does
// not support being reset.
ErrNoReset Error = "record sequence does not support reset"
)
type TopicError struct {
Topic string
Err error
}
func NewTopicError(topic string, err error) *TopicError {
return &TopicError{Topic: topic, Err: err}
}
func NewErrNoTopic(topic string) *TopicError {
return NewTopicError(topic, ErrNoTopic)
}
func (e *TopicError) Error() string {
return fmt.Sprintf("%v (topic=%q)", e.Err, e.Topic)
}
func (e *TopicError) Unwrap() error {
return e.Err
}
type TopicPartitionError struct {
Topic string
Partition int32
Err error
}
func NewTopicPartitionError(topic string, partition int32, err error) *TopicPartitionError {
return &TopicPartitionError{
Topic: topic,
Partition: partition,
Err: err,
}
}
func NewErrNoPartition(topic string, partition int32) *TopicPartitionError {
return NewTopicPartitionError(topic, partition, ErrNoPartition)
}
func NewErrNoLeader(topic string, partition int32) *TopicPartitionError {
return NewTopicPartitionError(topic, partition, ErrNoLeader)
}
func (e *TopicPartitionError) Error() string {
return fmt.Sprintf("%v (topic=%q partition=%d)", e.Err, e.Topic, e.Partition)
}
func (e *TopicPartitionError) Unwrap() error {
return e.Err
}
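// A minimal usage sketch (topic and partition values are illustrative): the
// constructors attach context while the wrapped sentinel stays matchable, so
// errors.Is(err, ErrNoLeader) from the standard library would also report
// true here thanks to Unwrap; the check below only uses what this file
// defines.
func exampleNoLeader() bool {
	var err error = NewErrNoLeader("events", 0)
	tp, ok := err.(*TopicPartitionError)
	return ok && tp.Unwrap() == ErrNoLeader
}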


@@ -0,0 +1,10 @@
module github.com/up9inc/mizu/tap/extensions/kafka
go 1.16
require (
github.com/segmentio/kafka-go v0.4.17
github.com/up9inc/mizu/tap/api v0.0.0
)
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../../api


@@ -0,0 +1,35 @@
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 h1:YEetp8/yCZMuEPMUDHG0CW/brkkEp8mzqk2+ODEitlw=
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/frankban/quicktest v1.11.3 h1:8sXhOn0uLys67V8EsXLc6eszDs8VXWxL3iRvebPhedY=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/go-cmp v0.5.4 h1:L8R9j+yAqZuZjsqh/z+F1NCffTKKLShY6zXTItVIZ8M=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/klauspost/compress v1.9.8 h1:VMAMUUOh+gaxKTMk+zqbjsSjsIcUcL/LF4o63i82QyA=
github.com/klauspost/compress v1.9.8/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A=
github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/pierrec/lz4 v2.6.0+incompatible h1:Ix9yFKn1nSPBLFl/yZknTp8TU5G4Ps0JDmguYK6iH1A=
github.com/pierrec/lz4 v2.6.0+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/segmentio/kafka-go v0.4.17 h1:IyqRstL9KUTDb3kyGPOOa5VffokKWSEzN6geJ92dSDY=
github.com/segmentio/kafka-go v0.4.17/go.mod h1:19+Eg7KwrNKy/PFhiIthEPkO8k+ac7/ZYXwYM9Df10w=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c/go.mod h1:lB8K/P019DLNhemzwFU4jHLhdvlE6uDZjXFejJXr49I=
github.com/xdg/stringprep v1.0.0/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190506204251-e1dfcc566284/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@@ -0,0 +1,650 @@
package main
import (
"encoding/json"
"fmt"
"strconv"
"github.com/up9inc/mizu/tap/api"
)
type KafkaPayload struct {
Data interface{}
}
type KafkaPayloader interface {
MarshalJSON() ([]byte, error)
}
func (h KafkaPayload) MarshalJSON() ([]byte, error) {
return json.Marshal(h.Data)
}
type KafkaWrapper struct {
Method string `json:"method"`
Url string `json:"url"`
Details interface{} `json:"details"`
}
func representRequestHeader(data map[string]interface{}, rep []interface{}) []interface{} {
requestHeader, _ := json.Marshal([]map[string]string{
{
"name": "ApiKey",
"value": apiNames[int(data["ApiKey"].(float64))],
},
{
"name": "ApiVersion",
"value": fmt.Sprintf("%d", int(data["ApiVersion"].(float64))),
},
{
"name": "Client ID",
"value": data["ClientID"].(string),
},
{
"name": "Correlation ID",
"value": fmt.Sprintf("%d", int(data["CorrelationID"].(float64))),
},
{
"name": "Size",
"value": fmt.Sprintf("%d", int(data["Size"].(float64))),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Request Header",
"data": string(requestHeader),
})
return rep
}
func representResponseHeader(data map[string]interface{}, rep []interface{}) []interface{} {
requestHeader, _ := json.Marshal([]map[string]string{
{
"name": "Correlation ID",
"value": fmt.Sprintf("%d", int(data["CorrelationID"].(float64))),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Response Header",
"data": string(requestHeader),
})
return rep
}
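// Both helpers above append entries of the shape
//
//	{"type": api.TABLE, "title": "...", "data": "<JSON array of {name, value} rows>"}
//
// where "data" is itself a JSON-encoded list of rows, so for instance
// representResponseHeader contributes a single "Response Header" table whose
// only row is the correlation ID.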
func representMetadataRequest(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representRequestHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
topics := ""
allowAutoTopicCreation := ""
includeClusterAuthorizedOperations := ""
includeTopicAuthorizedOperations := ""
if payload["Topics"] != nil {
x, _ := json.Marshal(payload["Topics"].([]interface{}))
topics = string(x)
}
if payload["AllowAutoTopicCreation"] != nil {
allowAutoTopicCreation = strconv.FormatBool(payload["AllowAutoTopicCreation"].(bool))
}
if payload["IncludeClusterAuthorizedOperations"] != nil {
includeClusterAuthorizedOperations = strconv.FormatBool(payload["IncludeClusterAuthorizedOperations"].(bool))
}
if payload["IncludeTopicAuthorizedOperations"] != nil {
includeTopicAuthorizedOperations = strconv.FormatBool(payload["IncludeTopicAuthorizedOperations"].(bool))
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Topics",
"value": topics,
},
{
"name": "Allow Auto Topic Creation",
"value": allowAutoTopicCreation,
},
{
"name": "Include Cluster Authorized Operations",
"value": includeClusterAuthorizedOperations,
},
{
"name": "Include Topic Authorized Operations",
"value": includeTopicAuthorizedOperations,
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representMetadataResponse(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representResponseHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
topics, _ := json.Marshal(payload["Topics"].([]interface{}))
brokers, _ := json.Marshal(payload["Brokers"].([]interface{}))
controllerID := ""
clusterID := ""
throttleTimeMs := ""
clusterAuthorizedOperations := ""
if payload["ControllerID"] != nil {
controllerID = fmt.Sprintf("%d", int(payload["ControllerID"].(float64)))
}
if payload["ClusterID"] != nil {
clusterID = payload["ClusterID"].(string)
}
if payload["ThrottleTimeMs"] != nil {
throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
}
if payload["ClusterAuthorizedOperations"] != nil {
clusterAuthorizedOperations = fmt.Sprintf("%d", int(payload["ClusterAuthorizedOperations"].(float64)))
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Throttle Time (ms)",
"value": throttleTimeMs,
},
{
"name": "Brokers",
"value": string(brokers),
},
{
"name": "Cluster ID",
"value": clusterID,
},
{
"name": "Controller ID",
"value": controllerID,
},
{
"name": "Topics",
"value": string(topics),
},
{
"name": "Cluster Authorized Operations",
"value": clusterAuthorizedOperations,
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representApiVersionsRequest(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representRequestHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
clientSoftwareName := ""
clientSoftwareVersion := ""
if payload["ClientSoftwareName"] != nil {
clientSoftwareName = payload["ClientSoftwareName"].(string)
}
if payload["ClientSoftwareVersion"] != nil {
clientSoftwareVersion = payload["ClientSoftwareVersion"].(string)
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Client Software Name",
"value": clientSoftwareName,
},
{
"name": "Client Software Version",
"value": clientSoftwareVersion,
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representApiVersionsResponse(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representResponseHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
apiKeys := ""
if payload["ApiKeys"] != nil {
x, _ := json.Marshal(payload["ApiKeys"].([]interface{}))
apiKeys = string(x)
}
throttleTimeMs := ""
if payload["ThrottleTimeMs"] != nil {
throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Error Code",
"value": fmt.Sprintf("%d", int(payload["ErrorCode"].(float64))),
},
{
"name": "ApiKeys",
"value": apiKeys,
},
{
"name": "Throttle Time (ms)",
"value": throttleTimeMs,
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representProduceRequest(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representRequestHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
topicData := ""
_topicData := payload["TopicData"]
if _topicData != nil {
x, _ := json.Marshal(_topicData.([]interface{}))
topicData = string(x)
}
transactionalID := ""
if payload["TransactionalID"] != nil {
transactionalID = payload["TransactionalID"].(string)
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Transactional ID",
"value": transactionalID,
},
{
"name": "Required Acknowledgements",
"value": fmt.Sprintf("%d", int(payload["RequiredAcks"].(float64))),
},
{
"name": "Timeout",
"value": fmt.Sprintf("%d", int(payload["Timeout"].(float64))),
},
{
"name": "Topic Data",
"value": topicData,
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representProduceResponse(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representResponseHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
responses, _ := json.Marshal(payload["Responses"].([]interface{}))
throttleTimeMs := ""
if payload["ThrottleTimeMs"] != nil {
throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Responses",
"value": string(responses),
},
{
"name": "Throttle Time (ms)",
"value": throttleTimeMs,
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representFetchRequest(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representRequestHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
topics, _ := json.Marshal(payload["Topics"].([]interface{}))
replicaId := ""
if payload["ReplicaId"] != nil {
replicaId = fmt.Sprintf("%d", int(payload["ReplicaId"].(float64)))
}
maxBytes := ""
if payload["MaxBytes"] != nil {
maxBytes = fmt.Sprintf("%d", int(payload["MaxBytes"].(float64)))
}
isolationLevel := ""
if payload["IsolationLevel"] != nil {
isolationLevel = fmt.Sprintf("%d", int(payload["IsolationLevel"].(float64)))
}
sessionId := ""
if payload["SessionId"] != nil {
sessionId = fmt.Sprintf("%d", int(payload["SessionId"].(float64)))
}
sessionEpoch := ""
if payload["SessionEpoch"] != nil {
sessionEpoch = fmt.Sprintf("%d", int(payload["SessionEpoch"].(float64)))
}
forgottenTopicsData := ""
if payload["ForgottenTopicsData"] != nil {
x, _ := json.Marshal(payload["ForgottenTopicsData"].(map[string]interface{}))
forgottenTopicsData = string(x)
}
rackId := ""
if payload["RackId"] != nil {
rackId = fmt.Sprintf("%d", int(payload["RackId"].(float64)))
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Replica ID",
"value": replicaId,
},
{
"name": "Maximum Wait (ms)",
"value": fmt.Sprintf("%d", int(payload["MaxWaitMs"].(float64))),
},
{
"name": "Minimum Bytes",
"value": fmt.Sprintf("%d", int(payload["MinBytes"].(float64))),
},
{
"name": "Maximum Bytes",
"value": maxBytes,
},
{
"name": "Isolation Level",
"value": isolationLevel,
},
{
"name": "Session ID",
"value": sessionId,
},
{
"name": "Session Epoch",
"value": sessionEpoch,
},
{
"name": "Topics",
"value": string(topics),
},
{
"name": "Forgotten Topics Data",
"value": forgottenTopicsData,
},
{
"name": "Rack ID",
"value": rackId,
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representFetchResponse(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representResponseHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
responses, _ := json.Marshal(payload["Responses"].([]interface{}))
throttleTimeMs := ""
if payload["ThrottleTimeMs"] != nil {
throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
}
errorCode := ""
if payload["ErrorCode"] != nil {
errorCode = fmt.Sprintf("%d", int(payload["ErrorCode"].(float64)))
}
sessionId := ""
if payload["SessionId"] != nil {
sessionId = fmt.Sprintf("%d", int(payload["SessionId"].(float64)))
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Throttle Time (ms)",
"value": throttleTimeMs,
},
{
"name": "Error Code",
"value": errorCode,
},
{
"name": "Session ID",
"value": sessionId,
},
{
"name": "Responses",
"value": string(responses),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representListOffsetsRequest(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representRequestHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
topics, _ := json.Marshal(payload["Topics"].([]interface{}))
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Replica ID",
"value": fmt.Sprintf("%d", int(payload["ReplicaId"].(float64))),
},
{
"name": "Topics",
"value": string(topics),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representListOffsetsResponse(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representResponseHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
topics, _ := json.Marshal(payload["Topics"].([]interface{}))
throttleTimeMs := ""
if payload["ThrottleTimeMs"] != nil {
throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Throttle Time (ms)",
"value": throttleTimeMs,
},
{
"name": "Topics",
"value": string(topics),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representCreateTopicsRequest(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representRequestHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
topics, _ := json.Marshal(payload["Topics"].([]interface{}))
validateOnly := ""
if payload["ValidateOnly"] != nil {
validateOnly = strconv.FormatBool(payload["ValidateOnly"].(bool))
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Topics",
"value": string(topics),
},
{
"name": "Timeout (ms)",
"value": fmt.Sprintf("%d", int(payload["TimeoutMs"].(float64))),
},
{
"name": "Validate Only",
"value": validateOnly,
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representCreateTopicsResponse(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representResponseHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
topics, _ := json.Marshal(payload["Topics"].([]interface{}))
throttleTimeMs := ""
if payload["ThrottleTimeMs"] != nil {
throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Throttle Time (ms)",
"value": throttleTimeMs,
},
{
"name": "Topics",
"value": string(topics),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representDeleteTopicsRequest(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representRequestHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
topics := ""
if payload["Topics"] != nil {
x, _ := json.Marshal(payload["Topics"].([]interface{}))
topics = string(x)
}
topicNames := ""
if payload["TopicNames"] != nil {
x, _ := json.Marshal(payload["TopicNames"].([]interface{}))
topicNames = string(x)
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "TopicNames",
"value": string(topicNames),
},
{
"name": "Topics",
"value": string(topics),
},
{
"name": "Timeout (ms)",
"value": fmt.Sprintf("%d", int(payload["TimeoutMs"].(float64))),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}
func representDeleteTopicsResponse(data map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
rep = representResponseHeader(data, rep)
payload := data["Payload"].(map[string]interface{})
responses, _ := json.Marshal(payload["Responses"].([]interface{}))
throttleTimeMs := ""
if payload["ThrottleTimeMs"] != nil {
throttleTimeMs = fmt.Sprintf("%d", int(payload["ThrottleTimeMs"].(float64)))
}
repPayload, _ := json.Marshal([]map[string]string{
{
"name": "Throttle Time (ms)",
"value": throttleTimeMs,
},
{
"name": "Responses",
"value": string(responses),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Payload",
"data": string(repPayload),
})
return rep
}


@@ -0,0 +1,236 @@
package main
import (
"bufio"
"encoding/json"
"fmt"
"log"
"time"
"github.com/up9inc/mizu/tap/api"
)
var _protocol api.Protocol = api.Protocol{
Name: "kafka",
LongName: "Apache Kafka Protocol",
Abbreviation: "KAFKA",
Version: "12",
BackgroundColor: "#000000",
ForegroundColor: "#ffffff",
FontSize: 11,
ReferenceLink: "https://kafka.apache.org/protocol",
Ports: []string{"9092"},
Priority: 2,
}
func init() {
log.Println("Initializing Kafka extension...")
}
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = _protocol
extension.MatcherMap = reqResMatcher.openMessagesMap
}
func (d dissecting) Ping() {
log.Printf("pong %s\n", _protocol.Name)
}
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter) error {
for {
if isClient {
_, _, err := ReadRequest(b, tcpID, superTimer)
if err != nil {
return err
}
} else {
err := ReadResponse(b, tcpID, superTimer, emitter)
if err != nil {
return err
}
}
}
}
func (d dissecting) Analyze(item *api.OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *api.MizuEntry {
request := item.Pair.Request.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
service := "kafka"
if resolvedDestination != "" {
service = resolvedDestination
} else if resolvedSource != "" {
service = resolvedSource
}
apiKey := ApiKey(reqDetails["ApiKey"].(float64))
summary := ""
switch apiKey {
case Metadata:
_topics := reqDetails["Payload"].(map[string]interface{})["Topics"]
if _topics == nil {
break
}
topics := _topics.([]interface{})
for _, topic := range topics {
summary += fmt.Sprintf("%s, ", topic.(map[string]interface{})["Name"].(string))
}
if len(summary) > 0 {
summary = summary[:len(summary)-2]
}
case ApiVersions:
summary = reqDetails["ClientID"].(string)
case Produce:
_topics := reqDetails["Payload"].(map[string]interface{})["TopicData"]
if _topics == nil {
break
}
topics := _topics.([]interface{})
for _, topic := range topics {
summary += fmt.Sprintf("%s, ", topic.(map[string]interface{})["Topic"].(string))
}
if len(summary) > 0 {
summary = summary[:len(summary)-2]
}
case Fetch:
topics := reqDetails["Payload"].(map[string]interface{})["Topics"].([]interface{})
for _, topic := range topics {
summary += fmt.Sprintf("%s, ", topic.(map[string]interface{})["Topic"].(string))
}
if len(summary) > 0 {
summary = summary[:len(summary)-2]
}
case ListOffsets:
topics := reqDetails["Payload"].(map[string]interface{})["Topics"].([]interface{})
for _, topic := range topics {
summary += fmt.Sprintf("%s, ", topic.(map[string]interface{})["Name"].(string))
}
if len(summary) > 0 {
summary = summary[:len(summary)-2]
}
case CreateTopics:
topics := reqDetails["Payload"].(map[string]interface{})["Topics"].([]interface{})
for _, topic := range topics {
summary += fmt.Sprintf("%s, ", topic.(map[string]interface{})["Name"].(string))
}
if len(summary) > 0 {
summary = summary[:len(summary)-2]
}
case DeleteTopics:
topicNames := reqDetails["TopicNames"].([]string)
for _, name := range topicNames {
summary += fmt.Sprintf("%s, ", name)
}
}
request["url"] = summary
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
entryBytes, _ := json.Marshal(item.Pair)
return &api.MizuEntry{
ProtocolName: _protocol.Name,
ProtocolVersion: _protocol.Version,
EntryId: entryId,
Entry: string(entryBytes),
Url: fmt.Sprintf("%s%s", service, summary),
Method: apiNames[apiKey],
Status: 0,
RequestSenderIp: item.ConnectionInfo.ClientIP,
Service: service,
Timestamp: item.Timestamp,
ElapsedTime: elapsedTime,
Path: summary,
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
SourceIp: item.ConnectionInfo.ClientIP,
DestinationIp: item.ConnectionInfo.ServerIP,
SourcePort: item.ConnectionInfo.ClientPort,
DestinationPort: item.ConnectionInfo.ServerPort,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
}
}
func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
return &api.BaseEntryDetails{
Id: entry.EntryId,
Protocol: _protocol,
Url: entry.Url,
RequestSenderIp: entry.RequestSenderIp,
Service: entry.Service,
Summary: entry.Path,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
SourceIp: entry.SourceIp,
DestinationIp: entry.DestinationIp,
SourcePort: entry.SourcePort,
DestinationPort: entry.DestinationPort,
IsOutgoing: entry.IsOutgoing,
Latency: 0,
Rules: api.ApplicableRules{
Latency: 0,
Status: false,
},
}
}
func (d dissecting) Represent(entry *api.MizuEntry) (p api.Protocol, object []byte, bodySize int64, err error) {
p = _protocol
bodySize = 0
var root map[string]interface{}
json.Unmarshal([]byte(entry.Entry), &root)
representation := make(map[string]interface{}, 0)
request := root["request"].(map[string]interface{})["payload"].(map[string]interface{})
response := root["response"].(map[string]interface{})["payload"].(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
resDetails := response["details"].(map[string]interface{})
apiKey := ApiKey(reqDetails["ApiKey"].(float64))
var repRequest []interface{}
var repResponse []interface{}
switch apiKey {
case Metadata:
repRequest = representMetadataRequest(reqDetails)
repResponse = representMetadataResponse(resDetails)
case ApiVersions:
repRequest = representApiVersionsRequest(reqDetails)
repResponse = representApiVersionsResponse(resDetails)
case Produce:
repRequest = representProduceRequest(reqDetails)
repResponse = representProduceResponse(resDetails)
case Fetch:
repRequest = representFetchRequest(reqDetails)
repResponse = representFetchResponse(resDetails)
case ListOffsets:
repRequest = representListOffsetsRequest(reqDetails)
repResponse = representListOffsetsResponse(resDetails)
case CreateTopics:
repRequest = representCreateTopicsRequest(reqDetails)
repResponse = representCreateTopicsResponse(resDetails)
case DeleteTopics:
repRequest = representDeleteTopicsRequest(reqDetails)
repResponse = representDeleteTopicsResponse(resDetails)
}
representation["request"] = repRequest
representation["response"] = repResponse
object, err = json.Marshal(representation)
return
}
var Dissector dissecting


@@ -0,0 +1,58 @@
package main
import (
"sync"
"time"
)
var reqResMatcher = CreateResponseRequestMatcher() // global
const maxTry int = 3000
type RequestResponsePair struct {
Request Request
Response Response
}
// Key is {client_addr}:{client_port}->{dest_addr}:{dest_port}::{correlation_id}
type requestResponseMatcher struct {
openMessagesMap *sync.Map
}
func CreateResponseRequestMatcher() requestResponseMatcher {
newMatcher := &requestResponseMatcher{openMessagesMap: &sync.Map{}}
return *newMatcher
}
func (matcher *requestResponseMatcher) registerRequest(key string, request *Request) *RequestResponsePair {
if response, found := matcher.openMessagesMap.LoadAndDelete(key); found {
// Check for a situation that only occurs when a Kafka broker is initiating
// the exchange, i.e. the response was registered before the request
switch response.(type) {
case *Response:
return matcher.preparePair(request, response.(*Response))
}
}
matcher.openMessagesMap.Store(key, request)
return nil
}
func (matcher *requestResponseMatcher) registerResponse(key string, response *Response) *RequestResponsePair {
try := 0
for {
try++
if try > maxTry {
return nil
}
if request, found := matcher.openMessagesMap.LoadAndDelete(key); found {
return matcher.preparePair(request.(*Request), response)
}
time.Sleep(1 * time.Millisecond)
}
}
func (matcher *requestResponseMatcher) preparePair(request *Request, response *Response) *RequestResponsePair {
return &RequestResponsePair{
Request: *request,
Response: *response,
}
}
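// A minimal pairing sketch. Request and Response are defined elsewhere in
// this package, so zero values stand in here purely for illustration; the
// key ties both directions of a TCP stream to one Kafka correlation ID, in
// the format documented above.
func examplePairing() *RequestResponsePair {
	key := "10.0.0.1:51718->10.0.0.2:9092::7"
	reqResMatcher.registerRequest(key, &Request{})
	return reqResMatcher.registerResponse(key, &Response{})
}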

Some files were not shown because too many files have changed in this diff.