Compare commits


20 Commits

Author SHA1 Message Date
Igor Gov
afa81c7ec2 Fixing bad conflict resolution 2021-08-19 13:33:14 +03:00
Igor Gov
e84c7d3310 Merge branch 'develop' 2021-08-19 13:18:06 +03:00
Igor Gov
7d0a90cb78 Merge branch 'main' into develop
# Conflicts:
#	cli/config/configStruct.go
#	cli/mizu/config.go
#	tap/http_reader.go
2021-08-19 13:16:19 +03:00
Nimrod Gilboa Markevich
24f79922e9 Add to periodic stats print in tapper (#221)
#patch
2021-08-16 15:50:04 +03:00
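Commit 24f79922e9 above extends the periodic stats print in the tapper. Purely as an illustration of that pattern (not Mizu's actual code; the counter names and the interval below are made up), a tapper can log counters from a goroutine driven by a time.Ticker:

package main

import (
	"log"
	"sync/atomic"
	"time"
)

// statsPrinter logs traffic counters at a fixed interval.
// The counter names and the 10-second interval are illustrative only.
type statsPrinter struct {
	processedPackets uint64
	matchedEntries   uint64
}

func (s *statsPrinter) inc(packets, entries uint64) {
	atomic.AddUint64(&s.processedPackets, packets)
	atomic.AddUint64(&s.matchedEntries, entries)
}

func (s *statsPrinter) run(interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			log.Printf("stats: packets=%d entries=%d",
				atomic.LoadUint64(&s.processedPackets),
				atomic.LoadUint64(&s.matchedEntries))
		case <-stop:
			return
		}
	}
}

func main() {
	s := &statsPrinter{}
	stop := make(chan struct{})
	go s.run(10*time.Second, stop)
	s.inc(3, 1)
	time.Sleep(25 * time.Second) // let the printer fire a couple of times
	close(stop)
}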
RoyUP9
c3995009ee Hotfix - ignore not allowed set flags (#192)
#patch
2021-08-10 14:21:16 +03:00
RoyUP9
6e9fe2986e removed duplicate har page header (#187) 2021-08-09 13:31:53 +03:00
RoyUP9
603240fedb temp fix - ignore agent image in config command (#185) 2021-08-09 11:55:45 +03:00
Igor Gov
e61871a68e Merge pull request #182 from up9inc/develop
Release 2021-08-08
2021-08-08 14:50:30 +03:00
nimrod-up9
379af59f07 Merge pull request #121 from up9inc/develop
Missing request body (#120)
2021-07-19 13:53:49 +03:00
gadotroee
ef9afe31a4 Merge pull request #119 from up9inc/develop
Mizu release
2021-07-18 16:54:31 +03:00
gadotroee
dca636b0fd Merge pull request #94 from up9inc/develop
Mizu release
2021-07-06 21:05:40 +03:00
Roee Gadot
9b72cc7aa6 Merge branch 'develop' into main
# Conflicts:
#	README.md
#	api/main.go
#	api/pkg/api/main.go
#	api/pkg/models/models.go
#	api/pkg/resolver/resolver.go
#	cli/Makefile
#	cli/cmd/tap.go
#	cli/cmd/tapRunner.go
#	tap/http_matcher.go
#	tap/http_reader.go
#	tap/tcp_stream_factory.go
2021-06-29 11:16:47 +03:00
Alex Haiut
d3c023b3ba mizu release 2021-06-21 (#79)
* Show pod name and namespace (#61)

* WIP

* Update main.go, consts.go, and 2 more files...

* Update messageSensitiveDataCleaner.go

* Update consts.go and messageSensitiveDataCleaner.go

* Update messageSensitiveDataCleaner.go

* Update main.go, consts.go, and 3 more files...

* WIP

* Update main.go, messageSensitiveDataCleaner.go, and 6 more files...

* Update main.go, messageSensitiveDataCleaner.go, and 3 more files...

* Update consts.go, messageSensitiveDataCleaner.go, and tap.go

* Update provider.go

* Update serializableRegexp.go

* Update tap.go

* TRA-3234 fetch with _source + no hard limit (#64)

* remove the HARD limit of 5000

* TRA-3299 Reduce footprint and Add Tolerances(#65)

* Use lib const for DNSClusterFirstWithHostNet.

* Whitespace.

* Break lines.

* Added affinity to pod names.

* Added tolerations to NoExecute and NoSchedule taints.

* Implementation of Mizu view command

* .

* .

* Update main.go and messageSensitiveDataCleaner.go

* Update main.go

* String and not pointers (#68)

* TRA-3318 - Cookies not null and fix har file names  (#69)

* no message

* TRA-3212 Passive-Tapper and Mizu share code (#70)

* Use log in tap package instead of fmt.

* Moved api/pkg/tap to root.

* Added go.mod and go.sum for tap.

* Added replace for shared.

* api uses tap module instead of tap package.

* Removed dependency of tap in shared by moving env var out of tap.

* Fixed compilation bugs.

* Fixed: Forgot to export struct field HostMode.

* Removed unused flag.

* Close har output channel when done.

* Moved websocket out of mizu and into passive-tapper.

* Send connection details over har output channel.

* Fixed compilation errors.

* Removed unused info from request response cache.

* Renamed connection -> connectionID.

* Fixed rename bug.

* Export setters and getters for filter ips and ports.

* Added tap dependency to Dockerfile.

* Uncomment error messages.

* Renamed `filterIpAddresses` -> `filterAuthorities`.

* Renamed ConnectionID -> ConnectionInfo.

* Fixed: Missed one replace.

* TRA-3342 Mizu/tap dump to har directory fails on Linux (#71)

* Instead of saving incomplete temp har files in a temp dir, save them in the output dir with a *.har.tmp suffix.

* API only loads har from *.har files (by extension).

* Add export entries endpoint for better up9 connect functionality (#72)

* no message
* no message
* no message

* Filter 'cookie' header

* Release action  (#73)

* Create main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* trying new approach

* no message

* yaml error

* no message

* no message

* no message

* missing )

* no message

* no message

* remove main.yml and fix branches

* Create tag-temp.yaml

* Update tag-temp.yaml

* Update tag-temp.yaml

* no message

* no message

* no message

* no message

* no message

* no message

* no message

* #minor

* no message

* no message

* added checksum calc to CLI makefile

* fixed build error - created bin directory upfront

* using markdown for release text

* use separate checksum files

* fixed release readme

* #minor

* readme updated

Co-authored-by: Alex Haiut <alex@up9.com>

* TRA-3360 Fix: Mizu ignores -n namespace flag and records traffic from all pods (#75)

Do not tap pods in namespaces which were not requested.

* added apple/m1 binary, updated readme (#77)

Co-authored-by: Alex Haiut <alex@up9.com>

* Update README.md (#78)

Co-authored-by: lirazyehezkel <61656597+lirazyehezkel@users.noreply.github.com>
Co-authored-by: RamiBerm <rami.berman@up9.com>
Co-authored-by: RamiBerm <54766858+RamiBerm@users.noreply.github.com>
Co-authored-by: gadotroee <55343099+gadotroee@users.noreply.github.com>
Co-authored-by: nimrod-up9 <59927337+nimrod-up9@users.noreply.github.com>
Co-authored-by: Igor Gov <igor.govorov1@gmail.com>
Co-authored-by: Alex Haiut <alex@up9.com>
2021-06-21 15:17:31 +03:00
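Several bullets in the release commit above ("Close har output channel when done", "Send connection details over har output channel") describe the tapper handing captured entries to the API side over a Go channel instead of files. The sketch below is a hedged simplification of that hand-off; the type and field names are placeholders, not the project's real ones.

package main

import "fmt"

// ConnectionInfo and harEntry are simplified stand-ins for the types the
// commit messages refer to; the field names here are illustrative only.
type ConnectionInfo struct {
	ClientIP string
	ServerIP string
}

type harEntry struct {
	Connection ConnectionInfo
	Request    string
	Response   string
}

// produce sends captured entries to the output channel and closes it when done,
// mirroring the "close har output channel when done" idea from the commit.
func produce(out chan<- harEntry) {
	defer close(out)
	out <- harEntry{
		Connection: ConnectionInfo{ClientIP: "10.0.0.1", ServerIP: "10.0.0.2"},
		Request:    "GET /get",
		Response:   "200 OK",
	}
}

func main() {
	out := make(chan harEntry, 16)
	go produce(out)
	// The consumer (the API-server side) drains entries until the channel closes.
	for e := range out {
		fmt.Printf("%s -> %s: %s %s\n", e.Connection.ClientIP, e.Connection.ServerIP, e.Request, e.Response)
	}
}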
Roee Gadot
5f2a4deb19 remove file 2021-05-26 18:08:37 +03:00
Roee Gadot
91f290987e Merge branch 'develop' into main
# Conflicts:
#	cli/cmd/tap.go
#	cli/cmd/version.go
#	cli/kubernetes/provider.go
#	cli/mizu/consts.go
#	cli/mizu/mizuRunner.go
#	debug.Dockerfile
#	ui/src/components/HarPage.tsx
2021-05-26 17:58:17 +03:00
gadotroee
2f3215b71a Fix mizu image parameter (#53) 2021-05-23 13:34:32 +03:00
Alex Haiut
2e87a01346 end of week - develop to master (#50)
* Provide cli version as git hash from makefile

* Update Makefile, version.go, and 3 more files...

* Update mizuRunner.go

* Update README.md, resolver.go, and 2 more files...

* Update provider.go

* Feature/UI/light theme (#44)

* light theme

* css polish

* unused code

* css

* text shadow

* footer style

* Update mizuRunner.go

* Handle nullable vars (#47)

* Decode gRPC body (#48)

* Decode grpc.

* Better variable names.

* Added protobuf-decoder dependency.

* Updated protobuf-decoder's version.

Co-authored-by: RamiBerm <rami.berman@up9.com>
Co-authored-by: RamiBerm <54766858+RamiBerm@users.noreply.github.com>
Co-authored-by: lirazyehezkel <61656597+lirazyehezkel@users.noreply.github.com>
Co-authored-by: nimrod-up9 <59927337+nimrod-up9@users.noreply.github.com>
2021-05-13 20:29:31 +03:00
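The first bullet of the commit above ("Provide cli version as git hash from makefile") relies on the standard Go technique of overriding a package-level variable with -ldflags -X at build time. A minimal sketch, with a placeholder variable name rather than Mizu's real one:

package main

import "fmt"

// Ver is meant to be overridden at build time, for example:
//   go build -ldflags "-X 'main.Ver=$(git rev-parse --short HEAD)'"
// Both the variable name and the default value are placeholders.
var Ver = "development"

func main() {
	fmt.Println("version:", Ver)
}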
gadotroee
453003bf14 remove leftovers (#43) 2021-05-10 17:35:59 +03:00
Roee Gadot
80ca377668 Merge branch 'develop' into main
# Conflicts:
#	Dockerfile
#	Makefile
#	api/go.mod
#	api/go.sum
#	api/main.go
#	api/pkg/controllers/entries_controller.go
#	api/pkg/inserter/main.go
#	api/pkg/models/models.go
#	api/pkg/tap/grpc_assembler.go
#	api/pkg/tap/har_writer.go
#	api/pkg/tap/http_matcher.go
#	api/pkg/tap/http_reader.go
#	api/pkg/tap/passive_tapper.go
#	api/pkg/utils/utils.go
#	cli/Makefile
#	cli/cmd/tap.go
#	cli/cmd/version.go
#	cli/config/config.go
#	cli/kubernetes/provider.go
#	cli/mizu/mizuRunner.go
2021-05-10 17:27:32 +03:00
gadotroee
d21297bc9c 0.9 (#37)
* Update .gitignore

* WIP

* WIP

* Update README.md, root.go, and 4 more files...

* Update README.md

* Update README.md

* Update root.go

* Update provider.go

* Update provider.go

* Update root.go, go.mod, and go.sum

* Update mizu.go

* Update go.sum and provider.go

* Update portForward.go, watch.go, and mizu.go

* Update README.md

* Update watch.go

* Update mizu.go

* Update mizu.go

* no message

* no message

* remove unused things and use external for object id (instead of copy)

* no message

* Update mizu.go

* Update go.mod, go.sum, and 2 more files...

* no message

* Update README.md, go.mod, and resolver.go

* Update README.md

* Update go.mod

* Update loader.go

* some refactor

* Update loader.go

* no message

* status to statusCode

* return data directly

* Traffic viewer

* cleaning

* css

* no message

* Clean warnings

* Makefile - first draft

* Update Makefile

* Update Makefile

* Update Makefile, README.md, and 4 more files...

* Add api build and clean to makefile (files restructure) (#9)

* no message
* add clean api command

* no message

* starting with web socket

* Add tap as a separate executable (#10)

* Added tap.

* Ignore build directories.

* Added tapper build to Makefile.

* Improvements  (#12)

* no message

* no message

* Feature/makefile (#11)

* minor fixes

* makefile fixes - docker build

* minor fix in Makefile
Co-authored-by: Alex Haiut <alex@up9.com>

* Update Dockerfile, multi-runner.sh, and 31 more files...

* Update multi-runner.sh

* no message

* Update .dockerignore, Dockerfile, and 30 more files...

* Update cleaner.go, grpc_assembler.go, and 2 more files...

* start the pod with host network and privileged

* fix multi runner passive tapper command

* add HOST_MODE env var

* do not return true in the should tap function

* remove line in the end

* default value in api is input
fix description and pass the parameter in the multi runner script

* missing flag.parse

* no message

* fix image

* Create main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Update main.yml

* Small fixes - permission + har writing exception (#17)

* Select node by pod (#18)

* Select node by pod.

* Removed watch pod by regex. Irrelevant for now.

* Changed default image to develop:latest.

* Features/clifix (#19)

* makefile fixes - docker build

* readme update, CLI usage fix

* added chmod

Co-authored-by: Alex Haiut <alex@up9.com>

* meta information

* Only record traffic of the requested pod. Filtered by pod IP. (#21)

* fixed readme and reduced batch size to 5 (#22)

Co-authored-by: Alex Haiut <alex@up9.com>

* API and TAP in single process  (#24)

* no message
* no message

* CLI make --pod required flag and faster api image build (#25)

* makefile fixes - docker build

* readme update, CLI usage fix

* added chmod

* typo

* run example incorrect in makefile

* no message

* no message

* no message

Co-authored-by: Alex Haiut <alex@up9.com>

* Reduce delay between tap and UI - Skip dump to file (#26)

* Pass HARs between tap and api via channel.

* Fixed make docker command.

* Various fixes.

* Added .DS_Store to .gitignore.

* Parse flags in Mizu main instead of in tap_output.go.

* Use channel to pass HAR by default instead of files.

* Infinite scroll (#28)

* no message

* infinite scroll + new ws implementation

* no message

* scrolling top

* fetch button

* more Backend changes

* fix go mod and sum

* more fixes against develop

* unused code

* small ui refactor

Co-authored-by: Roee Gadot <roee.gadot@up9.com>

* Fix gRPC crash, display gRPC as base64, display gRPC URL and status code (#27)

* Added Method (POST) and URL (empty) to gRPC requests.

* Removed quickfix that skips writing HTTP/2 to HAR.

* Use HTTP/2 body to fill out http.Request and http.Response.

* Make sure that in HARs request.postData.mimeType and response.content.mimeType are application/grpc in case of grpc.

* Comment.

* Add URL and status code for gRPC.

* Don't assume http scheme.

* Use http.Header.Set instead of manually accessing the underlying map.

* General stats api fix  (#29)

* refactor and validation

* Show gRPC as ASCII (#31)

* Moved try-catch up one block.

* Display grpc as ASCII.

* Better code in entries fetch endpoint (#30)

* no message
* no message

* Feature/UI/filters (#32)

* UI filters

* refactor

* Revert "refactor"

This reverts commit 70e7d4b6ac.

* remove recursive func

* CLI cleanup (#33)

* Moved cli root command to tap subcommand.

* tap subcommand works.

* Added view and fetch placeholders.

* Updated descriptions.

* Fixed indentation.

* Added version subcommand.

* Removed version flag.

* gofmt.

* Changed pod from flag to arg.

* Commented out "all namespaces" flag.

* CLI cleanup 2 (#34)

* Renamed dashboard -> GUI/web interface.

* Commented out --quiet, removed unused config variables.

* Quieter output when calling unimplemented subcommands.

* Leftovers from PR #30 (#36)

Co-authored-by: up9-github <info@up9.com>
Co-authored-by: RamiBerm <54766858+RamiBerm@users.noreply.github.com>
Co-authored-by: Liraz Yehezkel <lirazy@up9.com>
Co-authored-by: Alex Haiut <alex@testr.io>
Co-authored-by: lirazyehezkel <61656597+lirazyehezkel@users.noreply.github.com>
Co-authored-by: Alex Haiut <alex@up9.com>
Co-authored-by: nimrod-up9 <59927337+nimrod-up9@users.noreply.github.com>
Co-authored-by: RamiBerm <rami.berman@up9.com>
Co-authored-by: Alex Haiut <alex.haiut@gmail.com>
2021-05-09 11:45:39 +03:00
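One bullet in the commit above, "Only record traffic of the requested pod. Filtered by pod IP. (#21)", describes deciding per connection whether to capture it based on the tapped pods' IP addresses. A rough Go sketch of such a check follows; the names and structure are assumptions, not the actual tapper code.

package main

import "fmt"

// ipFilter is a simplified stand-in for the pod-IP filtering described in #21.
type ipFilter struct {
	allowed map[string]struct{}
}

func newIPFilter(ips []string) *ipFilter {
	f := &ipFilter{allowed: make(map[string]struct{})}
	for _, ip := range ips {
		f.allowed[ip] = struct{}{}
	}
	return f
}

// shouldTap reports whether either endpoint of a connection belongs to a tapped pod.
func (f *ipFilter) shouldTap(srcIP, dstIP string) bool {
	_, src := f.allowed[srcIP]
	_, dst := f.allowed[dstIP]
	return src || dst
}

func main() {
	f := newIPFilter([]string{"10.1.2.3"})
	fmt.Println(f.shouldTap("10.1.2.3", "10.9.9.9")) // true
	fmt.Println(f.shouldTap("10.8.8.8", "10.9.9.9")) // false
}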
168 changed files with 25695 additions and 17678 deletions


@@ -2,7 +2,7 @@
.dockerignore
.editorconfig
.gitignore
**/.env*
.env.*
Dockerfile
Makefile
LICENSE


@@ -1,15 +1,9 @@
name: PR validation
on:
pull_request:
branches:
- 'develop'
- 'main'
concurrency:
group: mizu-pr-validation-${{ github.ref }}
cancel-in-progress: true
jobs:
build-cli:
name: Build CLI
@@ -18,7 +12,7 @@ jobs:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '1.16'
go-version: '^1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
@@ -33,7 +27,7 @@ jobs:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '1.16'
go-version: '^1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
@@ -44,3 +38,43 @@ jobs:
- name: Build Agent
run: make agent
run-tests-cli:
name: Run CLI tests
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '^1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Test
run: make test-cli
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2
run-tests-agent:
name: Run Agent tests
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '^1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- shell: bash
run: |
sudo apt-get install libpcap-dev
- name: Test
run: make test-agent
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2


@@ -1,56 +0,0 @@
name: tests validation
on:
pull_request:
branches:
- 'develop'
- 'main'
push:
branches:
- 'develop'
- 'main'
concurrency:
group: mizu-tests-validation-${{ github.ref }}
cancel-in-progress: true
jobs:
run-tests-cli:
name: Run CLI tests
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '^1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- name: Test
run: make test-cli
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2
run-tests-agent:
name: Run Agent tests
runs-on: ubuntu-latest
steps:
- name: Set up Go 1.16
uses: actions/setup-go@v2
with:
go-version: '^1.16'
- name: Check out code into the Go module directory
uses: actions/checkout@v2
- shell: bash
run: |
sudo apt-get install libpcap-dev
- name: Test
run: make test-agent
- name: Upload coverage to Codecov
uses: codecov/codecov-action@v2

.gitignore vendored

@@ -19,13 +19,3 @@ build
# Mac OS
.DS_Store
.vscode/
# Ignore the scripts that are created for development
*dev.*
# Environment variables
.env
# pprof
pprof/*


@@ -1,20 +1,18 @@
![Mizu: The API Traffic Viewer for Kubernetes](../assets/mizu-logo.svg)
# Contributing to Mizu
![Mizu: The API Traffic Viewer for Kubernetes](assets/mizu-logo.svg)
# CONTRIBUTE
We welcome code contributions from the community.
Please read and follow the guidelines below.
## Communication
* Before starting work on a major feature, please reach out to us via [GitHub](https://github.com/up9inc/mizu), [Slack](https://join.slack.com/share/zt-u6bbs3pg-X1zhQOXOH0yEoqILgH~csw), [email](mailto:mizu@up9.com), etc. We will make sure no one else is already working on it. A _major feature_ is defined as any change that is > 100 LOC altered (not including tests), or changes any user-facing behavior
* Small patches and bug fixes don't need prior communication.
## Contribution Requirements
## Contribution requirements
* Code style - most of the code is written in Go, please follow [these guidelines](https://golang.org/doc/effective_go)
* Go-tools compatible (`go get`, `go test`, etc.)
* Code coverage for unit tests must not decrease.
* Go-tools compatible (`go get`, `go test`, etc)
* Unit-test coverage cant go down ..
* Code must be usefully commented. Not only for developers on the project, but also for external users of these packages
* When reviewing PRs, you are encouraged to use Golang's [code review comments page](https://github.com/golang/go/wiki/CodeReviewComments)
* Project follows [Google JSON Style Guide](https://google.github.io/styleguide/jsoncstyleguide.xml) for the REST APIs that are provided.


@@ -11,7 +11,7 @@ FROM golang:1.16-alpine AS builder
# Set necessary environment variables needed for our image.
ENV CGO_ENABLED=1 GOOS=linux GOARCH=amd64
RUN apk add libpcap-dev gcc g++ make bash
RUN apk add libpcap-dev gcc g++ make
# Move to agent working directory (/agent-build).
WORKDIR /app/agent-build
@@ -19,7 +19,6 @@ WORKDIR /app/agent-build
COPY agent/go.mod agent/go.sum ./
COPY shared/go.mod shared/go.mod ../shared/
COPY tap/go.mod tap/go.mod ../tap/
COPY tap/api/go.* ../tap/api/
RUN go mod download
# cheap trick to make the build faster (As long as go.mod wasn't changes)
RUN go list -f '{{.Path}}@{{.Version}}' -m all | sed 1d | grep -e 'go-cache' -e 'sqlite' | xargs go get
@@ -39,8 +38,6 @@ RUN go build -ldflags="-s -w \
-X 'mizuserver/pkg/version.BuildTimestamp=${BUILD_TIMESTAMP}' \
-X 'mizuserver/pkg/version.SemVer=${SEM_VER}'" -o mizuagent .
COPY devops/build_extensions.sh ..
RUN cd .. && /bin/bash build_extensions.sh
FROM alpine:3.13.5
@@ -49,7 +46,6 @@ WORKDIR /app
# Copy binary and config files from /build to root folder of scratch container.
COPY --from=builder ["/app/agent-build/mizuagent", "."]
COPY --from=builder ["/app/agent/build/extensions", "extensions"]
COPY --from=site-build ["/app/ui-build/build", "site"]
# gin-gonic runs in debug mode without this


@@ -23,7 +23,7 @@ export SEM_VER?=0.0.0
ui: ## Build UI.
@(cd ui; npm i ; npm run build; )
@ls -l ui/build
@ls -l ui/build
cli: ## Build CLI.
@echo "building cli"; cd cli && $(MAKE) build
@@ -34,7 +34,6 @@ build-cli-ci: ## Build CLI for CI.
agent: ## Build agent.
@(echo "building mizu agent .." )
@(cd agent; go build -o build/mizuagent main.go)
${MAKE} extensions
@ls -l agent/build
docker: ## Build and publish agent docker image.
@@ -44,11 +43,11 @@ push: push-docker push-cli ## Build and publish agent docker image & CLI.
push-docker: ## Build and publish agent docker image.
@echo "publishing Docker image .. "
devops/build-push-featurebranch.sh
./build-push-featurebranch.sh
build-docker-ci: ## Build agent docker image for CI.
@echo "building docker image for ci"
devops/build-agent-ci.sh
./build-agent-ci.sh
push-cli: ## Build and publish CLI.
@echo "publishing CLI .. "
@@ -72,9 +71,6 @@ clean-cli: ## Clean CLI.
clean-docker:
@(echo "DOCKER cleanup - NOT IMPLEMENTED YET " )
extensions:
devops/build_extensions.sh
test-cli:
@echo "running cli tests"; cd cli && $(MAKE) test


@@ -1,4 +1,4 @@
![Mizu: The API Traffic Viewer for Kubernetes](../assets/mizu-logo.svg)
![Mizu: The API Traffic Viewer for Kubernetes](assets/mizu-logo.svg)
# Kubernetes permissions for MIZU
This document describes in details all permissions required for full and correct operation of Mizu


@@ -2,9 +2,7 @@
# The API Traffic Viewer for Kubernetes
A simple-yet-powerful API traffic viewer for Kubernetes enabling you to view all API communication between microservices to help your debug and troubleshoot regressions.
Think TCPDump and Chrome Dev Tools combined.
A simple-yet-powerful API traffic viewer for Kubernetes to help you troubleshoot and debug your microservices. Think TCPDump and Chrome Dev Tools combined
![Simple UI](assets/mizu-ui.png)
@@ -40,13 +38,11 @@ SHA256 checksums are available on the [Releases](https://github.com/up9inc/mizu/
### Development (unstable) Build
Pick one from the [Releases](https://github.com/up9inc/mizu/releases) page
## Kubeconfig & Permissions
While `mizu`most often works out of the box, you can influence its behavior:
1. [OPTIONAL] Set `KUBECONFIG` environment variable to your Kubernetes configuration. If this is not set, Mizu assumes that configuration is at `${HOME}/.kube/config`
## Prerequisites
1. Set `KUBECONFIG` environment variable to your Kubernetes configuration. If this is not set, Mizu assumes that configuration is at `${HOME}/.kube/config`
2. `mizu` assumes user running the command has permissions to create resources (such as pods, services, namespaces) on your Kubernetes cluster (no worries - `mizu` resources are cleaned up upon termination)
For detailed list of k8s permissions see [PERMISSIONS](docs/PERMISSIONS.md) document
For detailed list of k8s permissions see [PERMISSIONS](PERMISSIONS.md) document
## How to Run
@@ -143,21 +139,22 @@ Setting `mizu-resources-namespace=mizu` resets Mizu to its default behavior
User-agent filtering (like health checks) - can be configured using command-line options:
```shell
$ mizu tap "^ca.*" --set tap.ignored-user-agents=kube-probe --set tap.ignored-user-agents=prometheus
$ mizu tap "^ca.*" --set ignored-user-agents=kube-probe --set ignored-user-agents=prometheus
+carts-66c77f5fbb-fq65r
+catalogue-5f4cb7cf5-7zrmn
Web interface is now available at http://localhost:8899
^C
```
Any request that contains `User-Agent` header with one of the specified values (`kube-probe` or `prometheus`) will not be captured
### Traffic validation rules
### API Rules validation
This feature allows you to define set of simple rules, and test the traffic against them.
This feature allows you to define set of simple rules, and test the API against them.
Such validation may test response for specific JSON fields, headers, etc.
Please see [TRAFFIC RULES](docs/POLICY_RULES.md) page for more details and syntax.
Please see [API RULES](docs/POLICY_RULES.md) page for more details and syntax.
## How to Run local UI
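The README hunk above documents user-agent filtering: requests whose User-Agent header matches one of the values passed via --set ignored-user-agents are not captured. A minimal Go sketch of that check, assuming a simple substring match (the real implementation may differ):

package main

import (
	"fmt"
	"net/http"
	"strings"
)

// shouldIgnore reports whether a request's User-Agent contains one of the
// configured ignored values, mirroring the behavior the README describes.
// This is an illustrative sketch, not the actual Mizu filter.
func shouldIgnore(req *http.Request, ignoredUserAgents []string) bool {
	ua := req.Header.Get("User-Agent")
	for _, ignored := range ignoredUserAgents {
		if strings.Contains(ua, ignored) {
			return true
		}
	}
	return false
}

func main() {
	req, _ := http.NewRequest("GET", "http://example.local/healthz", nil)
	req.Header.Set("User-Agent", "kube-probe/1.21")
	fmt.Println(shouldIgnore(req, []string{"kube-probe", "prometheus"})) // true
}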

TESTING.md Normal file

@@ -0,0 +1,15 @@
![Mizu: The API Traffic Viewer for Kubernetes](assets/mizu-logo.svg)
# TESTING
Testing guidelines for Mizu project
## Unit-tests
* TBD
* TBD
* TBD
## System tests
* TBD
* TBD
* TBD


@@ -1,2 +1,2 @@
test: ## Run acceptance tests.
@go test ./... -timeout 1h
@go test ./...


@@ -1,283 +0,0 @@
package acceptanceTests
import (
"fmt"
"gopkg.in/yaml.v3"
"io/ioutil"
"os"
"os/exec"
"testing"
)
type tapConfig struct {
GuiPort uint16 `yaml:"gui-port"`
}
type configStruct struct {
Tap tapConfig `yaml:"tap"`
}
func TestConfigRegenerate(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
configPath, configPathErr := getConfigPath()
if configPathErr != nil {
t.Errorf("failed to get config path, err: %v", cliPathErr)
return
}
configCmdArgs := getDefaultConfigCommandArgs()
configCmdArgs = append(configCmdArgs, "-r")
configCmd := exec.Command(cliPath, configCmdArgs...)
t.Logf("running command: %v", configCmd.String())
t.Cleanup(func() {
if err := os.Remove(configPath); err != nil {
t.Logf("failed to delete config file, err: %v", err)
}
})
if err := configCmd.Start(); err != nil {
t.Errorf("failed to start config command, err: %v", err)
return
}
if err := configCmd.Wait(); err != nil {
t.Errorf("failed to wait config command, err: %v", err)
return
}
_, readFileErr := ioutil.ReadFile(configPath)
if readFileErr != nil {
t.Errorf("failed to read config file, err: %v", readFileErr)
return
}
}
func TestConfigGuiPort(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
tests := []uint16{8898}
for _, guiPort := range tests {
t.Run(fmt.Sprintf("%d", guiPort), func(t *testing.T) {
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
configPath, configPathErr := getConfigPath()
if configPathErr != nil {
t.Errorf("failed to get config path, err: %v", cliPathErr)
return
}
config := configStruct{}
config.Tap.GuiPort = guiPort
configBytes, marshalErr := yaml.Marshal(config)
if marshalErr != nil {
t.Errorf("failed to marshal config, err: %v", marshalErr)
return
}
if writeErr := ioutil.WriteFile(configPath, configBytes, 0644); writeErr != nil {
t.Errorf("failed to write config to file, err: %v", writeErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
if err := os.Remove(configPath); err != nil {
t.Logf("failed to delete config file, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(guiPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
})
}
}
func TestConfigSetGuiPort(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
tests := []struct {
ConfigFileGuiPort uint16
SetGuiPort uint16
}{
{ConfigFileGuiPort: 8898, SetGuiPort: 8897},
}
for _, guiPortStruct := range tests {
t.Run(fmt.Sprintf("%d", guiPortStruct.SetGuiPort), func(t *testing.T) {
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
configPath, configPathErr := getConfigPath()
if configPathErr != nil {
t.Errorf("failed to get config path, err: %v", cliPathErr)
return
}
config := configStruct{}
config.Tap.GuiPort = guiPortStruct.ConfigFileGuiPort
configBytes, marshalErr := yaml.Marshal(config)
if marshalErr != nil {
t.Errorf("failed to marshal config, err: %v", marshalErr)
return
}
if writeErr := ioutil.WriteFile(configPath, configBytes, 0644); writeErr != nil {
t.Errorf("failed to write config to file, err: %v", writeErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--set", fmt.Sprintf("tap.gui-port=%v", guiPortStruct.SetGuiPort))
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
if err := os.Remove(configPath); err != nil {
t.Logf("failed to delete config file, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(guiPortStruct.SetGuiPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
})
}
}
func TestConfigFlagGuiPort(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
tests := []struct {
ConfigFileGuiPort uint16
FlagGuiPort uint16
}{
{ConfigFileGuiPort: 8898, FlagGuiPort: 8896},
}
for _, guiPortStruct := range tests {
t.Run(fmt.Sprintf("%d", guiPortStruct.FlagGuiPort), func(t *testing.T) {
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
configPath, configPathErr := getConfigPath()
if configPathErr != nil {
t.Errorf("failed to get config path, err: %v", cliPathErr)
return
}
config := configStruct{}
config.Tap.GuiPort = guiPortStruct.ConfigFileGuiPort
configBytes, marshalErr := yaml.Marshal(config)
if marshalErr != nil {
t.Errorf("failed to marshal config, err: %v", marshalErr)
return
}
if writeErr := ioutil.WriteFile(configPath, configBytes, 0644); writeErr != nil {
t.Errorf("failed to write config to file, err: %v", writeErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "-p", fmt.Sprintf("%v", guiPortStruct.FlagGuiPort))
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
if err := os.Remove(configPath); err != nil {
t.Logf("failed to delete config file, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(guiPortStruct.FlagGuiPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
})
}
}


@@ -1,5 +1,3 @@
module github.com/up9inc/mizu/tests
go 1.16
require gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b


@@ -1,4 +0,0 @@
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@@ -1,196 +0,0 @@
package acceptanceTests
import (
"archive/zip"
"os/exec"
"testing"
)
func TestLogs(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
logsCmdArgs := getDefaultLogsCommandArgs()
logsCmd := exec.Command(cliPath, logsCmdArgs...)
t.Logf("running command: %v", logsCmd.String())
if err := logsCmd.Start(); err != nil {
t.Errorf("failed to start logs command, err: %v", err)
return
}
if err := logsCmd.Wait(); err != nil {
t.Errorf("failed to wait logs command, err: %v", err)
return
}
logsPath, logsPathErr := getLogsPath()
if logsPathErr != nil {
t.Errorf("failed to get logs path, err: %v", logsPathErr)
return
}
zipReader, zipError := zip.OpenReader(logsPath)
if zipError != nil {
t.Errorf("failed to get zip reader, err: %v", zipError)
return
}
t.Cleanup(func() {
if err := zipReader.Close(); err != nil {
t.Logf("failed to close zip reader, err: %v", err)
}
})
var logsFileNames []string
for _, file := range zipReader.File {
logsFileNames = append(logsFileNames, file.Name)
}
if !Contains(logsFileNames, "mizu.mizu-api-server.log") {
t.Errorf("api server logs not found")
return
}
if !Contains(logsFileNames, "mizu_cli.log") {
t.Errorf("cli logs not found")
return
}
if !Contains(logsFileNames, "mizu_events.log") {
t.Errorf("events logs not found")
return
}
if !ContainsPartOfValue(logsFileNames, "mizu.mizu-tapper-daemon-set") {
t.Errorf("tapper logs not found")
return
}
}
func TestLogsPath(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
logsCmdArgs := getDefaultLogsCommandArgs()
logsPath := "../logs.zip"
logsCmdArgs = append(logsCmdArgs, "-f", logsPath)
logsCmd := exec.Command(cliPath, logsCmdArgs...)
t.Logf("running command: %v", logsCmd.String())
if err := logsCmd.Start(); err != nil {
t.Errorf("failed to start logs command, err: %v", err)
return
}
if err := logsCmd.Wait(); err != nil {
t.Errorf("failed to wait logs command, err: %v", err)
return
}
zipReader, zipError := zip.OpenReader(logsPath)
if zipError != nil {
t.Errorf("failed to get zip reader, err: %v", zipError)
return
}
t.Cleanup(func() {
if err := zipReader.Close(); err != nil {
t.Logf("failed to close zip reader, err: %v", err)
}
})
var logsFileNames []string
for _, file := range zipReader.File {
logsFileNames = append(logsFileNames, file.Name)
}
if !Contains(logsFileNames, "mizu.mizu-api-server.log") {
t.Errorf("api server logs not found")
return
}
if !Contains(logsFileNames, "mizu_cli.log") {
t.Errorf("cli logs not found")
return
}
if !Contains(logsFileNames, "mizu_events.log") {
t.Errorf("events logs not found")
return
}
if !ContainsPartOfValue(logsFileNames, "mizu.mizu-tapper-daemon-set") {
t.Errorf("tapper logs not found")
return
}
}


@@ -26,21 +26,14 @@ fi
echo "Starting minikube..."
minikube start
echo "Creating mizu tests namespaces"
echo "Creating mizu tests namespace"
kubectl create namespace mizu-tests
kubectl create namespace mizu-tests2
echo "Creating httpbin deployments"
echo "Creating httpbin deployment"
kubectl create deployment httpbin --image=kennethreitz/httpbin -n mizu-tests
kubectl create deployment httpbin2 --image=kennethreitz/httpbin -n mizu-tests
kubectl create deployment httpbin --image=kennethreitz/httpbin -n mizu-tests2
echo "Creating httpbin services"
echo "Creating httpbin service"
kubectl expose deployment httpbin --type=NodePort --port=80 -n mizu-tests
kubectl expose deployment httpbin2 --type=NodePort --port=80 -n mizu-tests
kubectl expose deployment httpbin --type=NodePort --port=80 -n mizu-tests2
echo "Starting proxy"
kubectl proxy --port=8080 &


@@ -1,44 +1,34 @@
package acceptanceTests
import (
"archive/zip"
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"os/exec"
"path"
"strings"
"testing"
"time"
)
func TestTap(t *testing.T) {
func TestTapAndFetch(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
tests := []int{50}
tests := []int{1, 100}
for _, entriesCount := range tests {
t.Run(fmt.Sprintf("%d", entriesCount), func(t *testing.T) {
cliPath, cliPathErr := getCliPath()
cliPath, cliPathErr := GetCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs := GetDefaultTapCommandArgs()
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
if err := CleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
@@ -48,818 +38,88 @@ func TestTap(t *testing.T) {
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
time.Sleep(30 * time.Second)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
proxyUrl := getProxyUrl(defaultNamespaceName, defaultServiceName)
proxyUrl := "http://localhost:8080/api/v1/namespaces/mizu-tests/services/httpbin/proxy/get"
for i := 0; i < entriesCount; i++ {
if _, requestErr := executeHttpGetRequest(fmt.Sprintf("%v/get", proxyUrl)); requestErr != nil {
if _, requestErr := ExecuteHttpRequest(proxyUrl); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
}
entriesCheckFunc := func() error {
timestamp := time.Now().UnixNano() / int64(time.Millisecond)
time.Sleep(5 * time.Second)
timestamp := time.Now().UnixNano() / int64(time.Millisecond)
entriesUrl := fmt.Sprintf("%v/api/entries?limit=%v&operator=lt&timestamp=%v", apiServerUrl, entriesCount, timestamp)
requestResult, requestErr := executeHttpGetRequest(entriesUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entries, err: %v", requestErr)
}
entries := requestResult.([]interface{})
if len(entries) == 0 {
return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
}
entry := entries[0].(map[string]interface{})
entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, entry["id"])
requestResult, requestErr = executeHttpGetRequest(entryUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entry, err: %v", requestErr)
}
if requestResult == nil {
return fmt.Errorf("unexpected nil entry result")
}
return nil
}
if err := retriesExecute(shortRetriesCount, entriesCheckFunc); err != nil {
t.Errorf("%v", err)
return
}
})
}
}
func TestTapGuiPort(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
tests := []uint16{8898}
for _, guiPort := range tests {
t.Run(fmt.Sprintf("%d", guiPort), func(t *testing.T) {
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
entriesUrl := fmt.Sprintf("http://localhost:8899/mizu/api/entries?limit=%v&operator=lt&timestamp=%v", entriesCount, timestamp)
requestResult, requestErr := ExecuteHttpRequest(entriesUrl)
if requestErr != nil {
t.Errorf("failed to get entries, err: %v", requestErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
entries, ok := requestResult.([]interface{})
if !ok {
t.Errorf("invalid entries type")
return
}
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
if len(entries) != entriesCount {
t.Errorf("unexpected entries result - Expected: %v, actual: %v", entriesCount, len(entries))
return
}
tapCmdArgs = append(tapCmdArgs, "-p", fmt.Sprintf("%d", guiPort))
entry, ok := entries[0].(map[string]interface{})
if !ok {
t.Errorf("invalid entry type")
return
}
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
entryUrl := fmt.Sprintf("http://localhost:8899/mizu/api/entries/%v", entry["id"])
requestResult, requestErr = ExecuteHttpRequest(entryUrl)
if requestErr != nil {
t.Errorf("failed to get entry, err: %v", requestErr)
return
}
if requestResult == nil {
t.Errorf("unexpected nil entry result")
return
}
fetchCmdArgs := GetDefaultFetchCommandArgs()
fetchCmd := exec.Command(cliPath, fetchCmdArgs...)
t.Logf("running command: %v", fetchCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
if err := CleanupCommand(fetchCmd); err != nil {
t.Logf("failed to cleanup fetch command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
if err := fetchCmd.Start(); err != nil {
t.Errorf("failed to start fetch command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(guiPort)
time.Sleep(5 * time.Second)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
harBytes, readFileErr := ioutil.ReadFile("./unknown_source.har")
if readFileErr != nil {
t.Errorf("failed to read har file, err: %v", readFileErr)
return
}
harEntries, err := GetEntriesFromHarBytes(harBytes)
if err != nil {
t.Errorf("failed to get entries from har, err: %v", err)
return
}
if len(harEntries) != entriesCount {
t.Errorf("unexpected har entries result - Expected: %v, actual: %v", entriesCount, len(harEntries))
return
}
})
}
}
func TestTapAllNamespaces(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
expectedPods := []struct{
Name string
Namespace string
}{
{Name: "httpbin", Namespace: "mizu-tests"},
{Name: "httpbin", Namespace: "mizu-tests2"},
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapCmdArgs = append(tapCmdArgs, "-A")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
podsUrl := fmt.Sprintf("%v/api/tapStatus", apiServerUrl)
requestResult, requestErr := executeHttpGetRequest(podsUrl)
if requestErr != nil {
t.Errorf("failed to get tap status, err: %v", requestErr)
return
}
pods, err := getPods(requestResult)
if err != nil {
t.Errorf("failed to get pods, err: %v", err)
return
}
for _, expectedPod := range expectedPods {
podFound := false
for _, pod := range pods {
podNamespace := pod["namespace"].(string)
podName := pod["name"].(string)
if expectedPod.Namespace == podNamespace && strings.Contains(podName, expectedPod.Name) {
podFound = true
break
}
}
if !podFound {
t.Errorf("unexpected result - expected pod not found, pod namespace: %v, pod name: %v", expectedPod.Namespace, expectedPod.Name)
return
}
}
}
func TestTapMultipleNamespaces(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
expectedPods := []struct{
Name string
Namespace string
}{
{Name: "httpbin", Namespace: "mizu-tests"},
{Name: "httpbin2", Namespace: "mizu-tests"},
{Name: "httpbin", Namespace: "mizu-tests2"},
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
var namespacesCmd []string
for _, expectedPod := range expectedPods {
namespacesCmd = append(namespacesCmd, "-n", expectedPod.Namespace)
}
tapCmdArgs = append(tapCmdArgs, namespacesCmd...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
podsUrl := fmt.Sprintf("%v/api/tapStatus", apiServerUrl)
requestResult, requestErr := executeHttpGetRequest(podsUrl)
if requestErr != nil {
t.Errorf("failed to get tap status, err: %v", requestErr)
return
}
pods, err := getPods(requestResult)
if err != nil {
t.Errorf("failed to get pods, err: %v", err)
return
}
if len(expectedPods) != len(pods) {
t.Errorf("unexpected result - expected pods length: %v, actual pods length: %v", len(expectedPods), len(pods))
return
}
for _, expectedPod := range expectedPods {
podFound := false
for _, pod := range pods {
podNamespace := pod["namespace"].(string)
podName := pod["name"].(string)
if expectedPod.Namespace == podNamespace && strings.Contains(podName, expectedPod.Name) {
podFound = true
break
}
}
if !podFound {
t.Errorf("unexpected result - expected pod not found, pod namespace: %v, pod name: %v", expectedPod.Namespace, expectedPod.Name)
return
}
}
}
func TestTapRegex(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
regexPodName := "httpbin2"
expectedPods := []struct{
Name string
Namespace string
}{
{Name: regexPodName, Namespace: "mizu-tests"},
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgsWithRegex(regexPodName)
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
podsUrl := fmt.Sprintf("%v/api/tapStatus", apiServerUrl)
requestResult, requestErr := executeHttpGetRequest(podsUrl)
if requestErr != nil {
t.Errorf("failed to get tap status, err: %v", requestErr)
return
}
pods, err := getPods(requestResult)
if err != nil {
t.Errorf("failed to get pods, err: %v", err)
return
}
if len(expectedPods) != len(pods) {
t.Errorf("unexpected result - expected pods length: %v, actual pods length: %v", len(expectedPods), len(pods))
return
}
for _, expectedPod := range expectedPods {
podFound := false
for _, pod := range pods {
podNamespace := pod["namespace"].(string)
podName := pod["name"].(string)
if expectedPod.Namespace == podNamespace && strings.Contains(podName, expectedPod.Name) {
podFound = true
break
}
}
if !podFound {
t.Errorf("unexpected result - expected pod not found, pod namespace: %v, pod name: %v", expectedPod.Namespace, expectedPod.Name)
return
}
}
}
func TestTapDryRun(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--dry-run")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
resultChannel := make(chan string, 1)
go func() {
if err := tapCmd.Wait(); err != nil {
resultChannel <- "fail"
return
}
resultChannel <- "success"
}()
go func() {
time.Sleep(shortRetriesCount * time.Second)
resultChannel <- "fail"
}()
testResult := <- resultChannel
if testResult != "success" {
t.Errorf("unexpected result - dry run cmd not done")
}
}
func TestTapRedact(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
proxyUrl := getProxyUrl(defaultNamespaceName, defaultServiceName)
requestBody := map[string]string{"User": "Mizu"}
for i := 0; i < defaultEntriesCount; i++ {
if _, requestErr := executeHttpPostRequest(fmt.Sprintf("%v/post", proxyUrl), requestBody); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
}
redactCheckFunc := func() error {
timestamp := time.Now().UnixNano() / int64(time.Millisecond)
entriesUrl := fmt.Sprintf("%v/api/entries?limit=%v&operator=lt&timestamp=%v", apiServerUrl, defaultEntriesCount, timestamp)
requestResult, requestErr := executeHttpGetRequest(entriesUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entries, err: %v", requestErr)
}
entries := requestResult.([]interface{})
if len(entries) == 0 {
return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
}
firstEntry := entries[0].(map[string]interface{})
entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, firstEntry["id"])
requestResult, requestErr = executeHttpGetRequest(entryUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entry, err: %v", requestErr)
}
data := requestResult.(map[string]interface{})["data"].(map[string]interface{})
entryJson := data["entry"].(string)
var entry map[string]interface{}
if parseErr := json.Unmarshal([]byte(entryJson), &entry); parseErr != nil {
return fmt.Errorf("failed to parse entry, err: %v", parseErr)
}
entryRequest := entry["request"].(map[string]interface{})
entryPayload := entryRequest["payload"].(map[string]interface{})
entryDetails := entryPayload["details"].(map[string]interface{})
headers := entryDetails["headers"].([]interface{})
for _, headerInterface := range headers {
header := headerInterface.(map[string]interface{})
if header["name"].(string) != "User-Agent" {
continue
}
userAgent := header["value"].(string)
if userAgent != "[REDACTED]" {
return fmt.Errorf("unexpected result - user agent is not redacted")
}
}
postData := entryDetails["postData"].(map[string]interface{})
textDataStr := postData["text"].(string)
var textData map[string]string
if parseErr := json.Unmarshal([]byte(textDataStr), &textData); parseErr != nil {
return fmt.Errorf("failed to parse text data, err: %v", parseErr)
}
if textData["User"] != "[REDACTED]" {
return fmt.Errorf("unexpected result - user in body is not redacted")
}
return nil
}
if err := retriesExecute(shortRetriesCount, redactCheckFunc); err != nil {
t.Errorf("%v", err)
return
}
}
func TestTapNoRedact(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--no-redact")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
proxyUrl := getProxyUrl(defaultNamespaceName, defaultServiceName)
requestBody := map[string]string{"User": "Mizu"}
for i := 0; i < defaultEntriesCount; i++ {
if _, requestErr := executeHttpPostRequest(fmt.Sprintf("%v/post", proxyUrl), requestBody); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
}
redactCheckFunc := func() error {
timestamp := time.Now().UnixNano() / int64(time.Millisecond)
entriesUrl := fmt.Sprintf("%v/api/entries?limit=%v&operator=lt&timestamp=%v", apiServerUrl, defaultEntriesCount, timestamp)
requestResult, requestErr := executeHttpGetRequest(entriesUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entries, err: %v", requestErr)
}
entries := requestResult.([]interface{})
if len(entries) == 0 {
return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
}
firstEntry := entries[0].(map[string]interface{})
entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, firstEntry["id"])
requestResult, requestErr = executeHttpGetRequest(entryUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entry, err: %v", requestErr)
}
data := requestResult.(map[string]interface{})["data"].(map[string]interface{})
entryJson := data["entry"].(string)
var entry map[string]interface{}
if parseErr := json.Unmarshal([]byte(entryJson), &entry); parseErr != nil {
return fmt.Errorf("failed to parse entry, err: %v", parseErr)
}
entryRequest := entry["request"].(map[string]interface{})
entryPayload := entryRequest["payload"].(map[string]interface{})
entryDetails := entryPayload["details"].(map[string]interface{})
headers := entryDetails["headers"].([]interface{})
for _, headerInterface := range headers {
header := headerInterface.(map[string]interface{})
if header["name"].(string) != "User-Agent" {
continue
}
userAgent := header["value"].(string)
if userAgent == "[REDACTED]" {
return fmt.Errorf("unexpected result - user agent is redacted")
}
}
postData := entryDetails["postData"].(map[string]interface{})
textDataStr := postData["text"].(string)
var textData map[string]string
if parseErr := json.Unmarshal([]byte(textDataStr), &textData); parseErr != nil {
return fmt.Errorf("failed to parse text data, err: %v", parseErr)
}
if textData["User"] == "[REDACTED]" {
return fmt.Errorf("unexpected result - user in body is redacted")
}
return nil
}
if err := retriesExecute(shortRetriesCount, redactCheckFunc); err != nil {
t.Errorf("%v", err)
return
}
}
func TestTapRegexMasking(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "-r", "Mizu")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
t.Cleanup(func() {
if err := cleanupCommand(tapCmd); err != nil {
t.Logf("failed to cleanup tap command, err: %v", err)
}
})
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
proxyUrl := getProxyUrl(defaultNamespaceName, defaultServiceName)
for i := 0; i < defaultEntriesCount; i++ {
response, requestErr := http.Post(fmt.Sprintf("%v/post", proxyUrl), "text/plain", bytes.NewBufferString("Mizu"))
if _, requestErr = executeHttpRequest(response, requestErr); requestErr != nil {
t.Errorf("failed to send proxy request, err: %v", requestErr)
return
}
}
redactCheckFunc := func() error {
timestamp := time.Now().UnixNano() / int64(time.Millisecond)
entriesUrl := fmt.Sprintf("%v/api/entries?limit=%v&operator=lt&timestamp=%v", apiServerUrl, defaultEntriesCount, timestamp)
requestResult, requestErr := executeHttpGetRequest(entriesUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entries, err: %v", requestErr)
}
entries := requestResult.([]interface{})
if len(entries) == 0 {
return fmt.Errorf("unexpected entries result - Expected more than 0 entries")
}
firstEntry := entries[0].(map[string]interface{})
entryUrl := fmt.Sprintf("%v/api/entries/%v", apiServerUrl, firstEntry["id"])
requestResult, requestErr = executeHttpGetRequest(entryUrl)
if requestErr != nil {
return fmt.Errorf("failed to get entry, err: %v", requestErr)
}
data := requestResult.(map[string]interface{})["data"].(map[string]interface{})
entryJson := data["entry"].(string)
var entry map[string]interface{}
if parseErr := json.Unmarshal([]byte(entryJson), &entry); parseErr != nil {
return fmt.Errorf("failed to parse entry, err: %v", parseErr)
}
entryRequest := entry["request"].(map[string]interface{})
entryPayload := entryRequest["payload"].(map[string]interface{})
entryDetails := entryPayload["details"].(map[string]interface{})
postData := entryDetails["postData"].(map[string]interface{})
textData := postData["text"].(string)
if textData != "[REDACTED]" {
return fmt.Errorf("unexpected result - body is not redacted")
}
return nil
}
if err := retriesExecute(shortRetriesCount, redactCheckFunc); err != nil {
t.Errorf("%v", err)
return
}
}
func TestTapDumpLogs(t *testing.T) {
if testing.Short() {
t.Skip("ignored acceptance test")
}
cliPath, cliPathErr := getCliPath()
if cliPathErr != nil {
t.Errorf("failed to get cli path, err: %v", cliPathErr)
return
}
tapCmdArgs := getDefaultTapCommandArgs()
tapNamespace := getDefaultTapNamespace()
tapCmdArgs = append(tapCmdArgs, tapNamespace...)
tapCmdArgs = append(tapCmdArgs, "--set", "dump-logs=true")
tapCmd := exec.Command(cliPath, tapCmdArgs...)
t.Logf("running command: %v", tapCmd.String())
if err := tapCmd.Start(); err != nil {
t.Errorf("failed to start tap command, err: %v", err)
return
}
apiServerUrl := getApiServerUrl(defaultApiServerPort)
if err := waitTapPodsReady(apiServerUrl); err != nil {
t.Errorf("failed to start tap pods on time, err: %v", err)
return
}
if err := cleanupCommand(tapCmd); err != nil {
t.Errorf("failed to cleanup tap command, err: %v", err)
return
}
mizuFolderPath, mizuPathErr := getMizuFolderPath()
if mizuPathErr != nil {
t.Errorf("failed to get mizu folder path, err: %v", mizuPathErr)
return
}
files, readErr := ioutil.ReadDir(mizuFolderPath)
if readErr != nil {
t.Errorf("failed to read mizu folder files, err: %v", readErr)
return
}
var dumpsLogsPath string
for _, file := range files {
fileName := file.Name()
if strings.Contains(fileName, "mizu_logs") {
dumpsLogsPath = path.Join(mizuFolderPath, fileName)
break
}
}
if dumpsLogsPath == "" {
t.Errorf("dump logs file not found")
return
}
zipReader, zipError := zip.OpenReader(dumpsLogsPath)
if zipError != nil {
t.Errorf("failed to get zip reader, err: %v", zipError)
return
}
t.Cleanup(func() {
if err := zipReader.Close(); err != nil {
t.Logf("failed to close zip reader, err: %v", err)
}
})
var logsFileNames []string
for _, file := range zipReader.File {
logsFileNames = append(logsFileNames, file.Name)
}
if !Contains(logsFileNames, "mizu.mizu-api-server.log") {
t.Errorf("api server logs not found")
return
}
if !Contains(logsFileNames, "mizu_cli.log") {
t.Errorf("cli logs not found")
return
}
if !Contains(logsFileNames, "mizu_events.log") {
t.Errorf("events logs not found")
return
}
if !ContainsPartOfValue(logsFileNames, "mizu.mizu-tapper-daemon-set") {
t.Errorf("tapper logs not found")
return
}
}


@@ -1,29 +1,18 @@
package acceptanceTests
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"net/http"
"os"
"os/exec"
"path"
"strings"
"syscall"
"time"
)
const (
longRetriesCount = 100
shortRetriesCount = 10
defaultApiServerPort = 8899
defaultNamespaceName = "mizu-tests"
defaultServiceName = "httpbin"
defaultEntriesCount = 50
)
func getCliPath() (string, error) {
func GetCliPath() (string, error) {
dir, filePathErr := os.Getwd()
if filePathErr != nil {
return "", filePathErr
@@ -33,120 +22,34 @@ func getCliPath() (string, error) {
return cliPath, nil
}
func getMizuFolderPath() (string, error) {
home, homeDirErr := os.UserHomeDir()
if homeDirErr != nil {
return "", homeDirErr
}
return path.Join(home, ".mizu"), nil
}
func getConfigPath() (string, error) {
mizuFolderPath, mizuPathError := getMizuFolderPath()
if mizuPathError != nil {
return "", mizuPathError
}
return path.Join(mizuFolderPath, "config.yaml"), nil
}
func getProxyUrl(namespace string, service string) string {
return fmt.Sprintf("http://localhost:8080/api/v1/namespaces/%v/services/%v/proxy", namespace, service)
}
func getApiServerUrl(port uint16) string {
return fmt.Sprintf("http://localhost:%v/mizu", port)
}
func getDefaultCommandArgs() []string {
func GetDefaultCommandArgs() []string {
setFlag := "--set"
telemetry := "telemetry=false"
return []string{setFlag, telemetry}
}
func GetDefaultTapCommandArgs() []string {
tapCommand := "tap"
setFlag := "--set"
namespaces := "tap.namespaces=mizu-tests"
agentImage := "agent-image=gcr.io/up9-docker-hub/mizu/ci:0.0.0"
imagePullPolicy := "image-pull-policy=Never"
return []string{setFlag, telemetry, setFlag, agentImage, setFlag, imagePullPolicy}
defaultCmdArgs := GetDefaultCommandArgs()
return append([]string{tapCommand, setFlag, namespaces, setFlag, agentImage, setFlag, imagePullPolicy}, defaultCmdArgs...)
}
func getDefaultTapCommandArgs() []string {
tapCommand := "tap"
defaultCmdArgs := getDefaultCommandArgs()
func GetDefaultFetchCommandArgs() []string {
tapCommand := "fetch"
defaultCmdArgs := GetDefaultCommandArgs()
return append([]string{tapCommand}, defaultCmdArgs...)
}
func getDefaultTapCommandArgsWithRegex(regex string) []string {
tapCommand := "tap"
defaultCmdArgs := getDefaultCommandArgs()
return append([]string{tapCommand, regex}, defaultCmdArgs...)
}
func getDefaultLogsCommandArgs() []string {
logsCommand := "logs"
defaultCmdArgs := getDefaultCommandArgs()
return append([]string{logsCommand}, defaultCmdArgs...)
}
func getDefaultTapNamespace() []string {
return []string{"-n", "mizu-tests"}
}
func getDefaultConfigCommandArgs() []string {
configCommand := "config"
defaultCmdArgs := getDefaultCommandArgs()
return append([]string{configCommand}, defaultCmdArgs...)
}
func retriesExecute(retriesCount int, executeFunc func() error) error {
var lastError interface{}
for i := 0; i < retriesCount; i++ {
if err := tryExecuteFunc(executeFunc); err != nil {
lastError = err
time.Sleep(1 * time.Second)
continue
}
return nil
}
return fmt.Errorf("reached max retries count, retries count: %v, last err: %v", retriesCount, lastError)
}
func tryExecuteFunc(executeFunc func() error) (err interface{}) {
defer func() {
if panicErr := recover(); panicErr != nil {
err = panicErr
}
}()
return executeFunc()
}
func waitTapPodsReady(apiServerUrl string) error {
resolvingUrl := fmt.Sprintf("%v/status/tappersCount", apiServerUrl)
tapPodsReadyFunc := func() error {
requestResult, requestErr := executeHttpGetRequest(resolvingUrl)
if requestErr != nil {
return requestErr
}
tappersCount := requestResult.(float64)
if tappersCount == 0 {
return fmt.Errorf("no tappers running")
}
return nil
}
return retriesExecute(longRetriesCount, tapPodsReadyFunc)
}
func jsonBytesToInterface(jsonBytes []byte) (interface{}, error) {
func JsonBytesToInterface(jsonBytes []byte) (interface{}, error) {
var result interface{}
if parseErr := json.Unmarshal(jsonBytes, &result); parseErr != nil {
return nil, parseErr
@@ -155,39 +58,23 @@ func jsonBytesToInterface(jsonBytes []byte) (interface{}, error) {
return result, nil
}
func executeHttpRequest(response *http.Response, requestErr error) (interface{}, error) {
func ExecuteHttpRequest(url string) (interface{}, error) {
response, requestErr := http.Get(url)
if requestErr != nil {
return nil, requestErr
} else if response.StatusCode != 200 {
return nil, fmt.Errorf("invalid status code %v", response.StatusCode)
}
defer func() { response.Body.Close() }()
data, readErr := ioutil.ReadAll(response.Body)
if readErr != nil {
return nil, readErr
}
return jsonBytesToInterface(data)
return JsonBytesToInterface(data)
}
func executeHttpGetRequest(url string) (interface{}, error) {
response, requestErr := http.Get(url)
return executeHttpRequest(response, requestErr)
}
func executeHttpPostRequest(url string, body interface{}) (interface{}, error) {
requestBody, jsonErr := json.Marshal(body)
if jsonErr != nil {
return nil, jsonErr
}
response, requestErr := http.Post(url, "application/json", bytes.NewBuffer(requestBody))
return executeHttpRequest(response, requestErr)
}
func cleanupCommand(cmd *exec.Cmd) error {
func CleanupCommand(cmd *exec.Cmd) error {
if err := cmd.Process.Signal(syscall.SIGQUIT); err != nil {
return err
}
@@ -199,44 +86,28 @@ func cleanupCommand(cmd *exec.Cmd) error {
return nil
}
func getPods(tapStatusInterface interface{}) ([]map[string]interface{}, error) {
tapStatus := tapStatusInterface.(map[string]interface{})
podsInterface := tapStatus["pods"].([]interface{})
var pods []map[string]interface{}
for _, podInterface := range podsInterface {
pods = append(pods, podInterface.(map[string]interface{}))
func GetEntriesFromHarBytes(harBytes []byte) ([]interface{}, error){
harInterface, convertErr := JsonBytesToInterface(harBytes)
if convertErr != nil {
return nil, convertErr
}
return pods, nil
}
func getLogsPath() (string, error) {
dir, filePathErr := os.Getwd()
if filePathErr != nil {
return "", filePathErr
har, ok := harInterface.(map[string]interface{})
if !ok {
return nil, errors.New("invalid har type")
}
logsPath := path.Join(dir, "mizu_logs.zip")
return logsPath, nil
}
func Contains(slice []string, containsValue string) bool {
for _, sliceValue := range slice {
if sliceValue == containsValue {
return true
}
harLogInterface := har["log"]
harLog, ok := harLogInterface.(map[string]interface{})
if !ok {
return nil, errors.New("invalid har log type")
}
return false
}
func ContainsPartOfValue(slice []string, containsValue string) bool {
for _, sliceValue := range slice {
if strings.Contains(sliceValue, containsValue) {
return true
}
harEntriesInterface := harLog["entries"]
harEntries, ok := harEntriesInterface.([]interface{})
if !ok {
return nil, errors.New("invalid har entries type")
}
return false
return harEntries, nil
}


@@ -1,6 +1,7 @@
# mizu agent
Agent for MIZU (API server and tapper)
Basic APIs:
* /fetch - retrieve traffic data
* /stats - retrieve statistics of collected data
* /viewer - web ui
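A minimal sketch of querying these endpoints from Go. The base URL is an assumption (the API server's default port 8899 from the acceptance tests); the actual host, port and any path prefix depend on how the agent is exposed (e.g. via `kubectl proxy` or the mizu CLI), and the response bodies are simply printed by size rather than parsed.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Hypothetical base URL; adjust to wherever the agent is reachable.
	base := "http://localhost:8899"
	for _, endpoint := range []string{"/stats", "/fetch"} {
		resp, err := http.Get(base + endpoint)
		if err != nil {
			fmt.Printf("request to %s failed: %v\n", endpoint, err)
			continue
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> status %d, %d bytes\n", endpoint, resp.StatusCode, len(body))
	}
	// /viewer is the web UI; open it in a browser rather than fetching it programmatically.
}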


@@ -3,6 +3,7 @@ module mizuserver
go 1.16
require (
github.com/beevik/etree v1.1.0
github.com/djherbis/atime v1.0.0
github.com/fsnotify/fsnotify v1.4.9
github.com/gin-contrib/static v0.0.1
@@ -17,9 +18,8 @@ require (
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7
github.com/up9inc/mizu/shared v0.0.0
github.com/up9inc/mizu/tap v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0
github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0
go.mongodb.org/mongo-driver v1.7.1
go.mongodb.org/mongo-driver v1.5.1
gorm.io/driver/sqlite v1.1.4
gorm.io/gorm v1.21.8
k8s.io/api v0.21.0
@@ -30,5 +30,3 @@ require (
replace github.com/up9inc/mizu/shared v0.0.0 => ../shared
replace github.com/up9inc/mizu/tap v0.0.0 => ../tap
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../tap/api


@@ -42,6 +42,9 @@ github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb0
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48=
github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs=
github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A=
github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4 h1:NJOOlc6ZJjix0A1rAU+nxruZtR8KboG1848yqpIUo4M=
github.com/bradleyfalzon/tlsx v0.0.0-20170624122154-28fd0e59bac4/go.mod h1:DQPxZS994Ld1Y8uwnJT+dRL04XPD0cElP/pHH/zEBHM=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
@@ -98,6 +101,7 @@ github.com/go-playground/validator/v10 v10.2.0/go.mod h1:uOYAAleCW8F/7oMFd6aG0GO
github.com/go-playground/validator/v10 v10.4.1/go.mod h1:nlOn6nFhuKACm19sB/8EGNn9GlaMV7XkbRSipzJ0Ii4=
github.com/go-playground/validator/v10 v10.5.0 h1:X9rflw/KmpACwT8zdrm1upefpvdy6ur8d1kWyq6sg3E=
github.com/go-playground/validator/v10 v10.5.0/go.mod h1:xm76BBt941f7yWdGnI2DVPFFg1UK3YY04qifoXU3lOk=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gobuffalo/attrs v0.0.0-20190224210810-a9411de4debd/go.mod h1:4duuawTqi2wkkpB4ePgWMaai6/Kc6WEz83bhFwpHzj0=
github.com/gobuffalo/depgen v0.0.0-20190329151759-d478694a28d3/go.mod h1:3STtPUQYuzV0gBVOY3vy6CfMm/ljR4pABfrTeHNLHUY=
@@ -190,6 +194,8 @@ github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkr
github.com/jinzhu/now v1.1.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jinzhu/now v1.1.2 h1:eVKgfIdy9b6zbWBMgFpfDPoAMifwSZagU9HmEU6zgiI=
github.com/jinzhu/now v1.1.2/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
@@ -286,8 +292,8 @@ github.com/yalp/jsonpath v0.0.0-20180802001716-5cc68e5049a0/go.mod h1:/LWChgwKmv
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.mongodb.org/mongo-driver v1.7.1 h1:jwqTeEM3x6L9xDXrCxN0Hbg7vdGfPBOTIkr0+/LYZDA=
go.mongodb.org/mongo-driver v1.7.1/go.mod h1:Q4oFMbo1+MSNqICAdYMlC/zSTrwCogR4R8NzkI+yfU8=
go.mongodb.org/mongo-driver v1.5.1 h1:9nOVLGDfOaZ9R0tBumx/BcuqkbFpyTCU2r/Po7A2azI=
go.mongodb.org/mongo-driver v1.5.1/go.mod h1:gRXCHX4Jo7J0IJ1oDQyUxF7jfy19UfxniMS4xxMmUqw=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@@ -356,8 +362,9 @@ golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLL
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7 h1:OgUuv8lsRpBibGNbSizVwKWlysjaNzmC9gYMhPVfqFM=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210421230115-4e50805a0758 h1:aEpZnXcAmXkd6AvLb2OPt+EN1Zu/8Ne3pCqPjja5PXY=
golang.org/x/net v0.0.0-20210421230115-4e50805a0758/go.mod h1:72T/g9IO56b78aLF+1Kcs5dz7/ng1VjMUvfKvpfy+jM=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -403,8 +410,9 @@ golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073 h1:8qxJSnu+7dRq6upnbntrmriWByIakBuct5OM/MdQC1M=
golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe h1:WdX7u8s3yOigWAhHEaDl8r9G+4XwFQEQFtBMYyN+kXQ=
golang.org/x/sys v0.0.0-20210420072515-93ed5bcd2bfe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
@@ -415,8 +423,9 @@ golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=


@@ -4,29 +4,21 @@ import (
"encoding/json"
"flag"
"fmt"
"io/ioutil"
"log"
"mizuserver/pkg/api"
"mizuserver/pkg/controllers"
"mizuserver/pkg/models"
"mizuserver/pkg/routes"
"mizuserver/pkg/utils"
"net/http"
"os"
"os/signal"
"path"
"path/filepath"
"plugin"
"sort"
"strings"
"github.com/gin-contrib/static"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
"github.com/romana/rlog"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"
tapApi "github.com/up9inc/mizu/tap/api"
"mizuserver/pkg/api"
"mizuserver/pkg/models"
"mizuserver/pkg/routes"
"mizuserver/pkg/sensitiveDataFiltering"
"mizuserver/pkg/utils"
"net/http"
"os"
"os/signal"
"strings"
)
var tapperMode = flag.Bool("tap", false, "Run in tapper mode without API")
@@ -37,34 +29,28 @@ var namespace = flag.String("namespace", "", "Resolve IPs if they belong to reso
var harsReaderMode = flag.Bool("hars-read", false, "Run in hars-read mode")
var harsDir = flag.String("hars-dir", "", "Directory to read hars from")
var extensions []*tapApi.Extension // global
var extensionsMap map[string]*tapApi.Extension // global
func main() {
flag.Parse()
loadExtensions()
hostMode := os.Getenv(shared.HostModeEnvVar) == "1"
tapOpts := &tap.TapOpts{HostMode: hostMode}
if !*tapperMode && !*apiServerMode && !*standaloneMode && !*harsReaderMode {
if !*tapperMode && !*apiServerMode && !*standaloneMode && !*harsReaderMode{
panic("One of the flags --tap, --api or --standalone or --hars-read must be provided")
}
filteringOptions := getTrafficFilteringOptions()
if *standaloneMode {
api.StartResolving(*namespace)
outputItemsChannel := make(chan *tapApi.OutputChannelItem)
filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
tap.StartPassiveTapper(tapOpts, outputItemsChannel, extensions, filteringOptions)
harOutputChannel, outboundLinkOutputChannel := tap.StartPassiveTapper(tapOpts)
filteredHarChannel := make(chan *tap.OutputChannelItem)
go filterItems(outputItemsChannel, filteredOutputItemsChannel, filteringOptions)
go api.StartReadingEntries(filteredOutputItemsChannel, nil, extensionsMap)
go filterHarItems(harOutputChannel, filteredHarChannel, getTrafficFilteringOptions())
go api.StartReadingEntries(filteredHarChannel, nil)
go api.StartReadingOutbound(outboundLinkOutputChannel)
hostApi(nil)
} else if *tapperMode {
rlog.Infof("Starting tapper, websocket address: %s", *apiServerAddress)
if *apiServerAddress == "" {
panic("API server address must be provided with --api-server-address when using --tap")
}
@@ -75,31 +61,31 @@ func main() {
rlog.Infof("Filtering for the following authorities: %v", tap.GetFilterIPs())
}
filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
tap.StartPassiveTapper(tapOpts, filteredOutputItemsChannel, extensions, filteringOptions)
socketConnection, err := utils.ConnectToSocketServer(*apiServerAddress)
harOutputChannel, outboundLinkOutputChannel := tap.StartPassiveTapper(tapOpts)
socketConnection, err := shared.ConnectToSocketServer(*apiServerAddress, shared.DEFAULT_SOCKET_RETRIES, shared.DEFAULT_SOCKET_RETRY_SLEEP_TIME, false)
if err != nil {
panic(fmt.Sprintf("Error connecting to socket server at %s %v", *apiServerAddress, err))
}
rlog.Infof("Connected successfully to websocket %s", *apiServerAddress)
go pipeTapChannelToSocket(socketConnection, filteredOutputItemsChannel)
go pipeTapChannelToSocket(socketConnection, harOutputChannel)
go pipeOutboundLinksChannelToSocket(socketConnection, outboundLinkOutputChannel)
} else if *apiServerMode {
api.StartResolving(*namespace)
outputItemsChannel := make(chan *tapApi.OutputChannelItem)
filteredOutputItemsChannel := make(chan *tapApi.OutputChannelItem)
socketHarOutChannel := make(chan *tap.OutputChannelItem, 1000)
filteredHarChannel := make(chan *tap.OutputChannelItem)
go filterItems(outputItemsChannel, filteredOutputItemsChannel, filteringOptions)
go api.StartReadingEntries(filteredOutputItemsChannel, nil, extensionsMap)
go filterHarItems(socketHarOutChannel, filteredHarChannel, getTrafficFilteringOptions())
go api.StartReadingEntries(filteredHarChannel, nil)
hostApi(outputItemsChannel)
hostApi(socketHarOutChannel)
} else if *harsReaderMode {
outputItemsChannel := make(chan *tapApi.OutputChannelItem, 1000)
filteredHarChannel := make(chan *tapApi.OutputChannelItem)
socketHarOutChannel := make(chan *tap.OutputChannelItem, 1000)
filteredHarChannel := make(chan *tap.OutputChannelItem)
go filterItems(outputItemsChannel, filteredHarChannel, filteringOptions)
go api.StartReadingEntries(filteredHarChannel, harsDir, extensionsMap)
go filterHarItems(socketHarOutChannel, filteredHarChannel, getTrafficFilteringOptions())
go api.StartReadingEntries(filteredHarChannel, harsDir)
hostApi(nil)
}
@@ -110,50 +96,7 @@ func main() {
rlog.Info("Exiting")
}
func loadExtensions() {
dir, _ := filepath.Abs(filepath.Dir(os.Args[0]))
extensionsDir := path.Join(dir, "./extensions/")
files, err := ioutil.ReadDir(extensionsDir)
if err != nil {
log.Fatal(err)
}
extensions = make([]*tapApi.Extension, len(files))
extensionsMap = make(map[string]*tapApi.Extension)
for i, file := range files {
filename := file.Name()
rlog.Infof("Loading extension: %s\n", filename)
extension := &tapApi.Extension{
Path: path.Join(extensionsDir, filename),
}
plug, _ := plugin.Open(extension.Path)
extension.Plug = plug
symDissector, err := plug.Lookup("Dissector")
var dissector tapApi.Dissector
var ok bool
dissector, ok = symDissector.(tapApi.Dissector)
if err != nil || !ok {
panic(fmt.Sprintf("Failed to load the extension: %s\n", extension.Path))
}
dissector.Register(extension)
extension.Dissector = dissector
extensions[i] = extension
extensionsMap[extension.Protocol.Name] = extension
}
sort.Slice(extensions, func(i, j int) bool {
return extensions[i].Protocol.Priority < extensions[j].Protocol.Priority
})
for _, extension := range extensions {
log.Printf("Extension Properties: %+v\n", extension)
}
controllers.InitExtensionsMap(extensionsMap)
}
func hostApi(socketHarOutputChannel chan<- *tapApi.OutputChannelItem) {
func hostApi(socketHarOutputChannel chan<- *tap.OutputChannelItem) {
app := gin.Default()
app.GET("/echo", func(c *gin.Context) {
@@ -161,10 +104,9 @@ func hostApi(socketHarOutputChannel chan<- *tapApi.OutputChannelItem) {
})
eventHandlers := api.RoutesEventHandlers{
SocketOutChannel: socketHarOutputChannel,
SocketHarOutChannel: socketHarOutputChannel,
}
app.Use(DisableRootStaticCache())
app.Use(static.ServeRoot("/", "./site"))
app.Use(CORSMiddleware()) // This has to be called after the static middleware, does not work if its called before
@@ -177,17 +119,6 @@ func hostApi(socketHarOutputChannel chan<- *tapApi.OutputChannelItem) {
utils.StartServer(app)
}
func DisableRootStaticCache() gin.HandlerFunc {
return func(c *gin.Context) {
if c.Request.RequestURI == "/" {
// Disable cache only for the main static route
c.Writer.Header().Set("Cache-Control", "no-store")
}
c.Next()
}
}
func CORSMiddleware() gin.HandlerFunc {
return func(c *gin.Context) {
c.Writer.Header().Set("Access-Control-Allow-Origin", "*")
@@ -204,45 +135,31 @@ func CORSMiddleware() gin.HandlerFunc {
}
}
func parseEnvVar(env string) map[string][]string {
var mapOfList map[string][]string
val, present := os.LookupEnv(env)
if !present {
return mapOfList
}
err := json.Unmarshal([]byte(val), &mapOfList)
if err != nil {
panic(fmt.Sprintf("env var %s's value of %s is invalid! must be map[string][]string %v", env, mapOfList, err))
}
return mapOfList
}
func getTapTargets() []string {
nodeName := os.Getenv(shared.NodeNameEnvVar)
tappedAddressesPerNodeDict := parseEnvVar(shared.TappedAddressesPerNodeDictEnvVar)
var tappedAddressesPerNodeDict map[string][]string
err := json.Unmarshal([]byte(os.Getenv(shared.TappedAddressesPerNodeDictEnvVar)), &tappedAddressesPerNodeDict)
if err != nil {
panic(fmt.Sprintf("env var %s's value of %s is invalid! must be map[string][]string %v", shared.TappedAddressesPerNodeDictEnvVar, tappedAddressesPerNodeDict, err))
}
return tappedAddressesPerNodeDict[nodeName]
}
func getTrafficFilteringOptions() *tapApi.TrafficFilteringOptions {
func getTrafficFilteringOptions() *shared.TrafficFilteringOptions {
filteringOptionsJson := os.Getenv(shared.MizuFilteringOptionsEnvVar)
if filteringOptionsJson == "" {
return &tapApi.TrafficFilteringOptions{
HealthChecksUserAgentHeaders: []string{},
}
return nil
}
var filteringOptions tapApi.TrafficFilteringOptions
var filteringOptions shared.TrafficFilteringOptions
err := json.Unmarshal([]byte(filteringOptionsJson), &filteringOptions)
if err != nil {
panic(fmt.Sprintf("env var %s's value of %s is invalid! json must match the api.TrafficFilteringOptions struct %v", shared.MizuFilteringOptionsEnvVar, filteringOptionsJson, err))
panic(fmt.Sprintf("env var %s's value of %s is invalid! json must match the shared.TrafficFilteringOptions struct %v", shared.MizuFilteringOptionsEnvVar, filteringOptionsJson, err))
}
return &filteringOptions
}
func filterItems(inChannel <-chan *tapApi.OutputChannelItem, outChannel chan *tapApi.OutputChannelItem, filterOptions *tapApi.TrafficFilteringOptions) {
func filterHarItems(inChannel <-chan *tap.OutputChannelItem, outChannel chan *tap.OutputChannelItem, filterOptions *shared.TrafficFilteringOptions) {
for message := range inChannel {
if message.ConnectionInfo.IsOutgoing && api.CheckIsServiceIP(message.ConnectionInfo.ServerIP) {
continue
@@ -252,23 +169,19 @@ func filterItems(inChannel <-chan *tapApi.OutputChannelItem, outChannel chan *ta
continue
}
if !filterOptions.DisableRedaction {
sensitiveDataFiltering.FilterSensitiveInfoFromHarRequest(message, filterOptions)
}
outChannel <- message
}
}
func isHealthCheckByUserAgent(item *tapApi.OutputChannelItem, userAgentsToIgnore []string) bool {
if item.Protocol.Name != "http" {
return false
}
request := item.Pair.Request.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
for _, header := range reqDetails["headers"].([]interface{}) {
h := header.(map[string]interface{})
if strings.ToLower(h["name"].(string)) == "user-agent" {
func isHealthCheckByUserAgent(message *tap.OutputChannelItem, userAgentsToIgnore []string) bool {
for _, header := range message.HarEntry.Request.Headers {
if strings.ToLower(header.Name) == "user-agent" {
for _, userAgent := range userAgentsToIgnore {
if strings.Contains(strings.ToLower(h["value"].(string)), strings.ToLower(userAgent)) {
if strings.Contains(strings.ToLower(header.Value), strings.ToLower(userAgent)) {
return true
}
}
@@ -278,7 +191,7 @@ func isHealthCheckByUserAgent(item *tapApi.OutputChannelItem, userAgentsToIgnore
return false
}
func pipeTapChannelToSocket(connection *websocket.Conn, messageDataChannel <-chan *tapApi.OutputChannelItem) {
func pipeTapChannelToSocket(connection *websocket.Conn, messageDataChannel <-chan *tap.OutputChannelItem) {
if connection == nil {
panic("Websocket connection is nil")
}
@@ -290,16 +203,32 @@ func pipeTapChannelToSocket(connection *websocket.Conn, messageDataChannel <-cha
for messageData := range messageDataChannel {
marshaledData, err := models.CreateWebsocketTappedEntryMessage(messageData)
if err != nil {
rlog.Errorf("error converting message to json %v, err: %s, (%v,%+v)", messageData, err, err, err)
rlog.Infof("error converting message to json %s, (%v,%+v)\n", err, err, err)
continue
}
// NOTE: This is where the `*tapApi.OutputChannelItem` leaves the code
// and goes into the intermediate WebSocket.
err = connection.WriteMessage(websocket.TextMessage, marshaledData)
if err != nil {
rlog.Errorf("error sending message through socket server %v, err: %s, (%v,%+v)", messageData, err, err, err)
rlog.Infof("error sending message through socket server %s, (%v,%+v)\n", err, err, err)
continue
}
}
}
func pipeOutboundLinksChannelToSocket(connection *websocket.Conn, outboundLinkChannel <-chan *tap.OutboundLink) {
for outboundLink := range outboundLinkChannel {
if outboundLink.SuggestedProtocol == tap.TLSProtocol {
marshaledData, err := models.CreateWebsocketOutboundLinkMessage(outboundLink)
if err != nil {
rlog.Infof("Error converting outbound link to json %s, (%v,%+v)", err, err, err)
continue
}
err = connection.WriteMessage(websocket.TextMessage, marshaledData)
if err != nil {
rlog.Infof("error sending outbound link message through socket server %s, (%v,%+v)", err, err, err)
continue
}
}
}
}


@@ -5,21 +5,21 @@ import (
"context"
"encoding/json"
"fmt"
"mizuserver/pkg/database"
"mizuserver/pkg/holder"
"mizuserver/pkg/providers"
"net/url"
"os"
"path"
"sort"
"strings"
"time"
"go.mongodb.org/mongo-driver/bson/primitive"
"github.com/google/martian/har"
"github.com/romana/rlog"
tapApi "github.com/up9inc/mizu/tap/api"
"github.com/up9inc/mizu/tap"
"go.mongodb.org/mongo-driver/bson/primitive"
"mizuserver/pkg/database"
"mizuserver/pkg/models"
"mizuserver/pkg/resolver"
"mizuserver/pkg/utils"
@@ -49,19 +49,17 @@ func StartResolving(namespace string) {
holder.SetResolver(res)
}
func StartReadingEntries(harChannel <-chan *tapApi.OutputChannelItem, workingDir *string, extensionsMap map[string]*tapApi.Extension) {
func StartReadingEntries(harChannel <-chan *tap.OutputChannelItem, workingDir *string) {
if workingDir != nil && *workingDir != "" {
startReadingFiles(*workingDir)
} else {
startReadingChannel(harChannel, extensionsMap)
startReadingChannel(harChannel)
}
}
func startReadingFiles(workingDir string) {
if err := os.MkdirAll(workingDir, os.ModePerm); err != nil {
rlog.Errorf("Failed to make dir: %s, err: %v", workingDir, err)
return
}
err := os.MkdirAll(workingDir, os.ModePerm)
utils.CheckErr(err)
for true {
dir, _ := os.Open(workingDir)
@@ -89,42 +87,48 @@ func startReadingFiles(workingDir string) {
decErr := json.NewDecoder(bufio.NewReader(file)).Decode(&inputHar)
utils.CheckErr(decErr)
for _, entry := range inputHar.Log.Entries {
time.Sleep(time.Millisecond * 250)
connectionInfo := &tap.ConnectionInfo{
ClientIP: fileInfo.Name(),
ClientPort: "",
ServerIP: "",
ServerPort: "",
IsOutgoing: false,
}
saveHarToDb(entry, connectionInfo)
}
rmErr := os.Remove(inputFilePath)
utils.CheckErr(rmErr)
}
}
func startReadingChannel(outputItems <-chan *tapApi.OutputChannelItem, extensionsMap map[string]*tapApi.Extension) {
func startReadingChannel(outputItems <-chan *tap.OutputChannelItem) {
if outputItems == nil {
panic("Channel of captured messages is nil")
}
for item := range outputItems {
providers.EntryAdded()
extension := extensionsMap[item.Protocol.Name]
resolvedSource, resolvedDestionation := resolveIP(item.ConnectionInfo)
mizuEntry := extension.Dissector.Analyze(item, primitive.NewObjectID().Hex(), resolvedSource, resolvedDestionation)
baseEntry := extension.Dissector.Summarize(mizuEntry)
mizuEntry.EstimatedSizeBytes = getEstimatedEntrySizeBytes(mizuEntry)
database.CreateEntry(mizuEntry)
if extension.Protocol.Name == "http" {
var pair tapApi.RequestResponsePair
json.Unmarshal([]byte(mizuEntry.Entry), &pair)
harEntry, err := utils.NewEntry(&pair)
if err == nil {
rules, _ := models.RunValidationRulesState(*harEntry, mizuEntry.Service)
baseEntry.Rules = rules
baseEntry.Latency = mizuEntry.ElapsedTime
}
}
baseEntryBytes, _ := models.CreateBaseEntryWebSocketMessage(baseEntry)
BroadcastToBrowserClients(baseEntryBytes)
saveHarToDb(item.HarEntry, item.ConnectionInfo)
}
}
func resolveIP(connectionInfo *tapApi.ConnectionInfo) (resolvedSource string, resolvedDestination string) {
func StartReadingOutbound(outboundLinkChannel <-chan *tap.OutboundLink) {
// tcpStreamFactory will block on write to channel. Empty channel to unblock.
// TODO: Make write to channel optional.
for range outboundLinkChannel {
}
}
func saveHarToDb(entry *har.Entry, connectionInfo *tap.ConnectionInfo) {
entryBytes, _ := json.Marshal(entry)
serviceName, urlPath := getServiceNameFromUrl(entry.Request.URL)
entryId := primitive.NewObjectID().Hex()
var (
resolvedSource string
resolvedDestination string
)
if k8sResolver != nil {
unresolvedSource := connectionInfo.ClientIP
resolvedSource = k8sResolver.Resolve(unresolvedSource)
@@ -143,18 +147,46 @@ func resolveIP(connectionInfo *tapApi.ConnectionInfo) (resolvedSource string, re
}
}
}
return resolvedSource, resolvedDestination
mizuEntry := models.MizuEntry{
EntryId: entryId,
Entry: string(entryBytes), // simple way to store it and not convert to bytes
Service: serviceName,
Url: entry.Request.URL,
Path: urlPath,
Method: entry.Request.Method,
Status: entry.Response.Status,
RequestSenderIp: connectionInfo.ClientIP,
Timestamp: entry.StartedDateTime.UnixNano() / int64(time.Millisecond),
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
IsOutgoing: connectionInfo.IsOutgoing,
}
mizuEntry.EstimatedSizeBytes = getEstimatedEntrySizeBytes(mizuEntry)
database.CreateEntry(&mizuEntry)
baseEntry := models.BaseEntryDetails{}
if err := models.GetEntry(&mizuEntry, &baseEntry); err != nil {
return
}
baseEntry.Rules = models.RunValidationRulesState(*entry, serviceName)
baseEntry.Latency = entry.Timings.Receive
baseEntryBytes, _ := models.CreateBaseEntryWebSocketMessage(&baseEntry)
BroadcastToBrowserClients(baseEntryBytes)
}
func getServiceNameFromUrl(inputUrl string) (string, string) {
parsed, err := url.Parse(inputUrl)
utils.CheckErr(err)
return fmt.Sprintf("%s://%s", parsed.Scheme, parsed.Host), parsed.Path
}
func CheckIsServiceIP(address string) bool {
if k8sResolver == nil {
return false
}
return k8sResolver.CheckIsServiceIP(address)
}
// gives a rough estimate of the size this will take up in the db, good enough for maintaining db size limit accurately
func getEstimatedEntrySizeBytes(mizuEntry *tapApi.MizuEntry) int {
func getEstimatedEntrySizeBytes(mizuEntry models.MizuEntry) int {
sizeBytes := len(mizuEntry.Entry)
sizeBytes += len(mizuEntry.EntryId)
sizeBytes += len(mizuEntry.Service)


@@ -8,10 +8,9 @@ import (
"mizuserver/pkg/up9"
"sync"
tapApi "github.com/up9inc/mizu/tap/api"
"github.com/romana/rlog"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"
)
var browserClientSocketUUIDs = make([]int, 0)
@@ -19,7 +18,7 @@ var socketListLock = sync.Mutex{}
type RoutesEventHandlers struct {
EventHandlers
SocketOutChannel chan<- *tapApi.OutputChannelItem
SocketHarOutChannel chan<- *tap.OutputChannelItem
}
func init() {
@@ -29,7 +28,6 @@ func init() {
func (h *RoutesEventHandlers) WebSocketConnect(socketId int, isTapper bool) {
if isTapper {
rlog.Infof("Websocket event - Tapper connected, socket ID: %d", socketId)
providers.TapperAdded()
} else {
rlog.Infof("Websocket event - Browser socket connected, socket ID: %d", socketId)
socketListLock.Lock()
@@ -41,7 +39,6 @@ func (h *RoutesEventHandlers) WebSocketConnect(socketId int, isTapper bool) {
func (h *RoutesEventHandlers) WebSocketDisconnect(socketId int, isTapper bool) {
if isTapper {
rlog.Infof("Websocket event - Tapper disconnected, socket ID: %d", socketId)
providers.TapperRemoved()
} else {
rlog.Infof("Websocket event - Browser socket disconnected, socket ID: %d", socketId)
socketListLock.Lock()
@@ -74,8 +71,7 @@ func (h *RoutesEventHandlers) WebSocketMessage(_ int, message []byte) {
if err != nil {
rlog.Infof("Could not unmarshal message of message type %s %v\n", socketMessageBase.MessageType, err)
} else {
// NOTE: This is where the message comes back from the intermediate WebSocket to code.
h.SocketOutChannel <- tappedEntryMessage.Data
h.SocketHarOutChannel <- tappedEntryMessage.Data
}
case shared.WebSocketMessageTypeUpdateStatus:
var statusMessage shared.WebSocketStatusMessage


@@ -10,22 +10,14 @@ import (
"mizuserver/pkg/utils"
"mizuserver/pkg/validation"
"net/http"
"strings"
"time"
"github.com/google/martian/har"
"github.com/gin-gonic/gin"
"github.com/google/martian/har"
"github.com/romana/rlog"
tapApi "github.com/up9inc/mizu/tap/api"
)
var extensionsMap map[string]*tapApi.Extension // global
func InitExtensionsMap(ref map[string]*tapApi.Extension) {
extensionsMap = ref
}
func GetEntries(c *gin.Context) {
entriesFilter := &models.EntriesFilter{}
@@ -39,7 +31,7 @@ func GetEntries(c *gin.Context) {
order := database.OperatorToOrderMapping[entriesFilter.Operator]
operatorSymbol := database.OperatorToSymbolMapping[entriesFilter.Operator]
var entries []tapApi.MizuEntry
var entries []models.MizuEntry
database.GetEntriesTable().
Order(fmt.Sprintf("timestamp %s", order)).
Where(fmt.Sprintf("timestamp %s %v", operatorSymbol, entriesFilter.Timestamp)).
@@ -48,13 +40,13 @@ func GetEntries(c *gin.Context) {
Find(&entries)
if len(entries) > 0 && order == database.OrderDesc {
// the entries always order from oldest to newest - we should reverse
// the entries always order from oldest to newest so we should revers
utils.ReverseSlice(entries)
}
baseEntries := make([]tapApi.BaseEntryDetails, 0)
baseEntries := make([]models.BaseEntryDetails, 0)
for _, data := range entries {
harEntry := tapApi.BaseEntryDetails{}
harEntry := models.BaseEntryDetails{}
if err := models.GetEntry(&data, &harEntry); err != nil {
continue
}
@@ -64,6 +56,93 @@ func GetEntries(c *gin.Context) {
c.JSON(http.StatusOK, baseEntries)
}
func GetHARs(c *gin.Context) {
entriesFilter := &models.HarFetchRequestQuery{}
order := database.OrderDesc
if err := c.BindQuery(entriesFilter); err != nil {
c.JSON(http.StatusBadRequest, err)
}
err := validation.Validate(entriesFilter)
if err != nil {
c.JSON(http.StatusBadRequest, err)
}
var timestampFrom, timestampTo int64
if entriesFilter.From < 0 {
timestampFrom = 0
} else {
timestampFrom = entriesFilter.From
}
if entriesFilter.To <= 0 {
timestampTo = time.Now().UnixNano() / int64(time.Millisecond)
} else {
timestampTo = entriesFilter.To
}
var entries []models.MizuEntry
database.GetEntriesTable().
Where(fmt.Sprintf("timestamp BETWEEN %v AND %v", timestampFrom, timestampTo)).
Order(fmt.Sprintf("timestamp %s", order)).
Find(&entries)
if len(entries) > 0 {
// the entries always order from oldest to newest so we should revers
utils.ReverseSlice(entries)
}
harsObject := map[string]*models.ExtendedHAR{}
for _, entryData := range entries {
var harEntry har.Entry
_ = json.Unmarshal([]byte(entryData.Entry), &harEntry)
if entryData.ResolvedDestination != "" {
harEntry.Request.URL = utils.SetHostname(harEntry.Request.URL, entryData.ResolvedDestination)
}
var fileName string
sourceOfEntry := entryData.ResolvedSource
if sourceOfEntry != "" {
// naively assumes the proper service source is http
sourceOfEntry = fmt.Sprintf("http://%s", sourceOfEntry)
//replace / from the file name cause they end up creating a corrupted folder
fileName = fmt.Sprintf("%s.har", strings.ReplaceAll(sourceOfEntry, "/", "_"))
} else {
fileName = "unknown_source.har"
}
if harOfSource, ok := harsObject[fileName]; ok {
harOfSource.Log.Entries = append(harOfSource.Log.Entries, &harEntry)
} else {
var entriesHar []*har.Entry
entriesHar = append(entriesHar, &harEntry)
harsObject[fileName] = &models.ExtendedHAR{
Log: &models.ExtendedLog{
Version: "1.2",
Creator: &models.ExtendedCreator{
Creator: &har.Creator{
Name: "mizu",
Version: "0.0.2",
},
},
Entries: entriesHar,
},
}
// leave undefined when no source is present, otherwise modeler assumes source is empty string ""
if sourceOfEntry != "" {
harsObject[fileName].Log.Creator.Source = &sourceOfEntry
}
}
}
retObj := map[string][]byte{}
for k, v := range harsObject {
bytesData, _ := json.Marshal(v)
retObj[k] = bytesData
}
buffer := utils.ZipData(retObj)
c.Data(http.StatusOK, "application/octet-stream", buffer.Bytes())
}
func UploadEntries(c *gin.Context) {
rlog.Infof("Upload entries - started\n")
@@ -115,56 +194,45 @@ func GetFullEntries(c *gin.Context) {
timestampTo = entriesFilter.To
}
entriesArray := database.GetEntriesFromDb(timestampFrom, timestampTo, nil)
result := make([]har.Entry, 0)
entriesArray := database.GetEntriesFromDb(timestampFrom, timestampTo)
result := make([]models.FullEntryDetails, 0)
for _, data := range entriesArray {
var pair tapApi.RequestResponsePair
if err := json.Unmarshal([]byte(data.Entry), &pair); err != nil {
harEntry := models.FullEntryDetails{}
if err := models.GetEntry(&data, &harEntry); err != nil {
continue
}
harEntry, err := utils.NewEntry(&pair)
if err != nil {
continue
}
result = append(result, *harEntry)
result = append(result, harEntry)
}
c.JSON(http.StatusOK, result)
}
func GetEntry(c *gin.Context) {
var entryData tapApi.MizuEntry
var entryData models.MizuEntry
database.GetEntriesTable().
Where(map[string]string{"entryId": c.Param("entryId")}).
First(&entryData)
extension := extensionsMap[entryData.ProtocolName]
protocol, representation, bodySize, _ := extension.Dissector.Represent(&entryData)
var rules []map[string]interface{}
if entryData.ProtocolName == "http" {
var pair tapApi.RequestResponsePair
json.Unmarshal([]byte(entryData.Entry), &pair)
harEntry, _ := utils.NewEntry(&pair)
_, rulesMatched := models.RunValidationRulesState(*harEntry, entryData.Service)
inrec, _ := json.Marshal(rulesMatched)
json.Unmarshal(inrec, &rules)
fullEntry := models.FullEntryDetails{}
if err := models.GetEntry(&entryData, &fullEntry); err != nil {
c.JSON(http.StatusInternalServerError, map[string]interface{}{
"error": true,
"msg": "Can't get entry details",
})
}
c.JSON(http.StatusOK, tapApi.MizuEntryWrapper{
Protocol: protocol,
Representation: string(representation),
BodySize: bodySize,
Data: entryData,
Rules: rules,
})
fullEntryWithPolicy := models.FullEntryWithPolicy{}
if err := models.GetEntry(&entryData, &fullEntryWithPolicy); err != nil {
c.JSON(http.StatusInternalServerError, map[string]interface{}{
"error": true,
"msg": "Can't get entry details",
})
}
c.JSON(http.StatusOK, fullEntryWithPolicy)
}
func DeleteAllEntries(c *gin.Context) {
database.GetEntriesTable().
Where("1 = 1").
Delete(&tapApi.MizuEntry{})
Delete(&models.MizuEntry{})
c.JSON(http.StatusOK, map[string]string{
"msg": "Success",


@@ -30,7 +30,3 @@ func PostTappedPods(c *gin.Context) {
api.BroadcastToBrowserClients(jsonBytes)
}
}
func GetTappersCount(c *gin.Context) {
c.JSON(http.StatusOK, providers.TappersCount)
}


@@ -2,18 +2,16 @@ package database
import (
"fmt"
"mizuserver/pkg/utils"
"time"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
"gorm.io/gorm/logger"
tapApi "github.com/up9inc/mizu/tap/api"
"mizuserver/pkg/models"
"mizuserver/pkg/utils"
"time"
)
const (
DBPath = "./entries.db"
DBPath = "./entries.db"
OrderDesc = "desc"
OrderAsc = "asc"
LT = "lt"
@@ -21,8 +19,8 @@ const (
)
var (
DB *gorm.DB
IsDBLocked = false
DB *gorm.DB
IsDBLocked = false
OperatorToSymbolMapping = map[string]string{
LT: "<",
GT: ">",
@@ -42,7 +40,7 @@ func GetEntriesTable() *gorm.DB {
return DB.Table("mizu_entries")
}
func CreateEntry(entry *tapApi.MizuEntry) {
func CreateEntry(entry *models.MizuEntry) {
if IsDBLocked {
return
}
@@ -53,20 +51,15 @@ func initDataBase(databasePath string) *gorm.DB {
temp, _ := gorm.Open(sqlite.Open(databasePath), &gorm.Config{
Logger: &utils.TruncatingLogger{LogLevel: logger.Warn, SlowThreshold: 500 * time.Millisecond},
})
_ = temp.AutoMigrate(&tapApi.MizuEntry{}) // this will ensure table is created
_ = temp.AutoMigrate(&models.MizuEntry{}) // this will ensure table is created
return temp
}
func GetEntriesFromDb(timestampFrom int64, timestampTo int64, protocolName *string) []tapApi.MizuEntry {
order := OrderDesc
protocolNameCondition := "1 = 1"
if protocolName != nil {
protocolNameCondition = fmt.Sprintf("protocolName = '%s'", *protocolName)
}
var entries []tapApi.MizuEntry
func GetEntriesFromDb(timestampFrom int64, timestampTo int64) []models.MizuEntry {
order := OrderDesc
var entries []models.MizuEntry
GetEntriesTable().
Where(protocolNameCondition).
Where(fmt.Sprintf("timestamp BETWEEN %v AND %v", timestampFrom, timestampTo)).
Order(fmt.Sprintf("timestamp %s", order)).
Find(&entries)
@@ -77,3 +70,4 @@ func GetEntriesFromDb(timestampFrom int64, timestampTo int64, protocolName *stri
}
return entries
}


@@ -1,17 +1,16 @@
package database
import (
"log"
"os"
"strconv"
"time"
"github.com/fsnotify/fsnotify"
"github.com/romana/rlog"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/debounce"
"github.com/up9inc/mizu/shared/units"
tapApi "github.com/up9inc/mizu/tap/api"
"log"
"mizuserver/pkg/models"
"os"
"strconv"
"time"
)
const percentageOfMaxSizeBytesToPrune = 15
@@ -100,7 +99,7 @@ func pruneOldEntries(currentFileSize int64) {
if bytesToBeRemoved >= amountOfBytesToTrim {
break
}
var entry tapApi.MizuEntry
var entry models.MizuEntry
err = DB.ScanRows(rows, &entry)
if err != nil {
rlog.Errorf("Error scanning db row: %v", err)
@@ -112,7 +111,7 @@ func pruneOldEntries(currentFileSize int64) {
}
if len(entryIdsToRemove) > 0 {
GetEntriesTable().Where(entryIdsToRemove).Delete(tapApi.MizuEntry{})
GetEntriesTable().Where(entryIdsToRemove).Delete(models.MizuEntry{})
// VACUUM causes sqlite to shrink the db file after rows have been deleted, the db file will not shrink without this
DB.Exec("VACUUM")
rlog.Errorf("Removed %d rows and cleared %s", len(entryIdsToRemove), units.BytesToHumanReadable(bytesToBeRemoved))


@@ -3,19 +3,123 @@ package models
import (
"encoding/json"
tapApi "github.com/up9inc/mizu/tap/api"
"mizuserver/pkg/rules"
"mizuserver/pkg/utils"
"time"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"
)
func GetEntry(r *tapApi.MizuEntry, v tapApi.DataUnmarshaler) error {
type DataUnmarshaler interface {
UnmarshalData(*MizuEntry) error
}
func GetEntry(r *MizuEntry, v DataUnmarshaler) error {
return v.UnmarshalData(r)
}
type MizuEntry struct {
ID uint `gorm:"primarykey"`
CreatedAt time.Time
UpdatedAt time.Time
Entry string `json:"entry,omitempty" gorm:"column:entry"`
EntryId string `json:"entryId" gorm:"column:entryId"`
Url string `json:"url" gorm:"column:url"`
Method string `json:"method" gorm:"column:method"`
Status int `json:"status" gorm:"column:status"`
RequestSenderIp string `json:"requestSenderIp" gorm:"column:requestSenderIp"`
Service string `json:"service" gorm:"column:service"`
Timestamp int64 `json:"timestamp" gorm:"column:timestamp"`
Path string `json:"path" gorm:"column:path"`
ResolvedSource string `json:"resolvedSource,omitempty" gorm:"column:resolvedSource"`
ResolvedDestination string `json:"resolvedDestination,omitempty" gorm:"column:resolvedDestination"`
IsOutgoing bool `json:"isOutgoing,omitempty" gorm:"column:isOutgoing"`
EstimatedSizeBytes int `json:"-" gorm:"column:estimatedSizeBytes"`
}
type BaseEntryDetails struct {
Id string `json:"id,omitempty"`
Url string `json:"url,omitempty"`
RequestSenderIp string `json:"requestSenderIp,omitempty"`
Service string `json:"service,omitempty"`
Path string `json:"path,omitempty"`
StatusCode int `json:"statusCode,omitempty"`
Method string `json:"method,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency,omitempty"`
Rules ApplicableRules `json:"rules,omitempty"`
}
type ApplicableRules struct {
Latency int64 `json:"latency,omitempty"`
Status bool `json:"status,omitempty"`
NumberOfRules int `json:"numberOfRules,omitempty"`
}
func NewApplicableRules(status bool, latency int64, number int) ApplicableRules {
ar := ApplicableRules{}
ar.Status = status
ar.Latency = latency
ar.NumberOfRules = number
return ar
}
type FullEntryDetails struct {
har.Entry
}
type FullEntryDetailsExtra struct {
har.Entry
}
func (bed *BaseEntryDetails) UnmarshalData(entry *MizuEntry) error {
entryUrl := entry.Url
service := entry.Service
if entry.ResolvedDestination != "" {
entryUrl = utils.SetHostname(entryUrl, entry.ResolvedDestination)
service = utils.SetHostname(service, entry.ResolvedDestination)
}
bed.Id = entry.EntryId
bed.Url = entryUrl
bed.Service = service
bed.Path = entry.Path
bed.StatusCode = entry.Status
bed.Method = entry.Method
bed.Timestamp = entry.Timestamp
bed.RequestSenderIp = entry.RequestSenderIp
bed.IsOutgoing = entry.IsOutgoing
return nil
}
func (fed *FullEntryDetails) UnmarshalData(entry *MizuEntry) error {
if err := json.Unmarshal([]byte(entry.Entry), &fed.Entry); err != nil {
return err
}
if entry.ResolvedDestination != "" {
fed.Entry.Request.URL = utils.SetHostname(fed.Entry.Request.URL, entry.ResolvedDestination)
}
return nil
}
func (fedex *FullEntryDetailsExtra) UnmarshalData(entry *MizuEntry) error {
if err := json.Unmarshal([]byte(entry.Entry), &fedex.Entry); err != nil {
return err
}
if entry.ResolvedSource != "" {
fedex.Entry.Request.Headers = append(fedex.Request.Headers, har.Header{Name: "x-mizu-source", Value: entry.ResolvedSource})
}
if entry.ResolvedDestination != "" {
fedex.Entry.Request.Headers = append(fedex.Request.Headers, har.Header{Name: "x-mizu-destination", Value: entry.ResolvedDestination})
fedex.Entry.Request.URL = utils.SetHostname(fedex.Entry.Request.URL, entry.ResolvedDestination)
}
return nil
}
type EntriesFilter struct {
Limit int `form:"limit" validate:"required,min=1,max=200"`
Operator string `form:"operator" validate:"required,oneof='lt' 'gt'"`
@@ -34,12 +138,12 @@ type HarFetchRequestQuery struct {
type WebSocketEntryMessage struct {
*shared.WebSocketMessageMetadata
Data *tapApi.BaseEntryDetails `json:"data,omitempty"`
Data *BaseEntryDetails `json:"data,omitempty"`
}
type WebSocketTappedEntryMessage struct {
*shared.WebSocketMessageMetadata
Data *tapApi.OutputChannelItem
Data *tap.OutputChannelItem
}
type WebsocketOutboundLinkMessage struct {
@@ -47,7 +151,7 @@ type WebsocketOutboundLinkMessage struct {
Data *tap.OutboundLink
}
func CreateBaseEntryWebSocketMessage(base *tapApi.BaseEntryDetails) ([]byte, error) {
func CreateBaseEntryWebSocketMessage(base *BaseEntryDetails) ([]byte, error) {
message := &WebSocketEntryMessage{
WebSocketMessageMetadata: &shared.WebSocketMessageMetadata{
MessageType: shared.WebSocketMessageTypeEntry,
@@ -57,7 +161,7 @@ func CreateBaseEntryWebSocketMessage(base *tapApi.BaseEntryDetails) ([]byte, err
return json.Marshal(message)
}
func CreateWebsocketTappedEntryMessage(base *tapApi.OutputChannelItem) ([]byte, error) {
func CreateWebsocketTappedEntryMessage(base *tap.OutputChannelItem) ([]byte, error) {
message := &WebSocketTappedEntryMessage{
WebSocketMessageMetadata: &shared.WebSocketMessageMetadata{
MessageType: shared.WebSocketMessageTypeTappedEntry,
@@ -97,8 +201,26 @@ type ExtendedCreator struct {
Source *string `json:"_source"`
}
func RunValidationRulesState(harEntry har.Entry, service string) (tapApi.ApplicableRules, []rules.RulesMatched) {
resultPolicyToSend := rules.MatchRequestPolicy(harEntry, service)
statusPolicyToSend, latency, numberOfRules := rules.PassedValidationRules(resultPolicyToSend)
return tapApi.ApplicableRules{Status: statusPolicyToSend, Latency: latency, NumberOfRules: numberOfRules}, resultPolicyToSend
type FullEntryWithPolicy struct {
RulesMatched []rules.RulesMatched `json:"rulesMatched,omitempty"`
Entry har.Entry `json:"entry"`
Service string `json:"service"`
}
func (fewp *FullEntryWithPolicy) UnmarshalData(entry *MizuEntry) error {
if err := json.Unmarshal([]byte(entry.Entry), &fewp.Entry); err != nil {
return err
}
_, resultPolicyToSend := rules.MatchRequestPolicy(fewp.Entry, entry.Service)
fewp.RulesMatched = resultPolicyToSend
fewp.Service = entry.Service
return nil
}
func RunValidationRulesState(harEntry har.Entry, service string) ApplicableRules {
numberOfRules, resultPolicyToSend := rules.MatchRequestPolicy(harEntry, service)
statusPolicyToSend, latency, numberOfRules := rules.PassedValidationRules(resultPolicyToSend, numberOfRules)
ar := NewApplicableRules(statusPolicyToSend, latency, numberOfRules)
return ar
}


@@ -4,18 +4,14 @@ import (
"github.com/patrickmn/go-cache"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap"
"sync"
"time"
)
const tlsLinkRetainmentTime = time.Minute * 15
var (
TappersCount int
TapStatus shared.TapStatus
TapStatus shared.TapStatus
RecentTLSLinks = cache.New(tlsLinkRetainmentTime, tlsLinkRetainmentTime)
tappersCountLock = sync.Mutex{}
)
func GetAllRecentTLSAddresses() []string {
@@ -30,15 +26,3 @@ func GetAllRecentTLSAddresses() []string {
return recentTLSLinks
}
func TapperAdded() {
tappersCountLock.Lock()
TappersCount++
tappersCountLock.Unlock()
}
func TapperRemoved() {
tappersCountLock.Lock()
TappersCount--
tappersCountLock.Unlock()
}


@@ -4,11 +4,10 @@ import (
"context"
"errors"
"fmt"
"github.com/romana/rlog"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
cmap "github.com/orcaman/concurrent-map"
"github.com/orcaman/concurrent-map"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/watch"


@@ -15,6 +15,8 @@ func EntriesRoutes(ginApp *gin.Engine) {
routeGroup.GET("/uploadEntries", controllers.UploadEntries)
routeGroup.GET("/resolving", controllers.GetCurrentResolvingInformation)
routeGroup.GET("/har", controllers.GetHARs)
routeGroup.GET("/resetDB", controllers.DeleteAllEntries) // get single (full) entry
routeGroup.GET("/generalStats", controllers.GetGeneralStats) // get general stats about entries in DB


@@ -9,6 +9,4 @@ func StatusRoutes(ginApp *gin.Engine) {
routeGroup := ginApp.Group("/status")
routeGroup.POST("/tappedPods", controllers.PostTappedPods)
routeGroup.GET("/tappersCount", controllers.GetTappersCount)
}


@@ -1,10 +1,8 @@
package rules
import (
"encoding/base64"
"encoding/json"
"fmt"
"github.com/romana/rlog"
"reflect"
"regexp"
"strings"
@@ -43,7 +41,7 @@ func ValidateService(serviceFromRule string, service string) bool {
return true
}
func MatchRequestPolicy(harEntry har.Entry, service string) []RulesMatched {
func MatchRequestPolicy(harEntry har.Entry, service string) (int, []RulesMatched) {
enforcePolicy, _ := shared.DecodeEnforcePolicy(fmt.Sprintf("%s/%s", shared.RulePolicyPath, shared.RulePolicyFileName))
var resultPolicyToSend []RulesMatched
for _, rule := range enforcePolicy.Rules {
@@ -52,8 +50,7 @@ func MatchRequestPolicy(harEntry har.Entry, service string) []RulesMatched {
}
if rule.Type == "json" {
var bodyJsonMap interface{}
contentTextDecoded, _ := base64.StdEncoding.DecodeString(string(harEntry.Response.Content.Text))
if err := json.Unmarshal(contentTextDecoded, &bodyJsonMap); err != nil {
if err := json.Unmarshal(harEntry.Response.Content.Text, &bodyJsonMap); err != nil {
continue
}
out, err := jsonpath.Read(bodyJsonMap, rule.Key)
@@ -66,7 +63,6 @@ func MatchRequestPolicy(harEntry har.Entry, service string) []RulesMatched {
if err != nil {
continue
}
rlog.Info(matchValue, rule.Value)
} else {
val := fmt.Sprint(out)
matchValue, err = regexp.MatchString(rule.Value, val)
@@ -93,28 +89,22 @@ func MatchRequestPolicy(harEntry har.Entry, service string) []RulesMatched {
resultPolicyToSend = appendRulesMatched(resultPolicyToSend, true, rule)
}
}
return resultPolicyToSend
return len(enforcePolicy.Rules), resultPolicyToSend
}
func PassedValidationRules(rulesMatched []RulesMatched) (bool, int64, int) {
var numberOfRulesMatched = len(rulesMatched)
var latency int64 = -1
if numberOfRulesMatched == 0 {
return false, 0, numberOfRulesMatched
func PassedValidationRules(rulesMatched []RulesMatched, numberOfRules int) (bool, int64, int) {
if len(rulesMatched) == 0 {
return false, 0, 0
}
for _, rule := range rulesMatched {
if rule.Matched == false {
return false, latency, numberOfRulesMatched
} else {
if strings.ToLower(rule.Rule.Type) == "latency" {
if rule.Rule.Latency < latency || latency == -1 {
latency = rule.Rule.Latency
}
}
return false, -1, len(rulesMatched)
}
}
return true, latency, numberOfRulesMatched
for _, rule := range rulesMatched {
if strings.ToLower(rule.Rule.Type) == "latency" {
return true, rule.Rule.Latency, len(rulesMatched)
}
}
return true, -1, len(rulesMatched)
}


@@ -0,0 +1,200 @@
package sensitiveDataFiltering
import (
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"github.com/up9inc/mizu/tap"
"net/url"
"strings"
"github.com/beevik/etree"
"github.com/google/martian/har"
"github.com/up9inc/mizu/shared"
)
func FilterSensitiveInfoFromHarRequest(harOutputItem *tap.OutputChannelItem, options *shared.TrafficFilteringOptions) {
harOutputItem.HarEntry.Request.Headers = filterHarHeaders(harOutputItem.HarEntry.Request.Headers)
harOutputItem.HarEntry.Response.Headers = filterHarHeaders(harOutputItem.HarEntry.Response.Headers)
harOutputItem.HarEntry.Request.Cookies = make([]har.Cookie, 0, 0)
harOutputItem.HarEntry.Response.Cookies = make([]har.Cookie, 0, 0)
harOutputItem.HarEntry.Request.URL = filterUrl(harOutputItem.HarEntry.Request.URL)
for i, queryString := range harOutputItem.HarEntry.Request.QueryString {
if isFieldNameSensitive(queryString.Name) {
harOutputItem.HarEntry.Request.QueryString[i].Value = maskedFieldPlaceholderValue
}
}
if harOutputItem.HarEntry.Request.PostData != nil {
requestContentType := getContentTypeHeaderValue(harOutputItem.HarEntry.Request.Headers)
filteredRequestBody, err := filterHttpBody([]byte(harOutputItem.HarEntry.Request.PostData.Text), requestContentType, options)
if err == nil {
harOutputItem.HarEntry.Request.PostData.Text = string(filteredRequestBody)
}
}
if harOutputItem.HarEntry.Response.Content != nil {
responseContentType := getContentTypeHeaderValue(harOutputItem.HarEntry.Response.Headers)
filteredResponseBody, err := filterHttpBody(harOutputItem.HarEntry.Response.Content.Text, responseContentType, options)
if err == nil {
harOutputItem.HarEntry.Response.Content.Text = filteredResponseBody
}
}
}
func filterHarHeaders(headers []har.Header) []har.Header {
newHeaders := make([]har.Header, 0)
for i, header := range headers {
if strings.ToLower(header.Name) == "cookie" {
continue
} else if isFieldNameSensitive(header.Name) {
newHeaders = append(newHeaders, har.Header{Name: header.Name, Value: maskedFieldPlaceholderValue})
headers[i].Value = maskedFieldPlaceholderValue
} else {
newHeaders = append(newHeaders, header)
}
}
return newHeaders
}
func getContentTypeHeaderValue(headers []har.Header) string {
for _, header := range headers {
if strings.ToLower(header.Name) == "content-type" {
return header.Value
}
}
return ""
}
func isFieldNameSensitive(fieldName string) bool {
name := strings.ToLower(fieldName)
name = strings.ReplaceAll(name, "_", "")
name = strings.ReplaceAll(name, "-", "")
name = strings.ReplaceAll(name, " ", "")
for _, sensitiveField := range personallyIdentifiableDataFields {
if strings.Contains(name, sensitiveField) {
return true
}
}
return false
}
func filterHttpBody(bytes []byte, contentType string, options *shared.TrafficFilteringOptions) ([]byte, error) {
mimeType := strings.Split(contentType, ";")[0]
switch strings.ToLower(mimeType) {
case "application/json":
return filterJsonBody(bytes)
case "text/html":
fallthrough
case "application/xhtml+xml":
fallthrough
case "text/xml":
fallthrough
case "application/xml":
return filterXmlEtree(bytes)
case "text/plain":
if options != nil && options.PlainTextMaskingRegexes != nil {
return filterPlainText(bytes, options), nil
}
}
return bytes, nil
}
func filterPlainText(bytes []byte, options *shared.TrafficFilteringOptions) []byte {
for _, regex := range options.PlainTextMaskingRegexes {
bytes = regex.ReplaceAll(bytes, []byte(maskedFieldPlaceholderValue))
}
return bytes
}
func filterXmlEtree(bytes []byte) ([]byte, error) {
if !IsValidXML(bytes) {
return nil, errors.New("Invalid XML")
}
xmlDoc := etree.NewDocument()
err := xmlDoc.ReadFromBytes(bytes)
if err != nil {
return nil, err
} else {
filterXmlElement(xmlDoc.Root())
}
return xmlDoc.WriteToBytes()
}
func IsValidXML(data []byte) bool {
return xml.Unmarshal(data, new(interface{})) == nil
}
func filterXmlElement(element *etree.Element) {
for i, attribute := range element.Attr {
if isFieldNameSensitive(attribute.Key) {
element.Attr[i].Value = maskedFieldPlaceholderValue
}
}
if element.ChildElements() == nil || len(element.ChildElements()) == 0 {
if isFieldNameSensitive(element.Tag) {
element.SetText(maskedFieldPlaceholderValue)
}
} else {
for _, element := range element.ChildElements() {
filterXmlElement(element)
}
}
}
func filterJsonBody(bytes []byte) ([]byte, error) {
var bodyJsonMap map[string] interface{}
err := json.Unmarshal(bytes, &bodyJsonMap)
if err != nil {
return nil, err
}
filterJsonMap(bodyJsonMap)
return json.Marshal(bodyJsonMap)
}
func filterJsonMap(jsonMap map[string] interface{}) {
for key, value := range jsonMap {
// Do not replace nil values with maskedFieldPlaceholderValue
if value == nil {
continue
}
nestedMap, isNested := value.(map[string] interface{})
if isNested {
filterJsonMap(nestedMap)
} else {
if isFieldNameSensitive(key) {
jsonMap[key] = maskedFieldPlaceholderValue
}
}
}
}
// Receives a string representing a URL and returns it with sensitive query-param values masked (http://service/api?userId=bob&password=123&type=login -> http://service/api?userId=[REDACTED]&password=[REDACTED]&type=login)
func filterUrl(originalUrl string) string {
parsedUrl, err := url.Parse(originalUrl)
if err != nil {
return fmt.Sprintf("http://%s", maskedFieldPlaceholderValue)
} else {
if len(parsedUrl.RawQuery) > 0 {
newQueryArgs := make([]string, 0)
for urlQueryParamName, urlQueryParamValues := range parsedUrl.Query() {
newValues := urlQueryParamValues
if isFieldNameSensitive(urlQueryParamName) {
newValues = []string {maskedFieldPlaceholderValue}
}
for _, paramValue := range newValues {
newQueryArgs = append(newQueryArgs, fmt.Sprintf("%s=%s", urlQueryParamName, paramValue))
}
}
parsedUrl.RawQuery = strings.Join(newQueryArgs, "&")
}
return parsedUrl.String()
}
}
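A quick illustration of the new package's URL redaction, assuming maskedFieldPlaceholderValue is "[REDACTED]" and personallyIdentifiableDataFields (both defined elsewhere in this package) contains "userid" and "password":
filterUrl("http://service/api?userId=bob&password=123&type=login")
// -> "http://service/api?userId=[REDACTED]&password=[REDACTED]&type=login"
// query-parameter order in the result is not guaranteed, since parsedUrl.Query() returns a map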

View File

@@ -5,14 +5,12 @@ import (
"compress/zlib"
"encoding/json"
"fmt"
"github.com/google/martian/har"
"github.com/romana/rlog"
"github.com/up9inc/mizu/shared"
tapApi "github.com/up9inc/mizu/tap/api"
"io/ioutil"
"log"
"mizuserver/pkg/database"
"mizuserver/pkg/utils"
"mizuserver/pkg/models"
"net/http"
"net/url"
"strings"
@@ -131,33 +129,21 @@ func UploadEntriesImpl(token string, model string, envPrefix string, sleepInterv
for {
timestampTo := time.Now().UnixNano() / int64(time.Millisecond)
rlog.Infof("Getting entries from %v, to %v\n", timestampFrom, timestampTo)
protocolFilter := "http"
entriesArray := database.GetEntriesFromDb(timestampFrom, timestampTo, &protocolFilter)
entriesArray := database.GetEntriesFromDb(timestampFrom, timestampTo)
if len(entriesArray) > 0 {
result := make([]har.Entry, 0)
fullEntriesExtra := make([]models.FullEntryDetailsExtra, 0)
for _, data := range entriesArray {
var pair tapApi.RequestResponsePair
if err := json.Unmarshal([]byte(data.Entry), &pair); err != nil {
harEntry := models.FullEntryDetailsExtra{}
if err := models.GetEntry(&data, &harEntry); err != nil {
continue
}
harEntry, err := utils.NewEntry(&pair)
if err != nil {
continue
}
if data.ResolvedSource != "" {
harEntry.Request.Headers = append(harEntry.Request.Headers, har.Header{Name: "x-mizu-source", Value: data.ResolvedSource})
}
if data.ResolvedDestination != "" {
harEntry.Request.Headers = append(harEntry.Request.Headers, har.Header{Name: "x-mizu-destination", Value: data.ResolvedDestination})
harEntry.Request.URL = utils.SetHostname(harEntry.Request.URL, data.ResolvedDestination)
}
result = append(result, *harEntry)
fullEntriesExtra = append(fullEntriesExtra, harEntry)
}
rlog.Infof("About to upload %v entries\n", len(fullEntriesExtra))
rlog.Infof("About to upload %v entries\n", len(result))
body, jMarshalErr := json.Marshal(result)
body, jMarshalErr := json.Marshal(fullEntriesExtra)
if jMarshalErr != nil {
analyzeInformation.Reset()
rlog.Infof("Stopping analyzing")

View File

@@ -1,256 +0,0 @@
package utils
import (
"bytes"
"errors"
"fmt"
"github.com/google/martian/har"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap/api"
"strconv"
"strings"
"time"
)
// Keep it because we might want cookies in the future
//func BuildCookies(rawCookies []interface{}) []har.Cookie {
// cookies := make([]har.Cookie, 0, len(rawCookies))
//
// for _, cookie := range rawCookies {
// c := cookie.(map[string]interface{})
// expiresStr := ""
// if c["expires"] != nil {
// expiresStr = c["expires"].(string)
// }
// expires, _ := time.Parse(time.RFC3339, expiresStr)
// httpOnly := false
// if c["httponly"] != nil {
// httpOnly, _ = strconv.ParseBool(c["httponly"].(string))
// }
// secure := false
// if c["secure"] != nil {
// secure, _ = strconv.ParseBool(c["secure"].(string))
// }
// path := ""
// if c["path"] != nil {
// path = c["path"].(string)
// }
// domain := ""
// if c["domain"] != nil {
// domain = c["domain"].(string)
// }
//
// cookies = append(cookies, har.Cookie{
// Name: c["name"].(string),
// Value: c["value"].(string),
// Path: path,
// Domain: domain,
// HTTPOnly: httpOnly,
// Secure: secure,
// Expires: expires,
// Expires8601: expiresStr,
// })
// }
//
// return cookies
//}
func BuildHeaders(rawHeaders []interface{}) ([]har.Header, string, string, string, string, string) {
var host, scheme, authority, path, status string
headers := make([]har.Header, 0, len(rawHeaders))
for _, header := range rawHeaders {
h := header.(map[string]interface{})
headers = append(headers, har.Header{
Name: h["name"].(string),
Value: h["value"].(string),
})
if h["name"] == "Host" {
host = h["value"].(string)
}
if h["name"] == ":authority" {
authority = h["value"].(string)
}
if h["name"] == ":scheme" {
scheme = h["value"].(string)
}
if h["name"] == ":path" {
path = h["value"].(string)
}
if h["name"] == ":status" {
status = h["value"].(string)
}
}
return headers, host, scheme, authority, path, status
}
func BuildPostParams(rawParams []interface{}) []har.Param {
params := make([]har.Param, 0, len(rawParams))
for _, param := range rawParams {
p := param.(map[string]interface{})
name := ""
if p["name"] != nil {
name = p["name"].(string)
}
value := ""
if p["value"] != nil {
value = p["value"].(string)
}
fileName := ""
if p["fileName"] != nil {
fileName = p["fileName"].(string)
}
contentType := ""
if p["contentType"] != nil {
contentType = p["contentType"].(string)
}
params = append(params, har.Param{
Name: name,
Value: value,
Filename: fileName,
ContentType: contentType,
})
}
return params
}
func NewRequest(request *api.GenericMessage) (harRequest *har.Request, err error) {
reqDetails := request.Payload.(map[string]interface{})["details"].(map[string]interface{})
headers, host, scheme, authority, path, _ := BuildHeaders(reqDetails["headers"].([]interface{}))
cookies := make([]har.Cookie, 0) // BuildCookies(reqDetails["cookies"].([]interface{}))
postData, _ := reqDetails["postData"].(map[string]interface{})
mimeType, _ := postData["mimeType"]
if mimeType == nil || len(mimeType.(string)) == 0 {
mimeType = "text/html"
}
text, _ := postData["text"]
postDataText := ""
if text != nil {
postDataText = text.(string)
}
queryString := make([]har.QueryString, 0)
for _, _qs := range reqDetails["queryString"].([]interface{}) {
qs := _qs.(map[string]interface{})
queryString = append(queryString, har.QueryString{
Name: qs["name"].(string),
Value: qs["value"].(string),
})
}
url := fmt.Sprintf("http://%s%s", host, reqDetails["url"].(string))
if strings.HasPrefix(mimeType.(string), "application/grpc") {
url = fmt.Sprintf("%s://%s%s", scheme, authority, path)
}
harParams := make([]har.Param, 0)
if postData["params"] != nil {
harParams = BuildPostParams(postData["params"].([]interface{}))
}
harRequest = &har.Request{
Method: reqDetails["method"].(string),
URL: url,
HTTPVersion: reqDetails["httpVersion"].(string),
HeadersSize: -1,
BodySize: int64(bytes.NewBufferString(postDataText).Len()),
QueryString: queryString,
Headers: headers,
Cookies: cookies,
PostData: &har.PostData{
MimeType: mimeType.(string),
Params: harParams,
Text: postDataText,
},
}
return
}
func NewResponse(response *api.GenericMessage) (harResponse *har.Response, err error) {
resDetails := response.Payload.(map[string]interface{})["details"].(map[string]interface{})
headers, _, _, _, _, _status := BuildHeaders(resDetails["headers"].([]interface{}))
cookies := make([]har.Cookie, 0) // BuildCookies(resDetails["cookies"].([]interface{}))
content, _ := resDetails["content"].(map[string]interface{})
mimeType, _ := content["mimeType"]
if mimeType == nil || len(mimeType.(string)) == 0 {
mimeType = "text/html"
}
encoding, _ := content["encoding"]
text, _ := content["text"]
bodyText := ""
if text != nil {
bodyText = text.(string)
}
harContent := &har.Content{
Encoding: encoding.(string),
MimeType: mimeType.(string),
Text: []byte(bodyText),
Size: int64(len(bodyText)),
}
status := int(resDetails["status"].(float64))
if strings.HasPrefix(mimeType.(string), "application/grpc") {
status, err = strconv.Atoi(_status)
if err != nil {
rlog.Errorf("Failed converting status to int %s (%v,%+v)", err, err, err)
return nil, errors.New("failed converting response status to int for HAR")
}
}
harResponse = &har.Response{
HTTPVersion: resDetails["httpVersion"].(string),
Status: status,
StatusText: resDetails["statusText"].(string),
HeadersSize: -1,
BodySize: int64(bytes.NewBufferString(bodyText).Len()),
Headers: headers,
Cookies: cookies,
Content: harContent,
}
return
}
func NewEntry(pair *api.RequestResponsePair) (*har.Entry, error) {
harRequest, err := NewRequest(&pair.Request)
if err != nil {
rlog.Errorf("Failed converting request to HAR %s (%v,%+v)", err, err, err)
return nil, errors.New("failed converting request to HAR")
}
harResponse, err := NewResponse(&pair.Response)
if err != nil {
rlog.Errorf("Failed converting response to HAR %s (%v,%+v)", err, err, err)
return nil, errors.New("failed converting response to HAR")
}
totalTime := pair.Response.CaptureTime.Sub(pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
if totalTime < 1 {
totalTime = 1
}
harEntry := har.Entry{
StartedDateTime: pair.Request.CaptureTime,
Time: totalTime,
Request: harRequest,
Response: harResponse,
Cache: &har.Cache{},
Timings: &har.Timings{
Send: -1,
Wait: -1,
Receive: totalTime,
},
}
return &harEntry, nil
}
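For reference, a condensed sketch of how the upload loop in UploadEntriesImpl above consumes this helper (pair is the tapApi.RequestResponsePair unmarshalled from the DB entry):
harEntry, err := utils.NewEntry(&pair)
if err != nil {
	continue // request or response could not be converted; skip this entry
}
// NewEntry clamps Time to at least 1ms and mirrors it into Timings.Receive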

View File

@@ -4,6 +4,7 @@ import (
"context"
"github.com/gin-gonic/gin"
"github.com/romana/rlog"
"log"
"net/http"
"net/url"
"os"
@@ -17,8 +18,8 @@ import (
func StartServer(app *gin.Engine) {
signals := make(chan os.Signal, 2)
signal.Notify(signals,
os.Interrupt, // this catches ctrl + c
syscall.SIGTSTP, // this catches ctrl + z
os.Interrupt, // this catches ctrl + c
syscall.SIGTSTP, // this catches ctrl + z
)
srv := &http.Server{
@@ -35,9 +36,8 @@ func StartServer(app *gin.Engine) {
}()
// Run server.
rlog.Infof("Starting the server...")
if err := app.Run(":8899"); err != nil {
rlog.Errorf("Server is not running! Reason: %v", err)
log.Printf("Oops... Server is not running! Reason: %v", err)
}
}
@@ -54,14 +54,15 @@ func ReverseSlice(data interface{}) {
func CheckErr(e error) {
if e != nil {
rlog.Errorf("%v", e)
log.Printf("%v", e)
//panic(e)
}
}
func SetHostname(address, newHostname string) string {
replacedUrl, err := url.Parse(address)
if err != nil {
rlog.Errorf("error replacing hostname to %s in address %s, returning original %v", newHostname, address, err)
if err != nil{
log.Printf("error replacing hostname to %s in address %s, returning original %v",newHostname, address, err)
return address
}
replacedUrl.Host = newHostname
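For context, SetHostname is what rewrites the HAR request URL to the resolved destination in UploadEntriesImpl above. A hypothetical input/output:
utils.SetHostname("http://10.0.0.12:8080/users?id=7", "user-service")
// -> "http://user-service/users?id=7" (Host, including the port, is replaced;
//    on a parse error the original address is returned unchanged)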

View File

@@ -1,155 +0,0 @@
package apiserver
import (
"bytes"
"encoding/json"
"fmt"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
"io/ioutil"
core "k8s.io/api/core/v1"
"net/http"
"net/url"
"time"
)
type apiServerProvider struct {
url string
isReady bool
retries int
}
var Provider = apiServerProvider{retries: config.GetIntEnvConfig(config.ApiServerRetries, 20)}
func (provider *apiServerProvider) InitAndTestConnection(url string) error {
healthUrl := fmt.Sprintf("%s/", url)
retriesLeft := provider.retries
for retriesLeft > 0 {
if response, err := http.Get(healthUrl); err != nil {
logger.Log.Debugf("[ERROR] failed connecting to api server %v", err)
} else if response.StatusCode != 200 {
responseBody := ""
data, readErr := ioutil.ReadAll(response.Body)
if readErr == nil {
responseBody = string(data)
}
logger.Log.Debugf("can't connect to api server yet, response status code: %v, body: %v", response.StatusCode, responseBody)
response.Body.Close()
} else {
logger.Log.Debugf("connection test to api server passed successfully")
break
}
retriesLeft -= 1
time.Sleep(time.Second)
}
if retriesLeft == 0 {
provider.isReady = false
return fmt.Errorf("couldn't reach the api server after %v retries", provider.retries)
}
provider.url = url
provider.isReady = true
return nil
}
func (provider *apiServerProvider) ReportTappedPods(pods []core.Pod) error {
if !provider.isReady {
return fmt.Errorf("trying to reach api server when not initialized yet")
}
tappedPodsUrl := fmt.Sprintf("%s/status/tappedPods", provider.url)
podInfos := make([]shared.PodInfo, 0)
for _, pod := range pods {
podInfos = append(podInfos, shared.PodInfo{Name: pod.Name, Namespace: pod.Namespace})
}
tapStatus := shared.TapStatus{Pods: podInfos}
if jsonValue, err := json.Marshal(tapStatus); err != nil {
return fmt.Errorf("failed Marshal the tapped pods %w", err)
} else {
if response, err := http.Post(tappedPodsUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {
return fmt.Errorf("failed sending to API server the tapped pods %w", err)
} else if response.StatusCode != 200 {
return fmt.Errorf("failed sending to API server the tapped pods, response status code %v", response.StatusCode)
} else {
logger.Log.Debugf("Reported to server API about %d taped pods successfully", len(podInfos))
return nil
}
}
}
func (provider *apiServerProvider) RequestAnalysis(analysisDestination string, sleepIntervalSec int) error {
if !provider.isReady {
return fmt.Errorf("trying to reach api server when not initialized yet")
}
urlPath := fmt.Sprintf("%s/api/uploadEntries?dest=%s&interval=%v", provider.url, url.QueryEscape(analysisDestination), sleepIntervalSec)
u, parseErr := url.ParseRequestURI(urlPath)
if parseErr != nil {
logger.Log.Fatal("Failed parsing the URL (consider changing the analysis dest URL), err: %v", parseErr)
}
logger.Log.Debugf("Analysis url %v", u.String())
if response, requestErr := http.Get(u.String()); requestErr != nil {
return fmt.Errorf("failed to notify agent for analysis, err: %w", requestErr)
} else if response.StatusCode != 200 {
return fmt.Errorf("failed to notify agent for analysis, status code: %v", response.StatusCode)
} else {
logger.Log.Infof(uiUtils.Purple, "Traffic is uploading to UP9 for further analysis")
return nil
}
}
func (provider *apiServerProvider) GetGeneralStats() (map[string]interface{}, error) {
if !provider.isReady {
return nil, fmt.Errorf("trying to reach api server when not initialized yet")
}
generalStatsUrl := fmt.Sprintf("%s/api/generalStats", provider.url)
response, requestErr := http.Get(generalStatsUrl)
if requestErr != nil {
return nil, fmt.Errorf("failed to get general stats for telemetry, err: %w", requestErr)
} else if response.StatusCode != 200 {
return nil, fmt.Errorf("failed to get general stats for telemetry, status code: %v", response.StatusCode)
}
defer func() { _ = response.Body.Close() }()
data, readErr := ioutil.ReadAll(response.Body)
if readErr != nil {
return nil, fmt.Errorf("failed to read general stats for telemetry, err: %v", readErr)
}
var generalStats map[string]interface{}
if parseErr := json.Unmarshal(data, &generalStats); parseErr != nil {
return nil, fmt.Errorf("failed to parse general stats for telemetry, err: %v", parseErr)
}
return generalStats, nil
}
func (provider *apiServerProvider) GetVersion() (string, error) {
if !provider.isReady {
return "", fmt.Errorf("trying to reach api server when not initialized yet")
}
versionUrl, _ := url.Parse(fmt.Sprintf("%s/metadata/version", provider.url))
req := &http.Request{
Method: http.MethodGet,
URL: versionUrl,
}
statusResp, err := http.DefaultClient.Do(req)
if err != nil {
return "", err
}
defer statusResp.Body.Close()
versionResponse := &shared.VersionResponse{}
if err := json.NewDecoder(statusResp.Body).Decode(&versionResponse); err != nil {
return "", err
}
return versionResponse.SemVer, nil
}
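The provider is meant to be driven from the CLI roughly as in the tapRunner diff further down (a sketch; the URL comes from the k8s proxy started beforehand):
if err := apiserver.Provider.InitAndTestConnection(GetApiServerUrl()); err != nil {
	logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, check logs")
	return
}
if err := apiserver.Provider.ReportTappedPods(state.currentlyTappedPods); err != nil {
	logger.Log.Debugf("[Error] failed update tapped pods %v", err)
}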

View File

@@ -1,69 +0,0 @@
package cmd
import (
"context"
"fmt"
"os"
"os/exec"
"os/signal"
"runtime"
"syscall"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/errormessage"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/uiUtils"
)
func GetApiServerUrl() string {
return fmt.Sprintf("http://%s", kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Tap.GuiPort))
}
func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
err := kubernetes.StartProxy(kubernetesProvider, config.Config.Tap.GuiPort, config.Config.MizuResourcesNamespace, mizu.ApiServerPodName)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error occured while running k8s proxy %v\n"+
"Try setting different port by using --%s", errormessage.FormatError(err), configStructs.GuiPortTapName))
cancel()
}
logger.Log.Debugf("proxy ended")
}
func waitForFinish(ctx context.Context, cancel context.CancelFunc) {
logger.Log.Debugf("waiting for finish...")
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)
// block until ctx cancel is called or termination signal is received
select {
case <-ctx.Done():
logger.Log.Debugf("ctx done")
break
case <-sigChan:
logger.Log.Debugf("Got termination signal, canceling execution...")
cancel()
}
}
func openBrowser(url string) {
var err error
switch runtime.GOOS {
case "linux":
err = exec.Command("xdg-open", url).Start()
case "windows":
err = exec.Command("rundll32", "url.dll,FileProtocolHandler", url).Start()
case "darwin":
err = exec.Command("open", url).Start()
default:
err = fmt.Errorf("unsupported platform")
}
if err != nil {
logger.Log.Errorf("error while opening browser, %v", err)
}
}

View File

@@ -25,11 +25,11 @@ var configCmd = &cobra.Command{
}
if config.Config.Config.Regenerate {
data := []byte(template)
if err := ioutil.WriteFile(config.Config.ConfigFilePath, data, 0644); err != nil {
if err := ioutil.WriteFile(config.GetConfigFilePath(), data, 0644); err != nil {
logger.Log.Errorf("Failed writing config %v", err)
return nil
}
logger.Log.Infof(fmt.Sprintf("Template File written to %s", fmt.Sprintf(uiUtils.Purple, config.Config.ConfigFilePath)))
logger.Log.Infof(fmt.Sprintf("Template File written to %s", fmt.Sprintf(uiUtils.Purple, config.GetConfigFilePath())))
} else {
logger.Log.Debugf("Writing template config.\n%v", template)
fmt.Printf("%v", template)
@@ -41,8 +41,8 @@ var configCmd = &cobra.Command{
func init() {
rootCmd.AddCommand(configCmd)
defaultConfig := config.ConfigStruct{}
defaults.Set(&defaultConfig)
defaultConfigConfig := configStructs.ConfigConfig{}
defaults.Set(&defaultConfigConfig)
configCmd.Flags().BoolP(configStructs.RegenerateConfigName, "r", defaultConfig.Config.Regenerate, fmt.Sprintf("Regenerate the config file with default values to path %s or to chosen path using --%s", defaultConfig.ConfigFilePath, config.ConfigFilePathCommandName))
configCmd.Flags().BoolP(configStructs.RegenerateConfigName, "r", defaultConfigConfig.Regenerate, fmt.Sprintf("Regenerate the config file with default values %s", config.GetConfigFilePath()))
}

38
cli/cmd/fetch.go Normal file
View File

@@ -0,0 +1,38 @@
package cmd
import (
"github.com/creasty/defaults"
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/mizu/version"
"github.com/up9inc/mizu/cli/telemetry"
)
var fetchCmd = &cobra.Command{
Use: "fetch",
Short: "Download recorded traffic to files",
RunE: func(cmd *cobra.Command, args []string) error {
go telemetry.ReportRun("fetch", config.Config.Fetch)
if isCompatible, err := version.CheckVersionCompatibility(config.Config.Fetch.GuiPort); err != nil {
return err
} else if !isCompatible {
return nil
}
RunMizuFetch()
return nil
},
}
func init() {
rootCmd.AddCommand(fetchCmd)
defaultFetchConfig := configStructs.FetchConfig{}
defaults.Set(&defaultFetchConfig)
fetchCmd.Flags().StringP(configStructs.DirectoryFetchName, "d", defaultFetchConfig.Directory, "Provide a custom directory for fetched entries")
fetchCmd.Flags().Int(configStructs.FromTimestampFetchName, defaultFetchConfig.FromTimestamp, "Custom start timestamp for fetched entries")
fetchCmd.Flags().Int(configStructs.ToTimestampFetchName, defaultFetchConfig.ToTimestamp, "Custom end timestamp for fetched entries")
fetchCmd.Flags().Uint16P(configStructs.GuiPortFetchName, "p", defaultFetchConfig.GuiPort, "Provide a custom port for the web interface webserver")
}
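Note: these flags map onto the FetchConfig struct added further down (directory / from / to / gui-port), so a hypothetical invocation such as "mizu fetch -d ./out --from 0 --to 0" downloads everything recorded so far into ./out via the proxied API server port.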

96
cli/cmd/fetchRunner.go Normal file
View File

@@ -0,0 +1,96 @@
package cmd
import (
"archive/zip"
"bytes"
"fmt"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"io"
"io/ioutil"
"log"
"net/http"
"os"
"path/filepath"
"strings"
)
func RunMizuFetch() {
mizuProxiedUrl := kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Fetch.GuiPort)
resp, err := http.Get(fmt.Sprintf("http://%s/api/har?from=%v&to=%v", mizuProxiedUrl, config.Config.Fetch.FromTimestamp, config.Config.Fetch.ToTimestamp))
if err != nil {
log.Fatal(err)
}
defer func() { _ = resp.Body.Close() }()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Fatal(err)
}
zipReader, err := zip.NewReader(bytes.NewReader(body), int64(len(body)))
if err != nil {
log.Fatal(err)
}
_ = Unzip(zipReader, config.Config.Fetch.Directory)
}
func Unzip(reader *zip.Reader, dest string) error {
dest, _ = filepath.Abs(dest)
_ = os.MkdirAll(dest, os.ModePerm)
// Closure to address file descriptors issue with all the deferred .Close() methods
extractAndWriteFile := func(f *zip.File) error {
rc, err := f.Open()
if err != nil {
return err
}
defer func() {
if err := rc.Close(); err != nil {
panic(err)
}
}()
path := filepath.Join(dest, f.Name)
// Check for ZipSlip (Directory traversal)
if !strings.HasPrefix(path, filepath.Clean(dest)+string(os.PathSeparator)) {
return fmt.Errorf("illegal file path: %s", path)
}
if f.FileInfo().IsDir() {
_ = os.MkdirAll(path, f.Mode())
} else {
_ = os.MkdirAll(filepath.Dir(path), f.Mode())
logger.Log.Infof("writing HAR file [ %v ]", path)
f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
if err != nil {
return err
}
defer func() {
if err := f.Close(); err != nil {
panic(err)
}
logger.Log.Info(" done")
}()
_, err = io.Copy(f, rc)
if err != nil {
return err
}
}
return nil
}
for _, f := range reader.File {
err := extractAndWriteFile(f)
if err != nil {
return err
}
}
return nil
}
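Note on the ZipSlip check above: filepath.Join cleans the archive member name, so a hypothetical entry named "../../etc/cron.d/evil" resolves outside dest and is rejected with "illegal file path", while a normal entry such as "entries_0.har" stays under the destination directory and is extracted.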

View File

@@ -2,7 +2,6 @@ package cmd
import (
"fmt"
"github.com/creasty/defaults"
"github.com/spf13/cobra"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/logger"
@@ -10,7 +9,6 @@ import (
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/mizu/version"
"github.com/up9inc/mizu/cli/uiUtils"
"time"
)
var rootCmd = &cobra.Command{
@@ -27,20 +25,13 @@ Further info is available at https://github.com/up9inc/mizu`,
}
func init() {
defaultConfig := config.ConfigStruct{}
defaults.Set(&defaultConfig)
rootCmd.PersistentFlags().StringSlice(config.SetCommandName, []string{}, fmt.Sprintf("Override values using --%s", config.SetCommandName))
rootCmd.PersistentFlags().String(config.ConfigFilePathCommandName, defaultConfig.ConfigFilePath, fmt.Sprintf("Override config file path using --%s", config.ConfigFilePathCommandName))
}
func printNewVersionIfNeeded(versionChan chan string) {
select {
case versionMsg := <-versionChan:
if versionMsg != "" {
logger.Log.Infof(uiUtils.Yellow, versionMsg)
}
case <-time.After(2 * time.Second):
versionMsg := <-versionChan
if versionMsg != "" {
logger.Log.Infof(uiUtils.Yellow, versionMsg)
}
}

View File

@@ -2,13 +2,11 @@ package cmd
import (
"errors"
"fmt"
"os"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/telemetry"
"os"
"github.com/creasty/defaults"
"github.com/spf13/cobra"
@@ -66,9 +64,7 @@ func init() {
tapCmd.Flags().StringSliceP(configStructs.PlainTextFilterRegexesTapName, "r", defaultTapConfig.PlainTextFilterRegexes, "List of regex expressions that are used to filter matching values from text/plain http bodies")
tapCmd.Flags().Bool(configStructs.DisableRedactionTapName, defaultTapConfig.DisableRedaction, "Disables redaction of potentially sensitive request/response headers and body values")
tapCmd.Flags().String(configStructs.HumanMaxEntriesDBSizeTapName, defaultTapConfig.HumanMaxEntriesDBSize, "Override the default max entries db size")
tapCmd.Flags().String(configStructs.DirectionTapName, defaultTapConfig.Direction, "Record traffic that goes in this direction (relative to the tapped pod): in/any")
tapCmd.Flags().Bool(configStructs.DryRunTapName, defaultTapConfig.DryRun, "Preview of all pods matching the regex, without tapping them")
tapCmd.Flags().String(configStructs.EnforcePolicyFile, defaultTapConfig.EnforcePolicyFile, "Yaml file path with policy rules")
tapCmd.Flags().String(configStructs.EnforcePolicyFileDeprecated, defaultTapConfig.EnforcePolicyFileDeprecated, "Yaml file with policy rules")
tapCmd.Flags().MarkDeprecated(configStructs.EnforcePolicyFileDeprecated, fmt.Sprintf("Use --%s instead", configStructs.EnforcePolicyFile))
tapCmd.Flags().String(configStructs.EnforcePolicyFile, defaultTapConfig.EnforcePolicyFile, "Yaml file with policy rules")
}

View File

@@ -1,27 +1,32 @@
package cmd
import (
"bytes"
"context"
"encoding/json"
"fmt"
"path"
"regexp"
"strings"
"time"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/errormessage"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/mizu/fsUtils"
"github.com/up9inc/mizu/cli/mizu/goUtils"
"github.com/up9inc/mizu/cli/telemetry"
"net/http"
"net/url"
"os"
"os/signal"
"path"
"regexp"
"strings"
"syscall"
"time"
"github.com/up9inc/mizu/cli/errormessage"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/debounce"
"github.com/up9inc/mizu/tap/api"
yaml "gopkg.in/yaml.v3"
core "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/util/wait"
@@ -47,17 +52,9 @@ func RunMizuTap() {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error parsing regex-masking: %v", errormessage.FormatError(err)))
return
}
var mizuValidationRules string
if config.Config.Tap.EnforcePolicyFile != "" || config.Config.Tap.EnforcePolicyFileDeprecated != "" {
var trafficValidation string
if config.Config.Tap.EnforcePolicyFile != "" {
trafficValidation = config.Config.Tap.EnforcePolicyFile
} else {
trafficValidation = config.Config.Tap.EnforcePolicyFileDeprecated
}
mizuValidationRules, err = readValidationRules(trafficValidation)
if config.Config.Tap.EnforcePolicyFile != "" {
mizuValidationRules, err = readValidationRules(config.Config.Tap.EnforcePolicyFile)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error reading policy file: %v", errormessage.FormatError(err)))
return
@@ -111,14 +108,14 @@ func RunMizuTap() {
nodeToTappedPodIPMap := getNodeHostToTappedPodIpsMap(state.currentlyTappedPods)
defer finishMizuExecution(kubernetesProvider)
defer cleanUpMizu(kubernetesProvider)
if err := createMizuResources(ctx, kubernetesProvider, nodeToTappedPodIPMap, mizuApiFilteringOptions, mizuValidationRules); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error creating resources: %v", errormessage.FormatError(err)))
return
}
go goUtils.HandleExcWrapper(watchApiServerPod, ctx, kubernetesProvider, cancel)
go goUtils.HandleExcWrapper(watchPodsForTapping, ctx, kubernetesProvider, targetNamespaces, cancel, mizuApiFilteringOptions)
go goUtils.HandleExcWrapper(createProxyToApiServerPod, ctx, kubernetesProvider, cancel)
go goUtils.HandleExcWrapper(watchPodsForTapping, ctx, kubernetesProvider, targetNamespaces, cancel)
//block until exit signal or error
waitForFinish(ctx, cancel)
@@ -133,7 +130,7 @@ func readValidationRules(file string) (string, error) {
return string(newContent), nil
}
func createMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, nodeToTappedPodIPMap map[string][]string, mizuApiFilteringOptions *api.TrafficFilteringOptions, mizuValidationRules string) error {
func createMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Provider, nodeToTappedPodIPMap map[string][]string, mizuApiFilteringOptions *shared.TrafficFilteringOptions, mizuValidationRules string) error {
if !config.Config.IsNsRestrictedMode() {
if err := createMizuNamespace(ctx, kubernetesProvider); err != nil {
return err
@@ -144,7 +141,7 @@ func createMizuResources(ctx context.Context, kubernetesProvider *kubernetes.Pro
return err
}
if err := updateMizuTappers(ctx, kubernetesProvider, nodeToTappedPodIPMap, mizuApiFilteringOptions); err != nil {
if err := updateMizuTappers(ctx, kubernetesProvider, nodeToTappedPodIPMap); err != nil {
return err
}
@@ -168,7 +165,7 @@ func createMizuNamespace(ctx context.Context, kubernetesProvider *kubernetes.Pro
return err
}
func createMizuApiServer(ctx context.Context, kubernetesProvider *kubernetes.Provider, mizuApiFilteringOptions *api.TrafficFilteringOptions) error {
func createMizuApiServer(ctx context.Context, kubernetesProvider *kubernetes.Provider, mizuApiFilteringOptions *shared.TrafficFilteringOptions) error {
var err error
state.mizuServiceAccountExists, err = createRBACIfNecessary(ctx, kubernetesProvider)
@@ -209,13 +206,13 @@ func createMizuApiServer(ctx context.Context, kubernetesProvider *kubernetes.Pro
return nil
}
func getMizuApiFilteringOptions() (*api.TrafficFilteringOptions, error) {
var compiledRegexSlice []*api.SerializableRegexp
func getMizuApiFilteringOptions() (*shared.TrafficFilteringOptions, error) {
var compiledRegexSlice []*shared.SerializableRegexp
if config.Config.Tap.PlainTextFilterRegexes != nil && len(config.Config.Tap.PlainTextFilterRegexes) > 0 {
compiledRegexSlice = make([]*api.SerializableRegexp, 0)
compiledRegexSlice = make([]*shared.SerializableRegexp, 0)
for _, regexStr := range config.Config.Tap.PlainTextFilterRegexes {
compiledRegex, err := api.CompileRegexToSerializableRegexp(regexStr)
compiledRegex, err := shared.CompileRegexToSerializableRegexp(regexStr)
if err != nil {
return nil, err
}
@@ -223,14 +220,14 @@ func getMizuApiFilteringOptions() (*api.TrafficFilteringOptions, error) {
}
}
return &api.TrafficFilteringOptions{
return &shared.TrafficFilteringOptions{
PlainTextMaskingRegexes: compiledRegexSlice,
HealthChecksUserAgentHeaders: config.Config.Tap.HealthChecksUserAgentHeaders,
DisableRedaction: config.Config.Tap.DisableRedaction,
}, nil
}
func updateMizuTappers(ctx context.Context, kubernetesProvider *kubernetes.Provider, nodeToTappedPodIPMap map[string][]string, mizuApiFilteringOptions *api.TrafficFilteringOptions) error {
func updateMizuTappers(ctx context.Context, kubernetesProvider *kubernetes.Provider, nodeToTappedPodIPMap map[string][]string) error {
if len(nodeToTappedPodIPMap) > 0 {
var serviceAccountName string
if state.mizuServiceAccountExists {
@@ -248,9 +245,9 @@ func updateMizuTappers(ctx context.Context, kubernetesProvider *kubernetes.Provi
fmt.Sprintf("%s.%s.svc.cluster.local", state.apiServerService.Name, state.apiServerService.Namespace),
nodeToTappedPodIPMap,
serviceAccountName,
config.Config.Tap.TapOutgoing(),
config.Config.Tap.TapperResources,
config.Config.ImagePullPolicy(),
mizuApiFilteringOptions,
); err != nil {
return err
}
@@ -264,26 +261,23 @@ func updateMizuTappers(ctx context.Context, kubernetesProvider *kubernetes.Provi
return nil
}
func finishMizuExecution(kubernetesProvider *kubernetes.Provider) {
telemetry.ReportAPICalls()
func cleanUpMizu(kubernetesProvider *kubernetes.Provider) {
telemetry.ReportAPICalls(config.Config.Tap.GuiPort)
cleanUpMizuResources(kubernetesProvider)
}
func cleanUpMizuResources(kubernetesProvider *kubernetes.Provider) {
removalCtx, cancel := context.WithTimeout(context.Background(), cleanupTimeout)
defer cancel()
dumpLogsIfNeeded(kubernetesProvider, removalCtx)
cleanUpMizuResources(kubernetesProvider, removalCtx, cancel)
}
func dumpLogsIfNeeded(kubernetesProvider *kubernetes.Provider, removalCtx context.Context) {
if !config.Config.DumpLogs {
return
if config.Config.DumpLogs {
mizuDir := mizu.GetMizuFolderPath()
filePath := path.Join(mizuDir, fmt.Sprintf("mizu_logs_%s.zip", time.Now().Format("2006_01_02__15_04_05")))
if err := fsUtils.DumpLogs(kubernetesProvider, removalCtx, filePath); err != nil {
logger.Log.Errorf("Failed dump logs %v", err)
}
}
mizuDir := mizu.GetMizuFolderPath()
filePath := path.Join(mizuDir, fmt.Sprintf("mizu_logs_%s.zip", time.Now().Format("2006_01_02__15_04_05")))
if err := fsUtils.DumpLogs(kubernetesProvider, removalCtx, filePath); err != nil {
logger.Log.Errorf("Failed dump logs %v", err)
}
}
func cleanUpMizuResources(kubernetesProvider *kubernetes.Provider, removalCtx context.Context, cancel context.CancelFunc) {
logger.Log.Infof("\nRemoving mizu resources\n")
if !config.Config.IsNsRestrictedMode() {
@@ -348,7 +342,7 @@ func waitUntilNamespaceDeleted(ctx context.Context, cancel context.CancelFunc, k
if err := kubernetesProvider.WaitUtilNamespaceDeleted(ctx, config.Config.MizuResourcesNamespace); err != nil {
switch {
case ctx.Err() == context.Canceled:
logger.Log.Debugf("Do nothing. User interrupted the wait")
// Do nothing. User interrupted the wait.
case err == wait.ErrWaitTimeout:
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Timeout while removing Namespace %s", config.Config.MizuResourcesNamespace))
default:
@@ -357,7 +351,30 @@ func waitUntilNamespaceDeleted(ctx context.Context, cancel context.CancelFunc, k
}
}
func watchPodsForTapping(ctx context.Context, kubernetesProvider *kubernetes.Provider, targetNamespaces []string, cancel context.CancelFunc, mizuApiFilteringOptions *api.TrafficFilteringOptions) {
func reportTappedPods() {
mizuProxiedUrl := kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Tap.GuiPort)
tappedPodsUrl := fmt.Sprintf("http://%s/status/tappedPods", mizuProxiedUrl)
podInfos := make([]shared.PodInfo, 0)
for _, pod := range state.currentlyTappedPods {
podInfos = append(podInfos, shared.PodInfo{Name: pod.Name, Namespace: pod.Namespace})
}
tapStatus := shared.TapStatus{Pods: podInfos}
if jsonValue, err := json.Marshal(tapStatus); err != nil {
logger.Log.Debugf("[ERROR] failed Marshal the tapped pods %v", err)
} else {
if response, err := http.Post(tappedPodsUrl, "application/json", bytes.NewBuffer(jsonValue)); err != nil {
logger.Log.Debugf("[ERROR] failed sending to API server the tapped pods %v", err)
} else if response.StatusCode != 200 {
logger.Log.Debugf("[ERROR] failed sending to API server the tapped pods, response status code %v", response.StatusCode)
} else {
logger.Log.Debugf("Reported to server API about %d taped pods successfully", len(podInfos))
}
}
}
func watchPodsForTapping(ctx context.Context, kubernetesProvider *kubernetes.Provider, targetNamespaces []string, cancel context.CancelFunc) {
added, modified, removed, errorChan := kubernetes.FilteredWatch(ctx, kubernetesProvider, targetNamespaces, config.Config.Tap.PodRegex())
restartTappers := func() {
@@ -372,16 +389,14 @@ func watchPodsForTapping(ctx context.Context, kubernetesProvider *kubernetes.Pro
return
}
if err := apiserver.Provider.ReportTappedPods(state.currentlyTappedPods); err != nil {
logger.Log.Debugf("[Error] failed update tapped pods %v", err)
}
reportTappedPods()
nodeToTappedPodIPMap := getNodeHostToTappedPodIpsMap(state.currentlyTappedPods)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error building node to ips map: %v", errormessage.FormatError(err)))
cancel()
}
if err := updateMizuTappers(ctx, kubernetesProvider, nodeToTappedPodIPMap, mizuApiFilteringOptions); err != nil {
if err := updateMizuTappers(ctx, kubernetesProvider, nodeToTappedPodIPMap); err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error updating daemonset: %v", errormessage.FormatError(err)))
cancel()
}
@@ -390,28 +405,13 @@ func watchPodsForTapping(ctx context.Context, kubernetesProvider *kubernetes.Pro
for {
select {
case pod, ok := <-added:
if !ok {
added = nil
continue
}
case pod := <-added:
logger.Log.Debugf("Added matching pod %s, ns: %s", pod.Name, pod.Namespace)
restartTappersDebouncer.SetOn()
case pod, ok := <-removed:
if !ok {
removed = nil
continue
}
case pod := <-removed:
logger.Log.Debugf("Removed matching pod %s, ns: %s", pod.Name, pod.Namespace)
restartTappersDebouncer.SetOn()
case pod, ok := <-modified:
if !ok {
modified = nil
continue
}
case pod := <-modified:
logger.Log.Debugf("Modified matching pod %s, ns: %s, phase: %s, ip: %s", pod.Name, pod.Namespace, pod.Status.Phase, pod.Status.PodIP)
// Act only if the modified pod has already obtained an IP address.
// After filtering for IPs, on a normal pod restart this includes the following events:
@@ -422,12 +422,8 @@ func watchPodsForTapping(ctx context.Context, kubernetesProvider *kubernetes.Pro
if pod.Status.PodIP != "" {
restartTappersDebouncer.SetOn()
}
case err, ok := <-errorChan:
if !ok {
errorChan = nil
continue
}
case err := <-errorChan:
logger.Log.Debugf("Watching pods loop, got error %v, stopping `restart tappers debouncer`", err)
restartTappersDebouncer.Cancel()
// TODO: Does this also perform cleanup?
@@ -500,80 +496,77 @@ func getMissingPods(pods1 []core.Pod, pods2 []core.Pod) []core.Pod {
return missingPods
}
func watchApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
func createProxyToApiServerPod(ctx context.Context, kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
podExactRegex := regexp.MustCompile(fmt.Sprintf("^%s$", mizu.ApiServerPodName))
added, modified, removed, errorChan := kubernetes.FilteredWatch(ctx, kubernetesProvider, []string{config.Config.MizuResourcesNamespace}, podExactRegex)
isPodReady := false
timeAfter := time.After(25 * time.Second)
for {
select {
case _, ok := <-added:
if !ok {
added = nil
continue
}
case <-ctx.Done():
logger.Log.Debugf("Watching API Server pod loop, ctx done")
return
case <-added:
logger.Log.Debugf("Watching API Server pod loop, added")
case _, ok := <-removed:
if !ok {
removed = nil
continue
}
continue
case <-removed:
logger.Log.Infof("%s removed", mizu.ApiServerPodName)
cancel()
return
case modifiedPod, ok := <-modified:
if !ok {
modified = nil
case modifiedPod := <-modified:
if modifiedPod == nil {
logger.Log.Debugf("Watching API Server pod loop, modifiedPod with nil")
continue
}
logger.Log.Debugf("Watching API Server pod loop, modified: %v", modifiedPod.Status.Phase)
if modifiedPod.Status.Phase == core.PodRunning && !isPodReady {
isPodReady = true
go startProxyReportErrorIfAny(kubernetesProvider, cancel)
url := GetApiServerUrl()
if err := apiserver.Provider.InitAndTestConnection(url); err != nil {
logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, check logs")
cancel()
break
}
logger.Log.Infof("Mizu is available at %s\n", url)
openBrowser(url)
requestForAnalysisIfNeeded()
if err := apiserver.Provider.ReportTappedPods(state.currentlyTappedPods); err != nil {
logger.Log.Debugf("[Error] failed update tapped pods %v", err)
}
logger.Log.Infof("Mizu is available at http://%s\n", kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Tap.GuiPort))
time.Sleep(time.Second * 5) // Waiting to be sure the proxy is ready
requestForAnalysis()
reportTappedPods()
}
case _, ok := <-errorChan:
if !ok {
errorChan = nil
continue
}
logger.Log.Debugf("[ERROR] Agent creation, watching %v namespace", config.Config.MizuResourcesNamespace)
cancel()
case <-timeAfter:
if !isPodReady {
logger.Log.Errorf(uiUtils.Error, "Mizu API server was not ready in time")
cancel()
}
case <-ctx.Done():
logger.Log.Debugf("Watching API Server pod loop, ctx done")
return
case <-errorChan:
logger.Log.Debugf("[ERROR] Agent creation, watching %v namespace", config.Config.MizuResourcesNamespace)
cancel()
}
}
}
func requestForAnalysisIfNeeded() {
func startProxyReportErrorIfAny(kubernetesProvider *kubernetes.Provider, cancel context.CancelFunc) {
err := kubernetes.StartProxy(kubernetesProvider, config.Config.Tap.GuiPort, config.Config.MizuResourcesNamespace, mizu.ApiServerPodName)
if err != nil {
logger.Log.Errorf(uiUtils.Error, fmt.Sprintf("Error occured while running k8s proxy %v\n"+
"Try setting different port by using --%s", errormessage.FormatError(err), configStructs.GuiPortTapName))
cancel()
}
}
func requestForAnalysis() {
if !config.Config.Tap.Analysis {
return
}
if err := apiserver.Provider.RequestAnalysis(config.Config.Tap.AnalysisDestination, config.Config.Tap.SleepIntervalSec); err != nil {
logger.Log.Debugf("[Error] failed requesting for analysis %v", err)
mizuProxiedUrl := kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.Tap.GuiPort)
urlPath := fmt.Sprintf("http://%s/api/uploadEntries?dest=%s&interval=%v", mizuProxiedUrl, url.QueryEscape(config.Config.Tap.AnalysisDestination), config.Config.Tap.SleepIntervalSec)
u, parseErr := url.ParseRequestURI(urlPath)
if parseErr != nil {
logger.Log.Fatal("Failed parsing the URL (consider changing the analysis dest URL), err: %v", parseErr)
}
logger.Log.Debugf("Sending get request to %v", u.String())
if response, requestErr := http.Get(u.String()); requestErr != nil {
logger.Log.Errorf("Failed to notify agent for analysis, err: %v", requestErr)
} else if response.StatusCode != 200 {
logger.Log.Errorf("Failed to notify agent for analysis, status code: %v", response.StatusCode)
} else {
logger.Log.Infof(uiUtils.Purple, "Traffic is uploading to UP9 for further analysis")
}
}
@@ -605,11 +598,24 @@ func getNodeHostToTappedPodIpsMap(tappedPods []core.Pod) map[string][]string {
return nodeToTappedPodIPMap
}
func waitForFinish(ctx context.Context, cancel context.CancelFunc) {
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM, syscall.SIGQUIT)
// block until ctx cancel is called or termination signal is received
select {
case <-ctx.Done():
break
case <-sigChan:
cancel()
}
}
func getNamespaces(kubernetesProvider *kubernetes.Provider) []string {
if config.Config.Tap.AllNamespaces {
return []string{mizu.K8sAllNamespaces}
} else if len(config.Config.Tap.Namespaces) > 0 {
return mizu.Unique(config.Config.Tap.Namespaces)
return config.Config.Tap.Namespaces
} else {
return []string{kubernetesProvider.CurrentNamespace()}
}

View File

@@ -3,15 +3,13 @@ package cmd
import (
"context"
"fmt"
"net/http"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/cli/mizu/version"
"github.com/up9inc/mizu/cli/uiUtils"
"net/http"
"time"
)
func runMizuView() {
@@ -36,24 +34,19 @@ func runMizuView() {
return
}
url := GetApiServerUrl()
response, err := http.Get(fmt.Sprintf("%s/", url))
if err == nil && response.StatusCode == 200 {
mizuProxiedUrl := kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.View.GuiPort)
_, err = http.Get(fmt.Sprintf("http://%s/", mizuProxiedUrl))
if err == nil {
logger.Log.Infof("Found a running service %s and open port %d", mizu.ApiServerPodName, config.Config.View.GuiPort)
return
}
logger.Log.Infof("Establishing connection to k8s cluster...")
go startProxyReportErrorIfAny(kubernetesProvider, cancel)
if err := apiserver.Provider.InitAndTestConnection(GetApiServerUrl()); err != nil {
logger.Log.Errorf(uiUtils.Error, "Couldn't connect to API server, check logs")
return
}
time.Sleep(time.Second * 5) // Waiting to be sure the proxy is ready
logger.Log.Infof("Mizu is available at %s\n", url)
openBrowser(url)
if isCompatible, err := version.CheckVersionCompatibility(); err != nil {
logger.Log.Infof("Mizu is available at http://%s\n", kubernetes.GetMizuApiServerProxiedHostAndPath(config.Config.View.GuiPort))
if isCompatible, err := version.CheckVersionCompatibility(config.Config.View.GuiPort); err != nil {
logger.Log.Errorf("Failed to check versions compatibility %v", err)
cancel()
return

View File

@@ -7,6 +7,7 @@ import (
"github.com/up9inc/mizu/cli/mizu"
"io/ioutil"
"os"
"path"
"reflect"
"strconv"
"strings"
@@ -26,7 +27,7 @@ const (
)
var (
Config = ConfigStruct{}
Config = ConfigStruct{}
cmdName string
)
@@ -37,13 +38,9 @@ func InitConfig(cmd *cobra.Command) error {
return err
}
configFilePathFlag := cmd.Flags().Lookup(ConfigFilePathCommandName)
configFilePath := configFilePathFlag.Value.String()
if err := mergeConfigFile(configFilePath); err != nil {
if configFilePathFlag.Changed || !os.IsNotExist(err) {
return fmt.Errorf("invalid config, %w\n"+
"you can regenerate the file by removing it (%v) and using `mizu config -r`", err, configFilePath)
}
if err := mergeConfigFile(); err != nil {
return fmt.Errorf("invalid config, %w\n" +
"you can regenerate the file by removing it (%v) and using `mizu config -r`", err, GetConfigFilePath())
}
cmd.Flags().Visit(initFlag)
@@ -66,10 +63,14 @@ func GetConfigWithDefaults() (string, error) {
return uiUtils.PrettyYaml(defaultConf)
}
func mergeConfigFile(configFilePath string) error {
reader, openErr := os.Open(configFilePath)
func GetConfigFilePath() string {
return path.Join(mizu.GetMizuFolderPath(), "config.yaml")
}
func mergeConfigFile() error {
reader, openErr := os.Open(GetConfigFilePath())
if openErr != nil {
return openErr
return nil
}
buf, readErr := ioutil.ReadAll(reader)
@@ -88,12 +89,7 @@ func mergeConfigFile(configFilePath string) error {
func initFlag(f *pflag.Flag) {
configElemValue := reflect.ValueOf(&Config).Elem()
var flagPath []string
if mizu.Contains([]string{ConfigFilePathCommandName}, f.Name) {
flagPath = []string{f.Name}
} else {
flagPath = []string{cmdName, f.Name}
}
flagPath := []string {cmdName, f.Name}
sliceValue, isSliceValue := f.Value.(pflag.SliceValue)
if !isSliceValue {

View File

@@ -7,17 +7,16 @@ import (
v1 "k8s.io/api/core/v1"
"k8s.io/client-go/util/homedir"
"os"
"path"
"path/filepath"
)
const (
MizuResourcesNamespaceConfigName = "mizu-resources-namespace"
ConfigFilePathCommandName = "config-path"
)
type ConfigStruct struct {
Tap configStructs.TapConfig `yaml:"tap"`
Fetch configStructs.FetchConfig `yaml:"fetch"`
Version configStructs.VersionConfig `yaml:"version"`
View configStructs.ViewConfig `yaml:"view"`
Logs configStructs.LogsConfig `yaml:"logs"`
@@ -28,12 +27,10 @@ type ConfigStruct struct {
Telemetry bool `yaml:"telemetry" default:"true"`
DumpLogs bool `yaml:"dump-logs" default:"false"`
KubeConfigPathStr string `yaml:"kube-config-path"`
ConfigFilePath string `yaml:"config-path,omitempty" readonly:""`
}
func (config *ConfigStruct) SetDefaults() {
config.AgentImage = fmt.Sprintf("gcr.io/up9-docker-hub/mizu/%s:%s", mizu.Branch, mizu.SemVer)
config.ConfigFilePath = path.Join(mizu.GetMizuFolderPath(), "config.yaml")
}
func (config *ConfigStruct) ImagePullPolicy() v1.PullPolicy {

View File

@@ -0,0 +1,15 @@
package configStructs
const (
DirectoryFetchName = "directory"
FromTimestampFetchName = "from"
ToTimestampFetchName = "to"
GuiPortFetchName = "gui-port"
)
type FetchConfig struct {
Directory string `yaml:"directory" default:"."`
FromTimestamp int `yaml:"from" default:"0"`
ToTimestamp int `yaml:"to" default:"0"`
GuiPort uint16 `yaml:"gui-port" default:"8899"`
}

View File

@@ -3,8 +3,10 @@ package configStructs
import (
"errors"
"fmt"
"github.com/up9inc/mizu/shared/units"
"regexp"
"strings"
"github.com/up9inc/mizu/shared/units"
)
const (
@@ -15,9 +17,9 @@ const (
PlainTextFilterRegexesTapName = "regex-masking"
DisableRedactionTapName = "no-redact"
HumanMaxEntriesDBSizeTapName = "max-entries-db-size"
DirectionTapName = "direction"
DryRunTapName = "dry-run"
EnforcePolicyFile = "traffic-validation-file"
EnforcePolicyFileDeprecated = "test-rules"
EnforcePolicyFile = "test-rules"
)
type TapConfig struct {
@@ -32,9 +34,9 @@ type TapConfig struct {
HealthChecksUserAgentHeaders []string `yaml:"ignored-user-agents"`
DisableRedaction bool `yaml:"no-redact" default:"false"`
HumanMaxEntriesDBSize string `yaml:"max-entries-db-size" default:"200MB"`
Direction string `yaml:"direction" default:"in"`
DryRun bool `yaml:"dry-run" default:"false"`
EnforcePolicyFile string `yaml:"traffic-validation-file"`
EnforcePolicyFileDeprecated string `yaml:"test-rules"`
EnforcePolicyFile string `yaml:"test-rules"`
ApiServerResources Resources `yaml:"api-server-resources"`
TapperResources Resources `yaml:"tapper-resources"`
}
@@ -51,6 +53,15 @@ func (config *TapConfig) PodRegex() *regexp.Regexp {
return podRegex
}
func (config *TapConfig) TapOutgoing() bool {
directionLowerCase := strings.ToLower(config.Direction)
if directionLowerCase == "any" {
return true
}
return false
}
func (config *TapConfig) MaxEntriesDBSizeBytes() int64 {
maxEntriesDBSizeBytes, _ := units.HumanReadableToBytes(config.HumanMaxEntriesDBSize)
return maxEntriesDBSizeBytes
@@ -67,5 +78,10 @@ func (config *TapConfig) Validate() error {
return errors.New(fmt.Sprintf("Could not parse --%s value %s", HumanMaxEntriesDBSizeTapName, config.HumanMaxEntriesDBSize))
}
directionLowerCase := strings.ToLower(config.Direction)
if directionLowerCase != "any" && directionLowerCase != "in" {
return errors.New(fmt.Sprintf("%s is not a valid value for flag --%s. Acceptable values are in/any.", config.Direction, DirectionTapName))
}
return nil
}
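A small sketch of how the restored Direction setting behaves (the struct literal is illustrative and omits the other fields):
cfg := configStructs.TapConfig{Direction: "Any"}
fmt.Println(cfg.TapOutgoing()) // true: the comparison is case-insensitive; "--anydirection" is added to the tapper command
cfg.Direction = "in"
fmt.Println(cfg.TapOutgoing()) // false: only incoming traffic is tapped
Validate() additionally rejects any Direction value other than in/any.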

View File

@@ -1,7 +1,6 @@
package config_test
import (
"fmt"
"github.com/up9inc/mizu/cli/config"
"reflect"
"strings"
@@ -17,8 +16,7 @@ func TestConfigWriteIgnoresReadonlyFields(t *testing.T) {
configWithDefaults, _ := config.GetConfigWithDefaults()
for _, readonlyField := range readonlyFields {
t.Run(readonlyField, func(t *testing.T) {
readonlyFieldToCheck := fmt.Sprintf("\n%s:", readonlyField)
if strings.Contains(configWithDefaults, readonlyFieldToCheck) {
if strings.Contains(configWithDefaults, readonlyField) {
t.Errorf("unexpected result - readonly field: %v, config: %v", readonlyField, configWithDefaults)
}
})

View File

@@ -1,24 +0,0 @@
package config
import (
"os"
"strconv"
)
const (
ApiServerRetries = "API_SERVER_RETRIES"
)
func GetIntEnvConfig(key string, defaultValue int) int {
value := os.Getenv(key)
if value == "" {
return defaultValue
}
intValue, err := strconv.Atoi(value)
if err != nil {
return defaultValue
}
return intValue
}

View File

@@ -11,7 +11,6 @@ require (
github.com/spf13/cobra v1.1.3
github.com/spf13/pflag v1.0.5
github.com/up9inc/mizu/shared v0.0.0
github.com/up9inc/mizu/tap/api v0.0.0
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b
k8s.io/api v0.21.2
k8s.io/apimachinery v0.21.2
@@ -20,5 +19,3 @@ require (
)
replace github.com/up9inc/mizu/shared v0.0.0 => ../shared
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../tap/api

View File

@@ -7,18 +7,15 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"
"path/filepath"
"regexp"
"strconv"
"github.com/up9inc/mizu/cli/config/configStructs"
"github.com/up9inc/mizu/cli/logger"
"io"
"github.com/up9inc/mizu/cli/mizu"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/tap/api"
"io"
core "k8s.io/api/core/v1"
rbac "k8s.io/api/rbac/v1"
k8serrors "k8s.io/apimachinery/pkg/api/errors"
@@ -58,21 +55,21 @@ func NewProvider(kubeConfigPath string) (*Provider, error) {
restClientConfig, err := kubernetesConfig.ClientConfig()
if err != nil {
if clientcmd.IsEmptyConfig(err) {
return nil, fmt.Errorf("couldn't find the kube config file, or file is empty (%s)\n"+
return nil, fmt.Errorf("couldn't find the kube config file, or file is empty (%s)\n" +
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
if clientcmd.IsConfigurationInvalid(err) {
return nil, fmt.Errorf("invalid kube config file (%s)\n"+
return nil, fmt.Errorf("invalid kube config file (%s)\n" +
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
return nil, fmt.Errorf("error while using kube config (%s)\n"+
return nil, fmt.Errorf("error while using kube config (%s)\n" +
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
clientSet, err := getClientSet(restClientConfig)
if err != nil {
return nil, fmt.Errorf("error while using kube config (%s)\n"+
return nil, fmt.Errorf("error while using kube config (%s)\n" +
"you can set alternative kube config file path by adding the kube-config-path field to the mizu config file, err: %w", kubeConfigPath, err)
}
@@ -151,7 +148,7 @@ type ApiServerOptions struct {
PodImage string
ServiceAccountName string
IsNamespaceRestricted bool
MizuApiFilteringOptions *api.TrafficFilteringOptions
MizuApiFilteringOptions *shared.TrafficFilteringOptions
MaxEntriesDBSizeBytes int64
Resources configStructs.Resources
ImagePullPolicy core.PullPolicy
@@ -576,11 +573,11 @@ func (provider *Provider) CreateConfigMap(ctx context.Context, namespace string,
return nil
}
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeToTappedPodIPMap map[string][]string, serviceAccountName string, resources configStructs.Resources, imagePullPolicy core.PullPolicy, mizuApiFilteringOptions *api.TrafficFilteringOptions) error {
logger.Log.Debugf("Applying %d tapper daemon sets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeToTappedPodIPMap), namespace, daemonSetName, podImage, tapperPodName)
func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespace string, daemonSetName string, podImage string, tapperPodName string, apiServerPodIp string, nodeToTappedPodIPMap map[string][]string, serviceAccountName string, tapOutgoing bool, resources configStructs.Resources, imagePullPolicy core.PullPolicy) error {
logger.Log.Debugf("Applying %d tapper deamonsets, ns: %s, daemonSetName: %s, podImage: %s, tapperPodName: %s", len(nodeToTappedPodIPMap), namespace, daemonSetName, podImage, tapperPodName)
if len(nodeToTappedPodIPMap) == 0 {
return fmt.Errorf("daemon set %s must tap at least 1 pod", daemonSetName)
return fmt.Errorf("Daemon set %s must tap at least 1 pod", daemonSetName)
}
nodeToTappedPodIPMapJsonStr, err := json.Marshal(nodeToTappedPodIPMap)
@@ -588,17 +585,14 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
return err
}
marshaledFilteringOptions, err := json.Marshal(mizuApiFilteringOptions)
if err != nil {
return err
}
mizuCmd := []string{
"./mizuagent",
"-i", "any",
"--tap",
"--api-server-address", fmt.Sprintf("ws://%s/wsTapper", apiServerPodIp),
"--nodefrag",
}
if tapOutgoing {
mizuCmd = append(mizuCmd, "--anydirection")
}
agentContainer := applyconfcore.Container()
@@ -610,8 +604,6 @@ func (provider *Provider) ApplyMizuTapperDaemonSet(ctx context.Context, namespac
agentContainer.WithEnv(
applyconfcore.EnvVar().WithName(shared.HostModeEnvVar).WithValue("1"),
applyconfcore.EnvVar().WithName(shared.TappedAddressesPerNodeDictEnvVar).WithValue(string(nodeToTappedPodIPMapJsonStr)),
applyconfcore.EnvVar().WithName(shared.GoGCEnvVar).WithValue("12800"),
applyconfcore.EnvVar().WithName(shared.MizuFilteringOptionsEnvVar).WithValue(string(marshaledFilteringOptions)),
)
agentContainer.WithEnv(
applyconfcore.EnvVar().WithName(shared.NodeNameEnvVar).WithValueFrom(
@@ -747,16 +739,6 @@ func (provider *Provider) GetPodLogs(namespace string, podName string, ctx conte
return str, nil
}
func (provider *Provider) GetNamespaceEvents(namespace string, ctx context.Context) (string, error) {
eventsOpts := metav1.ListOptions{}
eventList, err := provider.clientSet.CoreV1().Events(namespace).List(ctx, eventsOpts)
if err != nil {
return "", fmt.Errorf("error getting events on ns: %s, %w", namespace, err)
}
return eventList.String(), nil
}
func getClientSet(config *restclient.Config) (*kubernetes.Clientset, error) {
clientSet, err := kubernetes.NewForConfig(config)
if err != nil {

View File

@@ -39,7 +39,6 @@ func StartProxy(kubernetesProvider *Provider, mizuPort uint16, mizuNamespace str
server := http.Server{
Handler: mux,
}
return server.Serve(l)
}

cli/mizu/controlSocket.go (new file, 42 lines)
View File

@@ -0,0 +1,42 @@
package mizu
import (
"encoding/json"
"github.com/gorilla/websocket"
"github.com/up9inc/mizu/shared"
core "k8s.io/api/core/v1"
"time"
)
type ControlSocket struct {
connection *websocket.Conn
}
func CreateControlSocket(socketServerAddress string) (*ControlSocket, error) {
connection, err := shared.ConnectToSocketServer(socketServerAddress, 30, 2 * time.Second, true)
if err != nil {
return nil, err
} else {
return &ControlSocket{connection: connection}, nil
}
}
func (controlSocket *ControlSocket) SendNewTappedPodsListMessage(pods []core.Pod) error {
podInfos := make([]shared.PodInfo, 0)
for _, pod := range pods {
podInfos = append(podInfos, shared.PodInfo{Name: pod.Name, Namespace: pod.Namespace})
}
tapStatus := shared.TapStatus{Pods: podInfos}
socketMessage := shared.CreateWebSocketStatusMessage(tapStatus)
jsonMessage, err := json.Marshal(socketMessage)
if err != nil {
return err
}
err = controlSocket.connection.WriteMessage(websocket.TextMessage, jsonMessage)
if err != nil {
return err
}
return nil
}

View File

@@ -39,39 +39,22 @@ func DumpLogs(provider *kubernetes.Provider, ctx context.Context, filePath strin
} else {
logger.Log.Debugf("Successfully read log length %d for pod: %s.%s", len(logs), pod.Namespace, pod.Name)
}
if err := AddStrToZip(zipWriter, logs, fmt.Sprintf("%s.%s.log", pod.Namespace, pod.Name)); err != nil {
logger.Log.Errorf("Failed write logs, %v", err)
} else {
logger.Log.Debugf("Successfully added log length %d from pod: %s.%s", len(logs), pod.Namespace, pod.Name)
}
}
events, err := provider.GetNamespaceEvents(config.Config.MizuResourcesNamespace, ctx)
if err != nil {
logger.Log.Debugf("Failed to get k8b events, %v", err)
} else {
logger.Log.Debugf("Successfully read events for k8b namespace: %s", config.Config.MizuResourcesNamespace)
}
if err := AddStrToZip(zipWriter, events, fmt.Sprintf("%s_events.log", config.Config.MizuResourcesNamespace)); err != nil {
logger.Log.Debugf("Failed write logs, %v", err)
} else {
logger.Log.Debugf("Successfully added events for k8b namespace: %s", config.Config.MizuResourcesNamespace)
}
if err := AddFileToZip(zipWriter, config.Config.ConfigFilePath); err != nil {
if err := AddFileToZip(zipWriter, config.GetConfigFilePath()); err != nil {
logger.Log.Debugf("Failed write file, %v", err)
} else {
logger.Log.Debugf("Successfully added file %s", config.Config.ConfigFilePath)
logger.Log.Debugf("Successfully added file %s", config.GetConfigFilePath())
}
if err := AddFileToZip(zipWriter, logger.GetLogFilePath()); err != nil {
logger.Log.Debugf("Failed write file, %v", err)
} else {
logger.Log.Debugf("Successfully added file %s", logger.GetLogFilePath())
}
logger.Log.Infof("You can find the zip file with all logs in %s\n", filePath)
return nil
}

View File

@@ -3,11 +3,9 @@ package fsUtils
import (
"archive/zip"
"fmt"
"github.com/up9inc/mizu/cli/logger"
"io"
"os"
"path/filepath"
"strings"
)
func AddFileToZip(zipWriter *zip.Writer, filename string) error {
@@ -55,60 +53,3 @@ func AddStrToZip(writer *zip.Writer, logs string, fileName string) error {
}
return nil
}
func Unzip(reader *zip.Reader, dest string) error {
dest, _ = filepath.Abs(dest)
_ = os.MkdirAll(dest, os.ModePerm)
// Closure to address file descriptors issue with all the deferred .Close() methods
extractAndWriteFile := func(f *zip.File) error {
rc, err := f.Open()
if err != nil {
return err
}
defer func() {
if err := rc.Close(); err != nil {
panic(err)
}
}()
path := filepath.Join(dest, f.Name)
// Check for ZipSlip (Directory traversal)
if !strings.HasPrefix(path, filepath.Clean(dest)+string(os.PathSeparator)) {
return fmt.Errorf("illegal file path: %s", path)
}
if f.FileInfo().IsDir() {
_ = os.MkdirAll(path, f.Mode())
} else {
_ = os.MkdirAll(filepath.Dir(path), f.Mode())
logger.Log.Infof("writing HAR file [ %v ]", path)
f, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, f.Mode())
if err != nil {
return err
}
defer func() {
if err := f.Close(); err != nil {
panic(err)
}
logger.Log.Info(" done")
}()
_, err = io.Copy(f, rc)
if err != nil {
return err
}
}
return nil
}
for _, f := range reader.File {
err := extractAndWriteFile(f)
if err != nil {
return err
}
}
return nil
}

View File

@@ -9,17 +9,3 @@ func Contains(slice []string, containsValue string) bool {
return false
}
func Unique(slice []string) []string {
keys := make(map[string]bool)
var list []string
for _, entry := range slice {
if _, value := keys[entry]; !value {
keys[entry] = true
list = append(list, entry)
}
}
return list
}

View File

@@ -1,9 +1,7 @@
package mizu_test
import (
"fmt"
"github.com/up9inc/mizu/cli/mizu"
"reflect"
"testing"
)
@@ -90,41 +88,3 @@ func TestContainsNilSlice(t *testing.T) {
})
}
}
func TestUniqueNoDuplicateValues(t *testing.T) {
tests := []struct {
Slice []string
Expected []string
}{
{Slice: []string{"apple", "orange", "banana", "grapes"}, Expected: []string{"apple", "orange", "banana", "grapes"}},
{Slice: []string{"dog", "cat", "mouse"}, Expected: []string{"dog", "cat", "mouse"}},
}
for index, test := range tests {
t.Run(fmt.Sprintf("%v", index), func(t *testing.T) {
actual := mizu.Unique(test.Slice)
if !reflect.DeepEqual(test.Expected, actual) {
t.Errorf("unexpected result - Expected: %v, actual: %v", test.Expected, actual)
}
})
}
}
func TestUniqueDuplicateValues(t *testing.T) {
tests := []struct {
Slice []string
Expected []string
}{
{Slice: []string{"apple", "apple", "orange", "orange", "banana", "banana", "grapes", "grapes"}, Expected: []string{"apple", "orange", "banana", "grapes"}},
{Slice: []string{"dog", "cat", "cat", "mouse"}, Expected: []string{"dog", "cat", "mouse"}},
}
for index, test := range tests {
t.Run(fmt.Sprintf("%v", index), func(t *testing.T) {
actual := mizu.Unique(test.Slice)
if !reflect.DeepEqual(test.Expected, actual) {
t.Errorf("unexpected result - Expected: %v, actual: %v", test.Expected, actual)
}
})
}
}

View File

@@ -2,22 +2,43 @@ package version
import (
"context"
"encoding/json"
"fmt"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"io/ioutil"
"net/http"
"runtime"
"net/url"
"time"
"github.com/google/go-github/v37/github"
"github.com/up9inc/mizu/cli/uiUtils"
"github.com/up9inc/mizu/shared"
"github.com/up9inc/mizu/shared/semver"
)
func CheckVersionCompatibility() (bool, error) {
apiSemVer, err := apiserver.Provider.GetVersion()
func getApiVersion(port uint16) (string, error) {
versionUrl, _ := url.Parse(fmt.Sprintf("http://localhost:%d/mizu/metadata/version", port))
req := &http.Request{
Method: http.MethodGet,
URL: versionUrl,
}
statusResp, err := http.DefaultClient.Do(req)
if err != nil {
return "", err
}
defer statusResp.Body.Close()
versionResponse := &shared.VersionResponse{}
if err := json.NewDecoder(statusResp.Body).Decode(&versionResponse); err != nil {
return "", err
}
return versionResponse.SemVer, nil
}
func CheckVersionCompatibility(port uint16) (bool, error) {
apiSemVer, err := getApiVersion(port)
if err != nil {
return false, err
}
@@ -38,7 +59,6 @@ func CheckNewerVersion(versionChan chan string) {
latestRelease, _, err := client.Repositories.GetLatestRelease(context.Background(), "up9inc", "mizu")
if err != nil {
logger.Log.Debugf("[ERROR] Failed to get latest release")
versionChan <- ""
return
}
@@ -51,14 +71,12 @@ func CheckNewerVersion(versionChan chan string) {
}
if versionFileUrl == "" {
logger.Log.Debugf("[ERROR] Version file not found in the latest release")
versionChan <- ""
return
}
res, err := http.Get(versionFileUrl)
if err != nil {
logger.Log.Debugf("[ERROR] Failed to get the version file %v", err)
versionChan <- ""
return
}
@@ -66,7 +84,6 @@ func CheckNewerVersion(versionChan chan string) {
res.Body.Close()
if err != nil {
logger.Log.Debugf("[ERROR] Failed to read the version file -> %v", err)
versionChan <- ""
return
}
gitHubVersion := string(data)
@@ -77,8 +94,7 @@ func CheckNewerVersion(versionChan chan string) {
logger.Log.Debugf("Finished version validation, github version %v, current version %v, took %v", gitHubVersion, currentSemVer, time.Since(start))
if gitHubVersionSemVer.GreaterThan(currentSemVer) {
versionChan <- fmt.Sprintf("Update available! %v -> %v (curl -Lo mizu %v/mizu_%s_amd64 && chmod 755 mizu)", mizu.SemVer, gitHubVersion, *latestRelease.HTMLURL, runtime.GOOS)
} else {
versionChan <- ""
versionChan <- fmt.Sprintf("Update available! %v -> %v (%v)", mizu.SemVer, gitHubVersion, *latestRelease.HTMLURL)
}
versionChan <- ""
}

View File

@@ -5,10 +5,11 @@ import (
"encoding/json"
"fmt"
"github.com/denisbrodbeck/machineid"
"github.com/up9inc/mizu/cli/apiserver"
"github.com/up9inc/mizu/cli/config"
"github.com/up9inc/mizu/cli/kubernetes"
"github.com/up9inc/mizu/cli/logger"
"github.com/up9inc/mizu/cli/mizu"
"io/ioutil"
"net/http"
)
@@ -34,15 +35,35 @@ func ReportRun(cmd string, args interface{}) {
logger.Log.Debugf("successfully reported telemetry for cmd %v", cmd)
}
func ReportAPICalls() {
func ReportAPICalls(mizuPort uint16) {
if !shouldRunTelemetry() {
logger.Log.Debugf("not reporting telemetry")
return
}
generalStats, err := apiserver.Provider.GetGeneralStats()
if err != nil {
logger.Log.Debugf("[ERROR] failed get general stats from api server %v", err)
mizuProxiedUrl := kubernetes.GetMizuApiServerProxiedHostAndPath(mizuPort)
generalStatsUrl := fmt.Sprintf("http://%s/api/generalStats", mizuProxiedUrl)
response, requestErr := http.Get(generalStatsUrl)
if requestErr != nil {
logger.Log.Debugf("ERROR: failed to get general stats for telemetry, err: %v", requestErr)
return
} else if response.StatusCode != 200 {
logger.Log.Debugf("ERROR: failed to get general stats for telemetry, status code: %v", response.StatusCode)
return
}
defer func() { _ = response.Body.Close() }()
data, readErr := ioutil.ReadAll(response.Body)
if readErr != nil {
logger.Log.Debugf("ERROR: failed to read general stats for telemetry, err: %v", readErr)
return
}
var generalStats map[string]interface{}
if parseErr := json.Unmarshal(data, &generalStats); parseErr != nil {
logger.Log.Debugf("ERROR: failed to parse general stats for telemetry, err: %v", parseErr)
return
}

View File

@@ -1,12 +0,0 @@
#!/bin/bash
for f in tap/extensions/*; do
if [ -d "$f" ]; then
extension=$(basename $f) && \
cd tap/extensions/${extension} && \
go build -buildmode=plugin -o ../${extension}.so . && \
cd ../../.. && \
mkdir -p agent/build/extensions && \
cp tap/extensions/${extension}.so agent/build/extensions
fi
done

View File

@@ -1,76 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at mizu@up9.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq

View File

@@ -1,34 +1,37 @@
# Traffic validation rules
# API rules validation
This feature allows you to define a set of simple rules and test the traffic against them.
This feature allows you to define a set of simple rules and test the API against them.
Such validation may test response for specific JSON fields, headers, etc.
## Examples
Example 1: HTTP request (REST API call) that didn't pass validation is highlighted in red
Example 1: HTTP request (REST API call) that didnt pass validation is highlighted in red
![Simple UI](../assets/validation-example1.png)
- - -
Example 2: Details pane shows the validation rule details and whether it passed or failed
![Simple UI](../assets/validation-example2.png)
## How to use
To use this feature, create a simple rules file (see details below) and pass it as a parameter to the `mizu tap` command. For example, if the rules are stored in a file named `rules.yaml`, run the following command:
```shell
mizu tap --traffic-validation-file rules.yaml
mizu tap --test-rules rules.yaml PODNAME
```
## Rules file structure
The structure of the traffic-validation-file is:
The structure of the test-rules-file is:
* `name`: string, name of the rule
* `type`: string, type of the rule, must be `json` or `header` or `latency`
@@ -59,7 +62,6 @@ rules:
service: "carts.*"
```
### Explanation:
* First rule `holy-in-name-property`:
@@ -72,4 +74,5 @@ rules:
* Third rule `latency-test`:
> This rule will be applied to all requests made to `carts.*` services. If the latency of the response is greater than `1ms`, the request is marked as a failure; otherwise it is marked as a success.
> This rule will be applied to all requests made to `carts.*` services. If the latency of the response is greater than `1`, the request is marked as a failure; otherwise it is marked as a success.
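For orientation, the sketch below shows one way such a rules file could be loaded and type-checked in Go. It is a minimal illustration only: the `ruleSpec` and `rulesFile` structs are assumptions made for this example, while the `name` and `type` fields, the allowed `json`/`header`/`latency` values, and the `service` key follow the structure described above (the actual rule struct used by mizu may differ).

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"gopkg.in/yaml.v3"
)

// ruleSpec mirrors the documented fields; the real rule struct may carry more.
type ruleSpec struct {
	Name    string `yaml:"name"`
	Type    string `yaml:"type"`              // must be "json", "header" or "latency"
	Service string `yaml:"service,omitempty"` // e.g. "carts.*"
	Latency int64  `yaml:"latency,omitempty"` // threshold for "latency" rules
}

type rulesFile struct {
	Rules []ruleSpec `yaml:"rules"`
}

func main() {
	data, err := ioutil.ReadFile("rules.yaml")
	if err != nil {
		log.Fatalf("failed to read rules file: %v", err)
	}

	var rf rulesFile
	if err := yaml.Unmarshal(data, &rf); err != nil {
		log.Fatalf("failed to parse rules file: %v", err)
	}

	allowedTypes := map[string]bool{"json": true, "header": true, "latency": true}
	for _, rule := range rf.Rules {
		if !allowedTypes[rule.Type] {
			log.Fatalf("rule %q has unsupported type %q", rule.Name, rule.Type)
		}
		fmt.Printf("loaded rule %q (type: %s, service: %q)\n", rule.Name, rule.Type, rule.Service)
	}
}
```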

View File

@@ -1,33 +0,0 @@
![Mizu: The API Traffic Viewer for Kubernetes](../assets/mizu-logo.svg)
# Testing guidelines
## Generic guidelines
* Use "[testing](https://pkg.go.dev/testing)" package
* Write [Table-driven tests using subtests](https://go.dev/blog/subtests) (a sketch follows at the end of these guidelines)
* Use cleanup in test/subtest in order to clean up resources
* Name the test func "Test<tested_func_name><tested_case>"
## Unit tests
* Position the test file inside the folder of the tested package
* In case of internal func testing
* Name the test file "<tested_file_name>_internal_test.go"
* Name the test package same as the package being tested
* Example - [Config](../cli/config/config_internal_test.go)
* In case of exported func testing
* Name the test file "<tested_file_name>_test.go"
* Name the test package "<tested_package>_test"
* Example - [Slice Utils](../cli/mizu/sliceUtils_test.go)
* Run test coverage to verify that all cases and lines in the func are covered
## Acceptance tests
* Position the test file inside the [acceptance tests folder](../acceptanceTests)
* Name the file "<tested_command>_test.go"
* Name the package "acceptanceTests"
* Do not run as part of the short tests
* Use/Create generic test utils func in acceptanceTests/testsUtils
* Don't use sleep inside the tests; actively check for the expected condition instead
* Running acceptance tests locally
* Switch to the branch that is being tested
* Run acceptanceTests/setup.sh
* Run tests (make acceptance-test)
* Example - [Tap](../acceptanceTests/tap_test.go)
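To make the guidelines above concrete, here is a minimal sketch of a table-driven unit test with subtests and cleanup, following the naming convention described. The tested function `Add` and its package are hypothetical and are inlined only to keep the sketch self-contained.

```go
package mathutils_test

import (
	"fmt"
	"testing"
)

// Add stands in for a function exported by the tested package; it is defined
// here only so that the sketch compiles on its own.
func Add(a int, b int) int { return a + b }

func TestAddTwoNumbers(t *testing.T) {
	tests := []struct {
		A        int
		B        int
		Expected int
	}{
		{A: 1, B: 2, Expected: 3},
		{A: 10, B: 5, Expected: 15},
		{A: -3, B: 3, Expected: 0},
	}

	for index, test := range tests {
		t.Run(fmt.Sprintf("%v", index), func(t *testing.T) {
			t.Cleanup(func() {
				// Release any per-subtest resources here.
			})

			actual := Add(test.A, test.B)
			if actual != test.Expected {
				t.Errorf("unexpected result - Expected: %v, actual: %v", test.Expected, actual)
			}
		})
	}
}
```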

View File

@@ -8,5 +8,4 @@ const (
MaxEntriesDBSizeBytesEnvVar = "MAX_ENTRIES_DB_BYTES"
RulePolicyPath = "/app/enforce-policy/"
RulePolicyFileName = "enforce-policy.yaml"
GoGCEnvVar = "GOGC"
)

View File

@@ -5,7 +5,7 @@ import (
"io/ioutil"
"strings"
"gopkg.in/yaml.v3"
yaml "gopkg.in/yaml.v3"
)
type WebSocketMessageType string
@@ -74,6 +74,12 @@ func CreateWebSocketMessageTypeAnalyzeStatus(analyzeStatus AnalyzeStatus) WebSoc
}
}
type TrafficFilteringOptions struct {
HealthChecksUserAgentHeaders []string
PlainTextMaskingRegexes []*SerializableRegexp
DisableRedaction bool
}
type VersionResponse struct {
SemVer string `json:"semver"`
}
@@ -93,11 +99,6 @@ type RulePolicy struct {
Name string `yaml:"name"`
}
type RulesMatched struct {
Matched bool `json:"matched"`
Rule RulePolicy `json:"rule"`
}
func (r *RulePolicy) validateType() bool {
permitedTypes := []string{"json", "header", "latency"}
_, found := Find(permitedTypes, r.Type)

View File

@@ -1,4 +1,4 @@
package api
package shared
import "regexp"

View File

@@ -1,8 +1,8 @@
package utils
package shared
import (
"fmt"
"github.com/gorilla/websocket"
"github.com/romana/rlog"
"time"
)
@@ -11,23 +11,24 @@ const (
DEFAULT_SOCKET_RETRY_SLEEP_TIME = time.Second * 10
)
func ConnectToSocketServer(address string) (*websocket.Conn, error) {
func ConnectToSocketServer(address string, retries int, retrySleepTime time.Duration, hideTimeoutErrors bool) (*websocket.Conn, error) {
var err error
var connection *websocket.Conn
try := 0
// Connection to server fails if client pod is up before server.
// Retries solve this issue.
for try < DEFAULT_SOCKET_RETRIES {
rlog.Infof("Trying to connect to websocket: %s, attempt: %v/%v", address, try, DEFAULT_SOCKET_RETRIES)
for try < retries {
connection, _, err = websocket.DefaultDialer.Dial(address, nil)
if err != nil {
rlog.Warnf("Failed connecting to websocket: %s, attempt: %v/%v, err: %s, (%v,%+v)", address, try, DEFAULT_SOCKET_RETRIES, err, err, err)
try++
if !hideTimeoutErrors {
fmt.Printf("Failed connecting to websocket server: %s, (%v,%+v)\n", err, err, err)
}
} else {
break
}
time.Sleep(DEFAULT_SOCKET_RETRY_SLEEP_TIME)
time.Sleep(retrySleepTime)
}
if err != nil {

View File

@@ -1,199 +0,0 @@
package api
import (
"bufio"
"plugin"
"sync"
"time"
)
type Protocol struct {
Name string `json:"name"`
LongName string `json:"longName"`
Abbreviation string `json:"abbreviation"`
Version string `json:"version"`
BackgroundColor string `json:"backgroundColor"`
ForegroundColor string `json:"foregroundColor"`
FontSize int8 `json:"fontSize"`
ReferenceLink string `json:"referenceLink"`
Ports []string `json:"ports"`
Priority uint8 `json:"priority"`
}
type Extension struct {
Protocol *Protocol
Path string
Plug *plugin.Plugin
Dissector Dissector
MatcherMap *sync.Map
}
type ConnectionInfo struct {
ClientIP string
ClientPort string
ServerIP string
ServerPort string
IsOutgoing bool
}
type TcpID struct {
SrcIP string
DstIP string
SrcPort string
DstPort string
Ident string
}
type CounterPair struct {
Request uint
Response uint
}
type GenericMessage struct {
IsRequest bool `json:"isRequest"`
CaptureTime time.Time `json:"captureTime"`
Payload interface{} `json:"payload"`
}
type RequestResponsePair struct {
Request GenericMessage `json:"request"`
Response GenericMessage `json:"response"`
}
// `Protocol` is modified in the later stages of data propagation. Therefore it's not a pointer.
type OutputChannelItem struct {
Protocol Protocol
Timestamp int64
ConnectionInfo *ConnectionInfo
Pair *RequestResponsePair
}
type SuperTimer struct {
CaptureTime time.Time
}
type SuperIdentifier struct {
Protocol *Protocol
IsClosedOthers bool
}
type Dissector interface {
Register(*Extension)
Ping()
Dissect(b *bufio.Reader, isClient bool, tcpID *TcpID, counterPair *CounterPair, superTimer *SuperTimer, superIdentifier *SuperIdentifier, emitter Emitter, options *TrafficFilteringOptions) error
Analyze(item *OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *MizuEntry
Summarize(entry *MizuEntry) *BaseEntryDetails
Represent(entry *MizuEntry) (protocol Protocol, object []byte, bodySize int64, err error)
}
type Emitting struct {
AppStats *AppStats
OutputChannel chan *OutputChannelItem
}
type Emitter interface {
Emit(item *OutputChannelItem)
}
func (e *Emitting) Emit(item *OutputChannelItem) {
e.OutputChannel <- item
e.AppStats.IncMatchedPairs()
}
type MizuEntry struct {
ID uint `gorm:"primarykey"`
CreatedAt time.Time
UpdatedAt time.Time
ProtocolName string `json:"protocolName" gorm:"column:protocolName"`
ProtocolLongName string `json:"protocolLongName" gorm:"column:protocolLongName"`
ProtocolAbbreviation string `json:"protocolAbbreviation" gorm:"column:protocolAbbreviation"`
ProtocolVersion string `json:"protocolVersion" gorm:"column:protocolVersion"`
ProtocolBackgroundColor string `json:"protocolBackgroundColor" gorm:"column:protocolBackgroundColor"`
ProtocolForegroundColor string `json:"protocolForegroundColor" gorm:"column:protocolForegroundColor"`
ProtocolFontSize int8 `json:"protocolFontSize" gorm:"column:protocolFontSize"`
ProtocolReferenceLink string `json:"protocolReferenceLink" gorm:"column:protocolReferenceLink"`
Entry string `json:"entry,omitempty" gorm:"column:entry"`
EntryId string `json:"entryId" gorm:"column:entryId"`
Url string `json:"url" gorm:"column:url"`
Method string `json:"method" gorm:"column:method"`
Status int `json:"status" gorm:"column:status"`
RequestSenderIp string `json:"requestSenderIp" gorm:"column:requestSenderIp"`
Service string `json:"service" gorm:"column:service"`
Timestamp int64 `json:"timestamp" gorm:"column:timestamp"`
ElapsedTime int64 `json:"elapsedTime" gorm:"column:elapsedTime"`
Path string `json:"path" gorm:"column:path"`
ResolvedSource string `json:"resolvedSource,omitempty" gorm:"column:resolvedSource"`
ResolvedDestination string `json:"resolvedDestination,omitempty" gorm:"column:resolvedDestination"`
SourceIp string `json:"sourceIp,omitempty" gorm:"column:sourceIp"`
DestinationIp string `json:"destinationIp,omitempty" gorm:"column:destinationIp"`
SourcePort string `json:"sourcePort,omitempty" gorm:"column:sourcePort"`
DestinationPort string `json:"destinationPort,omitempty" gorm:"column:destinationPort"`
IsOutgoing bool `json:"isOutgoing,omitempty" gorm:"column:isOutgoing"`
EstimatedSizeBytes int `json:"-" gorm:"column:estimatedSizeBytes"`
}
type MizuEntryWrapper struct {
Protocol Protocol `json:"protocol"`
Representation string `json:"representation"`
BodySize int64 `json:"bodySize"`
Data MizuEntry `json:"data"`
Rules []map[string]interface{} `json:"rulesMatched,omitempty"`
}
type BaseEntryDetails struct {
Id string `json:"id,omitempty"`
Protocol Protocol `json:"protocol,omitempty"`
Url string `json:"url,omitempty"`
RequestSenderIp string `json:"requestSenderIp,omitempty"`
Service string `json:"service,omitempty"`
Path string `json:"path,omitempty"`
Summary string `json:"summary,omitempty"`
StatusCode int `json:"statusCode"`
Method string `json:"method,omitempty"`
Timestamp int64 `json:"timestamp,omitempty"`
SourceIp string `json:"sourceIp,omitempty"`
DestinationIp string `json:"destinationIp,omitempty"`
SourcePort string `json:"sourcePort,omitempty"`
DestinationPort string `json:"destinationPort,omitempty"`
IsOutgoing bool `json:"isOutgoing,omitempty"`
Latency int64 `json:"latency,omitempty"`
Rules ApplicableRules `json:"rules,omitempty"`
}
type ApplicableRules struct {
Latency int64 `json:"latency,omitempty"`
Status bool `json:"status,omitempty"`
NumberOfRules int `json:"numberOfRules,omitempty"`
}
type DataUnmarshaler interface {
UnmarshalData(*MizuEntry) error
}
func (bed *BaseEntryDetails) UnmarshalData(entry *MizuEntry) error {
bed.Protocol = Protocol{
Name: entry.ProtocolName,
LongName: entry.ProtocolLongName,
Abbreviation: entry.ProtocolAbbreviation,
Version: entry.ProtocolVersion,
BackgroundColor: entry.ProtocolBackgroundColor,
ForegroundColor: entry.ProtocolForegroundColor,
FontSize: entry.ProtocolFontSize,
ReferenceLink: entry.ProtocolReferenceLink,
}
bed.Id = entry.EntryId
bed.Url = entry.Url
bed.Service = entry.Service
bed.Summary = entry.Path
bed.StatusCode = entry.Status
bed.Method = entry.Method
bed.Timestamp = entry.Timestamp
bed.RequestSenderIp = entry.RequestSenderIp
bed.IsOutgoing = entry.IsOutgoing
return nil
}
const (
TABLE string = "table"
BODY string = "body"
)

View File

@@ -1,3 +0,0 @@
module github.com/up9inc/mizu/tap/api
go 1.16

View File

@@ -1,7 +0,0 @@
package api
type TrafficFilteringOptions struct {
HealthChecksUserAgentHeaders []string
PlainTextMaskingRegexes []*SerializableRegexp
DisableRedaction bool
}

View File

@@ -1,70 +0,0 @@
package api
import (
"sync/atomic"
"time"
)
type AppStats struct {
StartTime time.Time `json:"-"`
ProcessedBytes uint64 `json:"processedBytes"`
PacketsCount uint64 `json:"packetsCount"`
TcpPacketsCount uint64 `json:"tcpPacketsCount"`
ReassembledTcpPayloadsCount uint64 `json:"reassembledTcpPayloadsCount"`
TlsConnectionsCount uint64 `json:"tlsConnectionsCount"`
MatchedPairs uint64 `json:"matchedPairs"`
DroppedTcpStreams uint64 `json:"droppedTcpStreams"`
}
func (as *AppStats) IncMatchedPairs() {
atomic.AddUint64(&as.MatchedPairs, 1)
}
func (as *AppStats) IncDroppedTcpStreams() {
atomic.AddUint64(&as.DroppedTcpStreams, 1)
}
func (as *AppStats) IncPacketsCount() uint64 {
atomic.AddUint64(&as.PacketsCount, 1)
return as.PacketsCount
}
func (as *AppStats) IncTcpPacketsCount() {
atomic.AddUint64(&as.TcpPacketsCount, 1)
}
func (as *AppStats) IncReassembledTcpPayloadsCount() {
atomic.AddUint64(&as.ReassembledTcpPayloadsCount, 1)
}
func (as *AppStats) IncTlsConnectionsCount() {
atomic.AddUint64(&as.TlsConnectionsCount, 1)
}
func (as *AppStats) UpdateProcessedBytes(size uint64) {
atomic.AddUint64(&as.ProcessedBytes, size)
}
func (as *AppStats) SetStartTime(startTime time.Time) {
as.StartTime = startTime
}
func (as *AppStats) DumpStats() *AppStats {
currentAppStats := &AppStats{StartTime: as.StartTime}
currentAppStats.ProcessedBytes = resetUint64(&as.ProcessedBytes)
currentAppStats.PacketsCount = resetUint64(&as.PacketsCount)
currentAppStats.TcpPacketsCount = resetUint64(&as.TcpPacketsCount)
currentAppStats.ReassembledTcpPayloadsCount = resetUint64(&as.ReassembledTcpPayloadsCount)
currentAppStats.TlsConnectionsCount = resetUint64(&as.TlsConnectionsCount)
currentAppStats.MatchedPairs = resetUint64(&as.MatchedPairs)
currentAppStats.DroppedTcpStreams = resetUint64(&as.DroppedTcpStreams)
return currentAppStats
}
func resetUint64(ref *uint64) (val uint64) {
val = atomic.LoadUint64(ref)
atomic.StoreUint64(ref, 0)
return
}

View File

@@ -1,12 +1,11 @@
package tap
import (
"github.com/romana/rlog"
"sync"
"time"
"github.com/google/gopacket/reassembly"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap/api"
)
type CleanerStats struct {
@@ -18,6 +17,7 @@ type CleanerStats struct {
type Cleaner struct {
assembler *reassembly.Assembler
assemblerMutex *sync.Mutex
matcher *requestResponseMatcher
cleanPeriod time.Duration
connectionTimeout time.Duration
stats CleanerStats
@@ -32,15 +32,13 @@ func (cl *Cleaner) clean() {
flushed, closed := cl.assembler.FlushCloseOlderThan(startCleanTime.Add(-cl.connectionTimeout))
cl.assemblerMutex.Unlock()
for _, extension := range extensions {
deleted := deleteOlderThan(extension.MatcherMap, startCleanTime.Add(-cl.connectionTimeout))
cl.stats.deleted += deleted
}
deleted := cl.matcher.deleteOlderThan(startCleanTime.Add(-cl.connectionTimeout))
cl.statsMutex.Lock()
rlog.Debugf("Assembler Stats after cleaning %s", cl.assembler.Dump())
cl.stats.flushed += flushed
cl.stats.closed += closed
cl.stats.deleted += deleted
cl.statsMutex.Unlock()
}
@@ -72,25 +70,3 @@ func (cl *Cleaner) dumpStats() CleanerStats {
return stats
}
func deleteOlderThan(matcherMap *sync.Map, t time.Time) int {
numDeleted := 0
if matcherMap == nil {
return numDeleted
}
matcherMap.Range(func(key interface{}, value interface{}) bool {
message, _ := value.(*api.GenericMessage)
// TODO: Investigate the reason why `request` is `nil` in some rare occasion
if message != nil {
if message.CaptureTime.Before(t) {
matcherMap.Delete(key)
numDeleted++
}
}
return true
})
return numDeleted
}

View File

@@ -1,9 +0,0 @@
module github.com/up9inc/mizu/tap/extensions/amqp
go 1.16
require (
github.com/up9inc/mizu/tap/api v0.0.0
)
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../../api

View File

@@ -1,664 +0,0 @@
package main
import (
"encoding/json"
"fmt"
"strconv"
"time"
"github.com/up9inc/mizu/tap/api"
)
var connectionMethodMap = map[int]string{
10: "connection start",
11: "connection start-ok",
20: "connection secure",
21: "connection secure-ok",
30: "connection tune",
31: "connection tune-ok",
40: "connection open",
41: "connection open-ok",
50: "connection close",
51: "connection close-ok",
60: "connection blocked",
61: "connection unblocked",
}
var channelMethodMap = map[int]string{
10: "channel open",
11: "channel open-ok",
20: "channel flow",
21: "channel flow-ok",
40: "channel close",
41: "channel close-ok",
}
var exchangeMethodMap = map[int]string{
10: "exchange declare",
11: "exchange declare-ok",
20: "exchange delete",
21: "exchange delete-ok",
30: "exchange bind",
31: "exchange bind-ok",
40: "exchange unbind",
51: "exchange unbind-ok",
}
var queueMethodMap = map[int]string{
10: "queue declare",
11: "queue declare-ok",
20: "queue bind",
21: "queue bind-ok",
50: "queue unbind",
51: "queue unbind-ok",
30: "queue purge",
31: "queue purge-ok",
40: "queue delete",
41: "queue delete-ok",
}
var basicMethodMap = map[int]string{
10: "basic qos",
11: "basic qos-ok",
20: "basic consume",
21: "basic consume-ok",
30: "basic cancel",
31: "basic cancel-ok",
40: "basic publish",
50: "basic return",
60: "basic deliver",
70: "basic get",
71: "basic get-ok",
72: "basic get-empty",
80: "basic ack",
90: "basic reject",
100: "basic recover-async",
110: "basic recover",
111: "basic recover-ok",
120: "basic nack",
}
var txMethodMap = map[int]string{
10: "tx select",
11: "tx select-ok",
20: "tx commit",
21: "tx commit-ok",
30: "tx rollback",
31: "tx rollback-ok",
}
type AMQPWrapper struct {
Method string `json:"method"`
Url string `json:"url"`
Details interface{} `json:"details"`
}
func emitAMQP(event interface{}, _type string, method string, connectionInfo *api.ConnectionInfo, captureTime time.Time, emitter api.Emitter) {
request := &api.GenericMessage{
IsRequest: true,
CaptureTime: captureTime,
Payload: AMQPPayload{
Data: &AMQPWrapper{
Method: method,
Url: "",
Details: event,
},
},
}
item := &api.OutputChannelItem{
Protocol: protocol,
Timestamp: captureTime.UnixNano() / int64(time.Millisecond),
ConnectionInfo: connectionInfo,
Pair: &api.RequestResponsePair{
Request: *request,
Response: api.GenericMessage{},
},
}
emitter.Emit(item)
}
func representProperties(properties map[string]interface{}, rep []interface{}) ([]interface{}, string, string) {
contentType := ""
contentEncoding := ""
deliveryMode := ""
priority := ""
correlationId := ""
replyTo := ""
expiration := ""
messageId := ""
timestamp := ""
_type := ""
userId := ""
appId := ""
if properties["ContentType"] != nil {
contentType = properties["ContentType"].(string)
}
if properties["ContentEncoding"] != nil {
contentEncoding = properties["ContentEncoding"].(string)
}
if properties["DeliveryMode"] != nil {
deliveryMode = fmt.Sprintf("%g", properties["DeliveryMode"].(float64))
}
if properties["Priority"] != nil {
priority = fmt.Sprintf("%g", properties["Priority"].(float64))
}
if properties["CorrelationId"] != nil {
correlationId = properties["CorrelationId"].(string)
}
if properties["ReplyTo"] != nil {
replyTo = properties["ReplyTo"].(string)
}
if properties["Expiration"] != nil {
expiration = properties["Expiration"].(string)
}
if properties["MessageId"] != nil {
messageId = properties["MessageId"].(string)
}
if properties["Timestamp"] != nil {
timestamp = properties["Timestamp"].(string)
}
if properties["Type"] != nil {
_type = properties["Type"].(string)
}
if properties["UserId"] != nil {
userId = properties["UserId"].(string)
}
if properties["AppId"] != nil {
appId = properties["AppId"].(string)
}
props, _ := json.Marshal([]map[string]string{
{
"name": "Content Type",
"value": contentType,
},
{
"name": "Content Encoding",
"value": contentEncoding,
},
{
"name": "Delivery Mode",
"value": deliveryMode,
},
{
"name": "Priority",
"value": priority,
},
{
"name": "Correlation ID",
"value": correlationId,
},
{
"name": "Reply To",
"value": replyTo,
},
{
"name": "Expiration",
"value": expiration,
},
{
"name": "Message ID",
"value": messageId,
},
{
"name": "Timestamp",
"value": timestamp,
},
{
"name": "Type",
"value": _type,
},
{
"name": "User ID",
"value": userId,
},
{
"name": "App ID",
"value": appId,
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Properties",
"data": string(props),
})
return rep, contentType, contentEncoding
}
func representBasicPublish(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Exchange",
"value": event["Exchange"].(string),
},
{
"name": "Routing Key",
"value": event["RoutingKey"].(string),
},
{
"name": "Mandatory",
"value": strconv.FormatBool(event["Mandatory"].(bool)),
},
{
"name": "Immediate",
"value": strconv.FormatBool(event["Immediate"].(bool)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
properties := event["Properties"].(map[string]interface{})
rep, contentType, _ := representProperties(properties, rep)
if properties["Headers"] != nil {
headers := make([]map[string]string, 0)
for name, value := range properties["Headers"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Headers",
"data": string(headersMarshaled),
})
}
if event["Body"] != nil {
rep = append(rep, map[string]string{
"type": api.BODY,
"title": "Body",
"encoding": "base64",
"mime_type": contentType,
"data": event["Body"].(string),
})
}
return rep
}
func representBasicDeliver(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
consumerTag := ""
deliveryTag := ""
redelivered := ""
if event["ConsumerTag"] != nil {
consumerTag = event["ConsumerTag"].(string)
}
if event["DeliveryTag"] != nil {
deliveryTag = fmt.Sprintf("%g", event["DeliveryTag"].(float64))
}
if event["Redelivered"] != nil {
redelivered = strconv.FormatBool(event["Redelivered"].(bool))
}
details, _ := json.Marshal([]map[string]string{
{
"name": "Consumer Tag",
"value": consumerTag,
},
{
"name": "Delivery Tag",
"value": deliveryTag,
},
{
"name": "Redelivered",
"value": redelivered,
},
{
"name": "Exchange",
"value": event["Exchange"].(string),
},
{
"name": "Routing Key",
"value": event["RoutingKey"].(string),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
properties := event["Properties"].(map[string]interface{})
rep, contentType, _ := representProperties(properties, rep)
if properties["Headers"] != nil {
headers := make([]map[string]string, 0)
for name, value := range properties["Headers"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Headers",
"data": string(headersMarshaled),
})
}
if event["Body"] != nil {
rep = append(rep, map[string]string{
"type": api.BODY,
"title": "Body",
"encoding": "base64",
"mime_type": contentType,
"data": event["Body"].(string),
})
}
return rep
}
func representQueueDeclare(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Queue",
"value": event["Queue"].(string),
},
{
"name": "Passive",
"value": strconv.FormatBool(event["Passive"].(bool)),
},
{
"name": "Durable",
"value": strconv.FormatBool(event["Durable"].(bool)),
},
{
"name": "Exclusive",
"value": strconv.FormatBool(event["Exclusive"].(bool)),
},
{
"name": "Auto Delete",
"value": strconv.FormatBool(event["AutoDelete"].(bool)),
},
{
"name": "NoWait",
"value": strconv.FormatBool(event["NoWait"].(bool)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
if event["Arguments"] != nil {
headers := make([]map[string]string, 0)
for name, value := range event["Arguments"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Arguments",
"data": string(headersMarshaled),
})
}
return rep
}
func representExchangeDeclare(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Exchange",
"value": event["Exchange"].(string),
},
{
"name": "Type",
"value": event["Type"].(string),
},
{
"name": "Passive",
"value": strconv.FormatBool(event["Passive"].(bool)),
},
{
"name": "Durable",
"value": strconv.FormatBool(event["Durable"].(bool)),
},
{
"name": "Auto Delete",
"value": strconv.FormatBool(event["AutoDelete"].(bool)),
},
{
"name": "Internal",
"value": strconv.FormatBool(event["Internal"].(bool)),
},
{
"name": "NoWait",
"value": strconv.FormatBool(event["NoWait"].(bool)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
if event["Arguments"] != nil {
headers := make([]map[string]string, 0)
for name, value := range event["Arguments"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Arguments",
"data": string(headersMarshaled),
})
}
return rep
}
func representConnectionStart(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Version Major",
"value": fmt.Sprintf("%g", event["VersionMajor"].(float64)),
},
{
"name": "Version Minor",
"value": fmt.Sprintf("%g", event["VersionMinor"].(float64)),
},
{
"name": "Mechanisms",
"value": event["Mechanisms"].(string),
},
{
"name": "Locales",
"value": event["Locales"].(string),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
if event["ServerProperties"] != nil {
headers := make([]map[string]string, 0)
for name, value := range event["ServerProperties"].(map[string]interface{}) {
var outcome string
switch value.(type) {
case string:
outcome = value.(string)
break
case map[string]interface{}:
x, _ := json.Marshal(value)
outcome = string(x)
break
default:
panic("Unknown data type for the server property!")
}
headers = append(headers, map[string]string{
"name": name,
"value": outcome,
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Server Properties",
"data": string(headersMarshaled),
})
}
return rep
}
func representConnectionClose(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Reply Code",
"value": fmt.Sprintf("%g", event["ReplyCode"].(float64)),
},
{
"name": "Reply Text",
"value": event["ReplyText"].(string),
},
{
"name": "Class ID",
"value": fmt.Sprintf("%g", event["ClassId"].(float64)),
},
{
"name": "Method ID",
"value": fmt.Sprintf("%g", event["MethodId"].(float64)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
return rep
}
func representQueueBind(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Queue",
"value": event["Queue"].(string),
},
{
"name": "Exchange",
"value": event["Exchange"].(string),
},
{
"name": "RoutingKey",
"value": event["RoutingKey"].(string),
},
{
"name": "NoWait",
"value": strconv.FormatBool(event["NoWait"].(bool)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
if event["Arguments"] != nil {
headers := make([]map[string]string, 0)
for name, value := range event["Arguments"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Arguments",
"data": string(headersMarshaled),
})
}
return rep
}
func representBasicConsume(event map[string]interface{}) []interface{} {
rep := make([]interface{}, 0)
details, _ := json.Marshal([]map[string]string{
{
"name": "Queue",
"value": event["Queue"].(string),
},
{
"name": "Consumer Tag",
"value": event["ConsumerTag"].(string),
},
{
"name": "No Local",
"value": strconv.FormatBool(event["NoLocal"].(bool)),
},
{
"name": "No Ack",
"value": strconv.FormatBool(event["NoAck"].(bool)),
},
{
"name": "Exclusive",
"value": strconv.FormatBool(event["Exclusive"].(bool)),
},
{
"name": "NoWait",
"value": strconv.FormatBool(event["NoWait"].(bool)),
},
})
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
if event["Arguments"] != nil {
headers := make([]map[string]string, 0)
for name, value := range event["Arguments"].(map[string]interface{}) {
headers = append(headers, map[string]string{
"name": name,
"value": value.(string),
})
}
headersMarshaled, _ := json.Marshal(headers)
rep = append(rep, map[string]string{
"type": api.TABLE,
"title": "Arguments",
"data": string(headersMarshaled),
})
}
return rep
}

View File

@@ -1,363 +0,0 @@
package main
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"strconv"
"github.com/up9inc/mizu/tap/api"
)
var protocol api.Protocol = api.Protocol{
Name: "amqp",
LongName: "Advanced Message Queuing Protocol 0-9-1",
Abbreviation: "AMQP",
Version: "0-9-1",
BackgroundColor: "#ff6600",
ForegroundColor: "#ffffff",
FontSize: 12,
ReferenceLink: "https://www.rabbitmq.com/amqp-0-9-1-reference.html",
Ports: []string{"5671", "5672"},
Priority: 1,
}
func init() {
log.Println("Initializing AMQP extension...")
}
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &protocol
}
func (d dissecting) Ping() {
log.Printf("pong %s\n", protocol.Name)
}
const amqpRequest string = "amqp_request"
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
r := AmqpReader{b}
var remaining int
var header *HeaderFrame
var body []byte
connectionInfo := &api.ConnectionInfo{
ClientIP: tcpID.SrcIP,
ClientPort: tcpID.SrcPort,
ServerIP: tcpID.DstIP,
ServerPort: tcpID.DstPort,
IsOutgoing: true,
}
eventBasicPublish := &BasicPublish{
Exchange: "",
RoutingKey: "",
Mandatory: false,
Immediate: false,
Body: nil,
Properties: Properties{},
}
eventBasicDeliver := &BasicDeliver{
ConsumerTag: "",
DeliveryTag: 0,
Redelivered: false,
Exchange: "",
RoutingKey: "",
Properties: Properties{},
Body: nil,
}
var lastMethodFrameMessage Message
for {
if superIdentifier.Protocol != nil && superIdentifier.Protocol != &protocol {
return errors.New("Identified by another protocol")
}
frame, err := r.ReadFrame()
if err == io.EOF {
// We must read until we see an EOF... very important!
return errors.New("AMQP EOF")
}
switch f := frame.(type) {
case *HeartbeatFrame:
// drop
case *HeaderFrame:
// start content state
header = f
remaining = int(header.Size)
switch lastMethodFrameMessage.(type) {
case *BasicPublish:
eventBasicPublish.Properties = header.Properties
case *BasicDeliver:
eventBasicDeliver.Properties = header.Properties
default:
frame = nil
}
case *BodyFrame:
// continue until terminated
body = append(body, f.Body...)
remaining -= len(f.Body)
switch lastMethodFrameMessage.(type) {
case *BasicPublish:
eventBasicPublish.Body = f.Body
superIdentifier.Protocol = &protocol
emitAMQP(*eventBasicPublish, amqpRequest, basicMethodMap[40], connectionInfo, superTimer.CaptureTime, emitter)
case *BasicDeliver:
eventBasicDeliver.Body = f.Body
superIdentifier.Protocol = &protocol
emitAMQP(*eventBasicDeliver, amqpRequest, basicMethodMap[60], connectionInfo, superTimer.CaptureTime, emitter)
default:
body = nil
frame = nil
}
case *MethodFrame:
lastMethodFrameMessage = f.Method
switch m := f.Method.(type) {
case *BasicPublish:
eventBasicPublish.Exchange = m.Exchange
eventBasicPublish.RoutingKey = m.RoutingKey
eventBasicPublish.Mandatory = m.Mandatory
eventBasicPublish.Immediate = m.Immediate
case *QueueBind:
eventQueueBind := &QueueBind{
Queue: m.Queue,
Exchange: m.Exchange,
RoutingKey: m.RoutingKey,
NoWait: m.NoWait,
Arguments: m.Arguments,
}
superIdentifier.Protocol = &protocol
emitAMQP(*eventQueueBind, amqpRequest, queueMethodMap[20], connectionInfo, superTimer.CaptureTime, emitter)
case *BasicConsume:
eventBasicConsume := &BasicConsume{
Queue: m.Queue,
ConsumerTag: m.ConsumerTag,
NoLocal: m.NoLocal,
NoAck: m.NoAck,
Exclusive: m.Exclusive,
NoWait: m.NoWait,
Arguments: m.Arguments,
}
superIdentifier.Protocol = &protocol
emitAMQP(*eventBasicConsume, amqpRequest, basicMethodMap[20], connectionInfo, superTimer.CaptureTime, emitter)
case *BasicDeliver:
eventBasicDeliver.ConsumerTag = m.ConsumerTag
eventBasicDeliver.DeliveryTag = m.DeliveryTag
eventBasicDeliver.Redelivered = m.Redelivered
eventBasicDeliver.Exchange = m.Exchange
eventBasicDeliver.RoutingKey = m.RoutingKey
case *QueueDeclare:
eventQueueDeclare := &QueueDeclare{
Queue: m.Queue,
Passive: m.Passive,
Durable: m.Durable,
AutoDelete: m.AutoDelete,
Exclusive: m.Exclusive,
NoWait: m.NoWait,
Arguments: m.Arguments,
}
superIdentifier.Protocol = &protocol
emitAMQP(*eventQueueDeclare, amqpRequest, queueMethodMap[10], connectionInfo, superTimer.CaptureTime, emitter)
case *ExchangeDeclare:
eventExchangeDeclare := &ExchangeDeclare{
Exchange: m.Exchange,
Type: m.Type,
Passive: m.Passive,
Durable: m.Durable,
AutoDelete: m.AutoDelete,
Internal: m.Internal,
NoWait: m.NoWait,
Arguments: m.Arguments,
}
superIdentifier.Protocol = &protocol
emitAMQP(*eventExchangeDeclare, amqpRequest, exchangeMethodMap[10], connectionInfo, superTimer.CaptureTime, emitter)
case *ConnectionStart:
eventConnectionStart := &ConnectionStart{
VersionMajor: m.VersionMajor,
VersionMinor: m.VersionMinor,
ServerProperties: m.ServerProperties,
Mechanisms: m.Mechanisms,
Locales: m.Locales,
}
superIdentifier.Protocol = &protocol
emitAMQP(*eventConnectionStart, amqpRequest, connectionMethodMap[10], connectionInfo, superTimer.CaptureTime, emitter)
case *ConnectionClose:
eventConnectionClose := &ConnectionClose{
ReplyCode: m.ReplyCode,
ReplyText: m.ReplyText,
ClassId: m.ClassId,
MethodId: m.MethodId,
}
superIdentifier.Protocol = &protocol
emitAMQP(*eventConnectionClose, amqpRequest, connectionMethodMap[50], connectionInfo, superTimer.CaptureTime, emitter)
default:
frame = nil
}
default:
// log.Printf("unexpected frame: %+v\n", f)
}
}
}
func (d dissecting) Analyze(item *api.OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *api.MizuEntry {
request := item.Pair.Request.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
service := "amqp"
if resolvedDestination != "" {
service = resolvedDestination
} else if resolvedSource != "" {
service = resolvedSource
}
summary := ""
switch request["method"] {
case basicMethodMap[40]:
summary = reqDetails["Exchange"].(string)
break
case basicMethodMap[60]:
summary = reqDetails["Exchange"].(string)
break
case exchangeMethodMap[10]:
summary = reqDetails["Exchange"].(string)
break
case queueMethodMap[10]:
summary = reqDetails["Queue"].(string)
break
case connectionMethodMap[10]:
summary = fmt.Sprintf(
"%s.%s",
strconv.Itoa(int(reqDetails["VersionMajor"].(float64))),
strconv.Itoa(int(reqDetails["VersionMinor"].(float64))),
)
break
case connectionMethodMap[50]:
summary = reqDetails["ReplyText"].(string)
break
case queueMethodMap[20]:
summary = reqDetails["Queue"].(string)
break
case basicMethodMap[20]:
summary = reqDetails["Queue"].(string)
break
}
request["url"] = summary
entryBytes, _ := json.Marshal(item.Pair)
return &api.MizuEntry{
ProtocolName: protocol.Name,
ProtocolLongName: protocol.LongName,
ProtocolAbbreviation: protocol.Abbreviation,
ProtocolVersion: protocol.Version,
ProtocolBackgroundColor: protocol.BackgroundColor,
ProtocolForegroundColor: protocol.ForegroundColor,
ProtocolFontSize: protocol.FontSize,
ProtocolReferenceLink: protocol.ReferenceLink,
EntryId: entryId,
Entry: string(entryBytes),
Url: fmt.Sprintf("%s%s", service, summary),
Method: request["method"].(string),
Status: 0,
RequestSenderIp: item.ConnectionInfo.ClientIP,
Service: service,
Timestamp: item.Timestamp,
ElapsedTime: 0,
Path: summary,
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
SourceIp: item.ConnectionInfo.ClientIP,
DestinationIp: item.ConnectionInfo.ServerIP,
SourcePort: item.ConnectionInfo.ClientPort,
DestinationPort: item.ConnectionInfo.ServerPort,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
}
}
func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
return &api.BaseEntryDetails{
Id: entry.EntryId,
Protocol: protocol,
Url: entry.Url,
RequestSenderIp: entry.RequestSenderIp,
Service: entry.Service,
Summary: entry.Path,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
SourceIp: entry.SourceIp,
DestinationIp: entry.DestinationIp,
SourcePort: entry.SourcePort,
DestinationPort: entry.DestinationPort,
IsOutgoing: entry.IsOutgoing,
Latency: 0,
Rules: api.ApplicableRules{
Latency: 0,
Status: false,
},
}
}
func (d dissecting) Represent(entry *api.MizuEntry) (p api.Protocol, object []byte, bodySize int64, err error) {
p = protocol
bodySize = 0
var root map[string]interface{}
json.Unmarshal([]byte(entry.Entry), &root)
representation := make(map[string]interface{}, 0)
request := root["request"].(map[string]interface{})["payload"].(map[string]interface{})
var repRequest []interface{}
details := request["details"].(map[string]interface{})
switch request["method"].(string) {
case basicMethodMap[40]:
repRequest = representBasicPublish(details)
break
case basicMethodMap[60]:
repRequest = representBasicDeliver(details)
break
case queueMethodMap[10]:
repRequest = representQueueDeclare(details)
break
case exchangeMethodMap[10]:
repRequest = representExchangeDeclare(details)
break
case connectionMethodMap[10]:
repRequest = representConnectionStart(details)
break
case connectionMethodMap[50]:
repRequest = representConnectionClose(details)
break
case queueMethodMap[20]:
repRequest = representQueueBind(details)
break
case basicMethodMap[20]:
repRequest = representBasicConsume(details)
break
}
representation["request"] = repRequest
object, err = json.Marshal(representation)
return
}
var Dissector dissecting

View File

@@ -1,465 +0,0 @@
// Copyright (c) 2012, Sean Treadway, SoundCloud Ltd.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Source code and contact info at http://github.com/streadway/amqp
package main
import (
"bytes"
"encoding/binary"
"errors"
"io"
"time"
)
/*
Reads a frame from an input stream and returns an interface that can be cast into
one of the following:
MethodFrame
PropertiesFrame
BodyFrame
HeartbeatFrame
2.3.5 frame Details
All frames consist of a header (7 octets), a payload of arbitrary size, and a
'frame-end' octet that detects malformed frames:
0      1         3             7                  size+7 size+8
+------+---------+-------------+  +------------+  +-----------+
| type | channel |     size    |  |  payload   |  | frame-end |
+------+---------+-------------+  +------------+  +-----------+
 octet   short         long         size octets       octet
To read a frame, we:
1. Read the header and check the frame type and channel.
2. Depending on the frame type, we read the payload and process it.
3. Read the frame end octet.
In realistic implementations where performance is a concern, we would use
“read-ahead buffering” or
“gathering reads” to avoid doing three separate system calls to read a frame.
*/
func (r *AmqpReader) ReadFrame() (frame frame, err error) {
var scratch [7]byte
if _, err = io.ReadFull(r.R, scratch[:7]); err != nil {
return
}
typ := uint8(scratch[0])
channel := binary.BigEndian.Uint16(scratch[1:3])
size := binary.BigEndian.Uint32(scratch[3:7])
if size > 1000000*16 {
return nil, ErrMaxSize
}
switch typ {
case frameMethod:
if frame, err = r.parseMethodFrame(channel, size); err != nil {
return
}
case frameHeader:
if frame, err = r.parseHeaderFrame(channel, size); err != nil {
return
}
case frameBody:
if frame, err = r.parseBodyFrame(channel, size); err != nil {
return nil, err
}
case frameHeartbeat:
if frame, err = r.parseHeartbeatFrame(channel, size); err != nil {
return
}
default:
return nil, ErrFrame
}
if _, err = io.ReadFull(r.R, scratch[:1]); err != nil {
return nil, err
}
if scratch[0] != frameEnd {
return nil, ErrFrame
}
return
}
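
A standalone sketch of the 7-octet header layout described in the comment above (type, channel and size, all big-endian), decoding a hand-built header with only the standard library; the example frame bytes are hypothetical.

package main

import (
	"encoding/binary"
	"fmt"
)

// decodeFrameHeader splits the 7-octet AMQP frame header into its
// type, channel and size fields, mirroring what ReadFrame reads into scratch.
func decodeFrameHeader(h [7]byte) (typ uint8, channel uint16, size uint32) {
	typ = h[0]
	channel = binary.BigEndian.Uint16(h[1:3])
	size = binary.BigEndian.Uint32(h[3:7])
	return
}

func main() {
	// A method frame (type 1) on channel 1 carrying a 4-octet payload.
	header := [7]byte{1, 0x00, 0x01, 0x00, 0x00, 0x00, 0x04}
	fmt.Println(decodeFrameHeader(header)) // 1 1 4
}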
func readShortstr(r io.Reader) (v string, err error) {
var length uint8
if err = binary.Read(r, binary.BigEndian, &length); err != nil {
return
}
bytes := make([]byte, length)
if _, err = io.ReadFull(r, bytes); err != nil {
return
}
return string(bytes), nil
}
func readLongstr(r io.Reader) (v string, err error) {
var length uint32
if err = binary.Read(r, binary.BigEndian, &length); err != nil {
return
}
// slices can't be longer than max int32 value
if length > (^uint32(0) >> 1) {
return
}
bytes := make([]byte, length)
if _, err = io.ReadFull(r, bytes); err != nil {
return
}
return string(bytes), nil
}
func readDecimal(r io.Reader) (v Decimal, err error) {
if err = binary.Read(r, binary.BigEndian, &v.Scale); err != nil {
return
}
if err = binary.Read(r, binary.BigEndian, &v.Value); err != nil {
return
}
return
}
func readFloat32(r io.Reader) (v float32, err error) {
if err = binary.Read(r, binary.BigEndian, &v); err != nil {
return
}
return
}
func readFloat64(r io.Reader) (v float64, err error) {
if err = binary.Read(r, binary.BigEndian, &v); err != nil {
return
}
return
}
func readTimestamp(r io.Reader) (v time.Time, err error) {
var sec int64
if err = binary.Read(r, binary.BigEndian, &sec); err != nil {
return
}
return time.Unix(sec, 0), nil
}
/*
'A': []interface{}
'D': Decimal
'F': Table
'I': int32
'S': string
'T': time.Time
'V': nil
'b': byte
'd': float64
'f': float32
'l': int64
's': int16
't': bool
'x': []byte
*/
func readField(r io.Reader) (v interface{}, err error) {
var typ byte
if err = binary.Read(r, binary.BigEndian, &typ); err != nil {
return
}
switch typ {
case 't':
var value uint8
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return (value != 0), nil
case 'b':
var value [1]byte
if _, err = io.ReadFull(r, value[0:1]); err != nil {
return
}
return value[0], nil
case 's':
var value int16
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return value, nil
case 'I':
var value int32
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return value, nil
case 'l':
var value int64
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return value, nil
case 'f':
var value float32
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return value, nil
case 'd':
var value float64
if err = binary.Read(r, binary.BigEndian, &value); err != nil {
return
}
return value, nil
case 'D':
return readDecimal(r)
case 'S':
return readLongstr(r)
case 'A':
return readArray(r)
case 'T':
return readTimestamp(r)
case 'F':
return readTable(r)
case 'x':
var len int32
if err = binary.Read(r, binary.BigEndian, &len); err != nil {
return nil, err
}
value := make([]byte, len)
if _, err = io.ReadFull(r, value); err != nil {
return nil, err
}
return value, err
case 'V':
return nil, nil
}
return nil, ErrSyntax
}
/*
Field tables are long strings that contain packed name-value pairs. The
name-value pairs are encoded as a short string defining the name, an octet
defining the value's type, and then the value itself. The valid field types for
tables are an extension of the native integer, bit, string, and timestamp
types, and are shown in the grammar. Multi-octet integer fields are always
held in network byte order.
*/
func readTable(r io.Reader) (table Table, err error) {
var nested bytes.Buffer
var str string
if str, err = readLongstr(r); err != nil {
return
}
nested.Write([]byte(str))
table = make(Table)
for nested.Len() > 0 {
var key string
var value interface{}
if key, err = readShortstr(&nested); err != nil {
return
}
if value, err = readField(&nested); err != nil {
return
}
table[key] = value
}
return
}
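
A standalone sketch of the packing readTable undoes, assuming a table with a single boolean entry ("durable" is just an illustrative name): the table is a long string of (short-string name, type octet, value) triples.

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// encodeBoolTable builds the wire form of a field table holding one boolean
// field, following the grammar described above.
func encodeBoolTable(name string, v bool) []byte {
	var inner bytes.Buffer
	inner.WriteByte(byte(len(name))) // short string: length octet
	inner.WriteString(name)          // short string: field name
	inner.WriteByte('t')             // type octet: boolean
	if v {
		inner.WriteByte(1)
	} else {
		inner.WriteByte(0)
	}
	var out bytes.Buffer
	binary.Write(&out, binary.BigEndian, uint32(inner.Len())) // long string length
	out.Write(inner.Bytes())
	return out.Bytes()
}

func main() {
	fmt.Printf("% x\n", encodeBoolTable("durable", true))
	// 00 00 00 0a 07 64 75 72 61 62 6c 65 74 01
}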
func readArray(r io.Reader) ([]interface{}, error) {
var (
size uint32
err error
)
if err = binary.Read(r, binary.BigEndian, &size); err != nil {
return nil, err
}
var (
lim = &io.LimitedReader{R: r, N: int64(size)}
arr = []interface{}{}
field interface{}
)
for {
if field, err = readField(lim); err != nil {
if err == io.EOF {
break
}
return nil, err
}
arr = append(arr, field)
}
return arr, nil
}
// Checks if this bit mask matches the flags bitset
func hasProperty(mask uint16, prop int) bool {
return int(mask)&prop > 0
}
func (r *AmqpReader) parseHeaderFrame(channel uint16, size uint32) (frame frame, err error) {
hf := &HeaderFrame{
ChannelId: channel,
}
if err = binary.Read(r.R, binary.BigEndian, &hf.ClassId); err != nil {
return
}
if err = binary.Read(r.R, binary.BigEndian, &hf.weight); err != nil {
return
}
if err = binary.Read(r.R, binary.BigEndian, &hf.Size); err != nil {
return
}
if hf.Size > 512 {
return nil, ErrMaxHeaderFrameSize
}
var flags uint16
if err = binary.Read(r.R, binary.BigEndian, &flags); err != nil {
return
}
if hasProperty(flags, flagContentType) {
if hf.Properties.ContentType, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagContentEncoding) {
if hf.Properties.ContentEncoding, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagHeaders) {
if hf.Properties.Headers, err = readTable(r.R); err != nil {
return
}
}
if hasProperty(flags, flagDeliveryMode) {
if err = binary.Read(r.R, binary.BigEndian, &hf.Properties.DeliveryMode); err != nil {
return
}
}
if hasProperty(flags, flagPriority) {
if err = binary.Read(r.R, binary.BigEndian, &hf.Properties.Priority); err != nil {
return
}
}
if hasProperty(flags, flagCorrelationId) {
if hf.Properties.CorrelationId, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagReplyTo) {
if hf.Properties.ReplyTo, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagExpiration) {
if hf.Properties.Expiration, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagMessageId) {
if hf.Properties.MessageId, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagTimestamp) {
if hf.Properties.Timestamp, err = readTimestamp(r.R); err != nil {
return
}
}
if hasProperty(flags, flagType) {
if hf.Properties.Type, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagUserId) {
if hf.Properties.UserId, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagAppId) {
if hf.Properties.AppId, err = readShortstr(r.R); err != nil {
return
}
}
if hasProperty(flags, flagReserved1) {
if hf.Properties.reserved1, err = readShortstr(r.R); err != nil {
return
}
}
return hf, nil
}
func (r *AmqpReader) parseBodyFrame(channel uint16, size uint32) (frame frame, err error) {
bf := &BodyFrame{
ChannelId: channel,
Body: make([]byte, size),
}
if _, err = io.ReadFull(r.R, bf.Body); err != nil {
bf.Body = nil
return nil, err
}
return bf, nil
}
var errHeartbeatPayload = errors.New("Heartbeats should not have a payload")
func (r *AmqpReader) parseHeartbeatFrame(channel uint16, size uint32) (frame frame, err error) {
hf := &HeartbeatFrame{
ChannelId: channel,
}
if size > 0 {
return nil, errHeartbeatPayload
}
return hf, nil
}

File diff suppressed because it is too large.

View File

@@ -1,17 +0,0 @@
package main
import (
"encoding/json"
)
type AMQPPayload struct {
Data interface{}
}
type AMQPPayloader interface {
MarshalJSON() ([]byte, error)
}
func (h AMQPPayload) MarshalJSON() ([]byte, error) {
return json.Marshal(h.Data)
}

View File

@@ -1,436 +0,0 @@
// Copyright (c) 2012, Sean Treadway, SoundCloud Ltd.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Source code and contact info at http://github.com/streadway/amqp
package main
import (
"fmt"
"io"
"time"
)
// Constants for standard AMQP 0-9-1 exchange types.
const (
ExchangeDirect = "direct"
ExchangeFanout = "fanout"
ExchangeTopic = "topic"
ExchangeHeaders = "headers"
)
var (
// ErrClosed is returned when the channel or connection is not open
ErrClosed = &Error{Code: ChannelError, Reason: "channel/connection is not open"}
// ErrChannelMax is returned when Connection.Channel has been called enough
// times that all channel IDs have been exhausted in the client or the
// server.
ErrChannelMax = &Error{Code: ChannelError, Reason: "channel id space exhausted"}
// ErrSASL is returned from Dial when the authentication mechanism could not
// be negotiated.
ErrSASL = &Error{Code: AccessRefused, Reason: "SASL could not negotiate a shared mechanism"}
// ErrCredentials is returned when the authenticated client is not authorized
// to any vhost.
ErrCredentials = &Error{Code: AccessRefused, Reason: "username or password not allowed"}
// ErrVhost is returned when the authenticated user is not permitted to
// access the requested Vhost.
ErrVhost = &Error{Code: AccessRefused, Reason: "no access to this vhost"}
// ErrSyntax is a hard protocol error, indicating an unsupported protocol,
// implementation or encoding.
ErrSyntax = &Error{Code: SyntaxError, Reason: "invalid field or value inside of a frame"}
// ErrFrame is returned when the protocol frame cannot be read from the
// server, indicating an unsupported protocol or unsupported frame type.
ErrFrame = &Error{Code: FrameError, Reason: "frame could not be parsed"}
// ErrCommandInvalid is returned when the server sends an unexpected response
// to this requested message type. This indicates a bug in this client.
ErrCommandInvalid = &Error{Code: CommandInvalid, Reason: "unexpected command received"}
// ErrUnexpectedFrame is returned when something other than a method or
// heartbeat frame is delivered to the Connection, indicating a bug in the
// client.
ErrUnexpectedFrame = &Error{Code: UnexpectedFrame, Reason: "unexpected frame received"}
// ErrFieldType is returned when writing a message containing a Go type unsupported by AMQP.
ErrFieldType = &Error{Code: SyntaxError, Reason: "unsupported table field type"}
ErrMaxSize = &Error{Code: MaxSizeError, Reason: "an AMQP message cannot be bigger than 16MB"}
ErrMaxHeaderFrameSize = &Error{Code: MaxHeaderFrameSizeError, Reason: "an AMQP header cannot be bigger than 512 bytes"}
ErrBadMethodFrameUnknownMethod = &Error{Code: BadMethodFrameUnknownMethod, Reason: "Bad method frame, unknown method"}
ErrBadMethodFrameUnknownClass = &Error{Code: BadMethodFrameUnknownClass, Reason: "Bad method frame, unknown class"}
)
// Error captures the code and reason a channel or connection has been closed
// by the server.
type Error struct {
Code int // constant code from the specification
Reason string // description of the error
Server bool // true when initiated from the server, false when from this library
Recover bool // true when this error can be recovered by retrying later or with different parameters
}
func newError(code uint16, text string) *Error {
return &Error{
Code: int(code),
Reason: text,
Recover: isSoftExceptionCode(int(code)),
Server: true,
}
}
func (e Error) Error() string {
return fmt.Sprintf("Exception (%d) Reason: %q", e.Code, e.Reason)
}
// Used by header frames to capture routing and header information
type Properties struct {
ContentType string // MIME content type
ContentEncoding string // MIME content encoding
Headers Table // Application or header exchange table
DeliveryMode uint8 // queue implementation use - Transient (1) or Persistent (2)
Priority uint8 // queue implementation use - 0 to 9
CorrelationId string // application use - correlation identifier
ReplyTo string // application use - address to reply to (ex: RPC)
Expiration string // implementation use - message expiration spec
MessageId string // application use - message identifier
Timestamp time.Time // application use - message timestamp
Type string // application use - message type name
UserId string // application use - creating user id
AppId string // application use - creating application
reserved1 string // was cluster-id - process for buffer consumption
}
// DeliveryMode. Transient means higher throughput but messages will not be
// restored on broker restart. The delivery mode of publishings is unrelated
// to the durability of the queues they reside on. Transient messages will
// not be restored to durable queues, persistent messages will be restored to
// durable queues and lost on non-durable queues during server restart.
//
// This remains typed as uint8 to match Publishing.DeliveryMode. Other
// delivery modes specific to custom queue implementations are not enumerated
// here.
const (
Transient uint8 = 1
Persistent uint8 = 2
)
// The property flags are an array of bits that indicate the presence or
// absence of each property value in sequence. The bits are ordered from most
significant to least significant - bit 15 indicates the first property.
const (
flagContentType = 0x8000
flagContentEncoding = 0x4000
flagHeaders = 0x2000
flagDeliveryMode = 0x1000
flagPriority = 0x0800
flagCorrelationId = 0x0400
flagReplyTo = 0x0200
flagExpiration = 0x0100
flagMessageId = 0x0080
flagTimestamp = 0x0040
flagType = 0x0020
flagUserId = 0x0010
flagAppId = 0x0008
flagReserved1 = 0x0004
)
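
A standalone sketch of how this mask is consulted; it repeats a few of the flag constants above and mirrors the hasProperty helper used by parseHeaderFrame.

package main

import "fmt"

const (
	flagContentType  = 0x8000
	flagDeliveryMode = 0x1000
	flagPriority     = 0x0800
)

func hasProperty(mask uint16, prop int) bool {
	return int(mask)&prop > 0
}

func main() {
	mask := uint16(flagContentType | flagDeliveryMode)
	fmt.Println(hasProperty(mask, flagContentType)) // true
	fmt.Println(hasProperty(mask, flagPriority))    // false
}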
// Queue captures the current server state of the queue on the server returned
// from Channel.QueueDeclare or Channel.QueueInspect.
type Queue struct {
Name string // server confirmed or generated name
Messages int // count of messages not awaiting acknowledgment
Consumers int // number of consumers receiving deliveries
}
// Publishing captures the client message sent to the server. The fields
// outside of the Headers table included in this struct mirror the underlying
// fields in the content frame. They use native types for convenience and
// efficiency.
type Publishing struct {
// Application or exchange specific fields,
// the headers exchange will inspect this field.
Headers Table
// Properties
ContentType string // MIME content type
ContentEncoding string // MIME content encoding
DeliveryMode uint8 // Transient (0 or 1) or Persistent (2)
Priority uint8 // 0 to 9
CorrelationId string // correlation identifier
ReplyTo string // address to reply to (ex: RPC)
Expiration string // message expiration spec
MessageId string // message identifier
Timestamp time.Time // message timestamp
Type string // message type name
UserId string // creating user id - ex: "guest"
AppId string // creating application id
// The application specific payload of the message
Body []byte
}
// Blocking notifies the server's TCP flow control of the Connection. When a
// server hits a memory or disk alarm it will block all connections until the
// resources are reclaimed. Use NotifyBlock on the Connection to receive these
// events.
type Blocking struct {
Active bool // TCP pushback active/inactive on server
Reason string // Server reason for activation
}
// Confirmation notifies the acknowledgment or negative acknowledgement of a
// publishing identified by its delivery tag. Use NotifyPublish on the Channel
// to consume these events.
type Confirmation struct {
DeliveryTag uint64 // A 1 based counter of publishings from when the channel was put in Confirm mode
Ack bool // True when the server successfully received the publishing
}
// Decimal matches the AMQP decimal type. Scale is the number of decimal
// digits; for example, Scale == 2 and Value == 12345 represent 123.45.
type Decimal struct {
Scale uint8
Value int32
}
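
A standalone sketch of the interpretation spelled out in the comment above: the value is scaled down by 10^Scale, so Scale 2 and Value 12345 yield 123.45.

package main

import (
	"fmt"
	"math"
)

// Decimal mirrors the struct above.
type Decimal struct {
	Scale uint8
	Value int32
}

// Float converts the fixed-point representation to a float64.
func (d Decimal) Float() float64 {
	return float64(d.Value) / math.Pow10(int(d.Scale))
}

func main() {
	fmt.Println(Decimal{Scale: 2, Value: 12345}.Float()) // 123.45
}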
// Table stores user supplied fields of the following types:
//
// bool
// byte
// float32
// float64
// int
// int16
// int32
// int64
// nil
// string
// time.Time
// amqp.Decimal
// amqp.Table
// []byte
// []interface{} - containing above types
//
// Functions taking a table will immediately fail when the table contains a
// value of an unsupported type.
//
// The caller must be specific in which precision of integer it wishes to
// encode.
//
// Use a type assertion when reading values from a table for type conversion.
//
// RabbitMQ expects int32 for integer values.
//
type Table map[string]interface{}
func validateField(f interface{}) error {
switch fv := f.(type) {
case nil, bool, byte, int, int16, int32, int64, float32, float64, string, []byte, Decimal, time.Time:
return nil
case []interface{}:
for _, v := range fv {
if err := validateField(v); err != nil {
return fmt.Errorf("in array %s", err)
}
}
return nil
case Table:
for k, v := range fv {
if err := validateField(v); err != nil {
return fmt.Errorf("table field %q %s", k, err)
}
}
return nil
}
return fmt.Errorf("value %T not supported", f)
}
// Validate returns an error if any Go types in the table are incompatible with AMQP types.
func (t Table) Validate() error {
return validateField(t)
}
// Heap interface for maintaining delivery tags
type tagSet []uint64
func (set tagSet) Len() int { return len(set) }
func (set tagSet) Less(i, j int) bool { return (set)[i] < (set)[j] }
func (set tagSet) Swap(i, j int) { (set)[i], (set)[j] = (set)[j], (set)[i] }
func (set *tagSet) Push(tag interface{}) { *set = append(*set, tag.(uint64)) }
func (set *tagSet) Pop() interface{} {
val := (*set)[len(*set)-1]
*set = (*set)[:len(*set)-1]
return val
}
type Message interface {
id() (uint16, uint16)
wait() bool
read(io.Reader) error
write(io.Writer) error
}
type messageWithContent interface {
Message
getContent() (Properties, []byte)
setContent(Properties, []byte)
}
/*
The base interface implemented as:
2.3.5 frame Details
All frames consist of a header (7 octets), a payload of arbitrary size, and a 'frame-end' octet that detects
malformed frames:
0 1 3 7 size+7 size+8
+------+---------+-------------+ +------------+ +-----------+
| type | channel | size | | payload | | frame-end |
+------+---------+-------------+ +------------+ +-----------+
octet short long size octets octet
To read a frame, we:
1. Read the header and check the frame type and channel.
2. Depending on the frame type, we read the payload and process it.
3. Read the frame end octet.
In realistic implementations where performance is a concern, we would use
“read-ahead buffering” or “gathering reads” to avoid doing three separate
system calls to read a frame.
*/
type frame interface {
write(io.Writer) error
channel() uint16
}
type AmqpReader struct {
R io.Reader
}
type writer struct {
w io.Writer
}
// Implements the frame interface for Connection RPC
type protocolHeader struct{}
func (protocolHeader) write(w io.Writer) error {
_, err := w.Write([]byte{'A', 'M', 'Q', 'P', 0, 0, 9, 1})
return err
}
func (protocolHeader) channel() uint16 {
panic("only valid as initial handshake")
}
/*
Method frames carry the high-level protocol commands (which we call "methods").
One method frame carries one command. The method frame payload has this format:
0 2 4
+----------+-----------+-------------- - -
| class-id | method-id | arguments...
+----------+-----------+-------------- - -
short short ...
To process a method frame, we:
1. Read the method frame payload.
2. Unpack it into a structure. A given method always has the same structure,
so we can unpack the method rapidly.
3. Check that the method is allowed in the current context.
4. Check that the method arguments are valid.
5. Execute the method.
Method frame bodies are constructed as a list of AMQP data fields (bits,
integers, strings and string tables). The marshalling code is trivially
generated directly from the protocol specifications, and can be very rapid.
*/
type MethodFrame struct {
ChannelId uint16
ClassId uint16
MethodId uint16
Method Message
}
func (f *MethodFrame) channel() uint16 { return f.ChannelId }
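
A standalone sketch of the payload prefix described above: two big-endian shorts (class-id, method-id) followed by the method arguments. basic.publish, used as the example here, is class 60 / method 40 in AMQP 0-9-1.

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	payload := make([]byte, 4)
	binary.BigEndian.PutUint16(payload[0:2], 60) // class-id: basic
	binary.BigEndian.PutUint16(payload[2:4], 40) // method-id: publish
	fmt.Printf("% x ...arguments follow\n", payload) // 00 3c 00 28 ...arguments follow
}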
/*
Heartbeating is a technique designed to undo one of TCP/IP's features, namely
its ability to recover from a broken physical connection by closing only after
a quite long time-out. In some scenarios we need to know very rapidly if a
peer is disconnected or not responding for other reasons (e.g. it is looping).
Since heartbeating can be done at a low level, we implement this as a special
type of frame that peers exchange at the transport level, rather than as a
class method.
*/
type HeartbeatFrame struct {
ChannelId uint16
}
func (f *HeartbeatFrame) channel() uint16 { return f.ChannelId }
/*
Certain methods (such as Basic.Publish, Basic.Deliver, etc.) are formally
defined as carrying content. When a peer sends such a method frame, it always
follows it with a content header and zero or more content body frames.
A content header frame has this format:
0 2 4 12 14
+----------+--------+-----------+----------------+------------- - -
| class-id | weight | body size | property flags | property list...
+----------+--------+-----------+----------------+------------- - -
short short long long short remainder...
We place content body in distinct frames (rather than including it in the
method) so that AMQP may support "zero copy" techniques in which content is
never marshalled or encoded. We place the content properties in their own
frame so that recipients can selectively discard contents they do not want to
process
*/
type HeaderFrame struct {
ChannelId uint16
ClassId uint16
weight uint16
Size uint64
Properties Properties
}
func (f *HeaderFrame) channel() uint16 { return f.ChannelId }
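
A standalone sketch of the fixed part of the content header payload described above: class-id and weight as shorts, the body size as a 64-bit long long, then the property-flags short (the property list itself is omitted here).

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

func main() {
	var p bytes.Buffer
	binary.Write(&p, binary.BigEndian, uint16(60)) // class-id: basic
	binary.Write(&p, binary.BigEndian, uint16(0))  // weight, always 0
	binary.Write(&p, binary.BigEndian, uint64(5))  // body size in octets
	binary.Write(&p, binary.BigEndian, uint16(0))  // property flags: none set
	fmt.Printf("% x\n", p.Bytes()) // 00 3c 00 00 00 00 00 00 00 00 00 05 00 00
}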
/*
Content is the application data we carry from client-to-client via the AMQP
server. Content is, roughly speaking, a set of properties plus a binary data
part. The set of allowed properties are defined by the Basic class, and these
form the "content header frame". The data can be any size, and MAY be broken
into several (or many) chunks, each forming a "content body frame".
Looking at the frames for a specific channel, as they pass on the wire, we
might see something like this:
[method]
[method] [header] [body] [body]
[method]
...
*/
type BodyFrame struct {
ChannelId uint16
Body []byte
}
func (f *BodyFrame) channel() uint16 { return f.ChannelId }

View File

@@ -1,416 +0,0 @@
// Copyright (c) 2012, Sean Treadway, SoundCloud Ltd.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Source code and contact info at http://github.com/streadway/amqp
package main
import (
"bufio"
"bytes"
"encoding/binary"
"errors"
"io"
"math"
"time"
)
func (w *writer) WriteFrame(frame frame) (err error) {
if err = frame.write(w.w); err != nil {
return
}
if buf, ok := w.w.(*bufio.Writer); ok {
err = buf.Flush()
}
return
}
func (f *MethodFrame) write(w io.Writer) (err error) {
var payload bytes.Buffer
if f.Method == nil {
return errors.New("malformed frame: missing method")
}
class, method := f.Method.id()
if err = binary.Write(&payload, binary.BigEndian, class); err != nil {
return
}
if err = binary.Write(&payload, binary.BigEndian, method); err != nil {
return
}
if err = f.Method.write(&payload); err != nil {
return
}
return writeFrame(w, frameMethod, f.ChannelId, payload.Bytes())
}
// Heartbeat
//
// Payload is empty
func (f *HeartbeatFrame) write(w io.Writer) (err error) {
return writeFrame(w, frameHeartbeat, f.ChannelId, []byte{})
}
// CONTENT HEADER
// 0 2 4 12 14
// +----------+--------+-----------+----------------+------------- - -
// | class-id | weight | body size | property flags | property list...
// +----------+--------+-----------+----------------+------------- - -
// short short long long short remainder...
//
func (f *HeaderFrame) write(w io.Writer) (err error) {
var payload bytes.Buffer
var zeroTime time.Time
if err = binary.Write(&payload, binary.BigEndian, f.ClassId); err != nil {
return
}
if err = binary.Write(&payload, binary.BigEndian, f.weight); err != nil {
return
}
if err = binary.Write(&payload, binary.BigEndian, f.Size); err != nil {
return
}
// First pass will build the mask to be serialized, second pass will serialize
// each of the fields that appear in the mask.
var mask uint16
if len(f.Properties.ContentType) > 0 {
mask = mask | flagContentType
}
if len(f.Properties.ContentEncoding) > 0 {
mask = mask | flagContentEncoding
}
if f.Properties.Headers != nil && len(f.Properties.Headers) > 0 {
mask = mask | flagHeaders
}
if f.Properties.DeliveryMode > 0 {
mask = mask | flagDeliveryMode
}
if f.Properties.Priority > 0 {
mask = mask | flagPriority
}
if len(f.Properties.CorrelationId) > 0 {
mask = mask | flagCorrelationId
}
if len(f.Properties.ReplyTo) > 0 {
mask = mask | flagReplyTo
}
if len(f.Properties.Expiration) > 0 {
mask = mask | flagExpiration
}
if len(f.Properties.MessageId) > 0 {
mask = mask | flagMessageId
}
if f.Properties.Timestamp != zeroTime {
mask = mask | flagTimestamp
}
if len(f.Properties.Type) > 0 {
mask = mask | flagType
}
if len(f.Properties.UserId) > 0 {
mask = mask | flagUserId
}
if len(f.Properties.AppId) > 0 {
mask = mask | flagAppId
}
if err = binary.Write(&payload, binary.BigEndian, mask); err != nil {
return
}
if hasProperty(mask, flagContentType) {
if err = writeShortstr(&payload, f.Properties.ContentType); err != nil {
return
}
}
if hasProperty(mask, flagContentEncoding) {
if err = writeShortstr(&payload, f.Properties.ContentEncoding); err != nil {
return
}
}
if hasProperty(mask, flagHeaders) {
if err = writeTable(&payload, f.Properties.Headers); err != nil {
return
}
}
if hasProperty(mask, flagDeliveryMode) {
if err = binary.Write(&payload, binary.BigEndian, f.Properties.DeliveryMode); err != nil {
return
}
}
if hasProperty(mask, flagPriority) {
if err = binary.Write(&payload, binary.BigEndian, f.Properties.Priority); err != nil {
return
}
}
if hasProperty(mask, flagCorrelationId) {
if err = writeShortstr(&payload, f.Properties.CorrelationId); err != nil {
return
}
}
if hasProperty(mask, flagReplyTo) {
if err = writeShortstr(&payload, f.Properties.ReplyTo); err != nil {
return
}
}
if hasProperty(mask, flagExpiration) {
if err = writeShortstr(&payload, f.Properties.Expiration); err != nil {
return
}
}
if hasProperty(mask, flagMessageId) {
if err = writeShortstr(&payload, f.Properties.MessageId); err != nil {
return
}
}
if hasProperty(mask, flagTimestamp) {
if err = binary.Write(&payload, binary.BigEndian, uint64(f.Properties.Timestamp.Unix())); err != nil {
return
}
}
if hasProperty(mask, flagType) {
if err = writeShortstr(&payload, f.Properties.Type); err != nil {
return
}
}
if hasProperty(mask, flagUserId) {
if err = writeShortstr(&payload, f.Properties.UserId); err != nil {
return
}
}
if hasProperty(mask, flagAppId) {
if err = writeShortstr(&payload, f.Properties.AppId); err != nil {
return
}
}
return writeFrame(w, frameHeader, f.ChannelId, payload.Bytes())
}
// Body
//
// Payload is one byte range from the full body whose size is declared in the
// Header frame
func (f *BodyFrame) write(w io.Writer) (err error) {
return writeFrame(w, frameBody, f.ChannelId, f.Body)
}
func writeFrame(w io.Writer, typ uint8, channel uint16, payload []byte) (err error) {
end := []byte{frameEnd}
size := uint(len(payload))
_, err = w.Write([]byte{
byte(typ),
byte((channel & 0xff00) >> 8),
byte((channel & 0x00ff) >> 0),
byte((size & 0xff000000) >> 24),
byte((size & 0x00ff0000) >> 16),
byte((size & 0x0000ff00) >> 8),
byte((size & 0x000000ff) >> 0),
})
if err != nil {
return
}
if _, err = w.Write(payload); err != nil {
return
}
if _, err = w.Write(end); err != nil {
return
}
return
}
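
A standalone sketch of what writeFrame emits for the simplest case, a heartbeat: the 7-octet header, an empty payload and the frame-end octet. The frame type (8) and frame-end value (0xCE) are the standard AMQP 0-9-1 constants; they are assumed here because the constants file is suppressed in this diff.

package main

import (
	"bytes"
	"fmt"
)

func main() {
	var buf bytes.Buffer
	channel := uint16(0)
	buf.Write([]byte{
		8,                                 // frame type: heartbeat
		byte(channel >> 8), byte(channel), // channel id, big-endian
		0, 0, 0, 0,                        // 32-bit payload size (empty payload)
	})
	buf.WriteByte(0xCE) // frame-end
	fmt.Printf("% x\n", buf.Bytes()) // 08 00 00 00 00 00 00 ce
}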
func writeShortstr(w io.Writer, s string) (err error) {
b := []byte(s)
var length = uint8(len(b))
if err = binary.Write(w, binary.BigEndian, length); err != nil {
return
}
if _, err = w.Write(b[:length]); err != nil {
return
}
return
}
func writeLongstr(w io.Writer, s string) (err error) {
b := []byte(s)
var length = uint32(len(b))
if err = binary.Write(w, binary.BigEndian, length); err != nil {
return
}
if _, err = w.Write(b[:length]); err != nil {
return
}
return
}
/*
'A': []interface{}
'D': Decimal
'F': Table
'I': int32
'S': string
'T': time.Time
'V': nil
'b': byte
'd': float64
'f': float32
'l': int64
's': int16
't': bool
'x': []byte
*/
func writeField(w io.Writer, value interface{}) (err error) {
var buf [9]byte
var enc []byte
switch v := value.(type) {
case bool:
buf[0] = 't'
if v {
buf[1] = byte(1)
} else {
buf[1] = byte(0)
}
enc = buf[:2]
case byte:
buf[0] = 'b'
buf[1] = byte(v)
enc = buf[:2]
case int16:
buf[0] = 's'
binary.BigEndian.PutUint16(buf[1:3], uint16(v))
enc = buf[:3]
case int:
buf[0] = 'I'
binary.BigEndian.PutUint32(buf[1:5], uint32(v))
enc = buf[:5]
case int32:
buf[0] = 'I'
binary.BigEndian.PutUint32(buf[1:5], uint32(v))
enc = buf[:5]
case int64:
buf[0] = 'l'
binary.BigEndian.PutUint64(buf[1:9], uint64(v))
enc = buf[:9]
case float32:
buf[0] = 'f'
binary.BigEndian.PutUint32(buf[1:5], math.Float32bits(v))
enc = buf[:5]
case float64:
buf[0] = 'd'
binary.BigEndian.PutUint64(buf[1:9], math.Float64bits(v))
enc = buf[:9]
case Decimal:
buf[0] = 'D'
buf[1] = byte(v.Scale)
binary.BigEndian.PutUint32(buf[2:6], uint32(v.Value))
enc = buf[:6]
case string:
buf[0] = 'S'
binary.BigEndian.PutUint32(buf[1:5], uint32(len(v)))
enc = append(buf[:5], []byte(v)...)
case []interface{}: // field-array
buf[0] = 'A'
sec := new(bytes.Buffer)
for _, val := range v {
if err = writeField(sec, val); err != nil {
return
}
}
binary.BigEndian.PutUint32(buf[1:5], uint32(sec.Len()))
if _, err = w.Write(buf[:5]); err != nil {
return
}
if _, err = w.Write(sec.Bytes()); err != nil {
return
}
return
case time.Time:
buf[0] = 'T'
binary.BigEndian.PutUint64(buf[1:9], uint64(v.Unix()))
enc = buf[:9]
case Table:
if _, err = w.Write([]byte{'F'}); err != nil {
return
}
return writeTable(w, v)
case []byte:
buf[0] = 'x'
binary.BigEndian.PutUint32(buf[1:5], uint32(len(v)))
if _, err = w.Write(buf[0:5]); err != nil {
return
}
if _, err = w.Write(v); err != nil {
return
}
return
case nil:
buf[0] = 'V'
enc = buf[:1]
default:
return ErrFieldType
}
_, err = w.Write(enc)
return
}
func writeTable(w io.Writer, table Table) (err error) {
var buf bytes.Buffer
for key, val := range table {
if err = writeShortstr(&buf, key); err != nil {
return
}
if err = writeField(&buf, val); err != nil {
return
}
}
return writeLongstr(w, string(buf.Bytes()))
}

View File

@@ -1,14 +0,0 @@
module github.com/up9inc/mizu/tap/extensions/http
go 1.16
require (
github.com/beevik/etree v1.1.0 // indirect
github.com/google/martian v2.1.0+incompatible
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7
github.com/up9inc/mizu/tap/api v0.0.0
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7
golang.org/x/text v0.3.5 // indirect
)
replace github.com/up9inc/mizu/tap/api v0.0.0 => ../../api

View File

@@ -1,14 +0,0 @@
github.com/beevik/etree v1.1.0 h1:T0xke/WvNtMoCqgzPhkX2r4rjY3GDZFi+FjpRZY2Jbs=
github.com/beevik/etree v1.1.0/go.mod h1:r8Aw8JqVegEf0w2fDnATrX9VpkMcyFeM0FhwO62wh+A=
github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7 h1:jkvpcEatpwuMF5O5LVxTnehj6YZ/aEZN4NWD/Xml4pI=
github.com/romana/rlog v0.0.0-20171115192701-f018bc92e7d7/go.mod h1:KTrHyWpO1sevuXPZwyeZc72ddWRFqNSKDFl7uVWKpg0=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7 h1:OgUuv8lsRpBibGNbSizVwKWlysjaNzmC9gYMhPVfqFM=
golang.org/x/net v0.0.0-20210224082022-3d97a244fca7/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5 h1:i6eZZ+zk0SOf0xgBpEpPD18qWcJda6q1sxt3S0kzyUQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=

View File

@@ -1,170 +0,0 @@
package main
import (
"bufio"
"bytes"
"fmt"
"io"
"io/ioutil"
"net/http"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap/api"
)
func filterAndEmit(item *api.OutputChannelItem, emitter api.Emitter, options *api.TrafficFilteringOptions) {
if !options.DisableRedaction {
FilterSensitiveData(item, options)
}
emitter.Emit(item)
}
func handleHTTP2Stream(grpcAssembler *GrpcAssembler, tcpID *api.TcpID, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
streamID, messageHTTP1, err := grpcAssembler.readMessage()
if err != nil {
return err
}
var item *api.OutputChannelItem
switch messageHTTP1 := messageHTTP1.(type) {
case http.Request:
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
tcpID.DstPort,
streamID,
)
item = reqResMatcher.registerRequest(ident, &messageHTTP1, superTimer.CaptureTime)
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: tcpID.SrcIP,
ClientPort: tcpID.SrcPort,
ServerIP: tcpID.DstIP,
ServerPort: tcpID.DstPort,
IsOutgoing: true,
}
}
case http.Response:
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,
tcpID.SrcPort,
streamID,
)
item = reqResMatcher.registerResponse(ident, &messageHTTP1, superTimer.CaptureTime)
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: tcpID.DstIP,
ClientPort: tcpID.DstPort,
ServerIP: tcpID.SrcIP,
ServerPort: tcpID.SrcPort,
IsOutgoing: false,
}
}
}
if item != nil {
item.Protocol = http2Protocol
filterAndEmit(item, emitter, options)
}
return nil
}
func handleHTTP1ClientStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
req, err := http.ReadRequest(b)
if err != nil {
// log.Println("Error reading stream:", err)
return err
}
counterPair.Request++
body, err := ioutil.ReadAll(req.Body)
req.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
s := len(body)
if err != nil {
rlog.Debugf("[HTTP-request-body] stream %s Got body err: %s", tcpID.Ident, err)
}
if err := req.Body.Close(); err != nil {
rlog.Debugf("[HTTP-request-body-close] stream %s Failed to close request body: %s", tcpID.Ident, err)
}
encoding := req.Header["Content-Encoding"]
rlog.Tracef(1, "HTTP/1 Request: %s %s %s (Body:%d) -> %s", tcpID.Ident, req.Method, req.URL, s, encoding)
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
tcpID.SrcIP,
tcpID.DstIP,
tcpID.SrcPort,
tcpID.DstPort,
counterPair.Request,
)
item := reqResMatcher.registerRequest(ident, req, superTimer.CaptureTime)
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: tcpID.SrcIP,
ClientPort: tcpID.SrcPort,
ServerIP: tcpID.DstIP,
ServerPort: tcpID.DstPort,
IsOutgoing: true,
}
filterAndEmit(item, emitter, options)
}
return nil
}
func handleHTTP1ServerStream(b *bufio.Reader, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
res, err := http.ReadResponse(b, nil)
if err != nil {
// log.Println("Error reading stream:", err)
return err
}
counterPair.Response++
body, err := ioutil.ReadAll(res.Body)
res.Body = io.NopCloser(bytes.NewBuffer(body)) // rewind
s := len(body)
if err != nil {
rlog.Debugf("[HTTP-response-body] HTTP/%s: failed to get body(parsed len:%d): %s", tcpID.Ident, s, err)
}
if err := res.Body.Close(); err != nil {
rlog.Debugf("[HTTP-response-body-close] HTTP/%s: failed to close body(parsed len:%d): %s", tcpID.Ident, s, err)
}
sym := ","
if res.ContentLength > 0 && res.ContentLength != int64(s) {
sym = "!="
}
contentType, ok := res.Header["Content-Type"]
if !ok {
contentType = []string{http.DetectContentType(body)}
}
encoding := res.Header["Content-Encoding"]
rlog.Tracef(1, "HTTP/1 Response: %s %s (%d%s%d%s) -> %s", tcpID.Ident, res.Status, res.ContentLength, sym, s, contentType, encoding)
ident := fmt.Sprintf(
"%s->%s %s->%s %d",
tcpID.DstIP,
tcpID.SrcIP,
tcpID.DstPort,
tcpID.SrcPort,
counterPair.Response,
)
item := reqResMatcher.registerResponse(ident, res, superTimer.CaptureTime)
if item != nil {
item.ConnectionInfo = &api.ConnectionInfo{
ClientIP: tcpID.DstIP,
ClientPort: tcpID.DstPort,
ServerIP: tcpID.SrcIP,
ServerPort: tcpID.SrcPort,
IsOutgoing: false,
}
filterAndEmit(item, emitter, options)
}
return nil
}

View File

@@ -1,400 +0,0 @@
package main
import (
"bufio"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"net/url"
"time"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap/api"
)
var protocol api.Protocol = api.Protocol{
Name: "http",
LongName: "Hypertext Transfer Protocol -- HTTP/1.1",
Abbreviation: "HTTP",
Version: "1.1",
BackgroundColor: "#205cf5",
ForegroundColor: "#ffffff",
FontSize: 12,
ReferenceLink: "https://datatracker.ietf.org/doc/html/rfc2616",
Ports: []string{"80", "8080", "50051"},
Priority: 0,
}
var http2Protocol api.Protocol = api.Protocol{
Name: "http",
LongName: "Hypertext Transfer Protocol Version 2 (HTTP/2) (gRPC)",
Abbreviation: "HTTP/2",
Version: "2.0",
BackgroundColor: "#244c5a",
ForegroundColor: "#ffffff",
FontSize: 11,
ReferenceLink: "https://datatracker.ietf.org/doc/html/rfc7540",
Ports: []string{"80", "8080"},
Priority: 0,
}
const (
TypeHttpRequest = iota
TypeHttpResponse
)
func init() {
log.Println("Initializing HTTP extension...")
}
type dissecting string
func (d dissecting) Register(extension *api.Extension) {
extension.Protocol = &protocol
extension.MatcherMap = reqResMatcher.openMessagesMap
}
func (d dissecting) Ping() {
log.Printf("pong %s\n", protocol.Name)
}
func (d dissecting) Dissect(b *bufio.Reader, isClient bool, tcpID *api.TcpID, counterPair *api.CounterPair, superTimer *api.SuperTimer, superIdentifier *api.SuperIdentifier, emitter api.Emitter, options *api.TrafficFilteringOptions) error {
ident := fmt.Sprintf("%s->%s:%s->%s", tcpID.SrcIP, tcpID.DstIP, tcpID.SrcPort, tcpID.DstPort)
isHTTP2, err := checkIsHTTP2Connection(b, isClient)
if err != nil {
rlog.Debugf("[HTTP/2-Prepare-Connection] stream %s Failed to check if client is HTTP/2: %s (%v,%+v)", ident, err, err, err)
// Do something?
}
var grpcAssembler *GrpcAssembler
if isHTTP2 {
err := prepareHTTP2Connection(b, isClient)
if err != nil {
rlog.Debugf("[HTTP/2-Prepare-Connection-After-Check] stream %s error: %s (%v,%+v)", ident, err, err, err)
}
grpcAssembler = createGrpcAssembler(b)
}
dissected := false
for {
if superIdentifier.Protocol != nil && superIdentifier.Protocol != &protocol {
return errors.New("Identified by another protocol")
}
if isHTTP2 {
err = handleHTTP2Stream(grpcAssembler, tcpID, superTimer, emitter, options)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
rlog.Debugf("[HTTP/2] stream %s error: %s (%v,%+v)", ident, err, err, err)
continue
}
dissected = true
} else if isClient {
err = handleHTTP1ClientStream(b, tcpID, counterPair, superTimer, emitter, options)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
rlog.Debugf("[HTTP-request] stream %s Request error: %s (%v,%+v)", ident, err, err, err)
continue
}
dissected = true
} else {
err = handleHTTP1ServerStream(b, tcpID, counterPair, superTimer, emitter, options)
if err == io.EOF || err == io.ErrUnexpectedEOF {
break
} else if err != nil {
rlog.Debugf("[HTTP-response], stream %s Response error: %s (%v,%+v)", ident, err, err, err)
continue
}
dissected = true
}
}
if !dissected {
return err
}
superIdentifier.Protocol = &protocol
return nil
}
func SetHostname(address, newHostname string) string {
replacedUrl, err := url.Parse(address)
if err != nil {
log.Printf("error replacing hostname to %s in address %s, returning original %v", newHostname, address, err)
return address
}
replacedUrl.Host = newHostname
return replacedUrl.String()
}
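
A standalone sketch of what SetHostname does, using net/url directly; the address and the resolved name "catalogue" are hypothetical.

package main

import (
	"fmt"
	"net/url"
)

func main() {
	u, _ := url.Parse("http://10.0.0.12:8080/health")
	u.Host = "catalogue" // swap in the resolved service name (this drops the port too)
	fmt.Println(u.String()) // http://catalogue/health
}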
func (d dissecting) Analyze(item *api.OutputChannelItem, entryId string, resolvedSource string, resolvedDestination string) *api.MizuEntry {
var host, scheme, authority, path, service string
request := item.Pair.Request.Payload.(map[string]interface{})
response := item.Pair.Response.Payload.(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
resDetails := response["details"].(map[string]interface{})
for _, header := range reqDetails["headers"].([]interface{}) {
h := header.(map[string]interface{})
if h["name"] == "Host" {
host = h["value"].(string)
}
if h["name"] == ":authority" {
authority = h["value"].(string)
}
if h["name"] == ":scheme" {
scheme = h["value"].(string)
}
if h["name"] == ":path" {
path = h["value"].(string)
}
}
if item.Protocol.Version == "2.0" {
service = fmt.Sprintf("%s://%s", scheme, authority)
} else {
service = fmt.Sprintf("http://%s", host)
path = reqDetails["url"].(string)
}
request["url"] = path
if resolvedDestination != "" {
service = SetHostname(service, resolvedDestination)
} else if resolvedSource != "" {
service = SetHostname(service, resolvedSource)
}
elapsedTime := item.Pair.Response.CaptureTime.Sub(item.Pair.Request.CaptureTime).Round(time.Millisecond).Milliseconds()
entryBytes, _ := json.Marshal(item.Pair)
return &api.MizuEntry{
ProtocolName: protocol.Name,
ProtocolLongName: protocol.LongName,
ProtocolAbbreviation: protocol.Abbreviation,
ProtocolVersion: item.Protocol.Version,
ProtocolBackgroundColor: protocol.BackgroundColor,
ProtocolForegroundColor: protocol.ForegroundColor,
ProtocolFontSize: protocol.FontSize,
ProtocolReferenceLink: protocol.ReferenceLink,
EntryId: entryId,
Entry: string(entryBytes),
Url: fmt.Sprintf("%s%s", service, path),
Method: reqDetails["method"].(string),
Status: int(resDetails["status"].(float64)),
RequestSenderIp: item.ConnectionInfo.ClientIP,
Service: service,
Timestamp: item.Timestamp,
ElapsedTime: elapsedTime,
Path: path,
ResolvedSource: resolvedSource,
ResolvedDestination: resolvedDestination,
SourceIp: item.ConnectionInfo.ClientIP,
DestinationIp: item.ConnectionInfo.ServerIP,
SourcePort: item.ConnectionInfo.ClientPort,
DestinationPort: item.ConnectionInfo.ServerPort,
IsOutgoing: item.ConnectionInfo.IsOutgoing,
}
}
func (d dissecting) Summarize(entry *api.MizuEntry) *api.BaseEntryDetails {
var p api.Protocol
if entry.ProtocolVersion == "2.0" {
p = http2Protocol
} else {
p = protocol
}
return &api.BaseEntryDetails{
Id: entry.EntryId,
Protocol: p,
Url: entry.Url,
RequestSenderIp: entry.RequestSenderIp,
Service: entry.Service,
Path: entry.Path,
Summary: entry.Path,
StatusCode: entry.Status,
Method: entry.Method,
Timestamp: entry.Timestamp,
SourceIp: entry.SourceIp,
DestinationIp: entry.DestinationIp,
SourcePort: entry.SourcePort,
DestinationPort: entry.DestinationPort,
IsOutgoing: entry.IsOutgoing,
Latency: 0,
Rules: api.ApplicableRules{
Latency: 0,
Status: false,
},
}
}
func representRequest(request map[string]interface{}) (repRequest []interface{}) {
details, _ := json.Marshal([]map[string]string{
{
"name": "Method",
"value": request["method"].(string),
},
{
"name": "URL",
"value": request["url"].(string),
},
{
"name": "Body Size",
"value": fmt.Sprintf("%g bytes", request["bodySize"].(float64)),
},
})
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
headers, _ := json.Marshal(request["headers"].([]interface{}))
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "Headers",
"data": string(headers),
})
cookies, _ := json.Marshal(request["cookies"].([]interface{}))
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "Cookies",
"data": string(cookies),
})
queryString, _ := json.Marshal(request["queryString"].([]interface{}))
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "Query String",
"data": string(queryString),
})
postData, _ := request["postData"].(map[string]interface{})
mimeType, _ := postData["mimeType"]
if mimeType == nil || len(mimeType.(string)) == 0 {
mimeType = "text/html"
}
text, _ := postData["text"]
if text != nil {
repRequest = append(repRequest, map[string]string{
"type": api.BODY,
"title": "POST Data (text/plain)",
"encoding": "",
"mime_type": mimeType.(string),
"data": text.(string),
})
}
if postData["params"] != nil {
params, _ := json.Marshal(postData["params"].([]interface{}))
if len(params) > 0 {
if mimeType == "multipart/form-data" {
multipart, _ := json.Marshal([]map[string]string{
{
"name": "Files",
"value": string(params),
},
})
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "POST Data (multipart/form-data)",
"data": string(multipart),
})
} else {
repRequest = append(repRequest, map[string]string{
"type": api.TABLE,
"title": "POST Data (application/x-www-form-urlencoded)",
"data": string(params),
})
}
}
}
return
}
func representResponse(response map[string]interface{}) (repResponse []interface{}, bodySize int64) {
repResponse = make([]interface{}, 0)
bodySize = int64(response["bodySize"].(float64))
details, _ := json.Marshal([]map[string]string{
{
"name": "Status",
"value": fmt.Sprintf("%g", response["status"].(float64)),
},
{
"name": "Status Text",
"value": response["statusText"].(string),
},
{
"name": "Body Size",
"value": fmt.Sprintf("%d bytes", bodySize),
},
})
repResponse = append(repResponse, map[string]string{
"type": api.TABLE,
"title": "Details",
"data": string(details),
})
headers, _ := json.Marshal(response["headers"].([]interface{}))
repResponse = append(repResponse, map[string]string{
"type": api.TABLE,
"title": "Headers",
"data": string(headers),
})
cookies, _ := json.Marshal(response["cookies"].([]interface{}))
repResponse = append(repResponse, map[string]string{
"type": api.TABLE,
"title": "Cookies",
"data": string(cookies),
})
content, _ := response["content"].(map[string]interface{})
mimeType, _ := content["mimeType"]
if mimeType == nil || len(mimeType.(string)) == 0 {
mimeType = "text/html"
}
encoding, _ := content["encoding"]
text, _ := content["text"]
if text != nil {
repResponse = append(repResponse, map[string]string{
"type": api.BODY,
"title": "Body",
"encoding": encoding.(string),
"mime_type": mimeType.(string),
"data": text.(string),
})
}
return
}
func (d dissecting) Represent(entry *api.MizuEntry) (p api.Protocol, object []byte, bodySize int64, err error) {
if entry.ProtocolVersion == "2.0" {
p = http2Protocol
} else {
p = protocol
}
var root map[string]interface{}
json.Unmarshal([]byte(entry.Entry), &root)
representation := make(map[string]interface{}, 0)
request := root["request"].(map[string]interface{})["payload"].(map[string]interface{})
response := root["response"].(map[string]interface{})["payload"].(map[string]interface{})
reqDetails := request["details"].(map[string]interface{})
resDetails := response["details"].(map[string]interface{})
repRequest := representRequest(reqDetails)
repResponse, bodySize := representResponse(resDetails)
representation["request"] = repRequest
representation["response"] = repResponse
object, err = json.Marshal(representation)
return
}
var Dissector dissecting

View File

@@ -1,105 +0,0 @@
package main
import (
"fmt"
"net/http"
"strings"
"sync"
"time"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap/api"
)
var reqResMatcher = createResponseRequestMatcher() // global
// Key is {client_addr}:{client_port}->{dest_addr}:{dest_port},{incremental_counter}
type requestResponseMatcher struct {
openMessagesMap *sync.Map
}
func createResponseRequestMatcher() requestResponseMatcher {
newMatcher := &requestResponseMatcher{openMessagesMap: &sync.Map{}}
return *newMatcher
}
func (matcher *requestResponseMatcher) registerRequest(ident string, request *http.Request, captureTime time.Time) *api.OutputChannelItem {
split := splitIdent(ident)
key := genKey(split)
requestHTTPMessage := api.GenericMessage{
IsRequest: true,
CaptureTime: captureTime,
Payload: HTTPPayload{
Type: TypeHttpRequest,
Data: request,
},
}
if response, found := matcher.openMessagesMap.LoadAndDelete(key); found {
// Type assertion always succeeds because all of the map's values are of api.GenericMessage type
responseHTTPMessage := response.(*api.GenericMessage)
if responseHTTPMessage.IsRequest {
rlog.Debugf("[Request-Duplicate] Got duplicate request with same identifier")
return nil
}
rlog.Tracef(1, "Matched open Response for %s", key)
return matcher.preparePair(&requestHTTPMessage, responseHTTPMessage)
}
matcher.openMessagesMap.Store(key, &requestHTTPMessage)
rlog.Tracef(1, "Registered open Request for %s", key)
return nil
}
func (matcher *requestResponseMatcher) registerResponse(ident string, response *http.Response, captureTime time.Time) *api.OutputChannelItem {
split := splitIdent(ident)
key := genKey(split)
responseHTTPMessage := api.GenericMessage{
IsRequest: false,
CaptureTime: captureTime,
Payload: HTTPPayload{
Type: TypeHttpResponse,
Data: response,
},
}
if request, found := matcher.openMessagesMap.LoadAndDelete(key); found {
// Type assertion always succeeds because all of the map's values are of api.GenericMessage type
requestHTTPMessage := request.(*api.GenericMessage)
if !requestHTTPMessage.IsRequest {
rlog.Debugf("[Response-Duplicate] Got duplicate response with same identifier")
return nil
}
rlog.Tracef(1, "Matched open Request for %s", key)
return matcher.preparePair(requestHTTPMessage, &responseHTTPMessage)
}
matcher.openMessagesMap.Store(key, &responseHTTPMessage)
rlog.Tracef(1, "Registered open Response for %s", key)
return nil
}
func (matcher *requestResponseMatcher) preparePair(requestHTTPMessage *api.GenericMessage, responseHTTPMessage *api.GenericMessage) *api.OutputChannelItem {
return &api.OutputChannelItem{
Protocol: protocol,
Timestamp: requestHTTPMessage.CaptureTime.UnixNano() / int64(time.Millisecond),
ConnectionInfo: nil,
Pair: &api.RequestResponsePair{
Request: *requestHTTPMessage,
Response: *responseHTTPMessage,
},
}
}
func splitIdent(ident string) []string {
ident = strings.Replace(ident, "->", " ", -1)
return strings.Split(ident, " ")
}
func genKey(split []string) string {
key := fmt.Sprintf("%s:%s->%s:%s,%s", split[0], split[2], split[1], split[3], split[4])
return key
}
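
A standalone sketch of how an ident built by the HTTP handlers is turned into a matcher key; the addresses, ports and counter are hypothetical. Because the request and response handlers build their idents with source and destination swapped, both directions should produce the same key, which is what lets the LoadAndDelete calls above pair them.

package main

import (
	"fmt"
	"strings"
)

func main() {
	ident := "10.0.0.5->10.0.0.9 51434->8080 1"
	split := strings.Split(strings.ReplaceAll(ident, "->", " "), " ")
	key := fmt.Sprintf("%s:%s->%s:%s,%s", split[0], split[2], split[1], split[3], split[4])
	fmt.Println(key) // 10.0.0.5:51434->10.0.0.9:8080,1
}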

View File

@@ -1,209 +0,0 @@
package main
import (
"bytes"
"encoding/json"
"encoding/xml"
"errors"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"strings"
"github.com/beevik/etree"
"github.com/romana/rlog"
"github.com/up9inc/mizu/tap/api"
)
const maskedFieldPlaceholderValue = "[REDACTED]"
// these values MUST be all lower case and contain no `-` or `_` characters
var personallyIdentifiableDataFields = []string{"token", "authorization", "authentication", "cookie", "userid", "password",
"username", "user", "key", "passcode", "pass", "auth", "authtoken", "jwt",
"bearer", "clientid", "clientsecret", "redirecturi", "phonenumber",
"zip", "zipcode", "address", "country", "firstname", "lastname",
"middlename", "fname", "lname", "birthdate"}
func FilterSensitiveData(item *api.OutputChannelItem, options *api.TrafficFilteringOptions) {
request := item.Pair.Request.Payload.(HTTPPayload).Data.(*http.Request)
response := item.Pair.Response.Payload.(HTTPPayload).Data.(*http.Response)
filterHeaders(&request.Header)
filterHeaders(&response.Header)
filterUrl(request.URL)
filterRequestBody(request, options)
filterResponseBody(response, options)
}
func filterRequestBody(request *http.Request, options *api.TrafficFilteringOptions) {
contentType := getContentTypeHeaderValue(request.Header)
body, err := ioutil.ReadAll(request.Body)
if err != nil {
rlog.Debugf("Filtering error reading body: %v", err)
return
}
filteredBody, err := filterHttpBody([]byte(body), contentType, options)
if err == nil {
request.Body = ioutil.NopCloser(bytes.NewBuffer(filteredBody))
} else {
request.Body = ioutil.NopCloser(bytes.NewBuffer(body))
}
}
func filterResponseBody(response *http.Response, options *api.TrafficFilteringOptions) {
contentType := getContentTypeHeaderValue(response.Header)
body, err := ioutil.ReadAll(response.Body)
if err != nil {
rlog.Debugf("Filtering error reading body: %v", err)
return
}
filteredBody, err := filterHttpBody([]byte(body), contentType, options)
if err == nil {
response.Body = ioutil.NopCloser(bytes.NewBuffer(filteredBody))
} else {
response.Body = ioutil.NopCloser(bytes.NewBuffer(body))
}
}
func filterHeaders(headers *http.Header) {
for key := range *headers {
if strings.ToLower(key) == "cookie" {
headers.Del(key)
} else if isFieldNameSensitive(key) {
headers.Set(key, maskedFieldPlaceholderValue)
}
}
}
func getContentTypeHeaderValue(headers http.Header) string {
for key := range headers {
if strings.ToLower(key) == "content-type" {
return headers.Get(key)
}
}
return ""
}
func isFieldNameSensitive(fieldName string) bool {
name := strings.ToLower(fieldName)
name = strings.ReplaceAll(name, "_", "")
name = strings.ReplaceAll(name, "-", "")
name = strings.ReplaceAll(name, " ", "")
for _, sensitiveField := range personallyIdentifiableDataFields {
if strings.Contains(name, sensitiveField) {
return true
}
}
return false
}
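
A standalone sketch of the check above, with a shortened sensitive-field list: names are lower-cased and stripped of "_", "-" and spaces before the substring match, which is why the list above must be all lower case with no separators.

package main

import (
	"fmt"
	"strings"
)

// A subset of the field names used above, for illustration only.
var sensitive = []string{"password", "authorization", "userid"}

func isSensitive(field string) bool {
	name := strings.NewReplacer("_", "", "-", "", " ", "").Replace(strings.ToLower(field))
	for _, s := range sensitive {
		if strings.Contains(name, s) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isSensitive("X-User-Id"))    // true ("userid" after normalization)
	fmt.Println(isSensitive("Content-Type")) // false
}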
func filterHttpBody(bytes []byte, contentType string, options *api.TrafficFilteringOptions) ([]byte, error) {
mimeType := strings.Split(contentType, ";")[0]
switch strings.ToLower(mimeType) {
case "application/json":
return filterJsonBody(bytes)
case "text/html":
fallthrough
case "application/xhtml+xml":
fallthrough
case "text/xml":
fallthrough
case "application/xml":
return filterXmlEtree(bytes)
case "text/plain":
if options != nil && options.PlainTextMaskingRegexes != nil {
return filterPlainText(bytes, options), nil
}
}
return bytes, nil
}
func filterPlainText(bytes []byte, options *api.TrafficFilteringOptions) []byte {
for _, regex := range options.PlainTextMaskingRegexes {
bytes = regex.ReplaceAll(bytes, []byte(maskedFieldPlaceholderValue))
}
return bytes
}
func filterXmlEtree(bytes []byte) ([]byte, error) {
if !IsValidXML(bytes) {
return nil, errors.New("Invalid XML")
}
xmlDoc := etree.NewDocument()
err := xmlDoc.ReadFromBytes(bytes)
if err != nil {
return nil, err
} else {
filterXmlElement(xmlDoc.Root())
}
return xmlDoc.WriteToBytes()
}
func IsValidXML(data []byte) bool {
return xml.Unmarshal(data, new(interface{})) == nil
}
func filterXmlElement(element *etree.Element) {
for i, attribute := range element.Attr {
if isFieldNameSensitive(attribute.Key) {
element.Attr[i].Value = maskedFieldPlaceholderValue
}
}
if len(element.ChildElements()) == 0 {
if isFieldNameSensitive(element.Tag) {
element.SetText(maskedFieldPlaceholderValue)
}
} else {
for _, element := range element.ChildElements() {
filterXmlElement(element)
}
}
}
func filterJsonBody(bytes []byte) ([]byte, error) {
var bodyJsonMap map[string]interface{}
err := json.Unmarshal(bytes, &bodyJsonMap)
if err != nil {
return nil, err
}
filterJsonMap(bodyJsonMap)
return json.Marshal(bodyJsonMap)
}
func filterJsonMap(jsonMap map[string]interface{}) {
for key, value := range jsonMap {
// Do not replace nil values with maskedFieldPlaceholderValue
if value == nil {
continue
}
nestedMap, isNested := value.(map[string]interface{})
if isNested {
filterJsonMap(nestedMap)
} else {
if isFieldNameSensitive(key) {
jsonMap[key] = maskedFieldPlaceholderValue
}
}
}
}
func filterUrl(url *url.URL) {
if len(url.RawQuery) > 0 {
newQueryArgs := make([]string, 0)
for urlQueryParamName, urlQueryParamValues := range url.Query() {
newValues := urlQueryParamValues
if isFieldNameSensitive(urlQueryParamName) {
newValues = []string{maskedFieldPlaceholderValue}
}
for _, paramValue := range newValues {
newQueryArgs = append(newQueryArgs, fmt.Sprintf("%s=%s", urlQueryParamName, paramValue))
}
}
url.RawQuery = strings.Join(newQueryArgs, "&")
}
}
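
A standalone sketch of the query-string masking above; for brevity it rebuilds the query with url.Values.Encode (which sorts and percent-encodes), whereas filterUrl joins the raw values itself. The URL and parameter names are hypothetical.

package main

import (
	"fmt"
	"net/url"
)

func main() {
	u, _ := url.Parse("http://svc/login?token=abc123&page=2")
	q := u.Query()
	q.Set("token", "[REDACTED]") // "token" is on the sensitive-field list
	u.RawQuery = q.Encode()
	fmt.Println(u.String()) // http://svc/login?page=2&token=%5BREDACTED%5D
}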

View File

@@ -1,55 +0,0 @@
package main
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/google/martian/har"
"github.com/romana/rlog"
)
type HTTPPayload struct {
Type uint8
Data interface{}
}
type HTTPPayloader interface {
MarshalJSON() ([]byte, error)
}
type HTTPWrapper struct {
Method string `json:"method"`
Url string `json:"url"`
Details interface{} `json:"details"`
}
func (h HTTPPayload) MarshalJSON() ([]byte, error) {
switch h.Type {
case TypeHttpRequest:
harRequest, err := har.NewRequest(h.Data.(*http.Request), true)
if err != nil {
rlog.Debugf("convert-request-to-har", "Failed converting request to HAR %s (%v,%+v)", err, err, err)
return nil, errors.New("Failed converting request to HAR")
}
return json.Marshal(&HTTPWrapper{
Method: harRequest.Method,
Url: "",
Details: harRequest,
})
case TypeHttpResponse:
harResponse, err := har.NewResponse(h.Data.(*http.Response), true)
if err != nil {
rlog.Debugf("convert-response-to-har", "Failed converting response to HAR %s (%v,%+v)", err, err, err)
return nil, errors.New("Failed converting response to HAR")
}
return json.Marshal(&HTTPWrapper{
Method: "",
Url: "",
Details: harResponse,
})
default:
panic(fmt.Sprintf("HTTP payload cannot be marshaled: %s\n", h.Type))
}
}

View File

@@ -1,645 +0,0 @@
package main
import (
"bytes"
"fmt"
"io"
"math"
"sync"
"sync/atomic"
)
// Bytes is an interface implemented by types that represent immutable
// sequences of bytes.
//
// Bytes values are used to abstract the location where record keys and
// values are read from (e.g. in-memory buffers, network sockets, files).
//
// The Close method should be called to release resources held by the object
// when the program is done with it.
//
// Bytes values are generally not safe to use concurrently from multiple
// goroutines.
type Bytes interface {
io.ReadCloser
// Returns the number of bytes remaining to be read from the payload.
Len() int
}
// NewBytes constructs a Bytes value from b.
//
// The returned value references b, it does not make a copy of the backing
// array.
//
// If b is nil, nil is returned to represent a null BYTES value in the kafka
// protocol.
func NewBytes(b []byte) Bytes {
if b == nil {
return nil
}
r := new(bytesReader)
r.Reset(b)
return r
}
// ReadAll is similar to ioutil.ReadAll, but it takes advantage of knowing the
// length of b to minimize the memory footprint.
//
// The function returns a nil slice if b is nil.
// func ReadAll(b Bytes) ([]byte, error) {
// if b == nil {
// return nil, nil
// }
// s := make([]byte, b.Len())
// _, err := io.ReadFull(b, s)
// return s, err
// }
type bytesReader struct{ bytes.Reader }
func (*bytesReader) Close() error { return nil }
type refCount uintptr
func (rc *refCount) ref() { atomic.AddUintptr((*uintptr)(rc), 1) }
func (rc *refCount) unref(onZero func()) {
if atomic.AddUintptr((*uintptr)(rc), ^uintptr(0)) == 0 {
onZero()
}
}
const (
// Size of the memory buffer for a single page. We use a fairly
// large size here (64 KiB) because batches exchanged with kafka
// tend to be multiple kilobytes in size, sometimes hundreds.
// Using large pages amortizes the overhead of the page metadata
// and algorithms to manage the pages.
pageSize = 65536
)
type page struct {
refc refCount
offset int64
length int
buffer *[pageSize]byte
}
func newPage(offset int64) *page {
p, _ := pagePool.Get().(*page)
if p != nil {
p.offset = offset
p.length = 0
p.ref()
} else {
p = &page{
refc: 1,
offset: offset,
buffer: &[pageSize]byte{},
}
}
return p
}
func (p *page) ref() { p.refc.ref() }
func (p *page) unref() { p.refc.unref(func() { pagePool.Put(p) }) }
func (p *page) slice(begin, end int64) []byte {
i, j := begin-p.offset, end-p.offset
if i < 0 {
i = 0
} else if i > pageSize {
i = pageSize
}
if j < 0 {
j = 0
} else if j > pageSize {
j = pageSize
}
if i < j {
return p.buffer[i:j]
}
return nil
}
func (p *page) Cap() int { return pageSize }
func (p *page) Len() int { return p.length }
func (p *page) Size() int64 { return int64(p.length) }
func (p *page) Truncate(n int) {
if n < p.length {
p.length = n
}
}
func (p *page) ReadAt(b []byte, off int64) (int, error) {
if off -= p.offset; off < 0 || off > pageSize {
panic("offset out of range")
}
if off > int64(p.length) {
return 0, nil
}
return copy(b, p.buffer[off:p.length]), nil
}
func (p *page) ReadFrom(r io.Reader) (int64, error) {
n, err := io.ReadFull(r, p.buffer[p.length:])
if err == io.EOF || err == io.ErrUnexpectedEOF {
err = nil
}
p.length += n
return int64(n), err
}
func (p *page) WriteAt(b []byte, off int64) (int, error) {
if off -= p.offset; off < 0 || off > pageSize {
panic("offset out of range")
}
n := copy(p.buffer[off:], b)
if end := int(off) + n; end > p.length {
p.length = end
}
return n, nil
}
func (p *page) Write(b []byte) (int, error) {
return p.WriteAt(b, p.offset+int64(p.length))
}
var (
_ io.ReaderAt = (*page)(nil)
_ io.ReaderFrom = (*page)(nil)
_ io.Writer = (*page)(nil)
_ io.WriterAt = (*page)(nil)
)
type pageBuffer struct {
refc refCount
pages contiguousPages
length int
cursor int
}
func newPageBuffer() *pageBuffer {
b, _ := pageBufferPool.Get().(*pageBuffer)
if b != nil {
b.cursor = 0
b.refc.ref()
} else {
b = &pageBuffer{
refc: 1,
pages: make(contiguousPages, 0, 16),
}
}
return b
}
func (pb *pageBuffer) refTo(ref *pageRef, begin, end int64) {
length := end - begin
if length > math.MaxUint32 {
panic("reference to contiguous buffer pages exceeds the maximum size of 4 GB")
}
ref.pages = append(ref.buffer[:0], pb.pages.slice(begin, end)...)
ref.pages.ref()
ref.offset = begin
ref.length = uint32(length)
}
func (pb *pageBuffer) ref(begin, end int64) *pageRef {
ref := new(pageRef)
pb.refTo(ref, begin, end)
return ref
}
func (pb *pageBuffer) unref() {
pb.refc.unref(func() {
pb.pages.unref()
pb.pages.clear()
pb.pages = pb.pages[:0]
pb.length = 0
pageBufferPool.Put(pb)
})
}
func (pb *pageBuffer) newPage() *page {
return newPage(int64(pb.length))
}
func (pb *pageBuffer) Close() error {
return nil
}
func (pb *pageBuffer) Len() int {
return pb.length - pb.cursor
}
func (pb *pageBuffer) Size() int64 {
return int64(pb.length)
}
func (pb *pageBuffer) Discard(n int) (int, error) {
remain := pb.length - pb.cursor
if remain < n {
n = remain
}
pb.cursor += n
return n, nil
}
func (pb *pageBuffer) Truncate(n int) {
if n < pb.length {
pb.length = n
if n < pb.cursor {
pb.cursor = n
}
for i := range pb.pages {
if p := pb.pages[i]; p.length <= n {
n -= p.length
} else {
if n > 0 {
pb.pages[i].Truncate(n)
i++
}
pb.pages[i:].unref()
pb.pages[i:].clear()
pb.pages = pb.pages[:i]
break
}
}
}
}
func (pb *pageBuffer) Seek(offset int64, whence int) (int64, error) {
c, err := seek(int64(pb.cursor), int64(pb.length), offset, whence)
if err != nil {
return -1, err
}
pb.cursor = int(c)
return c, nil
}
func (pb *pageBuffer) ReadByte() (byte, error) {
b := [1]byte{}
_, err := pb.Read(b[:])
return b[0], err
}
func (pb *pageBuffer) Read(b []byte) (int, error) {
if pb.cursor >= pb.length {
return 0, io.EOF
}
n, err := pb.ReadAt(b, int64(pb.cursor))
pb.cursor += n
return n, err
}
func (pb *pageBuffer) ReadAt(b []byte, off int64) (int, error) {
return pb.pages.ReadAt(b, off)
}
func (pb *pageBuffer) ReadFrom(r io.Reader) (int64, error) {
if len(pb.pages) == 0 {
pb.pages = append(pb.pages, pb.newPage())
}
rn := int64(0)
for {
tail := pb.pages[len(pb.pages)-1]
free := tail.Cap() - tail.Len()
if free == 0 {
tail = pb.newPage()
free = pageSize
pb.pages = append(pb.pages, tail)
}
n, err := tail.ReadFrom(r)
pb.length += int(n)
rn += n
if n < int64(free) {
return rn, err
}
}
}
func (pb *pageBuffer) WriteString(s string) (int, error) {
return pb.Write([]byte(s))
}
func (pb *pageBuffer) Write(b []byte) (int, error) {
wn := len(b)
if wn == 0 {
return 0, nil
}
if len(pb.pages) == 0 {
pb.pages = append(pb.pages, pb.newPage())
}
for len(b) != 0 {
tail := pb.pages[len(pb.pages)-1]
free := tail.Cap() - tail.Len()
if len(b) <= free {
tail.Write(b)
pb.length += len(b)
break
}
tail.Write(b[:free])
b = b[free:]
pb.length += free
pb.pages = append(pb.pages, pb.newPage())
}
return wn, nil
}
func (pb *pageBuffer) WriteAt(b []byte, off int64) (int, error) {
n, err := pb.pages.WriteAt(b, off)
if err != nil {
return n, err
}
if n < len(b) {
pb.Write(b[n:])
}
return len(b), nil
}
func (pb *pageBuffer) WriteTo(w io.Writer) (int64, error) {
var wn int
var err error
pb.pages.scan(int64(pb.cursor), int64(pb.length), func(b []byte) bool {
var n int
n, err = w.Write(b)
wn += n
return err == nil
})
pb.cursor += wn
return int64(wn), err
}
var (
_ io.ReaderAt = (*pageBuffer)(nil)
_ io.ReaderFrom = (*pageBuffer)(nil)
_ io.StringWriter = (*pageBuffer)(nil)
_ io.Writer = (*pageBuffer)(nil)
_ io.WriterAt = (*pageBuffer)(nil)
_ io.WriterTo = (*pageBuffer)(nil)
pagePool sync.Pool
pageBufferPool sync.Pool
)
type contiguousPages []*page
func (pages contiguousPages) ref() {
for _, p := range pages {
p.ref()
}
}
func (pages contiguousPages) unref() {
for _, p := range pages {
p.unref()
}
}
func (pages contiguousPages) clear() {
for i := range pages {
pages[i] = nil
}
}
func (pages contiguousPages) ReadAt(b []byte, off int64) (int, error) {
rn := 0
for _, p := range pages.slice(off, off+int64(len(b))) {
n, _ := p.ReadAt(b, off)
b = b[n:]
rn += n
off += int64(n)
}
return rn, nil
}
func (pages contiguousPages) WriteAt(b []byte, off int64) (int, error) {
wn := 0
for _, p := range pages.slice(off, off+int64(len(b))) {
n, _ := p.WriteAt(b, off)
b = b[n:]
wn += n
off += int64(n)
}
return wn, nil
}
func (pages contiguousPages) slice(begin, end int64) contiguousPages {
i := pages.indexOf(begin)
j := pages.indexOf(end)
if j < len(pages) {
j++
}
return pages[i:j]
}
func (pages contiguousPages) indexOf(offset int64) int {
if len(pages) == 0 {
return 0
}
return int((offset - pages[0].offset) / pageSize)
}
func (pages contiguousPages) scan(begin, end int64, f func([]byte) bool) {
for _, p := range pages.slice(begin, end) {
if !f(p.slice(begin, end)) {
break
}
}
}
var (
_ io.ReaderAt = contiguousPages{}
_ io.WriterAt = contiguousPages{}
)
type pageRef struct {
buffer [2]*page
pages contiguousPages
offset int64
cursor int64
length uint32
once uint32
}
func (ref *pageRef) unref() {
if atomic.CompareAndSwapUint32(&ref.once, 0, 1) {
ref.pages.unref()
ref.pages.clear()
ref.pages = nil
ref.offset = 0
ref.cursor = 0
ref.length = 0
}
}
func (ref *pageRef) Len() int { return int(ref.Size() - ref.cursor) }
func (ref *pageRef) Size() int64 { return int64(ref.length) }
func (ref *pageRef) Close() error { ref.unref(); return nil }
func (ref *pageRef) String() string {
return fmt.Sprintf("[offset=%d cursor=%d length=%d]", ref.offset, ref.cursor, ref.length)
}
func (ref *pageRef) Seek(offset int64, whence int) (int64, error) {
c, err := seek(ref.cursor, int64(ref.length), offset, whence)
if err != nil {
return -1, err
}
ref.cursor = c
return c, nil
}
func (ref *pageRef) ReadByte() (byte, error) {
var c byte
var ok bool
ref.scan(ref.cursor, func(b []byte) bool {
c, ok = b[0], true
return false
})
if ok {
ref.cursor++
} else {
return 0, io.EOF
}
return c, nil
}
func (ref *pageRef) Read(b []byte) (int, error) {
if ref.cursor >= int64(ref.length) {
return 0, io.EOF
}
n, err := ref.ReadAt(b, ref.cursor)
ref.cursor += int64(n)
return n, err
}
func (ref *pageRef) ReadAt(b []byte, off int64) (int, error) {
limit := ref.offset + int64(ref.length)
off += ref.offset
if off >= limit {
return 0, io.EOF
}
if off+int64(len(b)) > limit {
b = b[:limit-off]
}
if len(b) == 0 {
return 0, nil
}
n, err := ref.pages.ReadAt(b, off)
if n == 0 && err == nil {
err = io.EOF
}
return n, err
}
func (ref *pageRef) WriteTo(w io.Writer) (wn int64, err error) {
ref.scan(ref.cursor, func(b []byte) bool {
var n int
n, err = w.Write(b)
wn += int64(n)
return err == nil
})
ref.cursor += wn
return
}
func (ref *pageRef) scan(off int64, f func([]byte) bool) {
begin := ref.offset + off
end := ref.offset + int64(ref.length)
ref.pages.scan(begin, end, f)
}
var (
_ io.Closer = (*pageRef)(nil)
_ io.Seeker = (*pageRef)(nil)
_ io.Reader = (*pageRef)(nil)
_ io.ReaderAt = (*pageRef)(nil)
_ io.WriterTo = (*pageRef)(nil)
)
type pageRefAllocator struct {
refs []pageRef
head int
size int
}
func (a *pageRefAllocator) newPageRef() *pageRef {
if a.head == len(a.refs) {
a.refs = make([]pageRef, a.size)
a.head = 0
}
ref := &a.refs[a.head]
a.head++
return ref
}
func unref(x interface{}) {
if r, _ := x.(interface{ unref() }); r != nil {
r.unref()
}
}
func seek(cursor, limit, offset int64, whence int) (int64, error) {
switch whence {
case io.SeekStart:
// absolute offset
case io.SeekCurrent:
offset = cursor + offset
case io.SeekEnd:
offset = limit - offset
default:
return -1, fmt.Errorf("seek: invalid whence value: %d", whence)
}
if offset < 0 {
offset = 0
}
if offset > limit {
offset = limit
}
return offset, nil
}
func closeBytes(b Bytes) {
if b != nil {
b.Close()
}
}
func resetBytes(b Bytes) {
if r, _ := b.(interface{ Reset() }); r != nil {
r.Reset()
}
}
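// Illustration (not part of the original file): a minimal sketch of how the
// page buffer and page references above fit together; all names used here
// come from this file.
func examplePageBuffer() {
	pb := newPageBuffer()
	defer pb.unref()

	pb.WriteString("hello, kafka") // data is stored in 64 KiB pages
	ref := pb.ref(0, pb.Size())    // reference-counted view over the written range
	defer ref.Close()

	out := make([]byte, ref.Len())
	if _, err := io.ReadFull(ref, out); err == nil {
		fmt.Println(string(out))
	}
}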


@@ -1,143 +0,0 @@
package main
import (
"fmt"
"sort"
"strings"
"text/tabwriter"
)
type Cluster struct {
ClusterID string
Controller int32
Brokers map[int32]Broker
Topics map[string]Topic
}
func (c Cluster) BrokerIDs() []int32 {
brokerIDs := make([]int32, 0, len(c.Brokers))
for id := range c.Brokers {
brokerIDs = append(brokerIDs, id)
}
sort.Slice(brokerIDs, func(i, j int) bool {
return brokerIDs[i] < brokerIDs[j]
})
return brokerIDs
}
func (c Cluster) TopicNames() []string {
topicNames := make([]string, 0, len(c.Topics))
for name := range c.Topics {
topicNames = append(topicNames, name)
}
sort.Strings(topicNames)
return topicNames
}
func (c Cluster) IsZero() bool {
return c.ClusterID == "" && c.Controller == 0 && len(c.Brokers) == 0 && len(c.Topics) == 0
}
func (c Cluster) Format(w fmt.State, _ rune) {
tw := new(tabwriter.Writer)
fmt.Fprintf(w, "CLUSTER: %q\n\n", c.ClusterID)
tw.Init(w, 0, 8, 2, ' ', 0)
fmt.Fprint(tw, " BROKER\tHOST\tPORT\tRACK\tCONTROLLER\n")
for _, id := range c.BrokerIDs() {
broker := c.Brokers[id]
fmt.Fprintf(tw, " %d\t%s\t%d\t%s\t%t\n", broker.ID, broker.Host, broker.Port, broker.Rack, broker.ID == c.Controller)
}
tw.Flush()
fmt.Fprintln(w)
tw.Init(w, 0, 8, 2, ' ', 0)
fmt.Fprint(tw, " TOPIC\tPARTITIONS\tBROKERS\n")
topicNames := c.TopicNames()
brokers := make(map[int32]struct{}, len(c.Brokers))
brokerIDs := make([]int32, 0, len(c.Brokers))
for _, name := range topicNames {
topic := c.Topics[name]
for _, p := range topic.Partitions {
for _, id := range p.Replicas {
brokers[id] = struct{}{}
}
}
for id := range brokers {
brokerIDs = append(brokerIDs, id)
}
fmt.Fprintf(tw, " %s\t%d\t%s\n", topic.Name, len(topic.Partitions), formatBrokerIDs(brokerIDs, -1))
for id := range brokers {
delete(brokers, id)
}
brokerIDs = brokerIDs[:0]
}
tw.Flush()
fmt.Fprintln(w)
if w.Flag('+') {
for _, name := range topicNames {
fmt.Fprintf(w, " TOPIC: %q\n\n", name)
tw.Init(w, 0, 8, 2, ' ', 0)
fmt.Fprint(tw, " PARTITION\tREPLICAS\tISR\tOFFLINE\n")
for _, p := range c.Topics[name].Partitions {
fmt.Fprintf(tw, " %d\t%s\t%s\t%s\n", p.ID,
formatBrokerIDs(p.Replicas, -1),
formatBrokerIDs(p.ISR, p.Leader),
formatBrokerIDs(p.Offline, -1),
)
}
tw.Flush()
fmt.Fprintln(w)
}
}
}
func formatBrokerIDs(brokerIDs []int32, leader int32) string {
if len(brokerIDs) == 0 {
return ""
}
if len(brokerIDs) == 1 {
return itoa(brokerIDs[0])
}
sort.Slice(brokerIDs, func(i, j int) bool {
id1 := brokerIDs[i]
id2 := brokerIDs[j]
if id1 == leader {
return true
}
if id2 == leader {
return false
}
return id1 < id2
})
brokerNames := make([]string, len(brokerIDs))
for i, id := range brokerIDs {
brokerNames[i] = itoa(id)
}
return strings.Join(brokerNames, ",")
}
var (
_ fmt.Formatter = Cluster{}
)
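// Illustration (not part of the original file): Cluster implements
// fmt.Formatter, so a populated value (brokers and topics filled in by the
// metadata code elsewhere in the package) can be printed directly.
func printCluster(c Cluster) {
	if c.IsZero() {
		fmt.Println("no cluster metadata")
		return
	}
	fmt.Printf("%+v\n", c) // the '+' flag also prints per-topic partition detail
}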


@@ -1,30 +0,0 @@
package main
import (
"errors"
"github.com/segmentio/kafka-go/compress"
)
type Compression = compress.Compression
type CompressionCodec = compress.Codec
// TODO: this file should probably go away once the internals of the package
// have moved to use the protocol package.
const (
compressionCodecMask = 0x07
)
var (
errUnknownCodec = errors.New("the compression code is invalid or its codec has not been imported")
)
// resolveCodec looks up a codec by Code()
func resolveCodec(code int8) (CompressionCodec, error) {
codec := compress.Compression(code).Codec()
if codec == nil {
return nil, errUnknownCodec
}
return codec, nil
}
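// Illustration (not part of the original file): the codec id lives in the low
// three bits of a record batch's attributes field, which is what
// compressionCodecMask extracts before the lookup. codecFromAttributes is an
// assumed helper, not part of the original code.
func codecFromAttributes(attributes int8) (CompressionCodec, error) {
	return resolveCodec(attributes & compressionCodecMask)
}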


@@ -1,598 +0,0 @@
package main
import (
"bytes"
"encoding/binary"
"fmt"
"hash/crc32"
"io"
"io/ioutil"
"reflect"
"strings"
"sync"
"sync/atomic"
)
type discarder interface {
Discard(int) (int, error)
}
type decoder struct {
reader io.Reader
remain int
buffer [8]byte
err error
table *crc32.Table
crc32 uint32
}
func (d *decoder) Reset(r io.Reader, n int) {
d.reader = r
d.remain = n
d.buffer = [8]byte{}
d.err = nil
d.table = nil
d.crc32 = 0
}
func (d *decoder) Read(b []byte) (int, error) {
if d.err != nil {
return 0, d.err
}
if d.remain == 0 {
return 0, io.EOF
}
if len(b) > d.remain {
b = b[:d.remain]
}
n, err := d.reader.Read(b)
if n > 0 && d.table != nil {
d.crc32 = crc32.Update(d.crc32, d.table, b[:n])
}
d.remain -= n
return n, err
}
func (d *decoder) ReadByte() (byte, error) {
c := d.readByte()
return c, d.err
}
func (d *decoder) done() bool {
return d.remain == 0 || d.err != nil
}
func (d *decoder) setCRC(table *crc32.Table) {
d.table, d.crc32 = table, 0
}
func (d *decoder) decodeBool(v value) {
v.setBool(d.readBool())
}
func (d *decoder) decodeInt8(v value) {
v.setInt8(d.readInt8())
}
func (d *decoder) decodeInt16(v value) {
v.setInt16(d.readInt16())
}
func (d *decoder) decodeInt32(v value) {
v.setInt32(d.readInt32())
}
func (d *decoder) decodeInt64(v value) {
v.setInt64(d.readInt64())
}
func (d *decoder) decodeString(v value) {
v.setString(d.readString())
}
func (d *decoder) decodeCompactString(v value) {
v.setString(d.readCompactString())
}
func (d *decoder) decodeBytes(v value) {
v.setBytes(d.readBytes())
}
func (d *decoder) decodeCompactBytes(v value) {
v.setBytes(d.readCompactBytes())
}
func (d *decoder) decodeArray(v value, elemType reflect.Type, decodeElem decodeFunc) {
if n := d.readInt32(); n < 0 || n > 65535 {
v.setArray(array{})
} else {
a := makeArray(elemType, int(n))
for i := 0; i < int(n) && d.remain > 0; i++ {
decodeElem(d, a.index(i))
}
v.setArray(a)
}
}
func (d *decoder) decodeCompactArray(v value, elemType reflect.Type, decodeElem decodeFunc) {
if n := d.readUnsignedVarInt(); n < 1 || n > 65535 {
v.setArray(array{})
} else {
a := makeArray(elemType, int(n-1))
for i := 0; i < int(n-1) && d.remain > 0; i++ {
decodeElem(d, a.index(i))
}
v.setArray(a)
}
}
func (d *decoder) decodeRecordV0(v value) {
x := &RecordV0{}
x.Unknown = d.readInt8()
x.Attributes = d.readInt8()
x.TimestampDelta = d.readInt8()
x.OffsetDelta = d.readInt8()
x.KeyLength = int8(d.readVarInt())
key := strings.Builder{}
for i := 0; i < int(x.KeyLength); i++ {
key.WriteString(fmt.Sprintf("%c", d.readInt8()))
}
x.Key = key.String()
x.ValueLen = int8(d.readVarInt())
value := strings.Builder{}
for i := 0; i < int(x.ValueLen); i++ {
value.WriteString(fmt.Sprintf("%c", d.readInt8()))
}
x.Value = value.String()
headerLen := d.readInt8() / 2
headers := make([]RecordHeader, 0)
for i := 0; i < int(headerLen); i++ {
header := &RecordHeader{}
header.HeaderKeyLength = int8(d.readVarInt())
headerKey := strings.Builder{}
for j := 0; j < int(header.HeaderKeyLength); j++ {
headerKey.WriteString(fmt.Sprintf("%c", d.readInt8()))
}
header.HeaderKey = headerKey.String()
header.HeaderValueLength = int8(d.readVarInt())
headerValue := strings.Builder{}
for j := 0; j < int(header.HeaderValueLength); j++ {
headerValue.WriteString(fmt.Sprintf("%c", d.readInt8()))
}
header.Value = headerValue.String()
headers = append(headers, *header)
}
x.Headers = headers
v.val.Set(valueOf(x).val)
}
func (d *decoder) discardAll() {
d.discard(d.remain)
}
func (d *decoder) discard(n int) {
if n > d.remain {
n = d.remain
}
var err error
if r, _ := d.reader.(discarder); r != nil {
n, err = r.Discard(n)
d.remain -= n
} else {
_, err = io.Copy(ioutil.Discard, d)
}
d.setError(err)
}
func (d *decoder) read(n int) []byte {
b := make([]byte, n)
n, err := io.ReadFull(d, b)
b = b[:n]
d.setError(err)
return b
}
func (d *decoder) writeTo(w io.Writer, n int) {
limit := d.remain
if n < limit {
d.remain = n
}
c, err := io.Copy(w, d)
if int(c) < n && err == nil {
err = io.ErrUnexpectedEOF
}
d.remain = limit - int(c)
d.setError(err)
}
func (d *decoder) setError(err error) {
if d.err == nil && err != nil {
d.err = err
d.discardAll()
}
}
func (d *decoder) readFull(b []byte) bool {
n, err := io.ReadFull(d, b)
d.setError(err)
return n == len(b)
}
func (d *decoder) readByte() byte {
if d.readFull(d.buffer[:1]) {
return d.buffer[0]
}
return 0
}
func (d *decoder) readBool() bool {
return d.readByte() != 0
}
func (d *decoder) readInt8() int8 {
if d.readFull(d.buffer[:1]) {
return decodeReadInt8(d.buffer[:1])
}
return 0
}
func (d *decoder) readInt16() int16 {
if d.readFull(d.buffer[:2]) {
return decodeReadInt16(d.buffer[:2])
}
return 0
}
func (d *decoder) readInt32() int32 {
if d.readFull(d.buffer[:4]) {
return decodeReadInt32(d.buffer[:4])
}
return 0
}
func (d *decoder) readInt64() int64 {
if d.readFull(d.buffer[:8]) {
return decodeReadInt64(d.buffer[:8])
}
return 0
}
func (d *decoder) readString() string {
if n := d.readInt16(); n < 0 {
return ""
} else {
return bytesToString(d.read(int(n)))
}
}
func (d *decoder) readVarString() string {
if n := d.readVarInt(); n < 0 {
return ""
} else {
return bytesToString(d.read(int(n)))
}
}
func (d *decoder) readCompactString() string {
if n := d.readUnsignedVarInt(); n < 1 {
return ""
} else {
return bytesToString(d.read(int(n - 1)))
}
}
func (d *decoder) readBytes() []byte {
if n := d.readInt32(); n < 0 {
return nil
} else {
return d.read(int(n))
}
}
func (d *decoder) readBytesTo(w io.Writer) bool {
if n := d.readInt32(); n < 0 {
return false
} else {
d.writeTo(w, int(n))
return d.err == nil
}
}
func (d *decoder) readVarBytes() []byte {
if n := d.readVarInt(); n < 0 {
return nil
} else {
return d.read(int(n))
}
}
func (d *decoder) readVarBytesTo(w io.Writer) bool {
if n := d.readVarInt(); n < 0 {
return false
} else {
d.writeTo(w, int(n))
return d.err == nil
}
}
func (d *decoder) readCompactBytes() []byte {
if n := d.readUnsignedVarInt(); n < 1 {
return nil
} else {
return d.read(int(n - 1))
}
}
func (d *decoder) readCompactBytesTo(w io.Writer) bool {
if n := d.readUnsignedVarInt(); n < 1 {
return false
} else {
d.writeTo(w, int(n-1))
return d.err == nil
}
}
func (d *decoder) readVarInt() int64 {
n := 11 // varints are at most 11 bytes
if n > d.remain {
n = d.remain
}
x := uint64(0)
s := uint(0)
for n > 0 {
b := d.readByte()
if (b & 0x80) == 0 {
x |= uint64(b) << s
return int64(x>>1) ^ -(int64(x) & 1)
}
x |= uint64(b&0x7f) << s
s += 7
n--
}
d.setError(fmt.Errorf("cannot decode varint from input stream"))
return 0
}
func (d *decoder) readUnsignedVarInt() uint64 {
n := 11 // varints are at most 11 bytes
if n > d.remain {
n = d.remain
}
x := uint64(0)
s := uint(0)
for n > 0 {
b := d.readByte()
if (b & 0x80) == 0 {
x |= uint64(b) << s
return x
}
x |= uint64(b&0x7f) << s
s += 7
n--
}
d.setError(fmt.Errorf("cannot decode unsigned varint from input stream"))
return 0
}
type decodeFunc func(*decoder, value)
var (
_ io.Reader = (*decoder)(nil)
_ io.ByteReader = (*decoder)(nil)
readerFrom = reflect.TypeOf((*io.ReaderFrom)(nil)).Elem()
)
func decodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) decodeFunc {
if reflect.PtrTo(typ).Implements(readerFrom) {
return readerDecodeFuncOf(typ)
}
switch typ.Kind() {
case reflect.Bool:
return (*decoder).decodeBool
case reflect.Int8:
return (*decoder).decodeInt8
case reflect.Int16:
return (*decoder).decodeInt16
case reflect.Int32:
return (*decoder).decodeInt32
case reflect.Int64:
return (*decoder).decodeInt64
case reflect.String:
return stringDecodeFuncOf(flexible, tag)
case reflect.Struct:
return structDecodeFuncOf(typ, version, flexible)
case reflect.Slice:
if typ.Elem().Kind() == reflect.Uint8 { // []byte
return bytesDecodeFuncOf(flexible, tag)
}
return arrayDecodeFuncOf(typ, version, flexible, tag)
default:
panic("unsupported type: " + typ.String())
}
}
func stringDecodeFuncOf(flexible bool, tag structTag) decodeFunc {
if flexible {
// In flexible messages, all strings are compact
return (*decoder).decodeCompactString
}
return (*decoder).decodeString
}
func bytesDecodeFuncOf(flexible bool, tag structTag) decodeFunc {
if flexible {
// In flexible messages, all arrays are compact
return (*decoder).decodeCompactBytes
}
return (*decoder).decodeBytes
}
func structDecodeFuncOf(typ reflect.Type, version int16, flexible bool) decodeFunc {
type field struct {
decode decodeFunc
index index
tagID int
}
var fields []field
taggedFields := map[int]*field{}
if typ == reflect.TypeOf(RecordV0{}) {
return (*decoder).decodeRecordV0
}
forEachStructField(typ, func(typ reflect.Type, index index, tag string) {
forEachStructTag(tag, func(tag structTag) bool {
if tag.MinVersion <= version && version <= tag.MaxVersion {
f := field{
decode: decodeFuncOf(typ, version, flexible, tag),
index: index,
tagID: tag.TagID,
}
if tag.TagID < -1 {
// Normal required field
fields = append(fields, f)
} else {
// Optional tagged field (flexible messages only)
taggedFields[tag.TagID] = &f
}
return false
}
return true
})
})
return func(d *decoder, v value) {
for i := range fields {
f := &fields[i]
f.decode(d, v.fieldByIndex(f.index))
}
if flexible {
// See https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields
// for details of tag buffers in "flexible" messages.
n := int(d.readUnsignedVarInt())
for i := 0; i < n; i++ {
tagID := int(d.readUnsignedVarInt())
size := int(d.readUnsignedVarInt())
f, ok := taggedFields[tagID]
if ok {
f.decode(d, v.fieldByIndex(f.index))
} else {
d.read(size)
}
}
}
}
}
func arrayDecodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) decodeFunc {
elemType := typ.Elem()
elemFunc := decodeFuncOf(elemType, version, flexible, tag)
if flexible {
// In flexible messages, all arrays are compact
return func(d *decoder, v value) { d.decodeCompactArray(v, elemType, elemFunc) }
}
return func(d *decoder, v value) { d.decodeArray(v, elemType, elemFunc) }
}
func readerDecodeFuncOf(typ reflect.Type) decodeFunc {
typ = reflect.PtrTo(typ)
return func(d *decoder, v value) {
if d.err == nil {
_, err := v.iface(typ).(io.ReaderFrom).ReadFrom(d)
if err != nil {
d.setError(err)
}
}
}
}
func decodeReadInt8(b []byte) int8 {
return int8(b[0])
}
func decodeReadInt16(b []byte) int16 {
return int16(binary.BigEndian.Uint16(b))
}
func decodeReadInt32(b []byte) int32 {
return int32(binary.BigEndian.Uint32(b))
}
func decodeReadInt64(b []byte) int64 {
return int64(binary.BigEndian.Uint64(b))
}
func Unmarshal(data []byte, version int16, value interface{}) error {
typ := elemTypeOf(value)
cache, _ := unmarshalers.Load().(map[versionedType]decodeFunc)
key := versionedType{typ: typ, version: version}
decode := cache[key]
if decode == nil {
decode = decodeFuncOf(reflect.TypeOf(value).Elem(), version, false, structTag{
MinVersion: -1,
MaxVersion: -1,
TagID: -2,
Compact: true,
Nullable: true,
})
newCache := make(map[versionedType]decodeFunc, len(cache)+1)
newCache[key] = decode
for typ, fun := range cache {
newCache[typ] = fun
}
unmarshalers.Store(newCache)
}
d, _ := decoders.Get().(*decoder)
if d == nil {
d = &decoder{reader: bytes.NewReader(nil)}
}
d.remain = len(data)
r, _ := d.reader.(*bytes.Reader)
r.Reset(data)
defer func() {
r.Reset(nil)
d.Reset(r, 0)
decoders.Put(d)
}()
decode(d, valueOf(value))
return dontExpectEOF(d.err)
}
var (
decoders sync.Pool // *decoder
unmarshalers atomic.Value // map[versionedType]decodeFunc
)


@@ -1,50 +0,0 @@
package main
import "bufio"
func discardN(r *bufio.Reader, sz int, n int) (int, error) {
var err error
if n <= sz {
n, err = r.Discard(n)
} else {
n, err = r.Discard(sz)
if err == nil {
err = errShortRead
}
}
return sz - n, err
}
func discardInt8(r *bufio.Reader, sz int) (int, error) {
return discardN(r, sz, 1)
}
func discardInt16(r *bufio.Reader, sz int) (int, error) {
return discardN(r, sz, 2)
}
func discardInt32(r *bufio.Reader, sz int) (int, error) {
return discardN(r, sz, 4)
}
func discardInt64(r *bufio.Reader, sz int) (int, error) {
return discardN(r, sz, 8)
}
func discardString(r *bufio.Reader, sz int) (int, error) {
return readStringWith(r, sz, func(r *bufio.Reader, sz int, n int) (int, error) {
if n < 0 {
return sz, nil
}
return discardN(r, sz, n)
})
}
func discardBytes(r *bufio.Reader, sz int) (int, error) {
return readBytesWith(r, sz, func(r *bufio.Reader, sz int, n int) (int, error) {
if n < 0 {
return sz, nil
}
return discardN(r, sz, n)
})
}
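// Illustration (not part of the original file): skipping fixed-size header
// fields while tracking the remaining size budget, the way the helpers above
// are meant to be chained. The field layout (an int16 followed by an int32)
// is only an assumption for the example.
func skipExampleHeader(r *bufio.Reader, remaining int) (int, error) {
	remaining, err := discardInt16(r, remaining)
	if err != nil {
		return remaining, err
	}
	return discardInt32(r, remaining)
}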


@@ -1,645 +0,0 @@
package main
import (
"bytes"
"encoding/binary"
"fmt"
"hash/crc32"
"io"
"reflect"
"sync"
"sync/atomic"
)
type encoder struct {
writer io.Writer
err error
table *crc32.Table
crc32 uint32
buffer [32]byte
}
type encoderChecksum struct {
reader io.Reader
encoder *encoder
}
func (e *encoderChecksum) Read(b []byte) (int, error) {
n, err := e.reader.Read(b)
if n > 0 {
e.encoder.update(b[:n])
}
return n, err
}
func (e *encoder) Reset(w io.Writer) {
e.writer = w
e.err = nil
e.table = nil
e.crc32 = 0
e.buffer = [32]byte{}
}
func (e *encoder) ReadFrom(r io.Reader) (int64, error) {
if e.table != nil {
r = &encoderChecksum{
reader: r,
encoder: e,
}
}
return io.Copy(e.writer, r)
}
func (e *encoder) Write(b []byte) (int, error) {
if e.err != nil {
return 0, e.err
}
n, err := e.writer.Write(b)
if n > 0 {
e.update(b[:n])
}
if err != nil {
e.err = err
}
return n, err
}
func (e *encoder) WriteByte(b byte) error {
e.buffer[0] = b
_, err := e.Write(e.buffer[:1])
return err
}
func (e *encoder) WriteString(s string) (int, error) {
// This implementation is an optimization to avoid the heap allocation that
// would occur when converting the string to a []byte to call crc32.Update.
//
// Strings are rarely long in the kafka protocol, so the use of a 32 byte
// buffer is a good compromise between keeping the encoder value small and
// limiting the number of calls to Write.
//
// We introduced this optimization because memory profiles on the benchmarks
// showed that most heap allocations were caused by this code path.
n := 0
for len(s) != 0 {
c := copy(e.buffer[:], s)
w, err := e.Write(e.buffer[:c])
n += w
if err != nil {
return n, err
}
s = s[c:]
}
return n, nil
}
func (e *encoder) setCRC(table *crc32.Table) {
e.table, e.crc32 = table, 0
}
func (e *encoder) update(b []byte) {
if e.table != nil {
e.crc32 = crc32.Update(e.crc32, e.table, b)
}
}
func (e *encoder) encodeBool(v value) {
b := int8(0)
if v.bool() {
b = 1
}
e.writeInt8(b)
}
func (e *encoder) encodeInt8(v value) {
e.writeInt8(v.int8())
}
func (e *encoder) encodeInt16(v value) {
e.writeInt16(v.int16())
}
func (e *encoder) encodeInt32(v value) {
e.writeInt32(v.int32())
}
func (e *encoder) encodeInt64(v value) {
e.writeInt64(v.int64())
}
func (e *encoder) encodeString(v value) {
e.writeString(v.string())
}
func (e *encoder) encodeVarString(v value) {
e.writeVarString(v.string())
}
func (e *encoder) encodeCompactString(v value) {
e.writeCompactString(v.string())
}
func (e *encoder) encodeNullString(v value) {
e.writeNullString(v.string())
}
func (e *encoder) encodeVarNullString(v value) {
e.writeVarNullString(v.string())
}
func (e *encoder) encodeCompactNullString(v value) {
e.writeCompactNullString(v.string())
}
func (e *encoder) encodeBytes(v value) {
e.writeBytes(v.bytes())
}
func (e *encoder) encodeVarBytes(v value) {
e.writeVarBytes(v.bytes())
}
func (e *encoder) encodeCompactBytes(v value) {
e.writeCompactBytes(v.bytes())
}
func (e *encoder) encodeNullBytes(v value) {
e.writeNullBytes(v.bytes())
}
func (e *encoder) encodeVarNullBytes(v value) {
e.writeVarNullBytes(v.bytes())
}
func (e *encoder) encodeCompactNullBytes(v value) {
e.writeCompactNullBytes(v.bytes())
}
func (e *encoder) encodeArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
a := v.array(elemType)
n := a.length()
e.writeInt32(int32(n))
for i := 0; i < n; i++ {
encodeElem(e, a.index(i))
}
}
func (e *encoder) encodeCompactArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
a := v.array(elemType)
n := a.length()
e.writeUnsignedVarInt(uint64(n + 1))
for i := 0; i < n; i++ {
encodeElem(e, a.index(i))
}
}
func (e *encoder) encodeNullArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
a := v.array(elemType)
if a.isNil() {
e.writeInt32(-1)
return
}
n := a.length()
e.writeInt32(int32(n))
for i := 0; i < n; i++ {
encodeElem(e, a.index(i))
}
}
func (e *encoder) encodeCompactNullArray(v value, elemType reflect.Type, encodeElem encodeFunc) {
a := v.array(elemType)
if a.isNil() {
e.writeUnsignedVarInt(0)
return
}
n := a.length()
e.writeUnsignedVarInt(uint64(n + 1))
for i := 0; i < n; i++ {
encodeElem(e, a.index(i))
}
}
func (e *encoder) writeInt8(i int8) {
writeInt8(e.buffer[:1], i)
e.Write(e.buffer[:1])
}
func (e *encoder) writeInt16(i int16) {
writeInt16(e.buffer[:2], i)
e.Write(e.buffer[:2])
}
func (e *encoder) writeInt32(i int32) {
writeInt32(e.buffer[:4], i)
e.Write(e.buffer[:4])
}
func (e *encoder) writeInt64(i int64) {
writeInt64(e.buffer[:8], i)
e.Write(e.buffer[:8])
}
func (e *encoder) writeString(s string) {
e.writeInt16(int16(len(s)))
e.WriteString(s)
}
func (e *encoder) writeVarString(s string) {
e.writeVarInt(int64(len(s)))
e.WriteString(s)
}
func (e *encoder) writeCompactString(s string) {
e.writeUnsignedVarInt(uint64(len(s)) + 1)
e.WriteString(s)
}
func (e *encoder) writeNullString(s string) {
if s == "" {
e.writeInt16(-1)
} else {
e.writeInt16(int16(len(s)))
e.WriteString(s)
}
}
func (e *encoder) writeVarNullString(s string) {
if s == "" {
e.writeVarInt(-1)
} else {
e.writeVarInt(int64(len(s)))
e.WriteString(s)
}
}
func (e *encoder) writeCompactNullString(s string) {
if s == "" {
e.writeUnsignedVarInt(0)
} else {
e.writeUnsignedVarInt(uint64(len(s)) + 1)
e.WriteString(s)
}
}
func (e *encoder) writeBytes(b []byte) {
e.writeInt32(int32(len(b)))
e.Write(b)
}
func (e *encoder) writeVarBytes(b []byte) {
e.writeVarInt(int64(len(b)))
e.Write(b)
}
func (e *encoder) writeCompactBytes(b []byte) {
e.writeUnsignedVarInt(uint64(len(b)) + 1)
e.Write(b)
}
func (e *encoder) writeNullBytes(b []byte) {
if b == nil {
e.writeInt32(-1)
} else {
e.writeInt32(int32(len(b)))
e.Write(b)
}
}
func (e *encoder) writeVarNullBytes(b []byte) {
if b == nil {
e.writeVarInt(-1)
} else {
e.writeVarInt(int64(len(b)))
e.Write(b)
}
}
func (e *encoder) writeCompactNullBytes(b []byte) {
if b == nil {
e.writeUnsignedVarInt(0)
} else {
e.writeUnsignedVarInt(uint64(len(b)) + 1)
e.Write(b)
}
}
func (e *encoder) writeBytesFrom(b Bytes) error {
size := int64(b.Len())
e.writeInt32(int32(size))
n, err := io.Copy(e, b)
if err == nil && n != size {
err = fmt.Errorf("size of bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
}
return err
}
func (e *encoder) writeNullBytesFrom(b Bytes) error {
if b == nil {
e.writeInt32(-1)
return nil
} else {
size := int64(b.Len())
e.writeInt32(int32(size))
n, err := io.Copy(e, b)
if err == nil && n != size {
err = fmt.Errorf("size of nullable bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
}
return err
}
}
func (e *encoder) writeVarNullBytesFrom(b Bytes) error {
if b == nil {
e.writeVarInt(-1)
return nil
} else {
size := int64(b.Len())
e.writeVarInt(size)
n, err := io.Copy(e, b)
if err == nil && n != size {
err = fmt.Errorf("size of nullable bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
}
return err
}
}
func (e *encoder) writeCompactNullBytesFrom(b Bytes) error {
if b == nil {
e.writeUnsignedVarInt(0)
return nil
} else {
size := int64(b.Len())
e.writeUnsignedVarInt(uint64(size + 1))
n, err := io.Copy(e, b)
if err == nil && n != size {
err = fmt.Errorf("size of compact nullable bytes does not match the number of bytes that were written (size=%d, written=%d): %w", size, n, io.ErrUnexpectedEOF)
}
return err
}
}
func (e *encoder) writeVarInt(i int64) {
e.writeUnsignedVarInt(uint64((i << 1) ^ (i >> 63)))
}
func (e *encoder) writeUnsignedVarInt(i uint64) {
b := e.buffer[:]
n := 0
for i >= 0x80 && n < len(b) {
b[n] = byte(i) | 0x80
i >>= 7
n++
}
if n < len(b) {
b[n] = byte(i)
n++
}
e.Write(b[:n])
}
type encodeFunc func(*encoder, value)
var (
_ io.ReaderFrom = (*encoder)(nil)
_ io.Writer = (*encoder)(nil)
_ io.ByteWriter = (*encoder)(nil)
_ io.StringWriter = (*encoder)(nil)
writerTo = reflect.TypeOf((*io.WriterTo)(nil)).Elem()
)
func encodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) encodeFunc {
if reflect.PtrTo(typ).Implements(writerTo) {
return writerEncodeFuncOf(typ)
}
switch typ.Kind() {
case reflect.Bool:
return (*encoder).encodeBool
case reflect.Int8:
return (*encoder).encodeInt8
case reflect.Int16:
return (*encoder).encodeInt16
case reflect.Int32:
return (*encoder).encodeInt32
case reflect.Int64:
return (*encoder).encodeInt64
case reflect.String:
return stringEncodeFuncOf(flexible, tag)
case reflect.Struct:
return structEncodeFuncOf(typ, version, flexible)
case reflect.Slice:
if typ.Elem().Kind() == reflect.Uint8 { // []byte
return bytesEncodeFuncOf(flexible, tag)
}
return arrayEncodeFuncOf(typ, version, flexible, tag)
default:
panic("unsupported type: " + typ.String())
}
}
func stringEncodeFuncOf(flexible bool, tag structTag) encodeFunc {
switch {
case flexible && tag.Nullable:
// In flexible messages, all strings are compact
return (*encoder).encodeCompactNullString
case flexible:
// In flexible messages, all strings are compact
return (*encoder).encodeCompactString
case tag.Nullable:
return (*encoder).encodeNullString
default:
return (*encoder).encodeString
}
}
func bytesEncodeFuncOf(flexible bool, tag structTag) encodeFunc {
switch {
case flexible && tag.Nullable:
// In flexible messages, all arrays are compact
return (*encoder).encodeCompactNullBytes
case flexible:
// In flexible messages, all arrays are compact
return (*encoder).encodeCompactBytes
case tag.Nullable:
return (*encoder).encodeNullBytes
default:
return (*encoder).encodeBytes
}
}
func structEncodeFuncOf(typ reflect.Type, version int16, flexible bool) encodeFunc {
type field struct {
encode encodeFunc
index index
tagID int
}
var fields []field
var taggedFields []field
forEachStructField(typ, func(typ reflect.Type, index index, tag string) {
if typ.Size() != 0 { // skip struct{}
forEachStructTag(tag, func(tag structTag) bool {
if tag.MinVersion <= version && version <= tag.MaxVersion {
f := field{
encode: encodeFuncOf(typ, version, flexible, tag),
index: index,
tagID: tag.TagID,
}
if tag.TagID < -1 {
// Normal required field
fields = append(fields, f)
} else {
// Optional tagged field (flexible messages only)
taggedFields = append(taggedFields, f)
}
return false
}
return true
})
}
})
return func(e *encoder, v value) {
for i := range fields {
f := &fields[i]
f.encode(e, v.fieldByIndex(f.index))
}
if flexible {
// See https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields
// for details of tag buffers in "flexible" messages.
e.writeUnsignedVarInt(uint64(len(taggedFields)))
for i := range taggedFields {
f := &taggedFields[i]
e.writeUnsignedVarInt(uint64(f.tagID))
buf := &bytes.Buffer{}
se := &encoder{writer: buf}
f.encode(se, v.fieldByIndex(f.index))
e.writeUnsignedVarInt(uint64(buf.Len()))
e.Write(buf.Bytes())
}
}
}
}
func arrayEncodeFuncOf(typ reflect.Type, version int16, flexible bool, tag structTag) encodeFunc {
elemType := typ.Elem()
elemFunc := encodeFuncOf(elemType, version, flexible, tag)
switch {
case flexible && tag.Nullable:
// In flexible messages, all arrays are compact
return func(e *encoder, v value) { e.encodeCompactNullArray(v, elemType, elemFunc) }
case flexible:
// In flexible messages, all arrays are compact
return func(e *encoder, v value) { e.encodeCompactArray(v, elemType, elemFunc) }
case tag.Nullable:
return func(e *encoder, v value) { e.encodeNullArray(v, elemType, elemFunc) }
default:
return func(e *encoder, v value) { e.encodeArray(v, elemType, elemFunc) }
}
}
func writerEncodeFuncOf(typ reflect.Type) encodeFunc {
typ = reflect.PtrTo(typ)
return func(e *encoder, v value) {
// Optimization to write directly into the buffer when the encoder
// does not need to compute a crc32 checksum.
w := io.Writer(e)
if e.table == nil {
w = e.writer
}
_, err := v.iface(typ).(io.WriterTo).WriteTo(w)
if err != nil {
e.err = err
}
}
}
func writeInt8(b []byte, i int8) {
b[0] = byte(i)
}
func writeInt16(b []byte, i int16) {
binary.BigEndian.PutUint16(b, uint16(i))
}
func writeInt32(b []byte, i int32) {
binary.BigEndian.PutUint32(b, uint32(i))
}
func writeInt64(b []byte, i int64) {
binary.BigEndian.PutUint64(b, uint64(i))
}
func Marshal(version int16, value interface{}) ([]byte, error) {
typ := typeOf(value)
cache, _ := marshalers.Load().(map[versionedType]encodeFunc)
key := versionedType{typ: typ, version: version}
encode := cache[key]
if encode == nil {
encode = encodeFuncOf(reflect.TypeOf(value), version, false, structTag{
MinVersion: -1,
MaxVersion: -1,
TagID: -2,
Compact: true,
Nullable: true,
})
newCache := make(map[versionedType]encodeFunc, len(cache)+1)
newCache[key] = encode
for typ, fun := range cache {
newCache[typ] = fun
}
marshalers.Store(newCache)
}
e, _ := encoders.Get().(*encoder)
if e == nil {
e = &encoder{writer: new(bytes.Buffer)}
}
b, _ := e.writer.(*bytes.Buffer)
defer func() {
b.Reset()
e.Reset(b)
encoders.Put(e)
}()
encode(e, nonAddressableValueOf(value))
if e.err != nil {
return nil, e.err
}
buf := b.Bytes()
out := make([]byte, len(buf))
copy(out, buf)
return out, nil
}
type versionedType struct {
typ _type
version int16
}
var (
encoders sync.Pool // *encoder
marshalers atomic.Value // map[versionedType]encodeFunc
)
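// Illustration (not part of the original file): Marshal here and Unmarshal in
// the decoder file are symmetric entry points, so a value encoded at a given
// version should round-trip. The concrete message type and its protocol
// struct tags are assumed to be defined elsewhere in the package.
func roundTrip(version int16, in, out interface{}) error {
	raw, err := Marshal(version, in)
	if err != nil {
		return err
	}
	return Unmarshal(raw, version, out)
}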
